David Wagner wrote:
> If the attacker gets full administrator-level access on your machine,
> there are a gazillion ways the attacker can prevent other admins from
> logging on. This patch can't prevent that. It sounds like this patch
>is trying to solve a fundamentally unsolvable problem.
Tetsuo Handa writes:
>When I attended Security Stadium 2003 on the defense side,
>I was using devfs for the /dev directory. The files in /dev
>were deleted by attackers and the administrator was unable to log in.
If the attacker gets full administrator-level access on your machine,
there are
James Morris wrote:
>A. Pathname labeling - applying access control to pathnames to objects,
>rather than labeling the objects themselves.
>
>Think of this as, say, securing your house by putting a gate in the street
>in front of the house, regardless of how many other possible paths there are to
James Morris wrote:
>The point is that the pathname model does not generalize, and that
>AppArmor's inability to provide adequate coverage of the system is a
>design issue arising from this.
I don't see it. I don't see why you call this a design issue. Isn't
this just a case where they
I've heard four arguments against merging AA.
Argument 1. SELinux does it better than AA. (Or: SELinux dominates AA.
Or: SELinux can do everything that AA can.)
Argument 2. Object labeling (or: information flow control) is more secure
than pathname-based access control.
Argument 3. AA isn't
Stephen Smalley wrote:
>On Fri, 2007-06-22 at 01:06 -0700, John Johansen wrote:
>> No the "incomplete" mediation does not flow from the design. We have
>> deliberately focused on doing the necessary modifications for pathname
>> based mediation. The IPC and network mediation are a wip.
>
>The
Stephen Smalley wrote:
>That would certainly help, although one might quibble with the use of
>the word "confinement" at all wrt AppArmor (it has a long-established
>technical meaning that implies information flow control, and that goes
>beyond even complete mediation - it requires global and
Stephen Smalley wrote:
>On Thu, 2007-06-21 at 21:54 +0200, Lars Marowsky-Bree wrote:
>> And now, yes, I know AA doesn't mediate IPC or networking (yet), but
>> that's a missing feature, not broken by design.
>
>The incomplete mediation flows from the design, since the pathname-based
>mediation
[EMAIL PROTECTED] writes:
>Experience over on the Windows side of the fence indicates that "remote bad
>guys get some local user first" is a *MAJOR* part of the current real-world
>threat model - the vast majority of successful attacks on end-user boxes these
>days start off with either "Get user
[EMAIL PROTECTED] wrote:
> no, this won't help you much against local users, [...]
Pavel Machek wrote:
>Hmm, I guess I'd love "it is useless on multiuser boxes" to become
>standard part of AA advertising.
That's not quite what david@ said. As I understand it, AppArmor is not
focused on
Stephen Smalley wrote:
>Integrity protection requires information flow control; you can't
>protect a high integrity process from being corrupted by a low integrity
>process if you don't control the flow of information. Plenty of attacks
>take the form of an untrusted process injecting data that
Crispin Cowan wrote:
> How is it that you think a buffer overflow in httpd could allow an
> attacker to break out of an AppArmor profile?
James Morris wrote:
> [...] you can change the behavior of the application and then bypass
> policy entirely by utilizing any mechanism other than direct
Stephen Smalley wrote:
>Confinement in its traditional sense (e.g. the 1973 Lampson paper, ACM
>Vol 16 No 10) means information flow control, which you have agreed
>AppArmor does not and cannot provide.
Right, that's how I understand it, too.
However, I think some more caveats are in order. In
James Morris wrote:
>On Wed, 18 Apr 2007, Crispin Cowan wrote:
>> How is it that you think a buffer overflow in httpd could allow an
>> attacker to break out of an AppArmor profile?
>
>Because you can change the behavior of the application and then bypass
>policy entirely by utilizing any
James Morris wrote:
>This is not what the discussion is about. It's about addressing the many
>points in the FAQ posted here which are likely to cause misunderstandings,
>and then subsequent responses of a similar nature.
Thank you. Then I misunderstood, and I owe you an apology. Thank you
James Morris wrote:
>On Tue, 17 Apr 2007, David Wagner wrote:
>> Maybe you'd like to confine the PHP interpreter to limit what it can do.
>> That might be a good application for something like AppArmor. You don't
>> need comprehensive information flow control for that kind of use, and it would likely
James Morris wrote:
>I would challenge the claim that AppArmor offers any magic bullet for
>ease of use.
There are, of course, no magic bullets for ease of use.
I would not make such a strong claim. I simply stated that it
is plausible that AppArmor might have some advantages in some
deployment
Karl MacMillan wrote:
>My private ssh keys need to be protected regardless
>of the file name - it is the "bag of bits" that make it important not
>the name.
I think you picked a bad example. That's a confidentiality policy.
AppArmor can't make any guarantees about confidentiality. Neither can
Karl MacMillan wrote:
>I don't think that the ease-of-use issue is clear cut. The hard part of
>understanding both SELinux policies and AppArmor profiles is
>understanding what access should be allowed. [...]
>Whether the access is allowed with the SELinux or
>AppArmor language seems like a small
Pavel Machek wrote:
> David Wagner wrote:
>> There was no way to follow fork securely.
>
>Actually there is now. I did something similar called subterfugue and
>we solved this one.
Yes, I saw that. I thought subterfugue was neat. The way that
subterfugue was a clever hack -- albeit too clever
Indan Zupancic wrote:
>On Thu, April 12, 2007 11:35, Satyam Sharma wrote:
>> 1. First, sorry, I don't think an RSA implementation not conforming to
>> PKCS #1 qualifies to be called RSA at all. That is definitely a *must*
>> -- why break strong crypto algorithms such as RSA by implementing them
>>
Pavel Machek wrote:
>You can do the same with ptrace. If that's not fast enough... improve
>ptrace?
I did my Master's thesis on a system called Janus that tried using ptrace
for this goal. The bottom line is that ptrace sucks for this purpose.
It is a kludge. It is not the right approach. I
Samium Gromoff wrote:
>[...] directly setuid root the lisp system executable itself [...]
Like I said, that sounds like a bad idea to me. Sounds like a recipe for
privilege escalation vulnerabilities. Was the lisp system executable
really implemented to be secure even when you make it setuid
Samium Gromoff wrote:
>the core of the problem are the cores which are customarily
>dumped by lisps during the environment generation (or modification) stage,
>and then mapped back, every time the environment is invoked.
>
>at the current step of evolution, those core files are not relocatable
Samium Gromoff wrote:
>This patch removes the dropping of ADDR_NO_RANDOMIZE upon execution of setuid
>binaries.
>
>Why? The answer consists of two parts:
>
>Firstly, there are valid applications which need an unadulterated memory map.
>Some of those which do their memory management, like lisp
Continuing the tangent:
Henrique de Moraes Holschuh wrote:
>On Mon, 27 Nov 2006, Ben Pfaff wrote:
>> [EMAIL PROTECTED] (David Wagner) writes:
>> > Well, if you want to talk about really high-value keys like the scenarios
>> > you mention, you probably shouldn't be using /dev/random, either; you should
Warning: tangent with little practical relevance follows:
Kyle Moffett wrote:
>Actually, our current /dev/random implementation is secure even if
>the cryptographic algorithms can be broken under traditional
>circumstances.
Maybe. But, I've never seen any careful analysis to support this
Phillip Susi wrote:
>David Wagner wrote:
>> Nope, I don't think so. If they could, that would be a security hole,
>> but /dev/{,u}random was designed to try to make this impossible, assuming
>> the cryptographic algorithms are secure.
>>
>> After all, some of the entropy sources come from untrusted
Phillip Susi wrote:
>Why are non root users allowed write access in the first place? Can't
>they pollute the entropy pool and thus actually REDUCE the amount of good
>entropy?
Nope, I don't think so. If they could, that would be a security hole,
but /dev/{,u}random was designed to try to make
David Madore wrote:
>I intend to add a couple of capabilities which are normally available
>to all user processes, including capability to exec(), [...]
Once you have a mechanism that lets you prevent the untrusted program
from exec-ing a setuid/setgid program (such as your bounding set idea),
I
David Madore wrote:
>This does not tell me, then, why CAP_SETPCAP was globally disabled by
>default, nor why passing of capabilities across execve() was entirely
>removed instead of being fixed.
I do not know of any good reason. Perhaps the few folks who knew enough
to fix it properly didn't
Theodore Ts'o wrote:
>For one, /dev/urandom and /dev/random don't use the same pool
>(anymore). They used to, a long time ago, but certainly as of the
>writing of the paper this was no longer true. This invalidates the
>entire last paragraph of Section 5.3.
Ok, you're right, this is a serious
Matt Mackall wrote:
>On Sat, Apr 16, 2005 at 01:08:47AM +0000, David Wagner wrote:
>> http://eprint.iacr.org/2005/029
>
>Unfortunately, this paper's analysis of /dev/random is so shallow that
>they don't even know what hash it's using. Almost all of section 5.3
>is wrong
Lorenzo Hernández García-Hierro wrote:
>On Mon, 18-04-2005 at 15:05 -0400, Dave Jones wrote:
>> This is utterly absurd. You can find out anything that's in /proc/cpuinfo
>> by calling cpuid instructions yourself.
>> Please enlighten me as to what security gains we achieve
>> by not allowing
Jean-Luc Cooke wrote:
>The part which suggests choosing an irreducible poly and a value "a" in the
>preprocessing stage ... last I checked the value for a and the poly need to
>be secret. How do you generate poly and a, Catch-22? Perhaps I'm missing
>something and someone can point it out.
I
linux wrote:
>Thank you for pointing out the paper; Appendix A is particularly
>interesting. And the [BST03] reference looks *really* nice! I haven't
>finished it yet, but based on what I've read so far, I'd like to
>*strongly* recommend that any would-be /dev/random hackers read it
>carefully.
linux wrote:
>3) Fortuna's design doesn't actually *work*. The authors' analysis
> only works in the case that the entropy seeds are independent, but
> forgot to state the assumption. Some people reviewing the design
> don't notice the omission.
Ok, now I understand your objection. Yup,
Hacksaw wrote:
>What I would expect the kernel to do is this:
>
>system_call_data_prep (userdata, size){ [...]
> for each page from userdata to userdata+size
> {
> if the page is swapped out, swap it in
> if the page is not owned by the user process, return -ENOWAYMAN
linux wrote:
>David Wagner wrote:
>>linux wrote:
>>> First, a reminder that the design goal of /dev/random proper is
>>> information-theoretic security. That is, it should be secure against
>>> an attacker with infinite computational power.
>
>> I am skeptical. I have never seen any convincing evidence
linux wrote:
>/dev/urandom depends on the strength of the crypto primitives.
>/dev/random does not. All it needs is a good uniform hash.
That's not at all clear. I'll go farther: I think it is unlikely
to be true.
If you want to think about cryptographic primitives being arbitrarily
broken, I
Theodore Ts'o wrote:
>With a properly set up set of init scripts, /dev/random is initialized
>with seed material for all but the initial boot [...]
I'm not so sure. Someone posted on this mailing list several months
ago examples of code in the kernel that looks like it could run before
those
Jean-Luc Cooke wrote:
>Info-theoretic randomness is a strong desire of some/many users, [..]
I don't know. Most of the time that I've seen users say they want
information-theoretic randomness, I've gotten the impression that those
users didn't really understand what information-theoretic
>First, a reminder that the design goal of /dev/random proper is
>information-theoretic security. That is, it should be secure against
>an attacker with infinite computational power.
I am skeptical.
I have never seen any convincing evidence for this claim,
and I suspect that there are cases in
Matt Mackall wrote:
>While it may have some good properties, it lacks
>some that random.c has, particularly robustness in the face of failure
>of crypto primitives.
It's probably not a big deal, because I'm not worried about the
failure of standard crypto primitives, but--
Do you know of any
Andrea Arcangeli wrote:
>On Sun, Jan 23, 2005 at 07:34:24AM +0000, David Wagner wrote:
>> [...Ostia...] The jailed process inherit an open file
>> descriptor to its jailor, and is only allowed to call read(), write(),
>> sendmsg(), and recvmsg(). [...]
>
>Why to call sendmsg/recvmsg when you can call
>The attack is to hardlink some tempfile name to some file you want
>over-written. This usually involves just a little bit of work, such as
>recognizing that a given root cronjob uses an unsafe predictable filename
>in /tmp (look at the Bugtraq or Full-Disclosure archives, there's plenty).
>Then
>For those systems that have everything on one big partition, you can often
>do stuff like:
>
>ln /etc/passwd /tmp/filename_generated_by_mktemp
>
>and wait for /etc/passwd to get clobbered by a cron job run by root...
How would /etc/passwd get clobbered? Are you thinking that a tmp
cleaner run by cron might delete
Chris Wright wrote:
>* David Wagner ([EMAIL PROTECTED]) wrote:
>> There is a simple tweak to ptrace which fixes that: one could add an
>> API to specify a set of syscalls that ptrace should not trap on. To get
>> seccomp-like semantics, the user program could specify {read,write}, but if the user
Chris Wright wrote:
>Only difference is in number of context switches, and number of running
>processes (and perhaps ease of determining policy for which syscalls
>are allowed). Although it's not really seccomp, it's just restricted
>syscalls...
There is a simple tweak to ptrace which fixes that:
>More interestingly, it changes the operation of SAK in two ways:
>(a) It does less, namely will not kill processes with uid 0.
I think this is bad for security.
(I assume you meant euid 0, not ruid 0. Using the real uid
for access control decisions is a very odd thing to do.)
Jesse Pollard wrote:
>2. Any penetration is limited to what the user can access.
Sure, but in practice, this is not a limit at all.
Once a malicious party gains access to any account on your
system (root or non-root), you might as well give up, on all
but the most painstakingly careful
Mohammad A. Haque wrote:
>Why do this in the kernel when it's available in userspace?
Because the userspace implementations aren't equivalent.
In particular, it is not so easy for them to enforce the following
restriction:
(*) If a non-root user requested the chroot, then setuid/setgid