http://blogs.washingtonpost.com/securityfix/2006/01/a_timeline_of_m.html

Brian Krebs on Computer Security
A Time to Patch

A few months back while researching a Microsoft patch from way back in 2003,
I began to wonder whether anyone had ever conducted a longitudinal study of
Redmond's patch process to see whether the company was indeed getting more
nimble at fixing security problems.

For many years, Microsoft has been criticized for taking too long to issue
patches, especially when compared with patch releases for flaws found in
operating systems or software applications maintained by the open source
community, such as Linux or Mozilla's Firefox browser. But I wanted to find
out for myself just how long Microsoft takes on average to issue fixes for
known software flaws.

Finding no such comprehensive research, Security Fix set about digging
through the publicly available data for each patch that Microsoft issued
over the past three years that earned a "critical" rating. Microsoft
considers a patch "critical" if it fixes a security hole that attackers
could use to break into and take control over vulnerable Windows computers.

For each patch, Security Fix looked at the date Microsoft Corp. was notified
about a problem and then how long it took the company to issue a fix for
said problem. In most cases, information about who discovered the
vulnerability and when they reported it to Microsoft or disclosed it in
public was readily available through citations by Mitre, which maintains
much of that data in the Common Vulnerabilities and Exposures (CVE) list.

In some cases, however, that submission or disclosure date was not publicly
available, and required Security Fix to contact the individual discoverer
and get the dates directly from them. In about a dozen cases, the discoverer
of a vulnerability did not respond to information requests or the flaw
appeared to have been found internally at Redmond, and in those instances
Microsoft filled in the blanks.

Here's what we found: Over the past three years, Microsoft has actually
taken longer to issue critical fixes when researchers waited to disclose
their research until after the company issued a patch. In 2003, Microsoft
took an average of three months to issue patches for problems reported to it
privately. In 2004, that average shot up to 134.5 days, a figure that
remained virtually unchanged in 2005.
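The arithmetic behind these averages is straightforward: for each bulletin,
subtract the date Microsoft was notified from the date the patch shipped,
then average the day counts by the year of release. A minimal Python sketch
of that calculation, using two bulletins whose durations appear later in this
post (the report dates shown are back-calculated assumptions, not the actual
dataset):

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Illustrative rows only: (bulletin, date reported, date patched).
# MS03-026 (~38 days) and MS04-007 (~200 days) match figures cited in
# this post; the report dates are back-calculated assumptions.
patches = [
    ("MS03-026", date(2003, 6, 8), date(2003, 7, 16)),
    ("MS04-007", date(2003, 7, 25), date(2004, 2, 10)),
]

# Group the day counts by the year the patch was released.
days_by_year = defaultdict(list)
for bulletin, reported, patched in patches:
    days_by_year[patched.year].append((patched - reported).days)

# Print the average time-to-patch for each year.
for year in sorted(days_by_year):
    print(year, mean(days_by_year[year]))
```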

Below are three spreadsheets detailing our findings for the past three
years. The documents are downloadable either as Microsoft Excel files or
regular HTML files:

Download 2005patchlist.xls
Download 2005patchlist.htm

Download 2004patchlist.xls
Download 2004patchlist.htm

Download 2003patchlist.xls
Download 2003patchlist.htm

In the first column of each spreadsheet, you should see a hyperlinked MS
number that will take you to the Microsoft advisory for that patch. Next to
that column is a link to the CVE entry, which contains quite a bit more
information about how each flaw was discovered and by whom.

The data show that one area where Microsoft appears to be fixing problems
more quickly is when the company learns of security holes in its products at
the same time as everyone else. Advocates of this controversial "full
disclosure" approach believe companies tend to fix security flaws more
quickly when their dirty laundry is aired for all the world to see, and at
least on the surface that appears to be the case with Microsoft.

It is important to note, however, that in nearly all full-disclosure cases
cited here, news of the vulnerability was also issued alongside computer
code demonstrating how attackers might exploit the flaw.

In cases where Microsoft learned of a flaw in its products through full
disclosure, the company has indeed gotten speedier. In 2003, it took an
average of 71 days to release a fix for one of these flaws. In 2004 that
time frame decreased to 55 days, and in 2005 shrank further to 46 days.

The company also seems to have done a better job convincing security
researchers to give it time to develop a patch before going public with
their vulnerability findings. In 2003, Microsoft learned of at least eight
critical Windows vulnerabilities through full disclosure. Last year, this
happened half as many times.

I spoke at length about this project with Stephen Toulouse, a security
program manager at Microsoft. (Toulouse's team also verified the data in the
Excel spreadsheets that accompany this post). Toulouse said that if
Microsoft is taking longer to release patches for known vulnerabilities, it
is because the company has placed a renewed focus on ensuring that each
patch comprehensively fixes the problem throughout the Windows operating
system and that each fix does not introduce new glitches in the process.

Toulouse said developing a patch to mend a security hole is usually the
easiest part. Things get more problematic, he said, during the testing
process. If testers find a bug, the patch developers incorporate the fix
into all relevant portions of the patch and the testing process is reset,
forcing the testers to start from scratch. What's more, Microsoft has lately
been more willing to give the discoverers of each vulnerability the
opportunity to vet the patches before they are released to the public.

Toulouse pointed to one particularly problematic patch that took the company
200 days to fix: a vulnerability in a component of Windows (and many other
networking applications) known as ASN.1, at the time considered the largest
vulnerability in the history of the Windows operating system. In the course
of testing the patch for that flaw -- reported by security researchers at
Aliso Viejo, Calif.-based eEye Digital Security -- Microsoft was forced to
reset the process at least twice as internal developers found additional
problems that were being masked by previously unknown glitches in the fix.

"We learned that it's far better for us to find those issues than for
customers to run into them," Toulouse said.

Some of those lessons Microsoft learned when it tried to fix a critical flaw
in Windows that was later exploited by the infamous Blaster worm. Microsoft
turned around a patch for that vulnerability, reported by researchers in the
hacker group The Last Stage of Delirium, in just 38 days: the company
recognized that the initial fix might not fully eradicate the flaw, but there
was a great deal of concern within Microsoft "about the breadth and depth of
the vulnerability."

Two days after Microsoft released the patch, researchers alerted Microsoft
that the flaw was present in three other areas of the operating system that
the initial fix did not address. Roughly two weeks after that, the Blaster
worm would infect millions of Windows PCs worldwide. Some security experts
believe the worm may have been developed with the help of the initial
Microsoft patch, which could have given the worm's authors a better idea of
how to exploit the flaw.

"It was a conscious decision at the time to release that patch so quickly,
but we later looked back and decided we really should have conducted a more
thorough review process," Toulouse said.

According to Toulouse, Blaster resulted in two key changes at Microsoft. For
starters, the company instituted a more thorough patch-review process across
all company product teams that had a hand in developing the original
vulnerable code. Microsoft also "retasked" its Secure Windows Initiative
Team to research and attack each vulnerability the way a malicious hacker
might.

"That team's job is to take the vulnerability, turn it sideways and upside
down and to think 'Is there any other way to exploit this?'" Toulouse said.

I shared some of this data with a few of the security researchers and
organizations credited with discovering flaws in the above-mentioned
advisories, and got mixed responses to Microsoft's claims.

Pete Allor, manager of the X-Force vulnerability research division at
Atlanta-based Internet Security Systems, praised Microsoft for "doing a
fantastic job over the past year and a half on the [quality assurance] side
of patching. We're not seeing the recalls and reissues that we used to. What
we're hearing in today's corporate environment is, 'Make sure you get it
right the first time. We don't want to hear how a patch is broken because
you didn't take the time.'"

Not everyone sees Microsoft's recent patch efforts in such glowing light.
Marc Maiffret, "chief hacking officer" for the aforementioned eEye, noted
that the longer a patch is in the works, the longer customers remain
unprotected. Maiffret says it is not uncommon for exploits to circulate in
the malicious hacker underground for vulnerabilities that well-meaning
security researchers have already reported but that remain unpatched.

"You'd think that by taking that much longer on patches Microsoft is being
more thorough, but that's not always the case as we've seen," Maiffret said.
"The truth is that unpatched Windows flaws have a value to the underground
community, and it is not at all uncommon to see these things sold or traded
among certain groups who use them by quietly attacking just a few key
targets. So, the longer Microsoft takes to patch vulnerabilities the longer
they are leaving customers exposed."

Last Thursday, Microsoft released a patch to fix a critical flaw in the way
Windows renders certain image files. That update, which mended a 0day ("zero
day") vulnerability for which an exploit had been publicly disclosed and was
soon in use by attackers, took Microsoft just 10 days to produce, though the
company was able to take some pointers from an unofficial patch released by
an independent security researcher. Because the patch was issued
in 2006, however, Security Fix did not include those 10 days in the 2005
time-to-patch averages.

I mention the WMF patch because earlier this week security researchers
posted to the public Bugtraq software vulnerability list some exploit code
for at least two more security flaws in the same WMF engine Microsoft
patched last week. While those flaws (at least for now) are considered less
dangerous than the problem Redmond fixed, the disclosure does raise questions
about the Microsoft team charged with finding these problems. The
vulnerabilities have apparently been present in the Windows operating system
code dating back to Windows 3.0. Toulouse maintains that Microsoft had
already flagged those glitches prior to the exploit code posting on Bugtraq,
but because the company didn't see them as a big security threat, it did not
hold up the WMF patch to include fixes for them.

One final note: Security Fix did not attempt to determine whether there was
a correlation between the speed with which Microsoft issues patches and the
quality or effectiveness of those updates. A real glutton for punishment
might be able to learn just how many Windows patches were later updated with
subsequent fixes -- either because the initial patch failed to fully fix the
problem or introduced new troubles. I purposely did not undertake that task,
in part because I figured I'd still be working on the project this time next
year if I did.

I'd like to thank everyone who helped me assemble the data in the above
spreadsheets -- including (but certainly not limited to): Cesar Cerrudo, "Fozzy,"
Joao Gouveia, Maolin Gu, Kostya Kortchinsky, Marc Maiffret, Brett Moore, and
Peter Winter-Smith. Please forgive me if I have forgotten to name anyone,
and if I did just send me an e-mail and I'll update this post.





