Re: openbsd and badusb

2014-08-02 Thread Dmitry Orlov

OpenBSD deny such devices.
Don't worry :) And, infection does not penetrate NON-Windows systems.
Wait blackhat and read reports.

On 02.08.2014 03:27, patrick keshishian wrote:

On 8/1/14, Gustav Fransson Nyvell gus...@nyvell.se wrote:

On 08/01/14 23:01, Ted Unangst wrote:

You may have heard about the badusb talk coming at blackhat. In
theory, we should wait to watch the talk and see what it's actually
about, but since some people can't wait that long, here's a few
thoughts. (I'm a little surprised nobody has asked here already. I have
some time free, thought I'd beat the rush. :))

The claims on the main page, https://srlabs.de/badusb/, are fairly
reasonable if a little vague. Other claims I'm reading elsewhere
appear a little overhyped. In order of increasing danger...

0. The final claim is that once infected, you'll always be infected
because disinfection is nigh impossible. Meh. The same could be said
of the firefox exploit of the week. It too can reprogram your bios or
persist itself in any number of ways.

1. They're exploiting all manner of Windows specific autorun
functionality to install or configure drivers. By default, OpenBSD
will do just about nothing when a USB device is plugged in, so this is
not a serious concern.

2. They have created a rogue keyboard device which will type naughty
commands. In theory, the same keyboard could type rm -rf ~ into an
xterm. This is a tiny bit more challenging since it probably depends
on your desktop environment and window manager, but presumably your
attacker will know all that. So yeah, vulnerable. But at the same
time, I could also train a monkey to type that command and strap it to
your normal not backdoored keyboard. Beware the badmonkey attack, too.

3. A storage device may lie about the contents of a file. Sometimes it
will say one thing (so it looks safe on this computer), sometimes it
will say another (so it installs a backdoor on that computer). Don't
install OpenBSD from media you don't control. Technically, signing the
sets won't help since the backdoor installer might have a bogus key on
it (or a bogus installer that doesn't check). You can always pxeboot
and hope that the firmware in your ethernet card is more trustworthy.

They don't appear to mention two other avenues of exploitation,
which may be more practical. I refer specifically to OpenBSD,
though there's no such limitation. First, the USB stack has a number
of known races and other bugs, especially around attach/detach and
error handling. If a rogue device attached and detached itself several
times, it could likely corrupt the kernel heap. Game over.

Second, any USB disk, even one with a normal firmware, can have an evil
filesystem image on it. By this, I mean the metadata that the kernel
reads is corrupt, not that it has naughty files. There have been crash
reports when mounting corrupted (and even not corrupted) (e.g.)
MSDOSFS disks. The kernel is a little too trusting that a filesystem
is what it claims to be. There are probably exploitable vulns here,
too.

All that to basically say don't panic (that's the kernel's job).
Fixing filesystem bugs is something we'll do, of course, but it's not
a priority for me to sit down and start fuzzing Right Now. Same for
miscellaneous bugs in the USB stack.


This wouldn't be such a big problem if hardware manufacturers weren't so
mysterious about their firmware and ways to install such firmware. I
mean from the owner/operator/maintenance perspective. Maybe the EU
should force them to help us...

oh the irony...




Re: usb audio interfaces

2014-08-02 Thread Erwin Geerdink
Thanks, the Presonus and Alesis interfaces are class compliant devices
according to their user manuals. So I ordered the Alesis IO|2;
the Presonus appears to lack line-level inputs.

The interface will be connected to an older notebook with USB1.1
hubs, which should be fine. Will try it with some USB2.0 hubs as well. 

Best,
-- 
Erwin



On Thu, 31 Jul 2014 12:55:47 +0200
Alexandre Ratchov a...@caoua.org wrote:

 On Thu, Jul 31, 2014 at 12:48:35AM +0200, Erwin Geerdink wrote:
  Hi,
  
  I'm considering the following usb interfaces for my audio setup:
  
  E-MU 0204 usb
  E-MU Tracker Pre
  Presonus Audiobox usb
  Alesis IO|2 Express
  
  Recording will be done on a Windows machine, however it would be
  nice if I can use it for audio playback from an OpenBSD machine as
  well. I found the envy(4) and emu(4) man pages but I'm still not
  sure whether playback would work with any of these devices.
  
  Anyone experiences or suggestions?
  
 
 Hi,
 
 These devices are handled by the uaudio driver, assuming they are
 USB class compliant (driverless ones are likely to be).
 
 Unfortunately, on OpenBSD, USB1.1 devices using isochronous
 transfers don't work behind USB 2.0 hubs yet. In other words USB1.1
 audio cards are unlikely work on modern machines. I'd suggest you
 to test the cards if possible (just plug it try to play a simple
 .wav file).
 
 Another option, would be to get a old USB1.1 adapter and attach the
 USB1.1 card on it.



nginx/slowcgi: cgi program keep running

2014-08-02 Thread Sébastien Marie
Hi,

I do some tests with a cgi program that go in infinite loop.

In order to test it with nginx (-current GENERIC.MP#304), I started
slowcgi to run the cgi program (and configured ngnix).

The configuration is ok: the cgi program reply as expected in normal
cases.

But when it go in infinite loop: the connection timeout after 1
minute. ngnix report 504 Gateway Time-out and the connection is
closed.

But, the cgi program is still running (in infinite loop at max CPU).

Is it an expected behaviour or not ? It seems odd to me.
Does slowcgi implements some timeout ?
-- 
Sébastien Marie



Re: pfctl: DIOCADDQUEUE: No such process

2014-08-02 Thread Henning Brauer
* Loïc Blot loic.b...@unix-experience.fr [2014-07-23 17:12]:
 pfctl: DIOCADDQUEUE: No such process

that most likely means you're trying to create a queue on a nonexistant
inmterface. 

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual  Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Re: carp setup firewall

2014-08-02 Thread Henning Brauer
* Kim Zeitler kim.zeit...@konzept-is.de [2014-07-25 11:19]:
 we have a similar setup here, with only a /29 range of external addresses.
 Until now, we have had no problems so far running this using only one
 external carp IF (using a private IP) and adding all external addresses
 as aliases. But we do not use bi-nat for our DMZ Servers.

there really is nothing wrong with aliases on carp interfaces.

you ahve to keep them in sync of course. just like the vhid and the
passphrase...

-- 
Henning Brauer, h...@bsws.de, henn...@openbsd.org
BS Web Services GmbH, http://bsws.de, Full-Service ISP
Secure Hosting, Mail and DNS. Virtual  Dedicated Servers, Root to Fully Managed
Henning Brauer Consulting, http://henningbrauer.com/



Package installation

2014-08-02 Thread Gustav Fransson Nyvell

Hi, there,

I wanted to run something by you, mkay. About package management. I 
wonder if this has been shouted at already. I remember from SunOS that 
packages are installed in a different manner than let's say Red Hat and 
of course OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, 
example /pkgs/gimp/1.0. GoboLinux does this. I think this has some 
advantages over installing /usr/local/bin/gimp1.1 and 
/usr/local/bin/gimp2.0. What do you think? What have you said?


Ready to be shouted at;

//Gustav

--
This e-mail is confidential and may not be shared with anyone other than 
recipient(s) without written permission from sender.



Re: nginx/slowcgi: cgi program keep running

2014-08-02 Thread Florian Obser
On Sat, Aug 02, 2014 at 12:11:18PM +0200, S?bastien Marie wrote:
 Hi,
 
 I do some tests with a cgi program that go in infinite loop.
 
 In order to test it with nginx (-current GENERIC.MP#304), I started
 slowcgi to run the cgi program (and configured ngnix).
 
 The configuration is ok: the cgi program reply as expected in normal
 cases.
 
 But when it go in infinite loop: the connection timeout after 1
 minute. ngnix report 504 Gateway Time-out and the connection is
 closed.
 
 But, the cgi program is still running (in infinite loop at max CPU).
 
 Is it an expected behaviour or not ? It seems odd to me.
 Does slowcgi implements some timeout ?

Yes, 120 seconds. After that it considers the cgi script to be dead
and cleans up the connection to nginx.

However it does not kill the cgi script. I talked to some people when
I implemented that and we decided it would be best to not kill the cgi
script as it probably cannot handle that. I.e. leaving half written
files behind or some other crap. So it comes down to fixing the cgi
script to not get into an infinit loop ;)

 -- 
 S?bastien Marie
 

-- 
I'm not entirely sure you are real.



Re: Package installation

2014-08-02 Thread Marc Espie
On Sat, Aug 02, 2014 at 12:26:06PM +0200, Gustav Fransson Nyvell wrote:
 Hi, there,
 
 I wanted to run something by you, mkay. About package management. I wonder
 if this has been shouted at already. I remember from SunOS that packages are
 installed in a different manner than let's say Red Hat and of course
 OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, example
 /pkgs/gimp/1.0. GoboLinux does this. I think this has some advantages over
 installing /usr/local/bin/gimp1.1 and /usr/local/bin/gimp2.0. What do you
 think? What have you said?
 
 Ready to be shouted at;

This puts more strain on the file system actually, which is probably
the main reason we don't do it. Also, there is generally a lot of churning
to do to make the package self-contained.

As far as policy goes, having stuff set up like that looks more flexible, but
it is a fallacy. Instead of having the distribution solve issues concerning
incompatible versions and updates, the toll falls instead on the individual
sysadmin, to make sure things they have work together. It can lead to
security nightmares, because it's so simple to have the newer version 
alongside the old version that sticky points of updating take much longer
to resolve.

It's a bit like having mitigation measures that you can turn on and off...
if it's possible to turn these off, there's not enough incentive to actually
fix issues.

Likewise for packages. By making it somewhat LESS convenient to install
several versions of the same piece of software, we make it more important
to do timely updates.

Also, we don't have the manpower to properly manage lots of distinct versions 
of the same software. So  this kind of setup would be detrimental to
actually testing stuff.



Re: Package installation

2014-08-02 Thread Gustav Fransson Nyvell

On 08/02/14 12:54, Marc Espie wrote:

On Sat, Aug 02, 2014 at 12:26:06PM +0200, Gustav Fransson Nyvell wrote:

Hi, there,

I wanted to run something by you, mkay. About package management. I wonder
if this has been shouted at already. I remember from SunOS that packages are
installed in a different manner than let's say Red Hat and of course
OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, example
/pkgs/gimp/1.0. GoboLinux does this. I think this has some advantages over
installing /usr/local/bin/gimp1.1 and /usr/local/bin/gimp2.0. What do you
think? What have you said?

Ready to be shouted at;

This puts more strain on the file system actually, which is probably
the main reason we don't do it. Also, there is generally a lot of churning
to do to make the package self-contained.

As far as policy goes, having stuff set up like that looks more flexible, but
it is a fallacy. Instead of having the distribution solve issues concerning
incompatible versions and updates, the toll falls instead on the individual
sysadmin, to make sure things they have work together. It can lead to
security nightmares, because it's so simple to have the newer version
alongside the old version that sticky points of updating take much longer
to resolve.

It's a bit like having mitigation measures that you can turn on and off...
if it's possible to turn these off, there's not enough incentive to actually
fix issues.

Likewise for packages. By making it somewhat LESS convenient to install
several versions of the same piece of software, we make it more important
to do timely updates.

Also, we don't have the manpower to properly manage lots of distinct versions
of the same software. So  this kind of setup would be detrimental to
actually testing stuff.
I guess there could be both. But I think that if there's a security 
issue with one version of a software then there quite possibly are 
multiple ways of limiting the impact of that issue. Disallowing multiple 
versions to force people to upgrade is not really a good reason, from 
how I see it. Old software will always have more holes, because they're 
older and more well observed, but they have qualities, too, like speed. 
GIMP-1.0 is amazing on Lenovo X41 from 2005, but probably has bugs. Of 
course none of these systems will stop someone who wants to run version 
x of a software. Maybe something entirely different is needed? Okay, 
maybe I should complain about the status quo... thing is when packages 
install in /var, /usr, /etc and /opt they're so spread out it's hard to 
know what is what. This might be because I'm new but/and scripts can 
find orphan files in this structures, but you need the scripts for that. 
Having everything in /pkgs/PKG/VER would not cause this splatter. 
Programs without dependees (i.e. non-libs, non-utilprograms) could fit 
in this structure without any extra filesystem magic. Well, the grass is 
always greener.


--
This e-mail is confidential and may not be shared with anyone other than 
recipient(s) without written permission from sender.



Re: nginx/slowcgi: cgi program keep running

2014-08-02 Thread Sébastien Marie
On Sat, Aug 02, 2014 at 10:37:41AM +, Florian Obser wrote:
 On Sat, Aug 02, 2014 at 12:11:18PM +0200, S?bastien Marie wrote:
  
  But, the cgi program is still running (in infinite loop at max CPU).
  
  Is it an expected behaviour or not ? It seems odd to me.
  Does slowcgi implements some timeout ?
 
 Yes, 120 seconds. After that it considers the cgi script to be dead
 and cleans up the connection to nginx.
 
 However it does not kill the cgi script. I talked to some people when
 I implemented that and we decided it would be best to not kill the cgi
 script as it probably cannot handle that. I.e. leaving half written
 files behind or some other crap. So it comes down to fixing the cgi
 script to not get into an infinit loop ;)
 
ok, it make sens.

Thanks a lot for your answer.
-- 
Sébastien Marie



Re: Package installation

2014-08-02 Thread Gustav Fransson Nyvell

On 08/02/14 13:13, Gustav Fransson Nyvell wrote:

On 08/02/14 12:54, Marc Espie wrote:

On Sat, Aug 02, 2014 at 12:26:06PM +0200, Gustav Fransson Nyvell wrote:

Hi, there,

I wanted to run something by you, mkay. About package management. I 
wonder
if this has been shouted at already. I remember from SunOS that 
packages are

installed in a different manner than let's say Red Hat and of course
OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, example
/pkgs/gimp/1.0. GoboLinux does this. I think this has some 
advantages over
installing /usr/local/bin/gimp1.1 and /usr/local/bin/gimp2.0. What 
do you

think? What have you said?

Ready to be shouted at;

This puts more strain on the file system actually, which is probably
the main reason we don't do it. Also, there is generally a lot of 
churning

to do to make the package self-contained.

As far as policy goes, having stuff set up like that looks more 
flexible, but
it is a fallacy. Instead of having the distribution solve issues 
concerning
incompatible versions and updates, the toll falls instead on the 
individual

sysadmin, to make sure things they have work together. It can lead to
security nightmares, because it's so simple to have the newer version
alongside the old version that sticky points of updating take much 
longer

to resolve.

It's a bit like having mitigation measures that you can turn on and 
off...
if it's possible to turn these off, there's not enough incentive to 
actually

fix issues.

Likewise for packages. By making it somewhat LESS convenient to install
several versions of the same piece of software, we make it more 
important

to do timely updates.

Also, we don't have the manpower to properly manage lots of distinct 
versions

of the same software. So  this kind of setup would be detrimental to
actually testing stuff.
I guess there could be both. But I think that if there's a security 
issue with one version of a software then there quite possibly are 
multiple ways of limiting the impact of that issue. Disallowing 
multiple versions to force people to upgrade is not really a good 
reason, from how I see it. Old software will always have more holes, 
because they're older and more well observed, but they have qualities, 
too, like speed. GIMP-1.0 is amazing on Lenovo X41 from 2005, but 
probably has bugs. Of course none of these systems will stop someone 
who wants to run version x of a software. Maybe something entirely 
different is needed? Okay, maybe I should complain about the status 
quo... thing is when packages install in /var, /usr, /etc and /opt 
they're so spread out it's hard to know what is what. This might be 
because I'm new but/and scripts can find orphan files in this 
structures, but you need the scripts for that. Having everything in 
/pkgs/PKG/VER would not cause this splatter. Programs without 
dependees (i.e. non-libs, non-utilprograms) could fit in this 
structure without any extra filesystem magic. Well, the grass is 
always greener.


BTW, you create multiple versions by your mere existence. There are lots 
of old versions laying around, but they can't be installed together 
right now.


--
This e-mail is confidential and may not be shared with anyone other than 
recipient(s) without written permission from sender.



login.conf default openfiles

2014-08-02 Thread Ed Hynan

Saturday morning, saw this in /var/log/messages:

Aug  2 08:29:12 lucy su: default: setting resource limit openfiles: Invalid 
argument

That's from /etc/weekly, which uses 'su -m nobody' for locate db update
on line 52. The log message can be produced by hand with, e.g.:

# echo /bin/echo FOO | SHELL=/bin/sh nice -5 su -m nobody

invoked by root.

Checking userinfo nobody shows no login class, so presumably default:
applies.

I installed the original login.conf from etc55.tgz. Same message;
anyway, I hadn't edited default:.

The default: entry has openfiles-cur, but not -max. According to
login.conf(5) resource limit entries without -{cur,max} will specify
both, but using -{cur,max} specifies that limit individually. So,
using only foo-cur leaves foo-max unspecified.

Adding openfiles-max and checking again, no message is logged.

BTW, I jumped from 4.9 to 5.5 so the 4.9 login.conf is the most
recent I have handy. The 4.9 login.conf likewise has only
openfiles-cur in default:, but I don't think I've seen that log
message before. Some verbosity recently added?

-Ed

--

The rights you have are the rights given you by this Committee [the
House Un-American Activities Committee].  We will determine what rights
you have and what rights you have not got.

-- J. Parnell Thomas



Re: Package installation

2014-08-02 Thread Marc Espie
On Sat, Aug 02, 2014 at 01:13:56PM +0200, Gustav Fransson Nyvell wrote:
 entirely different is needed? Okay, maybe I should complain about the status
 quo... thing is when packages install in /var, /usr, /etc and /opt they're
 so spread out it's hard to know what is what. This might be because I'm new
 but/and scripts can find orphan files in this structures, but you need the
 scripts for that. Having everything in /pkgs/PKG/VER would not cause this
 splatter. Programs without dependees (i.e. non-libs, non-utilprograms) could
 fit in this structure without any extra filesystem magic. Well, the grass is
 always greener.

That's right, you're new.

We have all the tools to deal with that.

Want to know where a file comes from ? install pkglocatedb. The base
system also has locate databases now (new for 5.6)

Want to check your file system for weird shit ? pkg_check does that now.
There are some rough edges, but I have everything to know exactly what
belongs where.

Oh, and inter-dependencies are trivially recorded.

Simpler is better.

You're free to leave if you don't like it :)



Re: Package installation

2014-08-02 Thread Nick Holland
On 08/02/14 06:25, Gustav Fransson Nyvell wrote:
 Hi, there,
 
 I wanted to run something by you, mkay. About package management. I 
 wonder if this has been shouted at already. I remember from SunOS that 
 packages are installed in a different manner than let's say Red Hat and 
 of course OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, 
 example /pkgs/gimp/1.0. GoboLinux does this. I think this has some 
 advantages over installing /usr/local/bin/gimp1.1 and 
 /usr/local/bin/gimp2.0. What do you think? What have you said?
 
 Ready to be shouted at;

no.  Plain and simple.

You have to understand a few things about Solaris and OpenBSD.

Solaris takes great pride in binary compatibility between versions --
Solaris 6 binaries can run on Solaris 11, etc.

OpenBSD takes great pride in its unified system approach -- your third
party apps will be compiled for the version you are running, and you are
expected to upgrade all together regularly.

Now, a lot of people will look at that and think, Solaris is better!
It's easier to maintain!.

Having worked for companies with decade-old applications on Solaris (or
a long-term support version of RedHat) because no one really understands
them anymore, I'd argue the real life impact of the Solaris model is
sloppy administration, because it permits it.  And ten years later, you
have a mission critical app no one understands, no one is willing to
touch...and it's got more security holes than a fresh Windows 2000
install.

It is much better to be looking at dmesg with a date one year old saying
to you, you need to upgrade me, you bum! than a ten year old system
that is one crash away from shutting you down.

Nick.



Re: two wireless networks on one interface?

2014-08-02 Thread sven falempin
On Fri, Aug 1, 2014 at 3:52 PM, Daniel Melameth dan...@melameth.com wrote:
 On Fri, Aug 1, 2014 at 12:50 PM, Tobias Stoeckmann
 tob...@stoeckmann.org wrote:
 Is it (technically) possible to join two wireless networks with just
 one chip?  My system has an athn0 interface, would be nice if I can
 join two networks with that.

 I don't believe this is possible with OpenBSD.


no it is not, it requires an update inside the kernel, net and free
does it and the code if not that far from those version,
not enough time for this here.

-- 
-
() ascii ribbon campaign - against html e-mail
/\



Re: openbsd and badusb

2014-08-02 Thread Theo de Raadt
 #badbios redux?
 I seem to recall it was suspected that badbios started
 with an infected USB stick.

I recall differently: badbios required a yellow reporter.



Re: Package installation

2014-08-02 Thread Charles Musser
The need for multiple versions of an application on one machine
doesn't manifest that often. Asking the system to tie itself into
knots for this purpose is likely to result in bloat, convolution and
less reliability.

Some contexts support and indeed encourage the notion of many
versions. For instance, the Ruby Version Manager (RVM) allows
different versions of the Ruby interpreter and its attendant libraries
to be in use at a given time. It seems to work perfectly well, but one
has to wonder if this is really a good thing. Do you really want the
mental overload that results from having to deal with multiple
versions of a language, library, API, user tool, or whatever?

The original poster might want to consider whether this kind of thing
is necessary or desirable. It sounds symptomatic of half-baked ideas
about what needs to be accomplished and how to accomplish it. Also
worth considering is OpenBSD's stance on how to maintain a system. You
are encouraged to refresh the system at six month intervals and, in so
doing, become familiar with the nature of the software you're
running. Chances are, the version they've packaged works well enough,
probably better than older incarnations.

Incidentally, you can learn what files comprise a package with:

pkg_info -L package-name

You can learn about the package related commands by typing:

apropos pkg_

And then reading the listed manpages. As always with OpenBSD, these
documents are of high quality.

Chuck

On Aug 2, 2014, at 4:17 AM, Gustav Fransson Nyvell gus...@nyvell.se wrote:

 On 08/02/14 13:13, Gustav Fransson Nyvell wrote:
 On 08/02/14 12:54, Marc Espie wrote:
 On Sat, Aug 02, 2014 at 12:26:06PM +0200, Gustav Fransson Nyvell wrote:
 Hi, there,
 
 I wanted to run something by you, mkay. About package management. I wonder
 if this has been shouted at already. I remember from SunOS that packages 
 are
 installed in a different manner than let's say Red Hat and of course
 OpenBSD. They install it in the form /pkgs/PROGRAM/VERSION, example
 /pkgs/gimp/1.0. GoboLinux does this. I think this has some advantages over
 installing /usr/local/bin/gimp1.1 and /usr/local/bin/gimp2.0. What do you
 think? What have you said?
 
 Ready to be shouted at;
 This puts more strain on the file system actually, which is probably
 the main reason we don't do it. Also, there is generally a lot of churning
 to do to make the package self-contained.
 
 As far as policy goes, having stuff set up like that looks more flexible, 
 but
 it is a fallacy. Instead of having the distribution solve issues concerning
 incompatible versions and updates, the toll falls instead on the individual
 sysadmin, to make sure things they have work together. It can lead to
 security nightmares, because it's so simple to have the newer version
 alongside the old version that sticky points of updating take much longer
 to resolve.
 
 It's a bit like having mitigation measures that you can turn on and off...
 if it's possible to turn these off, there's not enough incentive to actually
 fix issues.
 
 Likewise for packages. By making it somewhat LESS convenient to install
 several versions of the same piece of software, we make it more important
 to do timely updates.
 
 Also, we don't have the manpower to properly manage lots of distinct 
 versions
 of the same software. So  this kind of setup would be detrimental to
 actually testing stuff.
 I guess there could be both. But I think that if there's a security issue 
 with one version of a software then there quite possibly are multiple ways 
 of limiting the impact of that issue. Disallowing multiple versions to force 
 people to upgrade is not really a good reason, from how I see it. Old 
 software will always have more holes, because they're older and more well 
 observed, but they have qualities, too, like speed. GIMP-1.0 is amazing on 
 Lenovo X41 from 2005, but probably has bugs. Of course none of these systems 
 will stop someone who wants to run version x of a software. Maybe something 
 entirely different is needed? Okay, maybe I should complain about the status 
 quo... thing is when packages install in /var, /usr, /etc and /opt they're 
 so spread out it's hard to know what is what. This might be because I'm new 
 but/and scripts can find orphan files in this structures, but you need the 
 scripts for that. Having everything in /pkgs/PKG/VER would not cause this 
 splatter. P!
 rograms without dependees (i.e. non-libs, non-utilprograms) could fit in this 
structure without any extra filesystem magic. Well, the grass is always greener.
 
 BTW, you create multiple versions by your mere existence. There are lots of 
 old versions laying around, but they can't be installed together right now.
 
 -- 
 This e-mail is confidential and may not be shared with anyone other than 
 recipient(s) without written permission from sender.