Steve and Kenny,

There is a balance.  Not all of Pier's issues may apply, but many of the
important ones do.

Frankly, I don't want to run anything as root that can avoid it.  That is
just good practice.

Consider Vincenzo's anti-spam matcher.  Would you want that to run as root?
I am not convinced that an exploit cannot be written that would cause the
commercial, native-code virus scanner to suffer a buffer overflow and
execute code embedded in a carefully crafted message.  Can you prove
otherwise?  How big a bet would you be willing to place?  Because that is
the core of decision theory: what are you willing to risk on your choice?

> There is no need to distribute James' components over multiple JVMs to
> enhance security or robustness.

I disagree as a technical matter.  As a practical matter, it depends upon
one's configuration and comfort level.  Clearly most users experience levels
of both security and robustness with James that meet their requirements.

> The paranoia comes from what's called a buffer overflow
> exploit.

> Java however doesn't suffer from this kind of attack

As a matter of fact, there was one involving the native compression code.

I agree that Java is safer than C/C++, but that does not diminish the point.
I can give you plenty of good reasons why you want the ability to run the
mailet pipeline at privileges other than root:

 - the chance of a JVM exploit.
 - potential exploits via native code in
   a JDBC driver.
 - the use of native code in matchers/mailets,
   e.g., the anti-virus matcher.
   ---------------------------------------------------
 - the use of third party matchers/mailets.
 - the use of user-defined scripting matchers/mailets
 - support for SOAP
 - one pipeline that is extra busy or big,
   performing lots of processing and/or
   handling large messages, should not
   deny service to other users.

None of the items below the bar are related to native code exploits.  A
single JVM is not secure when you start taking into account such things as
SOAP, user-defined scripts, etc.  Would you run Tomcat as root, and allow
users to install whatever servlets they want at root privilege?

For most users, the risk involved may be effectively non-existent, but the
more flexible a system becomes, the more the system must be architected to
prevent exploits due to an oversight failure.  For Tomcat, I make sure that
each virtual host's JVM(s) run(s) with the access rights of the host's
owner.  If I have mail applications for multiple hosts, I also want to
ensure separation, security, and privacy.

Next, consider the dynamic reconfiguration issue, a thread I have not had
time to respond to over the past couple of days.  The key issue there is
that to reconfigure the pipeline, I should not have to impact the other
services.  What you might do for a graceful reconfig is to put the pipeline
into a mode where it will process all messages EXCEPT for those in the root
spool.  That lets you flush out the current pipeline, while allowing the
SMTP server to continue spooling messages.  Then I can gracefully restart
the pipeline processor without any impact on SMTP, POP3, NNTP or other
service.
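As a rough Java sketch of that drain mode (the class and spool names here are illustrative, not the actual James classes):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough sketch of a drain mode for a graceful pipeline restart.
// "root" stands for the spool of newly arrived messages; while
// draining, the pipeline skips it so the SMTP server can keep
// spooling, but everything already in flight still gets processed.
class PipelineProcessor {
    private volatile boolean draining = false;
    private final Queue<String> inFlight = new ArrayDeque<>();

    void beginDrain() {           // stop pulling new work from the root spool
        draining = true;
    }

    boolean accepts(String spoolName) {
        return !(draining && "root".equals(spoolName));
    }

    boolean drained() {           // safe to restart once in-flight work is done
        return draining && inFlight.isEmpty();
    }
}
```

Once drained() reports true, the pipeline processor can be restarted without touching SMTP, POP3, or NNTP at all.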

We already have a plan for a distributed pipeline cluster, so I don't
see any reason to be arguing the point.  If you look at Pier's message, you
should see GOOD things, not bad.  The architecture is basically exactly what
he wants.

> Using multiple VMs forces the use of some kind of remote interface.

We already have one: the spooler and message stores.  Separation would occur
where natural divisions exist.
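To make the point concrete, here is a minimal in-memory stand-in for that boundary (names are illustrative only; the real separation would use James' spooler and message stores, backed by files or a database visible to both JVMs):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// In-memory stand-in for the spool that sits between services.
// In a multi-JVM deployment the SMTP service JVM writes here and
// a separate pipeline JVM reads from it; no new remote interface
// is needed beyond the shared store itself.
class Spool {
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();

    void store(String messageId) {        // producer: SMTP service JVM
        messages.add(messageId);
    }

    String acceptNext() {                 // consumer: pipeline JVM
        return messages.poll();           // null when the spool is empty
    }
}
```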

        --- Noel

