Re: mass hosting + cgi [WAS: Technical committee acting in gross violation of the Debian constitution]

2014-12-06 Thread Christoph Anton Mitterer
On Fri, 2014-12-05 at 03:47 +0100, Enrico Weigelt, metux IT consult
wrote: 
 No, it's not, and it's pretty cheap, if done right.
Yes, it definitely is: simply by having gazillions of different
users on the same host, you increase the chance that someone is doing
something stupid which can be exploited.

And I explicitly didn't talk about what's cheap; if you care about
security, you probably shouldn't be looking at the money. 


 IIRC at that time they'd been using cgiexec. I just don't recall
 why they didn't use my muxmpm (maybe because Apache upstream was
 too lazy to pick it up, even though it had been shipped by several
 large distros).
Has that been merged upstream in the meantime, or is it currently
shipped in any major distro? Admittedly, I hadn't heard of it before.


 It adds complexity, especially when you're going to manage
 a _large_ number (several k) of users per box.
Security is of course never easy.

  In such scenarios
 you wanna be careful about system resources like sockets, fds, etc.
I don't see how this is different from other solutions.


Cheers,
Chris.




mass hosting + cgi [WAS: Technical committee acting in gross violation of the Debian constitution]

2014-12-04 Thread Enrico Weigelt, metux IT consult
On 04.12.2014 22:23, Christoph Anton Mitterer wrote:

 Apart from that, when you speak of non-trivial quantities - I'd
 probably say that running gazillions of websites from different
 entities on one host is generally a really bad idea.

No, it's not, and it's pretty cheap, if done right.

Several years ago, I was working for a large ISP (probably the
largest in Germany), hosting more than 1000 sites per box and several
million in total (yes, most of them were pretty small and
low-traffic).

IIRC at that time they'd been using cgiexec. I just don't recall
why they didn't use my muxmpm (maybe because Apache upstream was
too lazy to pick it up, even though it had been shipped by several
large distros).

A few years earlier, I had developed muxmpm for exactly that purpose:
a derivative of worker/perchild that runs individual sites under
their own UIDs, spawning them on demand. This approach worked not
just for CGI, but also for built-in content processors like mod_php,
mod_perl, etc.
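
For illustration, the per-UID spawning boils down to the classic
fork / drop-privileges / exec pattern. This is only a minimal sketch
of that pattern, not muxmpm's actual code; the handler path and the
uid/gid are assumed to come from the vhost's configuration:

/* Sketch: run a per-site handler under that site's own UID.
 * Assumes the parent runs as root; uid/gid are hypothetical
 * values taken from the vhost config. */
#include <grp.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static pid_t spawn_as_user(const char *handler, uid_t uid, gid_t gid)
{
    pid_t pid = fork();
    if (pid != 0)
        return pid;              /* parent: child's pid, or -1 */

    /* Child: drop supplementary groups, then gid, then uid.
     * Order matters: after setuid() we could no longer change
     * group membership. */
    if (setgroups(0, NULL) != 0 || setgid(gid) != 0 ||
        setuid(uid) != 0) {
        perror("privilege drop");
        _exit(1);
    }
    execl(handler, handler, (char *)NULL);
    perror("execl");             /* reached only if exec fails */
    _exit(1);
}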

 FastCGI is just a slightly more fancy way of doing this.
 FastCGI is another thing that almost nobody can afford when hosting 
 a significant number of web sites.
 Why not?

It adds complexity, especially when you're going to manage
a _large_ number (several k) of users per box. In such scenarios
you wanna be careful about system resources like sockets, fds, etc.
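
To make the fd pressure concrete: one persistent FastCGI listener
socket per site already costs one fd in the frontend before a single
request is served, so with several k sites you at least have to raise
RLIMIT_NOFILE. A minimal sketch (the limits printed depend on your
system; nothing here is specific to any particular server):

/* Sketch: inspect and raise the per-process fd limit before
 * spawning per-site workers. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("fd limit: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    rl.rlim_cur = rl.rlim_max;   /* raise soft limit to hard cap */
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}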

I'm not up to date on whether there's meanwhile an efficient solution
for fully on-demand startup (and auto-cleanup) of fcgi slaves with
arbitrary UIDs, nor on how much overhead copying between processes
(compared to socket-passing) produces on modern systems (back when I
wrote muxmpm, it was still quite significant).
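
For reference, the socket-passing I mean is the usual SCM_RIGHTS
trick: instead of the frontend copying every byte between client and
slave, it hands the accepted client fd to the slave over a
Unix-domain socket, and the slave then talks to the client directly.
A minimal sketch of the sending side (assumes 'channel' is an
already-connected AF_UNIX socket to the slave):

/* Sketch: pass an open fd to another process via SCM_RIGHTS. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int channel, int client_fd)
{
    char dummy = 'F';            /* must send >= 1 byte of data */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = { 0 };

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;  /* ancillary payload is an fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

    return sendmsg(channel, &msg, 0) == 1 ? 0 : -1;
}

The receiving side does the mirror-image recvmsg() with a matching
cmsg buffer and gets back a fresh fd referring to the same client
connection.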

OTOH, for high-volume scenarios, Apache might not be the first choice.


cu
--
Enrico Weigelt,
metux IT consulting
+49-151-27565287

