> Cool. Dynamically adjusting the number of clients based on load works well
> too, as you've said below.
>
> The best method for checking load on Linux, without incurring much system
> time, is to open /proc/loadavg and read that. If you seek to the start of
> the file and re-read(), you get the updated loadavg, so you don't need to
> close and reopen it.
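For anyone following along, the seek-and-re-read trick could be sketched like
this in Python (illustrative only; the server in this thread is written in a
language not shown here):

```python
# Open /proc/loadavg once at startup; seeking back to offset 0 and
# re-reading yields fresh values, so no close/reopen is needed per check.
loadavg_file = open("/proc/loadavg", "r")

def current_load():
    """Return the 1-minute load average as a float."""
    loadavg_file.seek(0)              # rewind the proc file
    fields = loadavg_file.read().split()
    return float(fields[0])           # fields: 1min 5min 15min ...
```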
Thanks for the tip ...

> Do you think your server should be doing better than it is?

Yes ... during development I actually tested it by spawning 100 processes for
each search query, and I could run a few queries in parallel, so it should be
able to handle more.

> If you don't mind making it public, can I ask what the spec of the machine
> is, and what sort of performance you're getting?

Not at all. It's Red Hat Linux 7.3, 1 GB of RAM, a single 1.3 GHz processor
and a single IDE drive.

> Another thing -- are your processes doing a single task per instance? Could
> you convert the workers to process (serially) multiple requests? This would
> save on instantiation costs, which can be substantial.

Cool idea ... I may try this next.

> I've created systems using shared pipes like that before. Works OK --
> remember to turn on autoflush though :)

I've just put a pipe-reading server into production ... it is handling the
load better than any of the previous servers. I've stripped out the flocking
too -- the server is now coping, subject to the load limiter -- phew.

One advantage of the pipe is that I can GZIP it as it goes ... server
survival is the first thing, though .... time for a beer too! :-)

Thanks for all the tips and help,

Nige

--
Nigel Hamilton
Turbo10 Metasearch Engine
email: [EMAIL PROTECTED]
tel: +44 (0) 207 987 5460
fax: +44 (0) 207 987 5468
________________________________________________________________________________
http://turbo10.com    Search Deeper. Browse Faster.
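P.S. For the curious, here is a rough sketch of gzipping results through a
pipe between a worker and a server, in Python (illustrative names and
structure only -- not the production code):

```python
import gzip
import os

# A worker child writes gzip-compressed results into a pipe; the
# parent (the server) reads and decompresses them on the other end.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:                           # child: the worker
    os.close(read_fd)
    with os.fdopen(write_fd, "wb") as raw:
        with gzip.GzipFile(fileobj=raw, mode="wb") as gz:
            gz.write(b"result line 1\nresult line 2\n")
        # Closing the GzipFile flushes the compressed stream --
        # the moral equivalent of turning autoflush on.
    os._exit(0)

os.close(write_fd)                     # parent: the server
with os.fdopen(read_fd, "rb") as raw:
    data = gzip.decompress(raw.read())
os.waitpid(pid, 0)
print(data.decode(), end="")
```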