On 07/02/2012 08:58, André Warnier wrote:
Tobias Wagener wrote:
Hello,

I'm currently developing a huge application with mod_perl, unixODBC and MaxDB/SAPDB.
On my development system everything is fine. But on the production system
with > 50 users, I get database connection errors, aborted requests and
so on.

Now I want to ask whether someone knows a tool or Perl module with which I can simulate 50 users. I have a list of common requests, including their query parameters, in order of appearance. But I don't know how to send them to my development system to create the same load as there will be on the production system.

Can someone help me with this issue?

As a simple tool, have a look at the "ab" program that comes with Apache.

ab isn't usually that much use here, as it doesn't strain the database in the way real users do - especially as it only takes one URL - and any "production" scaling, e.g. caching, will skew the results even further. I have a simple tool I wrote based on curl (attached) which can take a list of URLs, which is generally a lot better. Even then it is still difficult to make a load test on a large site behave like real users (you will need to extend the code to do cookie jars!)...
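
As a very rough sketch of the sort of thing I mean - this is not the attached curl script, just a minimal illustration, and the script name, URL-list file and use of LWP instead of curl are all made up - you can fork one process per simulated user, each with its own cookie jar, and replay your recorded URLs:

#!/usr/bin/env perl
# replay.pl - fork N "users", each replaying a list of recorded URLs
use strict;
use warnings;
use LWP::UserAgent;
use HTTP::Cookies;

my $users    = shift // 50;          # number of simulated users
my $url_file = shift // 'urls.txt';  # one URL (including query string) per line

open my $fh, '<', $url_file or die "Cannot open $url_file: $!";
chomp( my @urls = <$fh> );
close $fh;

my @pids;
for my $user ( 1 .. $users ) {
  my $pid = fork;
  die "fork failed: $!" unless defined $pid;
  if ( !$pid ) {
    # child = one "user" with its own session cookies
    my $ua = LWP::UserAgent->new(
      cookie_jar => HTTP::Cookies->new,
      timeout    => 30,
    );
    for my $url (@urls) {
      my $res = $ua->get($url);
      printf "user %2d %s %s\n", $user, $res->code, $url;
    }
    exit 0;
  }
  push @pids, $pid;
}
waitpid $_, 0 for @pids;

Run it against the development box (e.g. "perl replay.pl 50 urls.txt") while watching the database and the apache error log.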

Things we have come across in large production systems:

 * Apache::DBI sometimes causes issues with too many database
   connections - we tend to turn it off and use DBIx::Connector
   (plus carefully selected caching) to handle persistent connections
   (see the sketch after this list);
 * You may be serving too many requests per page to each user (beyond
   the main page request) - look at caching/minimising/merging page
   design elements: background images, javascript, CSS;
 * OR serve those static elements from a second apache (use it either
   as a proxy or put a proxy in front of the two apaches);
 * Don't over-AJAX pages - lots of "parallel" AJAX requests can
   effectively DOS your own service.
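
For the DBIx::Connector point above, the pattern is roughly this - a sketch only; the package name, DSN and credentials are made up rather than our actual code:

package My::DB;    # hypothetical helper package
use strict;
use warnings;
use DBIx::Connector;

my $conn;    # one connector per Apache child, reused across requests

sub connector {
  $conn ||= DBIx::Connector->new(
    'dbi:ODBC:maxdb_prod',                 # assumed DSN - adjust for your unixODBC setup
    'db_user', 'db_pass',
    { RaiseError => 1, AutoCommit => 1 },
  );
  return $conn;
}

sub fetch_row {
  my ( $sql, @bind ) = @_;
  # 'fixup' only pings/reconnects when a query fails, so the connection
  # is validated per block of work rather than pinned open for the life
  # of the child the way Apache::DBI does
  return connector()->run( fixup => sub {
    $_->selectrow_hashref( $sql, undef, @bind );
  } );
}

1;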

Another trick is to add extra diagnostics to your apache logs with a few mod_perl handlers:


   *Apache configuration file:*

PerlLoadModule              Pagesmith::Apache::Timer
PerlChildInitHandler        Pagesmith::Apache::Timer::child_init_handler
PerlChildExitHandler        Pagesmith::Apache::Timer::child_exit_handler
PerlPostReadRequestHandler  Pagesmith::Apache::Timer::post_read_request_handler
PerlLogHandler              Pagesmith::Apache::Timer::log_handler

LogFormat "%V [*%P/%{CHILD_COUNT}e %{SCRIPT_TIME}e* %{outstream}n/%{instream}n=%{ratio}n] %h/%{X-Forwarded-For}i %l/%{SESSION_ID}e %u/%{user_name}e %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\ " \"%{Cookie}i\" \"%{X-Requested-With}i\" *%{SCRIPT_START}e/%{SCRIPT_END}e*" diagnostic

The following module sets up the four environment variables *CHILD_COUNT, SCRIPT_START, SCRIPT_END, SCRIPT_TIME*, so you can see which requests are slow and whether you have any "clustering" of requests... Setting the "Readonly my $LEVEL" line to either normal or noisy will give you extra diagnostics in the error log as well....


   *Module:*

package Pagesmith::Apache::Timer;

## Component
## Author         : js5
## Maintainer     : js5
## Created        : 2009-08-12
## Last commit by : $Author: js5 $
## Last modified  : $Date: 2011-10-26 12:44:20 +0100 (Wed, 26 Oct 2011) $
## Revision       : $Revision: 1489 $
## Repository URL : $HeadURL: svn+ssh://web-svn.internal.sanger.ac.uk/repos/svn/shared-content/trunk/lib/Pagesmith/Apache/Timer.pm $

use strict;
use warnings;
use utf8;

use version qw(qv); our $VERSION = qv('0.1.0');

use Readonly qw(Readonly);
Readonly my $VERY_LARGE_TIME => 1_000_000;
Readonly my $CENT            => 100;
Readonly my $LEVEL           => 'normal'; # (quiet,normal,noisy)

use Apache2::Const qw(OK DECLINED);
use English qw(-no_match_vars $PID);
use Time::HiRes qw(time);

my $child_started;
my $request_started;
my $requests;
my $total_time;
my $min_time;
my $max_time;
my $total_time_squared;

sub post_config_handler {
  return DECLINED if $LEVEL eq 'quiet';

  printf {*STDERR} "TI:   Start apache  %9d\n", $PID;
  return DECLINED;
}

sub child_init_handler {
  return DECLINED if $LEVEL eq 'quiet';

  $child_started = time;
  $requests      = 0;
  $total_time    = 0;
  $min_time      = $VERY_LARGE_TIME;
  $max_time      = 0;
  $total_time_squared  = 0;

  printf {*STDERR} "TI:   Start child   %9d\n", $PID;
  return DECLINED;
}

sub post_read_request_handler {
  my $r = shift;

  return DECLINED if $LEVEL eq 'quiet';

  $request_started = time;
  $requests++;

  return DECLINED unless $LEVEL eq 'noisy';

  printf {*STDERR} "TI:   Start request %9d - %4d              %s\n",
    $PID,
    $requests,
    $r->uri;
  return DECLINED;
}

sub log_handler {
  my $r             = shift;

  return DECLINED if $LEVEL eq 'quiet';

  my $request_ended = time;
  my $t             = $request_ended - $request_started;

  $total_time += $t;
  $min_time = $t if $t < $min_time;
  $max_time = $t if $t > $max_time;
  $total_time_squared += $t * $t;
  $r->subprocess_env->{'CHILD_COUNT'}  = $requests;
  $r->subprocess_env->{'SCRIPT_START'} = sprintf '%0.6f', $request_started;
  $r->subprocess_env->{'SCRIPT_END'}   = sprintf '%0.6f', $request_ended;
  $r->subprocess_env->{'SCRIPT_TIME'}  = sprintf '%0.6f', $t;

  return DECLINED unless $LEVEL eq 'noisy';

printf {*STDERR} "TI: End request %9d - %4d %10.6f %s\n", $PID, $requests, $t, $r->uri;
  return DECLINED;
}

sub child_exit_handler {
  return DECLINED if $LEVEL eq 'quiet';

  my $time_alive = time - $child_started;
printf {*STDERR} "TI: End child %9d - %4d %10.6f %10.6f %7.3f%% %10.6f [%10.6f,%10.6f]\n",
    $PID,
    $requests,
    $total_time,
    $time_alive,
    $time_alive  ? $CENT * $total_time / $time_alive : 0,
    $requests    ? $total_time / $requests           : 0,
    $min_time,
    $max_time;
  return DECLINED;
}

1;
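
Once the diagnostic log format is in use, something as simple as the following will pull out the slowest requests. Again this is just a sketch - the log file name is made up, and it assumes SCRIPT_TIME is the second value inside the first [...] block, as in the LogFormat above:

#!/usr/bin/env perl
# slowest.pl - list the N slowest requests from the "diagnostic" access log
use strict;
use warnings;

my $log = shift // 'access_log';
my $top = shift // 20;

my @rows;
open my $fh, '<', $log or die "Cannot open $log: $!";
while (<$fh>) {
  # first [...] block is: pid/child_count script_time outstream/instream=ratio
  next unless m{\[\*?(\d+)/(\d+)\s+([\d.]+)};
  my ( $pid, $count, $secs ) = ( $1, $2, $3 );
  my ($request) = m{"([^"]*)"};    # first quoted field is the request line
  push @rows, [ $secs, $pid, $count, $request // q{} ];
}
close $fh;

my @slowest = sort { $b->[0] <=> $a->[0] } @rows;
splice @slowest, $top if @slowest > $top;
printf "%10.6f pid=%d req#%d %s\n", @{$_} for @slowest;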




--
The Wellcome Trust Sanger Institute is operated by Genome Research Limited, a charity registered in England with number 1021457 and a company registered in England with number 2742969, whose registered office is 215 Euston Road, London, NW1 2BE.

Attachment: curl-get-pages.tgz
Description: application/compressed
