[GENERAL] Load Testing

2009-02-13 Thread Abdul Rahman
Hi all,

Can anybody tell me about a tool for PostgreSQL load testing, preferably
freeware?

Regards,
Abdul Rehman.


Re: [GENERAL] Load Testing

2009-02-13 Thread Ashish Karalkar

Ashish Karalkar wrote:

Abdul Rahman wrote:

Hi all,

Can anybody tell me about a tool for PostgreSQL load testing,
preferably freeware?


Regards,
Abdul Rehman.


I am not sure whether it's freeware, but it looks promising:

http://bristlecone.continuent.org/HomePage


--Ashish


And of course there's pgbench, which is free:

http://www.postgresql.org/docs/current/static/pgbench.html
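For the record, a typical pgbench session looks something like this. The command lines below are illustrative only (exact flags vary a little between versions - check the docs above for yours), and `mydb` is a placeholder database name:

```shell
# initialize the standard pgbench schema at scale factor 10 (~1M account rows)
pgbench -i -s 10 mydb

# run the default TPC-B-like workload: 8 clients for 60 seconds
pgbench -c 8 -T 60 mydb
```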

--Ashish

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Load Testing

2009-02-13 Thread Ashish Karalkar

Abdul Rahman wrote:

Hi all,

Can anybody tell me about a tool for PostgreSQL load testing, preferably
freeware?


Regards,
Abdul Rehman.


I am not sure whether it's freeware, but it looks promising:

http://bristlecone.continuent.org/HomePage


--Ashish



Re: [GENERAL] Load Testing

2009-02-13 Thread Scott Marlowe
On Fri, Feb 13, 2009 at 4:48 AM, Ashish Karalkar ashis...@synechron.com wrote:
 Ashish Karalkar wrote:

 Abdul Rahman wrote:

 Hi all,

 Can anybody tell me about a tool for PostgreSQL load testing, preferably
 freeware?

 Regards,
 Abdul Rehman.

 I am not sure whether it's freeware, but it looks promising:

 http://bristlecone.continuent.org/HomePage


 --Ashish

 And of course there's pgbench, which is free:

 http://www.postgresql.org/docs/current/static/pgbench.html

Yep. pgbench is kind of my basic acceptance-testing benchmark.  If
you've got a 16-core, 128 GB RAM machine hitched to a SAN array of
100+ 15k SAS disks and you're getting 20 tps on pgbench, there's not
much use in running other benchmarks until you figure out what's so wrong.

It's also good for applying burn-in load over long periods.  Nothing
like a week of running pgbench to find problems with RAID controllers,
drives, memory, CPUs, cooling, power supplies or kernels.  I had a
kernel bug on a server last year that took about 12 hours of heavy
pgbench to show up, and a bad RAID controller that took 24 to 36
hours of pgbench to hang.

Plus, pgbench has the ability to run custom SQL for benchmarking, so
it's an easy way to build a custom test.
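As a sketch of that custom-script feature: pgbench replays an arbitrary script given with -f. The meta-command syntax has changed over the years (modern releases use `\set ... random(...)`; older ones used `\setrandom`), so treat this as illustrative; the table and column names come from pgbench's own generated schema:

```sql
-- custom.sql: a minimal read-only workload (pgbench 9.6+ syntax)
\set aid random(1, 100000)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

Run it with something like `pgbench -f custom.sql -c 8 -T 60 mydb`.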



Re: [GENERAL] Load testing across 2 machines

2006-04-10 Thread Richard Huxton

Gavin Hamill wrote:

On Sun, 09 Apr 2006 17:00:14 +0100
Simon Riggs [EMAIL PROTECTED] wrote:


Sniff the live log for SELECT statements (plus their live durations),


Wow, how wonderfully low-tech - hence it's right up my street :) Yay,
some tail + psql fun coming up!


Be careful though - concurrency issues mean queries might not return the
same results, and so might not take the same time anyway. It should
work nicely for load testing, though.


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] Load testing across 2 machines

2006-04-09 Thread Simon Riggs
On Sat, 2006-04-08 at 15:10 +0100, Gavin Hamill wrote:

 SELECTS go to *both* live and test, but only the answers from live are
 sent back to clients - the answers from test are discarded... 

Put log_min_duration_statement = 0 so all SELECTs go to the log.

Sniff the live log for SELECT statements (plus their live durations),
then route those same statements to the dev box and get a timing from
there also. That way you'll be able to do this without any C coding,
plus you'll have both the live and test elapsed times as a comparison.

Best Regards, Simon Riggs
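The sniffing step above can be sketched in a few lines. This assumes the stock log format produced with log_min_duration_statement = 0 (lines like `LOG:  duration: 12.345 ms  statement: SELECT ...`); the function name and regex are illustrative, not part of PostgreSQL itself:

```python
import re

# Matches the log lines emitted when log_min_duration_statement = 0, e.g.:
#   LOG:  duration: 12.345 ms  statement: SELECT ...
LOG_RE = re.compile(r"duration: ([\d.]+) ms\s+statement: (SELECT .*)",
                    re.IGNORECASE)

def extract_selects(lines):
    """Yield (duration_ms, statement) for every logged SELECT."""
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            yield float(m.group(1)), m.group(2).strip()
```

Feed it the live log (e.g. the output of tail -f), replay each statement against the test box with psql, and record both durations for comparison.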




Re: [GENERAL] Load testing across 2 machines

2006-04-09 Thread Gavin Hamill
On Sun, 09 Apr 2006 17:00:14 +0100
Simon Riggs [EMAIL PROTECTED] wrote:

 Sniff the live log for SELECT statements (plus their live durations),

Wow, how wonderfully low-tech - hence it's right up my street :) Yay,
some tail + psql fun coming up!

Cheers,
Gavin.



Re: [GENERAL] Load testing across 2 machines

2006-04-09 Thread Joshua D. Drake

Gavin Hamill wrote:

On Sun, 09 Apr 2006 17:00:14 +0100
Simon Riggs [EMAIL PROTECTED] wrote:


Sniff the live log for SELECT statements (plus their live durations),


Wow, how wonderfully low-tech - hence it's right up my street :) Yay,
some tail + psql fun coming up!


You can even tell it to show only queries that take longer than (n),
where (n) is in milliseconds.


Joshua D. Drake




--

=== The PostgreSQL Company: Command Prompt, Inc. ===
  Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
  Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [GENERAL] Load testing across 2 machines

2006-04-09 Thread Gavin Hamill
On Sun, 09 Apr 2006 17:00:14 +0100
Simon Riggs [EMAIL PROTECTED] wrote:

 On Sat, 2006-04-08 at 15:10 +0100, Gavin Hamill wrote:
 
  SELECTS go to *both* live and test, but only the answers from live
  are sent back to clients - the answers from test are discarded... 
 
 Put log_min_duration_statement = 0 so all SELECTs go to the log.
 
 Sniff the live log for SELECT statements (plus their live durations),
 then route those same statements to the dev box and get a timing from
 there also. That way you'll be able to do this without any C coding,
 plus you'll have both the live and test elapsed times as a comparison.

Ah, having eaten and had my brain finally switch on, I've realised that
there's an unfortunate flaw in the plan: only a single process will be
executing the SELECT log, which pretty much defeats the purpose of the
experiment - to simulate identical load patterns on both machines.

I might be wrong, but if I just end up grepping for 'SELECT' then
feeding the results into psql, then only a single connection will be
made to the test server, and all queries will be processed serially on
a single CPU, no?

Cheers,
Gavin.
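One way around the serialization problem is to fan the logged statements out to a pool of workers, so that N connections are busy at once instead of one. A minimal sketch, where `run_query` stands in for whatever executes a single statement (a psql subprocess, a driver call, ...) - both names here are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def replay(queries, run_query, workers=10):
    """Replay logged queries over `workers` concurrent connections
    rather than one serial psql session."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_query, queries))
```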



[GENERAL] Load testing across 2 machines

2006-04-08 Thread Gavin Hamill
Hi,

I'm asking here in case this kind of thing has been done before, but
I've not been able to find it..

We have two pg 8.1.3 servers, one live and one test. What I'd like to do
is have something like pgpool to act as a connection broker, but
instead of using pgpool's own replication where all queries are sent to
both servers, and SELECTs are split between both servers, I'm aiming
for this scenario:

UPDATE/DELETE/INSERT go only to live, with Slony replicating live to
test. This permits test to go offline if necessary and easily 'catch
up' later - much more convenient than pgpool's suggestion of 'stop both
servers, then rsync the db files from master to slave'.

SELECTs go to *both* live and test, but only the answers from live are
sent back to clients - the answers from test are discarded... 

This would very gracefully allow the test machine to be monitored with
real workload but without any danger of affecting the performance of
the live system / returning bad data..

Has this been done already? Can it be done by extending pgpool or
otherwise without requiring C coding skills? :)

Cheers,
Gavin.



Re: [GENERAL] load testing

2004-03-10 Thread scott.marlowe
On Tue, 9 Mar 2004, Sally Sally wrote:

 I wanted to do some quick load testing on a postgres database. Does anyone 
 have any tips on how to go about doing this?
 Thanks much.

If you just want to beat on the database a bit to test for reliability
etc., look at contrib/pgbench in the distro.

If you want to test massive workloads, look at the OSDL tests on 
www.osdl.org:

http://www.osdl.org/lab_activities/kernel_testing/osdl_database_test_suite/






Re: [GENERAL] load testing

2004-03-10 Thread Steve Wolfe
 I wanted to do some quick load testing on a postgres database. Does
anyone
 have any tips on how to go about doing this?
 Thanks much.

   Sure.  Test after the manner in which the database is normally used,
and with real-world data.

   I've seen far too many people benchmark a database system by opening
a single connection and issuing a number of queries.  However, it's more
common for a database server to be handling multiple queries at the same
time.

   Also, I've seen far too many people use contrived test data and
contrived queries.  However, the nature of contrived queries may be very
different from the actual queries you run.  Test with what you use!

  For my own benchmarking, I usually log ~10,000 queries from our
production server and run a Perl script I whipped up quickly.  It splits
the queries into chunks and tests with 1 through 10 simultaneous
connections; that's been a very good indicator of how the machine in
question will behave once it's put into production.

steve
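The chunk-and-ramp methodology above can be sketched as follows. `run_chunk` stands in for whatever executes one chunk of queries over a single database connection; all helper names here are hypothetical:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def chunk(queries, n):
    """Split queries into n roughly equal chunks, round-robin."""
    return [queries[i::n] for i in range(n)]

def ramp(queries, run_chunk, max_conns=10):
    """Time the full workload at 1..max_conns simultaneous connections.
    Returns {connection_count: elapsed_seconds}."""
    results = {}
    for n in range(1, max_conns + 1):
        start = time.monotonic()
        with ThreadPoolExecutor(max_workers=n) as pool:
            list(pool.map(run_chunk, chunk(queries, n)))
        results[n] = time.monotonic() - start
    return results
```

Plotting elapsed time against connection count gives the scaling curve Steve describes.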

