On Tue, Jul 26, 2011 at 12:24 PM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Jul 26, 2011 at 11:40 AM, Pavan Deolasee
> <pavan.deola...@gmail.com> wrote:
>> On Tue, Jul 26, 2011 at 9:07 AM, Robert Haas <robertmh...@gmail.com> wrote:
>>> On Mon, Jul 25, 2011 at 10:14 PM, Greg Smith <g...@2ndquadrant.com> wrote:
>>>> On 07/25/2011 04:07 PM, Robert Haas wrote:
>>>>>
>>>>> I did 5-minute pgbench runs with unlogged tables and with permanent
>>>>> tables, restarting the database server and reinitializing the tables
>>>>> between each run.
>>>>
>>>> Database scale? One or multiple pgbench worker threads? A reminder on the
>>>> amount of RAM in the server would be helpful for interpreting the results
>>>> too.
>>>
>>> Ah, sorry. scale = 100, so small. pgbench invocation is:
>>>
>>
>> It might be worthwhile to test only with the accounts and history
>> table and also increasing the number of statements in a transaction.
>> Otherwise the tiny tables can quickly become a bottleneck.
>
> What kind of bottleneck?
>
So many transactions trying to update a small set of rows in a table. Is
that what we really want to measure? My thinking is that we might see a
different result if the transactions update different parts of the table
and the transaction start/stop overhead is spread across a few statements.
A rough sketch of the kind of custom script I have in mind is in the PS
below.

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
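PS: An untested sketch of the sort of custom script I mean. It touches only
pgbench_accounts and pgbench_history and packs four update/insert pairs into
each transaction. The hard-coded 10000000 assumes your scale = 100 run
(100000 * scale accounts); the constant tid/bid values in the history rows
are just placeholders, since tellers and branches are not touched at all.
File name and flags below are only examples.

\setrandom delta -5000 5000
\setrandom aid1 1 10000000
\setrandom aid2 1 10000000
\setrandom aid3 1 10000000
\setrandom aid4 1 10000000
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid1;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1, 1, :aid1, :delta, CURRENT_TIMESTAMP);
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid2;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1, 1, :aid2, :delta, CURRENT_TIMESTAMP);
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid3;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1, 1, :aid3, :delta, CURRENT_TIMESTAMP);
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid4;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (1, 1, :aid4, :delta, CURRENT_TIMESTAMP);
END;

and then run it with something like

    pgbench -n -f accounts_only.sql -c 8 -j 4 -T 300

so that the unlogged vs permanent comparison is dominated by work on the
large table rather than by contention on the branches/tellers rows.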