On Tue, Feb 16, 2016 at 5:18 AM, Fabien COELHO <coe...@cri.ensmp.fr> wrote:
>>> Good point. One simple idea here would be to use a custom pgbench
>>> script that has no SQL commands and just calculates the values of some
>>> parameters to measure the impact without depending on the backend,
>>> with a fixed number of transactions.
>>
>> Sure, we could do that.  But whether it materially changes pgbench -S
>> results, say, is a lot more important.
>
>
> Indeed. Several runs on my laptop:
>
>   ~ 400000-540000 tps with master using:
>     \set naccounts 100000 * :scale
>     \setrandom aid 1 :naccounts
>
>   ~ 430000-530000 tps with full function patch using:
>     \set naccounts 100000 * :scale
>     \setrandom aid 1 :naccounts
>
>   ~ 730000-890000 tps with full function patch using:
>     \set aid random(1, 100000 * :scale)
>
> The performance is pretty similar with the same script. The real pain is
> variable management; avoiding some of it is a win.

Wow, that's pretty nice.
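For anyone trying to reproduce these numbers: the expression-only case above can be driven standalone with a fixed transaction count, roughly like this (file and database names are illustrative; -n skips vacuuming, -f selects the custom script, -t fixes the per-client transaction count; the random(...) function form requires the patch under discussion):

```
$ cat set_only.sql            # script name is illustrative
\set aid random(1, 100000 * :scale)

$ pgbench -n -f set_only.sql -t 1000000 bench
```

Since the script issues no SQL, the measured tps reflects only pgbench's own per-transaction overhead (variable lookup and expression evaluation), not backend performance.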

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers