[PERFORM] ~400 TPS - good or bad?

2010-06-12 Thread Szymon Kosok
Hello,

We are trying to optimize our box for PostgreSQL. We have an i7, 8 GB of
RAM, and 2x SATA drives in software RAID1, running on an XFS filesystem.
We are running PostgreSQL and memcached on that box. Without any
optimizations (we only edited the PG config) we got 50 TPS with pgbench's
default run (1 client / 10 transactions); then we added logbufs=8 and
nobarrier to the /home partition (where PGDATA is). With that fs setup,
TPS in the default test is unstable, 150-300 TPS. So we tested with
-c 100 -t 10 and got a stable ~400 TPS. The question is: is that a decent
result, or can we get much more from Postgres on this setup? If so, what
do we need to do? We are running Gentoo.
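
For reference, a minimal sketch of what those mount options would look
like in /etc/fstab (the device name is illustrative; note that nobarrier
trades crash safety for speed unless the array has a battery- or
flash-backed write cache):

 /dev/md0  /home  xfs  defaults,logbufs=8,nobarrier  0  2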

Here's our config: http://paste.pocoo.org/show/224393/

PS. pgbench scale is set to "1".

-- 
Greetings,
Szymon



Re: [PERFORM] ~400 TPS - good or bad?

2010-06-12 Thread Szymon Kosok
2010/6/12 Szymon Kosok:
> PS. pgbench scale is set to "1".

I've found in the mailing list archives that scale = 1 is not a good
idea. So we ran pgbench with -s 200 (our database is ~3 GB), -c 10,
and -t 3000, and got about ~600 TPS. Good or bad?
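
For reproducibility, the scale only takes effect when the test tables
are initialized with -i; on a plain run, -s merely labels the output and
the actual scale is detected from the tables. The full sequence would
look something like:

 pgbench -i -s 200    # build test tables at scale 200 (~3 GB)
 pgbench -c 10 -t 3000    # 10 clients, 3000 transactions each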

-- 
Greetings,
Szymon



Re: [PERFORM] ~400 TPS - good or bad?

2010-06-12 Thread Merlin Moncure
On Sat, Jun 12, 2010 at 8:37 AM, Szymon Kosok wrote:
> 2010/6/12 Szymon Kosok:
>> PS. pgbench scale is set to "1".
>
> I've found in the mailing list archives that scale = 1 is not a good
> idea. So we ran pgbench with -s 200 (our database is ~3 GB), -c 10,
> and -t 3000, and got about ~600 TPS. Good or bad?

You are being bound by the performance of your disk drives.  Since you
have 8 GB of RAM, your database fits in memory once the cache warms up.
To confirm this, try running a 'select only' test with a longer
transaction count:

 pgbench -c 10 -t 10000 -S

Then compare the results.  If you get much higher numbers (you should),
we'll know for sure where the problem is.  Your main lines of attack
for fixing disk performance issues are going to be:

*) simply living with 400-600 TPS
*) getting more/faster disk drives
*) making some speed/safety tradeoffs, for example with
synchronous_commit (see the sketch below)
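
As a minimal sketch of that last tradeoff (assuming PostgreSQL 8.3 or
later, where the setting exists): with synchronous_commit off, commits
no longer wait for WAL to reach disk, so a crash can lose a few of the
most recent transactions but never corrupts the database.

 -- per session (or per transaction):
 SET synchronous_commit = off;
 -- or globally, in postgresql.conf:
 --   synchronous_commit = off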

merlin



Re: [PERFORM] ~400 TPS - good or bad?

2010-06-12 Thread Greg Smith

Szymon Kosok wrote:

I've found in the mailing list archives that scale = 1 is not a good
idea. So we ran pgbench with -s 200 (our database is ~3 GB), -c 10,
and -t 3000, and got about ~600 TPS. Good or bad?
pgbench in its default mode only really tests commit rate, and often 
that's not what is actually important to people.  Your results are 
normal if you don't have a battery-backed RAID controller.  In that 
case, your drives are only capable of committing once per disk 
rotation, so if you have 7200 RPM drives that's no more than 7200/60 = 
120 times per second.  On each physical disk commit, PostgreSQL will 
also include any other pending transactions that are waiting around.  
So what I suspect you're seeing is about 100 commits/second, with on 
average 6 of the 10 clients having something ready to commit each 
time.  That's what I normally see when running pgbench on regular hard 
drives without a RAID controller: somewhere around 500 commits/second.


If you change the number of clients to 1, you'll find out what the 
commit rate for a single client is; that should help validate whether 
my suspicion is correct.  I'd expect a fairly linear increase from 100 
to ~600 TPS as your client count goes from 1 to 10, topping out at under 
1000 TPS even with much higher client counts.
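
For example, something like this (assuming the same scale-200 database
as before) would show the single-client commit rate directly:

 pgbench -c 1 -t 3000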


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




Re: [PERFORM] Analysis Function

2010-06-12 Thread Heikki Linnakangas

On 11/06/10 23:38, David Jarvis wrote:

I added an explicit cast in the SQL:

 dateserial(extract(YEAR FROM m.taken)::int,'||p_month1||','||p_day1||') d1,
 dateserial(extract(YEAR FROM m.taken)::int,'||p_month2||','||p_day2||') d2

The function now takes three integer parameters; there was no performance
loss.


Magnus and I had a little chat about this. It's pretty surprising that 
there's no built-in function to do this; we should consider adding one.


We could have a function like:

construct_timestamp(year int4, month int4, date int4, hour int4,
minute int4, second int4, milliseconds int4, timezone text)


Now that we have named parameter notation, callers can use it to 
conveniently fill in only the fields needed:


SELECT construct_timestamp(year := 1999, month := 10, date := 22);
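
Pending a built-in, here is a minimal user-level sketch of the same
idea, covering just the date fields from the example (the body is
illustrative only, and assumes a PostgreSQL version where SQL-function
bodies can reference parameters by name):

 CREATE FUNCTION construct_timestamp(year int4, month int4, date int4)
 RETURNS timestamp AS $$
   -- Build a 'YYYY-MM-DD' string and let to_timestamp parse it;
   -- the unspecified time fields default to midnight.
   SELECT to_timestamp(year || '-' || month || '-' || date,
                       'YYYY-MM-DD')::timestamp;
 $$ LANGUAGE sql;

 -- The named-parameter call from above then works as written:
 SELECT construct_timestamp(year := 1999, month := 10, date := 22);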

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
