Re: [PERFORM] Two identical systems, radically different performance

2012-10-17 Thread Andrea Suisani
On 10/17/2012 06:35 PM, Scott Marlowe wrote: > On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani wrote: >> On 10/15/2012 05:34 PM, Scott Marlowe wrote: >>> I'd recommend more synthetic benchmarks when trying to compare systems like this. bonnie++, >> you were right. bonnie++ (-f -n 0 -c 4) shows that the

Re: [PERFORM] LIKE op with B-Tree Index?

2012-10-17 Thread Sam Wong
> Moncure wrote on Thursday, October 18, 2012 1:45 > On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong wrote: > >> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote, > >> > >> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote: > >> > Hi communities, > >> > > >> > I am investigating a performance i

Re: [PERFORM] LIKE op with B-Tree Index?

2012-10-17 Thread Merlin Moncure
On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong wrote: >> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote, >> >> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote: >> > Hi communities, >> > >> > I am investigating a performance issue involved with LIKE '%' on >> > an index in a complex quer
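
Not enough of the query survives in this preview to reproduce it, but a minimal sketch of the behavior the thread is about (table and column names here are hypothetical): in a non-C locale a plain B-tree index cannot serve a LIKE prefix match unless it is built with the text_pattern_ops operator class, and a pattern with a leading '%' cannot use a B-tree index at all.

    -- Hypothetical names; illustrates prefix matching via text_pattern_ops.
    CREATE INDEX event_log_payload_pattern_idx
        ON event_log (payload text_pattern_ops);

    EXPLAIN ANALYZE
    SELECT *
    FROM event_log
    WHERE payload LIKE 'ERROR:%';    -- anchored prefix: can use the index
    -- WHERE payload LIKE '%ERROR%'  -- leading wildcard: falls back to a seq scan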

Re: [PERFORM] Two identical systems, radically different performance

2012-10-17 Thread Scott Marlowe
On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani wrote: > On 10/15/2012 05:34 PM, Scott Marlowe wrote: >> I'd recommend more synthetic benchmarks when trying to compare systems like this. bonnie++, > > you were right. bonnie++ (-f -n 0 -c 4) shows that there's very little (if any) differen

Re: [PERFORM] Two identical systems, radically different performance

2012-10-17 Thread Andrea Suisani
On 10/15/2012 05:34 PM, Scott Marlowe wrote: > On Mon, Oct 15, 2012 at 9:28 AM, Claudio Freire wrote: >> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani wrote: >>> Sure, you're right. It's just that my bet was on a higher throughput when HT was disabled from the BIOS (as you stated previously in this

Re: [PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Maciek Sakrejda
On Wed, Oct 17, 2012 at 7:18 AM, Tom Lane wrote: >> ERROR: out of shared memory >> HINT: You might need to increase max_pred_locks_per_transaction. > > This has nothing to do with work_mem nor maintenance_work_mem; rather, > it means you're running out of space in the database-wide lock table. >
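
A minimal sketch of the change the HINT points at, assuming the workload uses SERIALIZABLE transactions (the actual value to use is not given in the thread): max_pred_locks_per_transaction sizes the shared predicate-lock table and can only be raised in postgresql.conf, followed by a server restart.

    # postgresql.conf (illustrative value, not a recommendation from the thread)
    max_pred_locks_per_transaction = 256    # default is 64; requires a restart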

Re: [PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Tom Lane
Maciek Sakrejda writes: > We've run into a perplexing issue with a customer database. He moved from 9.1.5 to 9.1.6 and upgraded from an EC2 m1.medium (3.75GB RAM, 1.3 GB shmmax) to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is now constantly getting errors about running out

[PERFORM] High cost estimates when n_distinct is set

2012-10-17 Thread Niko Kiirala
Hello, I am working on a potentially large database table, let's call it "observation", that has a foreign key to table "measurement". Each measurement is associated with either none or around five observations. In this kind of situation, it is well known that the statistics on the foreign key c
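
A sketch of the kind of override the subject line refers to, with a hypothetical foreign-key column name and an illustrative value: a negative n_distinct is treated as a fraction of the row count, which fits a column whose number of distinct values grows with the table (here roughly one measurement per five observations).

    -- measurement_id is a hypothetical name for the FK column to "measurement".
    ALTER TABLE observation
        ALTER COLUMN measurement_id SET (n_distinct = -0.2);
    ANALYZE observation;   -- the override takes effect at the next ANALYZE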

Re: [PERFORM] SELECT AND AGG huge tables

2012-10-17 Thread houmanb
Hi all, Thanks for your advice and the link about posting my question in an appropriate form. Here is the info; I thank all of you in advance. Best regards, Houman. Postgres version: 9.1.4. Postgres.conf: max_connections = 100, shared_buffers = 8192
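
For a question about aggregating huge tables, the plan usually matters as much as the settings; a minimal sketch of how to capture it (the query and names below are placeholders, since the actual query is not shown in this preview):

    -- Placeholder query: substitute the real aggregate over the large table.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT some_key, count(*), sum(some_value)
    FROM huge_table
    GROUP BY some_key;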

Re: [PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Martin French
> On Wed, Oct 17, 2012 at 1:53 AM, Martin French wrote: > Thanks for your response. >> What are the settings for: >> work_mem > 100MB This is a little higher than I would ordinarily set; I tend to cap at about 64MB. >> maintenance_work_mem > 64MB In contrast, this is a little low
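
A sketch of the usual compromise (the role name and values below are illustrative, not from the thread): work_mem can be raised per session or per role for the queries that need it rather than globally for every connection, and maintenance_work_mem only affects maintenance commands such as VACUUM and CREATE INDEX, so it can be raised just before running one.

    SET work_mem = '128MB';                        -- session-level, for one heavy query
    ALTER ROLE reporting SET work_mem = '128MB';   -- "reporting" is a hypothetical role
    SET maintenance_work_mem = '512MB';            -- e.g. just before a big CREATE INDEX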

Re: [PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Maciek Sakrejda
On Wed, Oct 17, 2012 at 1:53 AM, Martin French wrote: Thanks for your response. > What are the settings for: > work_mem 100MB > maintenance_work_mem 64MB > How many concurrent connections are there? ~20 > Have you run explain analyze on the query that doesn't crash (i.e. the old box) to ge

Re: [PERFORM] have: seq scan - want: index scan

2012-10-17 Thread Samuel Gendler
On Tue, Oct 16, 2012 at 4:45 PM, Chris Ruprecht wrote: > Hi guys, > > PG = 9.1.5 > OS = winDOS 2008R2 > > I have a table that currently has 207 million rows. > There is a timestamp field that contains data. > More data gets copied from another database into this database. > How do I make this do

Re: [PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Martin French
> We've run into a perplexing issue with a customer database. He moved from 9.1.5 to 9.1.6 and upgraded from an EC2 m1.medium (3.75GB RAM, 1.3 GB shmmax) to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is now constantly getting errors about running out of shared memory (t

[PERFORM] Out of shared mem on new box with more mem, 9.1.5 -> 9.1.6

2012-10-17 Thread Maciek Sakrejda
We've run into a perplexing issue with a customer database. He moved from 9.1.5 to 9.1.6 and upgraded from an EC2 m1.medium (3.75GB RAM, 1.3 GB shmmax) to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is now constantly getting errors about running out of shared memory (there were non
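
A diagnostic sketch, not from the thread: predicate locks taken by SERIALIZABLE transactions show up in pg_locks with mode SIReadLock, so counting them per backend shows whether they are what is exhausting the shared lock table.

    SELECT pid, count(*) AS siread_locks
    FROM pg_locks
    WHERE mode = 'SIReadLock'
    GROUP BY pid
    ORDER BY siread_locks DESC;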

[PERFORM] have: seq scan - want: index scan

2012-10-17 Thread Chris Ruprecht
Hi guys, PG = 9.1.5 OS = winDOS 2008R2 I have a table that currently has 207 million rows. There is a timestamp field that contains data. More data gets copied from another database into this database. How do I make this do an index scan instead? I did an "analyze audittrailclinical" to no avail.
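
A minimal sketch, assuming a range filter on the timestamp column (the column name below is hypothetical, since it is not given in this preview); an index scan is only chosen when a matching index exists and the WHERE clause is selective enough.

    -- event_ts is a hypothetical name for the timestamp column.
    CREATE INDEX audittrailclinical_event_ts_idx
        ON audittrailclinical (event_ts);

    EXPLAIN ANALYZE
    SELECT count(*)
    FROM audittrailclinical
    WHERE event_ts >= '2012-10-01'
      AND event_ts <  '2012-10-17';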

[PERFORM] pgbouncer max_client_conn and default_pool_size

2012-10-17 Thread Yetkin Öztürk
What are adequate pgbouncer max_client_conn and default_pool_size values for a Postgres config that has max_connections = 400? We want to move to pgbouncer so that Postgres handles only the db work, but it has confused us. We have over 120 databases in a single Postgres engine with, as I said, max_conne
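
A sizing sketch only, with illustrative values that are not from the thread: in pgbouncer each database/user pair gets its own server pool, so with ~120 databases the per-pool size has to be small enough that the total stays under the server's max_connections = 400, while max_client_conn limits client-side connections and can safely exceed that.

    ; pgbouncer.ini (illustrative values)
    [pgbouncer]
    pool_mode = transaction
    max_client_conn = 1000      ; client side; may exceed max_connections
    default_pool_size = 3       ; ~120 databases * 3 = 360 server connections < 400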