On 10/17/2012 06:35 PM, Scott Marlowe wrote:
> On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani wrote:
>> On 10/15/2012 05:34 PM, Scott Marlowe wrote:
>>> I'd recommend more synthetic benchmarks when trying to compare systems
>>> like this. bonnie++,
>> you were right. bonnie++ (-f -n 0 -c 4) shows that there's
> Merlin Moncure wrote on Thursday, October 18, 2012 1:45
> On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong wrote:
> >> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,
> >>
> >> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote:
> >> > Hi communities,
> >> >
> >> > I am investigating a performance issue
On Tue, Oct 16, 2012 at 8:01 PM, Sam Wong wrote:
>> On Wednesday, October 17, 2012 4:30, Merlin Moncure wrote,
>>
>> On Tue, Oct 16, 2012 at 3:15 AM, Sam Wong wrote:
>> > Hi communities,
>> >
>> > I am investigating a performance issue involved with LIKE '%' on
>> > an index in a complex query
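For context, a generic way to see how a LIKE filter affects index use is to
compare EXPLAIN ANALYZE output with and without it; the table and column
names below are placeholders, not taken from the original query:

  -- placeholder names; compare the two plans to see whether the index is used
  EXPLAIN ANALYZE SELECT * FROM items WHERE name LIKE '%';
  EXPLAIN ANALYZE SELECT * FROM items WHERE name LIKE 'abc%';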
On Wed, Oct 17, 2012 at 9:45 AM, Andrea Suisani wrote:
> On 10/15/2012 05:34 PM, Scott Marlowe wrote:
>> I'd recommend more synthetic benchmarks when trying to compare systems
>> like this. bonnie++,
>
>
> you were right. bonnie++ (-f -n 0 -c 4) shows that there's very little (if
> any) difference
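For anyone reproducing the test, the flags quoted above correspond to an
invocation along these lines (the target directory is just an example):

  bonnie++ -d /mnt/pgdata -f -n 0 -c 4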
On 10/15/2012 05:34 PM, Scott Marlowe wrote:
> On Mon, Oct 15, 2012 at 9:28 AM, Claudio Freire wrote:
>> On Mon, Oct 15, 2012 at 12:24 PM, Andrea Suisani wrote:
>>> sure you're right.
>>> It's just that my bet was on a higher throughput
>>> when HT was disabled from the BIOS (as you stated
>>> previously in this
On Wed, Oct 17, 2012 at 7:18 AM, Tom Lane wrote:
>> ERROR: out of shared memory
>> HINT: You might need to increase max_pred_locks_per_transaction.
>
> This has nothing to do with work_mem nor maintenance_work_mem; rather,
> it means you're running out of space in the database-wide lock table.
>
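For reference, the hint refers to a postgresql.conf setting that sizes the
predicate lock table; the value below is purely illustrative, and changing it
requires a server restart:

  max_pred_locks_per_transaction = 256    # default is 64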
Maciek Sakrejda writes:
> We've run into a perplexing issue with a customer database. He moved
> from a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB
> RAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is
> now regularly getting constant errors regarding running out of shared memory
Hello
I am working on a potentially large database table, let's call it
"observation", that has a foreign key to table "measurement". Each measurement
is associated with either none or around five observations. In this kind of
situation, it is well known that the statistics on the foreign key column
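One common knob for this kind of estimate is the per-column statistics
target; a minimal sketch, assuming the foreign key column is named
measurement_id (the message does not name it):

  ALTER TABLE observation ALTER COLUMN measurement_id SET STATISTICS 1000;
  ANALYZE observation;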
Hi all,
Thanks for your advice and the link about posting my question in an
appropriate form.
Here is the info. I thank all of you in advance.
Best regards
Houman
Postgres version: 9.1.4
=
postgresql.conf
max_connections = 100
shared_buffers = 8192
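Note that a bare integer for shared_buffers is in units of buffers (8 kB each
with the default block size), so 8192 here is 8192 x 8 kB = 64 MB. An
equivalent, more explicit form would be:

  shared_buffers = 64MB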
> On Wed, Oct 17, 2012 at 1:53 AM, Martin French wrote:
>
> Thanks for your response.
>
> > What are the settings for:
> > work_mem
> 100MB
This is a little higher than I would ordinarily set. I tend to cap at
about 64MB
>
> > maintenance_work_mem
> 64MB
In contrast, this is a little low.
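Both settings can also be raised per session for a heavy query or index
build rather than globally; a small sketch with example values only:

  SET work_mem = '64MB';
  SET maintenance_work_mem = '512MB';   -- e.g. just before a CREATE INDEX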
On Wed, Oct 17, 2012 at 1:53 AM, Martin French wrote:
Thanks for your response.
> What are the settings for:
> work_mem
100MB
> maintenance_work_mem
64MB
> How many concurrent connections are there?
~20
> Have you run explain analyze on the query that doesn't crash (i.e. the old
> box) to ge
On Tue, Oct 16, 2012 at 4:45 PM, Chris Ruprecht wrote:
> Hi guys,
>
> PG = 9.1.5
> OS = winDOS 2008R8
>
> I have a table that currently has 207 million rows.
> there is a timestamp field that contains data.
> more data gets copied from another database into this database.
> How do I make this do an index scan instead?
>
> We've run into a perplexing issue with a customer database. He moved
> from a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB
> RAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is
> now regularly getting constant errors regarding running out of shared
> memory (t
We've run into a perplexing issue with a customer database. He moved
from a 9.1.5 to a 9.1.6 and upgraded from an EC2 m1.medium (3.75GB
RAM, 1.3 GB shmmax), to an m2.xlarge (17GB RAM, 5.7 GB shmmax), and is
now regularly getting constant errors regarding running out of shared
memory (there were non
Hi guys,
PG = 9.1.5
OS = winDOS 2008R8
I have a table that currently has 207 million rows.
there is a timestamp field that contains data.
more data gets copied from another database into this database.
How do I make this do an index scan instead?
I did an "analyze audittrailclinical" to no avail.
What are adequate *pgbouncer* *max_client_conn* and *default_pool_size* values
for a postgres config which has *max_connections = 400*?
We want to move to pgbouncer so that postgres only has to do the db work, but
it has confused us. We have over 120 databases in a single postgres engine
with, as I said, max_connections = 400.
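As a rough starting point only: pgbouncer pools are per database/user pair,
so with ~120 databases the per-pool size has to stay small for the total to
fit under max_connections = 400. A sketch of the relevant pgbouncer.ini
lines, with illustrative numbers rather than recommendations:

  [pgbouncer]
  pool_mode = transaction
  max_client_conn = 2000     ; client-side limit; may exceed max_connections
  default_pool_size = 3      ; ~120 databases x 3 = ~360 server connections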