On Mon, Oct 24, 2011 at 10:23 PM, David Boreham wrote:
>
> A few quick thoughts:
>
> 1. The 320 would be the only SSD I'd trust from your short-list. It's the only
> one with proper protection from unexpected power loss.
> 2. Multiple RAID'ed SSDs sounds like (vast) overkill for your workload. A
> sin
Thanks Jeff. This should help in getting fairly approximate values.
Amitabh
On Fri, Oct 28, 2011 at 11:24 AM, Jeff Davis wrote:
> On Mon, 2011-10-24 at 19:34 +0530, Amitabh Kant wrote:
> > If I read the xact_commit field returned by "Select * from
> > pg_stat_database" multiple times, and the
On Mon, 2011-10-24 at 19:34 +0530, Amitabh Kant wrote:
> If I read the xact_commit field returned by "Select * from
> pg_stat_database" multiple times, and then average the difference
> between consecutive values, would this give an approx idea about the
> transactions per second in my database?
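For what it's worth, a minimal sketch of that sampling approach (the 60-second
interval is an arbitrary choice; note that xact_commit counts only committed
transactions, so add xact_rollback if you want the total):

    -- first reading
    SELECT xact_commit FROM pg_stat_database WHERE datname = current_database();
    -- ... wait a known interval, e.g. 60 seconds ...
    -- second reading
    SELECT xact_commit FROM pg_stat_database WHERE datname = current_database();
    -- (second - first) / 60 gives approximate commits per second

Averaged over several such intervals, that should give the approximation
you're after.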
Hello

plpgsql uses cached prepared plans for its queries, where optimization
is based on expected values, not on real values. This feature can
sometimes cause performance problems. When you run into one of these
problems, you have to use dynamic SQL instead. That generates a plan for
only one usage and
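For illustration, a minimal plpgsql sketch of the difference (the table t
and its column are hypothetical):

    CREATE OR REPLACE FUNCTION count_matching(p integer) RETURNS bigint AS $$
    DECLARE
      v bigint;
    BEGIN
      -- static SQL: the plan is prepared once and cached, optimized for
      -- expected (generic) parameter values:
      -- SELECT count(*) INTO v FROM t WHERE col = p;

      -- dynamic SQL: a fresh plan is built at each execution, using the
      -- real parameter value:
      EXECUTE 'SELECT count(*) FROM t WHERE col = $1' INTO v USING p;
      RETURN v;
    END;
    $$ LANGUAGE plpgsql;

For DDL inside a function (DROP TABLE, CREATE TABLE ... AS, CREATE INDEX),
plain EXECUTE of the statement text works the same way.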
CS DBA writes:
> I have code that drops a table, re-creates it (based on a long set of
> joins) and then re-creates the indexes.
> It runs via psql in about 10 seconds. I took the code and simply
> wrapped it into a plpgsql function and the function version takes almost
> 60 seconds to run.
Hi All ;
I have code that drops a table, re-creates it (based on a long set of
joins) and then re-creates the indexes.
It runs via psql in about 10 seconds. I took the code and simply
wrapped it into a plpgsql function and the function version takes almost
60 seconds to run.
I always tho
On 10/26/11 8:47 AM, Sören Meyer-Eppler wrote:
> A fairly complex insert query on an empty destination table will run for
> an indefinite amount of time (we waited literally days for the query to
> complete). This does not happen every time we run the query but often.
> Now ordinarily I'd assume we
Sören Meyer-Eppler writes:
> A fairly complex insert query on an empty destination table will run for
> an indefinite amount of time (we waited literally days for the query to
> complete). This does not happen every time we run the query but often.
> Now ordinarily I'd assume
On Thu, Oct 27, 2011 at 2:15 PM, Samuel Gendler
wrote:
>
>
> There are definitely no bloated tables. The large tables are all
> insert-only, and old data is aggregated up and then removed by dropping
> whole partitions. There should be no bloat whatsoever. The OLTP side of
> things is pretty min
Shaun Thomas writes:
> On 10/27/2011 02:13 PM, Robert Haas wrote:
>> If I had to guess, I'd bet that the second one is trying to spool the
>> resultset in memory someplace and that's driving the machine into
>> swap.
> That would be my guess too. SELECT * on a 40-million row table is a
> *lot* d
>From: pgsql-performance-ow...@postgresql.org
>[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Samuel Gendler
>Sent: Thursday, October 27, 2011 12:47 PM
>To: pgsql-performance@postgresql.org
>Subject: [PERFORM] backups blocking everything
>
>I've got a large mixed-used database, with
On 10/27/2011 02:13 PM, Robert Haas wrote:
If I had to guess, I'd bet that the second one is trying to spool the
resultset in memory someplace and that's driving the machine into
swap.
That would be my guess too. SELECT * on a 40-million row table is a
*lot* different than getting the count,
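If the point of the SELECT * is to process all ~40 million rows client-side,
a cursor avoids spooling the whole result set at once (the fetch size here is
arbitrary):

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM dm_object_perm;
    FETCH FORWARD 10000 FROM big_cur;  -- repeat until no rows come back
    CLOSE big_cur;
    COMMIT;

In psql the same effect comes from \set FETCH_COUNT 10000, which makes psql
fetch the result in batches behind the scenes.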
On Thu, Oct 27, 2011 at 1:45 PM, Nicholson, Brad (Toronto, ON, CA) <
bnichol...@hp.com> wrote:
> >From: pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] On Behalf Of Samuel Gendler
> >Sent: Thursday, October 27, 2011 12:47 PM
> >To: pgsql-performance@postgr
What is a bit strange about this is that you can do this:
On Fri, Oct 7, 2011 at 6:04 AM, Giovanni Mancuso wrote:
> select count(*) from dm_object_perm;
>  count
> ----------
>  38928077
> (1 row)
>
But not this:
If I run "explain analyze select * from dm_object_perm;" it goes on for many
>
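Also worth remembering: EXPLAIN ANALYZE actually executes the statement, so
on a ~39-million-row SELECT * it pays the full cost of the scan (and of any
result spooling). A plain EXPLAIN shows the chosen plan without running the
query at all:

    EXPLAIN SELECT * FROM dm_object_perm;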
On Fri, Sep 30, 2011 at 2:56 PM, Tom Lane wrote:
> Josh Berkus writes:
>>> In principle, yeah, we could make it do that, but it seems like a likely
>>> source of maintenance headaches. This example is not exactly compelling
>>> enough to make me want to do it. Large OFFSETs are always going to
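For context, the standard alternative to a large OFFSET is keyset pagination:
seek on an indexed column instead of generating and discarding all the skipped
rows (table and column names here are hypothetical):

    -- OFFSET: the first 1,000,000 rows are produced and thrown away
    SELECT * FROM items ORDER BY id LIMIT 50 OFFSET 1000000;

    -- keyset: remember the last id from the previous page and seek past it
    SELECT * FROM items WHERE id > 1000050 ORDER BY id LIMIT 50;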
On Thu, Oct 27, 2011 at 11:47 AM, Samuel Gendler
wrote:
> I've got a large mixed-used database, with the data warehouse side of things
> consisting of several tables at hundreds of millions of rows, plus a number
> of tables with tens of millions. There is partitioning, but as the volume
> of dat
I've got a large mixed-used database, with the data warehouse side of things
consisting of several tables at hundreds of millions of rows, plus a number
of tables with tens of millions. There is partitioning, but as the volume
of data has risen, individual partitions have gotten quite large. Hard
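If the backups are pg_dump, one guess at the mechanism: pg_dump takes ACCESS
SHARE locks on every table it dumps, any DDL needing ACCESS EXCLUSIVE
(dropping a partition, for instance) queues behind those, and ordinary queries
then queue behind the waiting DDL. Ungranted lock requests are visible
directly (catalog column names per the 9.0-era release mentioned in this
digest):

    SELECT l.locktype, l.relation::regclass, l.mode, l.granted,
           a.procpid, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;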
On Thu, Oct 27, 2011 at 4:42 AM, Gregg Jaskiewicz wrote:
> What does 'select * from pg_stat_activity' say, more precisely - the
> "waiting" column.
It shows whether that particular process is waiting for some kind of
database-level lock to be granted.
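A minimal way to pull just that out (column names per the 9.0-era catalogs;
procpid and current_query were renamed in later releases):

    SELECT procpid, waiting, current_query
    FROM pg_stat_activity
    WHERE waiting;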
On 2011-10-27 09:42, Gregg Jaskiewicz wrote:
What does 'select * from pg_stat_activity' say, more precisely - the
"waiting" column.
Waiting is "false" for all rows. If I use pgadmin to look at the server
status all locks have been granted for hours.
Although the process does in fact use CPU
What does 'select * from pg_stat_activity' say, more precisely - the
"waiting" column.
We are running into a significant performance issue with
"PostgreSQL 9.0.4 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real
(Debian 4.4.5-8) 4.4.5, 64-bit"
(the version string pgAdmin displays).
A fairly complex insert query on an empty destination table will run for
an indefinite amount o
Hello, Jens
On 2011.10.26 13:23, Jens Reufsteck wrote:
I’ve got a lengthy query that doesn't finish in reasonable time (i.e.
10min+). I suspect that the query optimizer miscalculates the number of
rows for part of the query.
I'm sorry for the silly question, but have you tried analyze on your
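For reference, that would be something along these lines (the table name is
hypothetical), followed by re-checking estimated vs. actual row counts in
the plan:

    ANALYZE some_table;  -- refresh the planner's statistics
    EXPLAIN ANALYZE SELECT count(*) FROM some_table;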