On Wed, 2005-03-23 at 14:22 -0500, Keith Browne wrote:
Simon Riggs wrote:
The EXPLAINs you've enclosed are for SELECTs, yet your bug report
describes INSERTs as being the things that are slow.
[You may find better performance from using COPY]
We're starting with an empty database,
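The COPY suggestion above can be sketched as follows. This is a hedged illustration, not the poster's actual schema: table, column, and file names are hypothetical.

```sql
-- Instead of issuing one INSERT per row:
--   INSERT INTO measurements (id, val) VALUES (1, 42.0);
-- a single COPY loads the whole batch in one command, avoiding
-- per-statement parse/plan overhead (names here are hypothetical):
COPY measurements (id, val) FROM '/tmp/measurements.dat' WITH DELIMITER '\t';
```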
Simon Riggs [EMAIL PROTECTED] writes:
I think we should spawn a TODO item from this:
* Coerce FK lookups to always use an available index
No, we aren't doing that.
The correct TODO item is "Replan cached plans when table size has
changed a lot", which of course depends on having a framework to do
On Fri, 2005-03-25 at 01:58 -0500, Tom Lane wrote:
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
In that case there's a datatype mismatch between the referencing and
referenced columns, which prevents the index from being used for the
FK check.
Is creating such a foreign key a
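The mismatch described above can be reproduced with a minimal, hypothetical pair of tables; the fix is to make the referencing column's type match the referenced one. Table and column names are illustrative only.

```sql
-- Hypothetical illustration of the FK datatype mismatch:
CREATE TABLE parent (id bigint PRIMARY KEY);
CREATE TABLE child  (parent_id integer REFERENCES parent (id));
-- On the releases discussed here, the cross-type comparison in the
-- FK check can prevent use of the index on parent.id.  Matching the
-- types restores index use (ALTER ... TYPE is available from 8.0 on):
ALTER TABLE child ALTER COLUMN parent_id TYPE bigint;
```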
Hi,
I am thinking about how to continuously monitor the performance of a
PostgreSQL 8 database. I am interested in two things: (1) the growth of
tables with TOAST and indexes; and (2) the response time breakdown for a
query.
In Chapters 23 and 24 of the big manual, I found enough materials to
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to continue.
Other than spec compliance, you mean? SQL99 says
... The declared type of each referencing column shall be
comparable to the
On Fri, 2005-03-25 at 03:50 -0700, Karim Nassar wrote:
On Fri, 2005-03-25 at 10:18 +, Simon Riggs wrote:
When running with PostgreSQL 7.4 on a dual-CPU Athlon MP2400+ machine
with a gigabyte of RAM, running Debian Linux version 2.6.8.1, we were
able to insert all this data in 5-7
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to continue.
Other than spec compliance, you mean? SQL99 says
... The declared type of each
On Fri, 25 Mar 2005, Simon Riggs wrote:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to continue.
Other than spec compliance, you mean? SQL99 says
Karim,
I guess I was wondering if there is other general tuning advice for such
large table indexes, such as increasing statistics, etc.
Well, your index use problem is being explained by Tom, Stephan, and Simon;
basically, your FKed data types are incompatible for index-use purposes, so the
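The "increasing statistics" suggestion above looks like this in practice. A minimal sketch with hypothetical table and column names:

```sql
-- Raise the planner's statistics target for a heavily-indexed column
-- (default is 10 on these releases), then re-analyze so the new
-- target takes effect; names here are hypothetical:
ALTER TABLE big_table ALTER COLUMN ref_id SET STATISTICS 200;
ANALYZE big_table;
```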
Jack,
I am thinking about how to continuously monitor the performance of a
PostgreSQL 8 database. I am interested in two things: (1) the growth of
tables with TOAST and indexes;
This is queryable from the system tables, if you don't mind an approximation.
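A minimal sketch of that system-table query: pg_class stores a page count and row estimate per relation (tables, indexes, and TOAST tables alike), updated by VACUUM and ANALYZE, so the numbers lag reality between runs.

```sql
-- Approximate on-disk size per relation; relpages counts 8KB blocks.
-- relkind: 'r' = table, 'i' = index, 't' = TOAST table.
SELECT c.relname,
       c.relkind,
       c.relpages * 8 AS approx_kb,
       c.reltuples   AS approx_rows
FROM pg_class c
WHERE c.relkind IN ('r', 'i', 't')
ORDER BY c.relpages DESC;
```

Sampling this periodically (e.g. from cron) and diffing the results gives the growth trend asked about in (1).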
and (2) the response time breakdown
Jack,
This script can be used to focus tuning attention on the most important
issues. It reports a breakdown of total foreground response time into
four major categories: CPU usage, disk I/O, resource waits, and routine
latencies. These categories are broken down further into sub-categories,
Josh,
The description of the Oracle script is:
This script can be used to focus tuning attention on the most important
issues. It reports a breakdown of total foreground response time into
four major categories: CPU usage, disk I/O, resource waits, and routine
latencies. These categories are
It looks like the reltuples values are screwed up. Even though rows are
constantly being removed from the table, reltuples keeps going up. If I
understand correctly, that also makes the vacuum threshold go up, and we end
up in a vicious circle. Right after pg_autovacuum performed a vacuum analyze
On Fri, 2005-03-25 at 08:23 -0800, Stephan Szabo wrote:
On Fri, 25 Mar 2005, Simon Riggs wrote:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
I vote to make this an ERROR in 8.1 - I see little benefit in allowing
this situation to
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Other than spec compliance, you mean? SQL99 says
... The declared type of each referencing column shall be
comparable to the declared type of the corresponding referenced
column.
Tom had said SQL99
hmm, the value in reltuples should be accurate after a vacuum (or
vacuum analyze). If it's not, it's a vacuum bug, or something is going on
that isn't understood. If you or pg_autovacuum are running plain
analyze commands, that could explain the invalid reltuples numbers.
Was reltuples =
Matthew T. O'Connor matthew@zeut.net writes:
hmm, the value in reltuples should be accurate after a vacuum (or
vacuum analyze). If it's not, it's a vacuum bug, or something is going on
that isn't understood. If you or pg_autovacuum are running plain
analyze commands, that could explain the
Was reltuples = 113082 correct right after the vacuum?
No, there were about 31000 rows after the vacuum. I'm no expert, but tuples
= rows, right?
This is not a normal table, though, in the sense that it is only a
temporary holding ground, as I explained earlier. I create 5 records and
these
I wrote:
One thing that is possibly relevant here is that in 8.0 a plain VACUUM
doesn't set reltuples to the exactly correct number, but to an
interpolated value that reflects our estimate of the steady state
average between vacuums. I wonder if that code is wrong, or if it's
operating as
Another thing to check is whether the reltuples (and relpages!) that
autovacuum is reporting are the same as what's actually in the pg_class
row for the relation. I'm wondering if this could be a similar issue
to the old autovac bug where it wasn't reading the value correctly.
These values
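The cross-check suggested above can be done directly from psql. A minimal sketch; the table name is hypothetical, standing in for the queue table under discussion.

```sql
-- Compare the stored estimates in pg_class with an exact count
-- (table name is hypothetical):
SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'transfer_queue';

SELECT count(*) FROM transfer_queue;
```

If the two row figures diverge badly right after a VACUUM, that points at the estimation/reporting path rather than at actual table bloat.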
On Fri, 2005-03-25 at 13:47 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Other than spec compliance, you mean? SQL99 says
... The declared type of each referencing column shall be
comparable to the declared type of the
Tom Lane wrote:
Matthew T. O'Connor matthew@zeut.net writes:
hmm, the value in reltuples should be accurate after a vacuum (or
vacuum analyze). If it's not, it's a vacuum bug, or something is going on
that isn't understood. If you or pg_autovacuum are running plain
analyze commands, that
On Fri, 25 Mar 2005, Simon Riggs wrote:
On Fri, 2005-03-25 at 13:47 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2005-03-25 at 10:17 -0500, Tom Lane wrote:
Other than spec compliance, you mean? SQL99 says
... The declared type of each referencing column
Tom Lane wrote:
I wrote:
One thing that is possibly relevant here is that in 8.0 a plain VACUUM
doesn't set reltuples to the exactly correct number, but to an
interpolated value that reflects our estimate of the steady state
average between vacuums. I wonder if that code is wrong, or if it's
Stephan Szabo [EMAIL PROTECTED] writes:
On Fri, 25 Mar 2005, Simon Riggs wrote:
Could it be that because PostgreSQL has a very highly developed sense of
datatype comparison that we might be taking this to extremes? Would any
other RDBMS consider two different datatypes to be comparable?
We
On Fri, 2005-03-25 at 15:10 +, Simon Riggs wrote:
Karim: Did this happen? If not, can you drop and re-create and confirm
that you get the WARNING? If not, we have problems.
No. Nor do I think that I should. SERIAL is a shortcut for INTEGER, no? I
think there is some other (TBD) problem
On Fri, 2005-03-25 at 16:25 -0500, Tom Lane wrote:
Stephan Szabo [EMAIL PROTECTED] writes:
On Fri, 25 Mar 2005, Simon Riggs wrote:
Could it be that because PostgreSQL has a very highly developed sense of
datatype comparison that we might be taking this to extremes? Would any
other RDBMS
On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:
2. Dead tuples don't have that much influence on scan costs either, at
least not once they are marked as known-dead. Certainly they shouldn't
be charged at full freight.
Yes, minor additional CPU time, but the main issue is when the dead
Simon Riggs [EMAIL PROTECTED] writes:
On Fri, 2005-03-25 at 15:22 -0500, Tom Lane wrote:
2. Dead tuples don't have that much influence on scan costs either, at
least not once they are marked as known-dead. Certainly they shouldn't
be charged at full freight.
Yes, minor additional CPU time,
Otto Blomqvist wrote:
This table is basically a queue full of records waiting to get transferred
over from our 68030 system to the PG database. The records are then moved
into folders (using a trigger), like file_92_myy, depending on what month the
record was created on the 68030. During normal
I'm not sure if autovacuum could be taught to do that --- it could
perhaps launch a vacuum as soon as it notices a large fraction of the
table got deleted, but do we really want to authorize it to launch
VACUUM FULL? It'd be better to issue the vacuum synchronously
as part of the batch
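Issuing the vacuum synchronously at the end of the batch job might look like the sketch below. Table and column names are hypothetical; note that VACUUM cannot run inside a transaction block, so it goes after the COMMIT.

```sql
-- End of the hypothetical batch job: purge the processed queue rows,
-- then reclaim the space immediately rather than waiting for autovacuum.
BEGIN;
-- ... move processed rows into their monthly folder table ...
DELETE FROM transfer_queue WHERE processed;
COMMIT;

-- Must be outside any transaction block:
VACUUM ANALYZE transfer_queue;
```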
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
I'm not sure if autovacuum could be taught to do that --- it could
perhaps launch a vacuum as soon as it notices a large fraction of the
table got deleted, but do we really want to authorize it to launch
VACUUM FULL?
One problem
Tom Lane wrote:
Otto Blomqvist wrote:
This table is basically a queue full of records waiting to get transferred
over from our 68030 system to the PG database. The records are then moved
into folders (using a trigger), like file_92_myy, depending on what month the
record was created on the
Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
I'm not sure if autovacuum could be taught to do that --- it could
perhaps launch a vacuum as soon as it notices a large fraction of the
table got deleted, but do we really want to authorize it to launch
VACUUM
OK, thanks a lot for your time, guys! I guess my table is pretty unusual,
and that's why this problem has not surfaced until now. Better late than
never ;) I'll cron a manual VACUUM FULL on the table.
Tom Lane [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Otto Blomqvist wrote:
Situation: A 24/7 animal hospital (100 employees) runs their business
on CentOS 3.3 (RHEL 3) Postgres 7.4.2 (because they have to) off a 2-CPU
Xeon 2.8GHz, 4GB of RAM, (3) SCSI disks in RAID 0 (zcav value 35MB per
sec). The database is 11GB, comprised of over 100 tables and indexes from 1MB
to 2GB
Steve Poe [EMAIL PROTECTED] writes:
Situation: A 24/7 animal hospital (100 employees) runs their business
on CentOS 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)
[ itch... ] Surely they could at least move to 7.4.7 without pain.
There are serious data-loss bugs known in 7.4.2.
You can purchase a whole new dual Opteron 740, with 6 gigs of RAM, a
case to match, and 6 74GB Ultra320 SCA drives for about $7k.
I know because that's what I bought one for 2 weeks ago, using Tyan's
dual board.
If you need some details and are willing to go that route, let me
Steve Poe wrote:
Situation: A 24/7 animal hospital (100 employees) runs their
business on CentOS 3.3 (RHEL 3) Postgres 7.4.2 (because they have to)
off a 2-CPU Xeon 2.8GHz, 4GB of RAM, (3) SCSI disks in RAID 0 (zcav value
35MB per sec). The database is 11GB, comprised of over 100 tables and
indexes
Tom,
From what I understand, the vendor used ProIV for development. When
they attempted to use 7.4.3, they had ODBC issues and something else I
honestly don't know, but I was told that data was not coming through
properly. They're somewhat at the mercy of the ProIV people to give them
the stamp
There is clear benefit from forcing them to be the same. In logical data
terms, they *should* be the same. I don't check fruit.apple_grade
against fruit_type.orange_grade. When would I want to make a check of
that nature? If there is a reason, that's great; let's keep the status quo
then.
I respect the
I respect the