I investigated this issue a bit more. The -j option is slowing down BLOB
restores: it's about 1000x slower if you specify this option.
Does anybody plan to fix it?
I have encountered a problem while restoring the database. There is a
table that contains XML data (BLOB), ~3,000,000 records, ~
Robert Haas robertmh...@gmail.com writes:
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not really sure why the PROCLOCKs need to be in a
hash table anyway - if we know the PROC and LOCK we can surely look up
the PROCLOCK pretty inexpensively by following
Vlad Arkhipov arhi...@dc.baikal.ru writes:
I investigated this issue a bit more. The -j option is slowing down BLOB
restores: it's about 1000x slower if you specify this option.
Are you by any chance restoring from an 8.3 or older pg_dump file made
on Windows? If so, it's a known issue.
Can you help me understand how to optimize the following? There's a
subplan which in this case returns 3 rows,
but it is really expensive:
=
explain analyze SELECT contexts.context_key
FROM contexts
JOIN articles
On Mon, 2010-12-06 at 23:52 -0500, Greg Smith wrote:
Jignesh Shah wrote:
On Tue, Dec 7, 2010 at 1:55 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I could have sworn we'd refactored that to something like
bool ThereAreAtLeastNActiveBackends(int n)
which could drop out of the loop as
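A hypothetical sketch of that refactoring, in Python for illustration only (the real routine would be C inside PostgreSQL, and the function name here is taken from the message above; the data structures are invented):

```python
# Sketch of the early-exit idea: stop scanning as soon as n active
# backends have been seen, instead of always counting all of them.
def there_are_at_least_n_active_backends(backends, n):
    """Return True as soon as n active entries are found."""
    if n <= 0:
        return True
    count = 0
    for backend in backends:
        if backend.get("active"):
            count += 1
            if count >= n:
                return True  # drop out of the loop early
    return False

procs = [{"active": True}, {"active": False}, {"active": True}]
print(there_are_at_least_n_active_backends(procs, 2))  # True
print(there_are_at_least_n_active_backends(procs, 3))  # False
```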
Bryce,
The two queries are different:
You are looking for contexts.context_key in first query
WHERE (contexts.parent_key = 392210
OR contexts.context_key IN
(SELECT collection_data.context_key
FROM collection_data
WHERE collection_data.collection_context_key =
Shrirang Chitnis wrote:
Bryce,
The two queries are different:
Ah, that was due to a mistake. The first version, with the hashed subplan, is from
production.
The second version should have read:
production=# SELECT
Hello,
Are the tables freshly analyzed, with a sufficient default_statistics_target?
You may try to get a better plan by rewriting the query as a UNION to get
rid of the OR clause.
Something like (not tested):
SELECT contexts.context_key
FROM contexts
JOIN articles
ON
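The suggestion above is truncated; a small self-contained demonstration of why such a rewrite is safe (using sqlite3 purely for illustration, with toy data and column names borrowed from the quoted query):

```python
import sqlite3

# Toy schema loosely modeled on the thread's tables (data invented) to
# show that the OR + IN filter and the UNION rewrite return the same keys.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contexts (context_key INTEGER PRIMARY KEY, parent_key INTEGER);
CREATE TABLE collection_data (context_key INTEGER, collection_context_key INTEGER);
INSERT INTO contexts VALUES (1, 392210), (2, 392210), (3, 999), (4, 999);
INSERT INTO collection_data VALUES (3, 392210), (4, 555);
""")

or_query = """
SELECT context_key FROM contexts
WHERE parent_key = 392210
   OR context_key IN (SELECT context_key FROM collection_data
                      WHERE collection_context_key = 392210)
"""
union_query = """
SELECT context_key FROM contexts WHERE parent_key = 392210
UNION
SELECT context_key FROM contexts
WHERE context_key IN (SELECT context_key FROM collection_data
                      WHERE collection_context_key = 392210)
"""
a = sorted(r[0] for r in conn.execute(or_query))
b = sorted(r[0] for r in conn.execute(union_query))
print(a == b)  # True: both return keys 1, 2, 3
```

In PostgreSQL the point of the rewrite is that each UNION arm can use its own index, instead of the planner having to handle the OR in one pass.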
Shrirang Chitnis shrirang.chit...@hovservices.com writes:
Bryce,
The two queries are different:
I suspect the second one is a typo and not what he really wanted.
WHERE (contexts.parent_key = 392210
OR contexts.context_key IN
(SELECT collection_data.context_key
FROM
Another point: would a conditional index help?
on articles (context_key) where indexed
regards,
-Original Message-
From: pgsql-performance-ow...@postgresql.org on behalf of Marc Mamin
Sent: Wed 12/8/2010 9:06
To: Shrirang Chitnis; Bryce Nesbitt;
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Shrirang Chitnis shrirang.chit...@hovservices.com writes:
Bryce,
The two queries are different:
I suspect the second one is a typo and not what he really wanted.
WHERE (contexts.parent_key = 392210
OR contexts.context_key IN
(SELECT
Subject: AW: [PERFORM] hashed subplan 5000x slower than two sequential operations
Marc Mamin wrote:
Hello,
Are the tables freshly analyzed, with a sufficient
default_statistics_target?
autovacuum = on                 # Enable autovacuum subprocess?  'on'
autovacuum_naptime = 5min       # time between autovacuum runs
Marc Mamin wrote:
Another point: would a conditional index help?
on articles (context_key) where indexed
no.
production=# select count(*), indexed from articles group by indexed;
  count | indexed
--------+---------
 517433 | t
 695814 | f
--
Sent via pgsql-performance mailing list
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Robert Haas robertmh...@gmail.com writes:
Yeah, that was my concern, too, though Tom seems skeptical (perhaps
rightly). And I'm not really sure why the PROCLOCKs need to be in a
hash table anyway - if we know the PROC and LOCK we can surely look up
the
Robert Haas robertmh...@gmail.com writes:
2010/12/8 Tom Lane t...@sss.pgh.pa.us:
Now, it's possible that you could avoid *ever* needing to search for a
specific PROCLOCK, in which case eliminating the hash calculation
overhead might be worth it.
That seems like it might be feasible. The
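A conceptual sketch of the two lookup strategies under discussion, in Python for illustration (structures are heavily simplified; the names only loosely follow PostgreSQL's lock manager):

```python
# Strategy A: hash the (proc, lock) pair and probe a shared hash table.
# Strategy B: walk the PROC's own list of held PROCLOCKs, no hashing.
class ProcLock:
    def __init__(self, proc, lock):
        self.proc, self.lock = proc, lock

proclocks_by_key = {}   # strategy A: shared hash table keyed by (proc, lock)
proc_held = {}          # strategy B: per-proc list of held PROCLOCKs

def register(proc, lock):
    pl = ProcLock(proc, lock)
    proclocks_by_key[(proc, lock)] = pl
    proc_held.setdefault(proc, []).append(pl)
    return pl

def find_via_hash(proc, lock):
    return proclocks_by_key.get((proc, lock))

def find_via_proc_list(proc, lock):
    # No hash computation; a proc usually holds few locks, so this is cheap.
    for pl in proc_held.get(proc, []):
        if pl.lock == lock:
            return pl
    return None

pl = register("proc1", "lockA")
print(find_via_hash("proc1", "lockA") is pl)       # True
print(find_via_proc_list("proc1", "lockA") is pl)  # True
```

If a specific PROCLOCK never needs to be searched for directly, strategy B avoids the hash calculation entirely, which is the saving being weighed above.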
I need to build a new high performance server to replace our current production
database server.
The current server is a SuperMicro 1U with 2 RAID-1 containers (one for data,
one for log, SAS - data is 600GB, Logs 144GB), 16GB of RAM, running 2 quad core
processors (E5405 @ 2GHz), Adaptec 5405
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
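Rough order-of-magnitude arithmetic behind that claim (the drive figures are typical published numbers of the era, not measurements from this thread):

```python
# One 15k RPM SAS drive handles on the order of 180 random IOPS.
sas_15k_iops = 180
drives = 16
raid10_read_iops = sas_15k_iops * drives         # reads can hit every spindle
raid10_write_iops = sas_15k_iops * drives // 2   # each write goes to a mirror pair
ssd_iops = 30000                                 # a 2010-era SLC SSD, random reads

print(raid10_read_iops, raid10_write_iops, ssd_iops)  # 2880 1440 30000
```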
--- On Wed, 12/8/10, Benjamin Krajmalnik k...@servoyant.com wrote:
From: Benjamin Krajmalnik k...@servoyant.com
Subject: [PERFORM] Hardware recommendations
To:
The hardware it
is running on is fairly good, dual Xeon CPUs, 4 GB of RAM, Raid 5.
For a database you'd want to consider replacing the RAID5 with a RAID1 (or
RAID10). RAID5 is slow for small random updates, which are common in
databases. Since you probably have enough harddisks anyway, this
Sent from my android device.
-Original Message-
From: Benjamin Krajmalnik k...@servoyant.com
To: pgsql-performance@postgresql.org
Sent: Wed, 08 Dec 2010 17:14
Subject: [PERFORM] Hardware recommendations
John,
The platform is a network monitoring system, so we have quite a lot of
inserts/updates (every data point has at least one record insert as well as at
least 3 record updates). The management GUI has a lot of selects. We are
refactoring the database to some degree to aid in the
Ben,
It would help if you could tell us a bit more about the read/write mix and
transaction requirements. *IF* you are heavy on writes I would suggest moving off
the RAID1 configuration to a RAID10 setup. I would highly suggest looking at
SLC based solid state drives or if your budget has legs,
On Thu, Dec 9, 2010 at 01:26, Andy angelf...@yahoo.com wrote:
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
Are there any that don't risk your data on power loss, AND are cheaper
than SAS RAID 10?
Regards,
If you are IO-bound, you might want to consider using SSD.
A single SSD could easily give you more IOPS than 16 15k SAS in RAID 10.
Are there any that don't risk your data on power loss, AND are cheaper than SAS RAID 10?
Vertex 2 Pro has a built-in supercapacitor to save data on
On Wed, Dec 8, 2010 at 5:03 PM, Benjamin Krajmalnik k...@servoyant.com wrote:
John,
The platform is a network monitoring system, so we have quite a lot of
inserts/updates (every data point has at least one record insert as well as
at least 3 record updates). The management GUI has a lot of
08.12.2010 22:46, Tom Lane writes:
Are you by any chance restoring from an 8.3 or older pg_dump file made
on Windows? If so, it's a known issue.
No, I tried Linux only.
Not without a complete reproducible example ... and not at all if it's
the known problem. The fix for that is to update
Is there any performance penalty when I use ODBC library vs using libpq?
Best Regards,
Divakar
,--- You/Divakar (Wed, 8 Dec 2010 20:31:30 -0800 (PST)) *
| Is there any performance penalty when I use ODBC library vs using libpq?
In general, yes.
In degenerate cases when most of the work happens in the server, no.
You need to measure in the context of your specific application.
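A minimal timing-harness sketch for that "measure it yourself" advice. `run_query` is a stand-in callable; in a real comparison it would execute the same statement once through the ODBC driver and once through a libpq-based interface:

```python
import time

def mean_seconds_per_call(run_query, iterations=1000):
    """Time a callable and return the mean seconds per invocation."""
    start = time.perf_counter()
    for _ in range(iterations):
        run_query()
    return (time.perf_counter() - start) / iterations

# Dummy workload standing in for a driver round trip:
per_call = mean_seconds_per_call(lambda: sum(range(100)))
print(per_call >= 0.0)  # True
```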
--
So it means there will be a visible impact if the nature of the DB interaction is
insert/select. We do that mostly in my app.
The performance difference would be negligible if the query is server-intensive,
where execution time is far more than the time taken by e.g. the communication
interface or transaction
Vlad Arkhipov arhi...@dc.baikal.ru writes:
08.12.2010 22:46, Tom Lane writes:
Are you by any chance restoring from an 8.3 or older pg_dump file made
on Windows? If so, it's a known issue.
No, I tried Linux only.
OK, then it's not the missing-data-offsets issue.
I think you can reproduce
,--- You/Divakar (Wed, 8 Dec 2010 21:17:22 -0800 (PST)) *
| So it means there will be visible impact if the nature of DB interaction is
| DB insert/select. We do that mostly in my app.
You can't claim a visible impact unless you can measure it in your
specific application.
Let's say ODBC
hmm
If I understand it correctly, your argument is valid from a performance point of
view.
But in practical scenarios, it would make more sense to use ODBC if the
difference is only 5% or so, because it opens up so many choices of databases
for me.
Do we have any published data in this area?
Hello
2010/12/9 Divakar Singh dpsma...@yahoo.com:
hmm
If I understand it correctly your argument is valid from performance point of
view.
But in practical scenarios, it would make more sense to do ODBC if the
difference is only 5% or so, because it opens up so many choices of
databases for