If you hit some other slow query, you might want to report it in
the manner suggested here:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...bottleneck. (That's when I learned about hint bits.)
You should also make sure that autovacuum is aggressive enough on
the new cluster. Without that, any performance benefit from the
above will slowly disappear.
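To make autovacuum more aggressive, a sketch (values are illustrative
assumptions, not from this thread; on 9.4+, ALTER SYSTEM avoids editing
postgresql.conf by hand):

  ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;  -- vacuum at 5% dead rows
  ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;    -- allow more work per cycle
  ALTER SYSTEM SET autovacuum_naptime = '15s';             -- check for work more often
  SELECT pg_reload_conf();                                 -- apply without a restart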
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
a LAN for storage.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...concurrent clients,
but perhaps not as good at cranking out a single big report or
running dump/restore.
Yes, it is quite possible that the new machine could be faster at
some things and slower at others.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
the results of both of the
above-mentioned tests for the two environments.
Just one observation, based on the limited data -- a higher network
latency between the client and the database might explain what
you've presented. I would check that, too.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...moved to a temporary workspace and this is "diffed"
against the existing permanent copy, which is modified to match the
new data through simple DML statements. No explicit index rebuild
is needed; entries are adjusted as part of running the DML.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...different options, but I suspect that you
will see better performance by putting each database on a separate
cluster and using cpusets (or the equivalent) so that each cluster
uses a subset of the 160 cores and the RAM directly attached to the
subset.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...monitor for the start of such an event and capture the full
contents of pg_stat_activity and pg_locks during that 2 minute
window.
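A crude way to capture such snapshots, repeated every few seconds during
the event (table names are mine, not from the thread):

  CREATE TABLE activity_snap AS
    SELECT now() AS snap_time, * FROM pg_stat_activity;
  CREATE TABLE locks_snap AS
    SELECT now() AS snap_time, * FROM pg_locks;
  -- on each later pass, append instead:
  INSERT INTO activity_snap SELECT now(), * FROM pg_stat_activity;
  INSERT INTO locks_snap SELECT now(), * FROM pg_locks;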
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...you have set a minimum for something greater than the maximum for
that same thing. That should have no bearing on the performance issue
raised in the thread, but you might want to fix it anyway.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...using psql '-f'.
... but I would be surprised if that happened when reading from a file.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
One way of testing this theory would be to restore a copy
and reindex all indexes used by the problem query to see if that
fixes it. If it does, close examination of the corrupted index
might provide clues about how the corruption occurred.
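For example (object names hypothetical):

  -- on the restored copy, rebuild one suspect index:
  REINDEX INDEX problem_table_some_idx;
  -- or rebuild every index on the table in one go:
  REINDEX TABLE problem_table;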
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
http://www.postgresql.org/message-id/flat/55783940.8080...@wi3ck.info#55783940.8080...@wi3ck.info
The short version is that in existing production versions you can
easily run into such symptoms when you get to 8 or more CPU
packages. The problem seems to be solved in the development
versions of 9.5 (with changes not suitable for back-patching).
...only block them for < 3 seconds.
Visibility hinting and/or hot pruning?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...autovacuum and/or background writer settings. Various
OS settings may matter, too.
To get a handle on all this, it might be worth looking for Greg
Smith's book on PostgreSQL high performance.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...write-through).
Reduce the OS vm.dirty_background_bytes setting to less than the
size of the persistent write cache. Make sure that vm.dirty_ratio
is at least 20, possibly higher. Configure the PostgreSQL
background writer to be more aggressive. If those don't do it,
reduce the size of shared_buffers.
This can be caused by failure to disable
transparent huge page support. The larger shared_buffers is
configured, the bigger the problem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
OR (EXTRACT(MONTH FROM dob) = 7
AND EXTRACT(DAY FROM dob) <= 4);
The first query I showed is faster than either of the alternatives,
especially if there is an index on dob.
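The "first query" is not captured in this excerpt; the general shape of
the faster, index-friendly form is a single range test on dob, something
like this sketch (table name and cutoff date are assumptions):

  -- a b-tree index on dob can satisfy a direct range comparison:
  SELECT * FROM person
    WHERE dob <= DATE '1994-07-04';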
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...how much time the top-level nested loop took to do its
work.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t consider changing it if we have no other choice...
Regards,
Kevin
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, 31 March 2015 15:59
To: Kevin Viraud
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Weird CASE WHEN behaviour causing query to b
...I don't have control over this one and I'm using it as is.
This is only my debugging query.
Best regards,
Kevin
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: Tuesday, 31 March 2015 11:09
To: Kevin Viraud
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM
...I don't really understand it. If I add more conditions to the query (say
1 or 2), it also gets slower. And it's not a few ms, it is around 5
seconds or so (which is huge considering my example only takes 1/500 of my
data, with LIMIT).
Before we deviate from the problem I have (namely, the sudden drop in
performance), let me clarify a few things about this query:
- The purpose is not to rewrite it, with a join or whatever; the
CASE WHEN actually comes from a function which is auto-generated by another
app we have.
- My example is pretty simple and the regular expressions could be
replaced by equality tests; the real CASE WHEN query contains far more
complicated regular expressions.
- This is a subset of my CASE WHEN; it is much bigger, and I cut it at
the "bottleneck" point for this post.
Thanks a lot.
Best Regards,
Kevin
...an absolute difference test that needs to be met in addition to the percentage
difference, as kind of a "safety" on this foot-gun.
I'm not sold on this as being a good idea, and had not been
planning on raising it without further research; but since it plays
into this other scenario it seems worth mentioning.
determine good settings for the new server.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Reading a large number of index tuples and chasing
them to the heap just to find that the tuples are not visible is
about as expensive as if they were visible *for the index scan step
itself*. I wonder whether there would be any way to allow the
index scan cost to be based on the work it has to do while somehow
...and see what might apply, and if you still have a problem
pick a specific slow-running query and use the process described
here:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
If you look up any
case, and click the "Court Record Events" button, it will search a
table with hundreds of millions of rows. The table is not
partitioned, but has several indexes on it which are useful for
queries such as the one that is used when you click the button.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
4..12.124 rows=24 loops=1)
Index Cond: (word ~~* '%john%'::text)
Planning time: 0.392 ms
Execution time: 12.252 ms
(7 rows)
Note that a trigram index is case-insensitive; doing a
case-sensitive search requires an extra Recheck node to eliminate
the rows that match only case-insensitively.
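The setup behind a plan like that is a pg_trgm index; a sketch (the
table name is assumed, the column name comes from the plan):

  CREATE EXTENSION IF NOT EXISTS pg_trgm;
  CREATE INDEX words_word_trgm ON words USING gin (word gin_trgm_ops);
  -- case-insensitive pattern match, served by the trigram index:
  SELECT * FROM words WHERE word ILIKE '%john%';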
When there are two ways to write a query that are logically
equivalent, it is better to put the AND at the higher level than
the OR. On the other hand, why not simply write it as:
select *
from commons.financial_documents fd
where (fd.creation_time, fd.financial_document_id)
< ('2011-11-07 10
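The literal above is cut off in the archive; a complete version of the
row-comparison paging pattern looks like this sketch (the values, and
the ORDER BY / LIMIT, are my assumptions about the use case):

  SELECT *
    FROM commons.financial_documents fd
    WHERE (fd.creation_time, fd.financial_document_id)
          < ('2011-11-07 10:00:00', 100)
    ORDER BY fd.creation_time DESC, fd.financial_document_id DESC
    LIMIT 25;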
...If it's the second, you might want to
make autovacuum more aggressive.
To get more specific advice, you may want to read this page and
follow the advice there:
https://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Björn Wittich wrote:
> I do not want the db server to prepare the whole query result at
> once, my intention is that the asynchronous retrieval starts as
> fast as possible.
Then you probably should be using a cursor.
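A sketch of the cursor approach (names and batch size are placeholders):

  BEGIN;
  DECLARE big_rows NO SCROLL CURSOR FOR
    SELECT * FROM some_large_table;   -- the real query goes here
  FETCH 1000 FROM big_rows;           -- repeat until it returns no rows
  CLOSE big_rows;
  COMMIT;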
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...perhaps to 0.03.
You might want to play with the above, and if you still have a
problem, read this page and post with more detail:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
> Is there some column level setting I can set?
The statistics looked pretty accurate, so that shouldn't be necessary.
Alex Goncharov wrote:
> Kevin Grittner wrote:
>> The rows will all be in the table, but not visible to any other
>> transaction.
>
> How much data can I fit there while doing COPY? Not 1 TB?
As has already been said, why not? This is not some special
section of the
feed -- how will this all error out?
The rows will all be in the table, but not visible to any other
transaction. Autovacuum will clean them out in the background, but
if you want to restart your load against an empty table it might be
a good idea to TRUNCATE that table; it will be a lot faster.
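A sketch of the restart (table name and file path hypothetical; the COPY
path is read on the server):

  TRUNCATE TABLE target_table;  -- instantly discards the dead rows from the failed load
  COPY target_table FROM '/path/to/data.csv' WITH (FORMAT csv);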
Kevin Grittner wrote:
> Dave Owens wrote:
>
>> I now have 8 hours worth of snapshots from pg_stat_activity and
>> pg_locks (16 snapshots from each table/view). I have turned off
>> collection at this point, but I am still able to query pg_locks
>
> Could you take t
d transactions, they are probably from
the summarization. But you would not normally accumulate much
there unless you have a long-running transaction which is not
flagged as READ ONLY.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Dave Owens wrote:
> On Tue, Aug 19, 2014 at 11:01 AM, Kevin Grittner wrote:
>> CREATE TABLE activity_snap_1 AS SELECT * FROM pg_stat_activity;
> Would the you or the list be interested in snapshots of pg_locks as well?
Most definitely! I'm sorry that I copied/pasted the
...and increment the number for each subsequent run.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
It's an ugly workaround, but it might get you
into better shape. If that does work, it's good evidence that we
should tweak those heuristics.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...Since locks can be kept until an overlapping
transaction completes, a single long-running transaction can bloat
the lock count.
Also, could you show us the output from:
SELECT version();
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Tom Lane wrote:
> Kevin Grittner writes:
>> Dave Owens wrote:
>>> max_connections = 450 ...we have found that we run out of shared
>>> memory when max_pred_locks_per_transaction is less than 30k.
>
>> It gathers the information in memory to return for all those locks.
so
it can spill to disk when it gets to be more than work_mem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...non-default configuration
settings. In particular, I'm curious whether there is an index on
the message_id column of origo_email_delivery.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
It is possible to reduce the rate of false
positives, which has largely gone undone so far due to a general
lack of problem reports which could not be solved
through tuning. If you have such a case, it would be interesting
to have all relevant details, so that we can target which of the
many e
Mark Kirkwood wrote:
> On 12/07/14 01:19, Kevin Grittner wrote:
>>
>> It might be worth a test using a cpuset to interleave OS cache and
>> the NUMA patch I submitted to the current CF to see whether this is
>> getting into territory where the patch makes a bigger difference.
It should do much better than using numactl --interleave
because work_mem and other process-local memory would be allocated
in "near" memory for each process.
http://www.postgresql.org/message-id/1402267501.4.yahoomail...@web122304.mail.ne1.yahoo.com
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...the outer (first-filled) tracks on a disk drive and the
inner tracks. One of the reasons performance falls as a drive
fills is that the OS is compelled to use slower and slower portions
of the disk. Part of the benefit you are seeing might be due to
freeing "fast" tracks and data being relocated there.
If you have a way to reproduce this from a new
cluster, please share it. That always makes diagnosis much easier.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
er than
for their semantic merit takes some effort.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ons;
select * from pg_prepared_xacts;
Thanks.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Sorry for the delay; back on this, and thanks for the response.
On Mar 11, 2014, at 6:23 PM, Tom Lane wrote:
> "Murphy, Kevin" writes:
>> Synopsis: 8-table join with one "WHERE foo IN (...)" condition; works OK
>> with fewer
>> than 5 items in the IN list.
...wondering what plays a role in this switch of plans (or the
unanticipated relative slowness of the N=5 plan). TIA for any wisdom; I've
finally
made a commitment to really delve into PG. -Kevin
1. Queries and plans
2. Answers to standard questions as per
http://wiki.postgresql.org/wiki/Guide_to_
In other words, setting this too high leads to
unstable performance. It looks better than a lower setting until
too many users hit Enter at about the same time, causing
performance to collapse for a while.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
what I
consider to be rote good practice, tried it, and it solved the
problem.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...results with a
graph here:
http://www.postgresql.org/message-id/4b71358e02250002f...@gw.wicourts.gov
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...both minor and major releases. If a little
sensible tuning of cost factors to better match reality doesn't do
it for you, you might want to consider an upgrade.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...ms per
page read in my experience on Linux, and I remember benchmarking the
same application hitting PostgreSQL on the same hardware as about
30% faster on Linux than on Windows, so that *almost* makes up for
the difference.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
jacket41142 wrote:
> [ subject issue in detail ]
Please review this thread:
http://www.postgresql.org/message-id/flat/CAPNY-2Utce-c+kNTwsMCbAk58=9myeaevitxt9lo7r1k77j...@mail.gmail.com#CAPNY-2Utce-c+kNTwsMCbAk58=9myeaevitxt9lo7r1k77j...@mail.gmail.com
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
tsv @@ to_tsquery('english', 'provinces & distance');
QUERY PLAN
-----
Bitmap Heap Scan on war_and_peace (cost=40.00..44.02 rows=1 width=115)
(actual time=0.080..0.080 rows=1 loops=1)
Recheck Cond: ((tsv @@ '''ladi
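The setup that produces a plan like the one above is a GIN index on the
tsvector column; a sketch (index name is my own):

  CREATE INDEX war_and_peace_tsv_idx ON war_and_peace USING gin (tsv);
  SELECT * FROM war_and_peace
    WHERE tsv @@ to_tsquery('english', 'provinces & distance');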
salah jubeh wrote:
> The hardware is pretty good: I have 8 CPUs of Intel(R) Core(TM)
> i7, 2.4 GHz, and 16 GiB of RAM. Is there any configuration
> parameter that can lead to this issue?
What OS?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...and extract(year from tanggal) = '2013')
... not:
where (jualid is not null or returjualid is not null)
and extract(year from tanggal) = '2013'
AND has higher priority than OR; so if you want to limit by year
from tanggal even when jualid is not null, you must use
parentheses.
ok like Robert did either, if you read the whole
message. In fact, he also questioned why index tuples which would
need to be read if we process from that end of the index don't
matter for purposes of cost estimation.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
The plan looks reasonable to me; it looks like
you need more RAM to cache data if you want better speed.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...configured to be aggressive enough. Another possible
cause is a transaction which has been left open for too long. Look
at pg_stat_activity and pg_prepared_xacts for xact_start or
prepared more than an hour old.
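For example (the one-hour threshold mirrors the advice above; the
pg_stat_activity column set assumes 9.2 or later):

  SELECT pid, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '1 hour';
  SELECT gid, prepared, owner, database
    FROM pg_prepared_xacts
    WHERE prepared < now() - interval '1 hour';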
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...raw connect/disconnect speed. I would
benchmark RAM using STREAM and disk using bonnie++.
You might want to review this page, and post a more detailed report
to the pgsql-performance list:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Posting to multiple lists is generally considered bad form.
...visibility of the row
related to the most extreme index entry? Should we even go to the
heap during the plan phase?
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Jim Nasby wrote:
> If you setup some form of replication it's very easy to move to
> larger servers as you grow. I'm sure that when Kevin moved their
> database it was a complete non-event.
Yeah, replication was turned on for the new server in addition to
the old one. When e
You should boost autovacuum_max_workers until that problem is
solved.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
"every time I delete a row, delete it immedialty and
> don't take care of other transactions" ?
You can configure autovacuum to be more aggressive, or you could
run VACUUM statements.
> Do you have any suggestion for me?
8.4 is getting pretty old; there have been a lot of autovacuum improvements since then.
allow
people to give the most helpful advice:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
You can poke around in this application here, if you like:
http://wcca.wicourts.gov/
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...indices on user_id, owner_id, email, and significant.
Have you tried those queries with an index like this?:
CREATE INDEX contacts_owner_null_user
ON contacts (owner_id)
WHERE user_id IS NULL;
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
my system will only ever have just one
> phone number. Hence the JOIN to "phone_numbers" versus the column
> in "my_users".
In looking it over, nothing jumped out at me as a problem. Are you
having some problem with it, like poor performance or getting
results different from what you expect?
...an *additional* 900 GB of RAM which would be needed
to avoid problems.
Reducing connections through a pooler is strongly indicated, and
you may still need to reduce work_mem or temp_buffers.
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...less volatile and easier to tune
with cpu_tuple_cost increased. I just always start by bumping
that to 0.03.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...*for that connection for as long as the connection lasts*.
So, for 900 connections, that could be 112.5 GB. I would expect to
see performance decrease and eventually completely tank as more
connections reserved memory for this purpose.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
at least without hours digging in the source code).
> On Jun 25, 2013, at 6:20 AM, Kevin Grittner wrote:
>> Ben wrote:
>>
>>> PostgreSQL 9.1.1 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)
>>> 4.6.2, 64-bit
>>
>> Consider applying the latest bug-fix releases.
...index usage in queries like your example.)
Do you get a different plan if you set cpu_tuple_cost = 0.03? How
about 0.05? You can set this just for a single connection and run
explain on the query to do a quick check.
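The quick check could look like this (the pg_class query is just a
runnable stand-in for the real one):

  SET cpu_tuple_cost = 0.03;  -- session-local; other connections are unaffected
  EXPLAIN SELECT * FROM pg_class WHERE relname LIKE 'pg%';
  SET cpu_tuple_cost = 0.05;  -- then try the second value
  EXPLAIN SELECT * FROM pg_class WHERE relname LIKE 'pg%';
  RESET cpu_tuple_cost;       -- back to the configured value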
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>> checkpointer process
>> 3975 postgres 20 0 8051m 895m 891m S 0 2.8 0:04.98 postgres: writer
>> process
>> 3976 postgres 20 0 8051m 9m 9072 S 0 0.0 0:35.17 postgres: wal
>> writer process
>> 3977 postgres 20 0 70932 3352 716 S 0 0.0
time=0.011..0.047 rows=16 loops=1)
> Output: c1.id, c1.blocked, c1.first_name, c1.last_name,
>c1.owner_id, c1.user_id
> Index Cond: ((c1.user_id IS NOT NULL) AND (c1.user_id = 24))
> Total runtime: 0.224 ms
So, it looks like you can get about 3000 to 4000
...it is not only
globally adjustable; you can override the setting for individual
columns -- again, we don't go to the trouble of supporting that
without a good reason.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Since you're doing an inner join to data_area and that has a
foreign key to area, there should always be a match anyway, right?
The optimizer doesn't recognize that, so it can't start from the
area and just match to the appropriate points.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...individual columns as you find plans which
benefit. Don't set it right at the edge of the tipping point, but
don't automatically jump to 5000 every time either.
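A per-column override looks like this (names hypothetical):

  ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
  ANALYZE orders;  -- gather new statistics so the higher target takes effect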
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Your current cost settings aren't accurately modeling
actual costs in your environment for your workload. You need to
adjust them.
One of the estimates was off, so increasing the statistics sample
size might help, but I suspect that you need to make adjustments
like the above in any event.
You could change the order of the columns in the primary key, add a unique index
with the columns switched, or add an index on just the area ID.
Perhaps you thought that the foreign key constraints would create
indexes? (They don't.)
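For example, using the table names from this thread (index name is my
own):

  -- foreign key constraints do not create indexes; add one explicitly:
  CREATE INDEX data_area_area_id_idx ON data_area (area_id);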
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
The planner doesn't adjust estimates based
on such correlations. If an inefficient plan is being chosen due
to this, there are a few tricks to coerce the plan.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
pushing the
data around in RAM. I'm not sure why 100% CPU usage would surprise
you. Are you wondering why the CPU works on the query straight
through until it is done, rather than taking a break periodically
and letting the unfinished work sit there?
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
...Some workloads report better performance assigning over 50% of RAM to
shared_buffers; OLTP loads often need to reduce this to prevent
periodic episodes of high latency.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ctly?
In my experience, it can make a big difference. If you are just
using the pooler for this reason, and don't need any of the other
features of pgpool, I suggest pgbouncer. It is a simpler, more
lightweight tool.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise P
Alexander Staubo wrote:
> On Friday, February 22, 2013 at 21:47 , Kevin Grittner wrote:
>> In my experience these problems come largely from the planner
>> not knowing the cost of dealing with each tuple. I see a lot
>> less of this if I raise cpu_tuple_cost to something in the 0.03 to 0.05 range.
[...sort them to pick the top 13 after the sort.]
In my experience these problems come largely from the planner not
knowing the cost of dealing with each tuple. I see a lot less of
this if I raise cpu_tuple_cost to something in the 0.03 to 0.05
range.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
<> 0
OR cStatus <> 'NEW'::StatusT
OR bOnSetBlue IS DISTINCT FROM false
OR bOnSetYellow IS DISTINCT FROM false
OR nLastBackupTS <> '0001-01-01 00:00:00');
Another way to accomplish this is with the
suppress_redundant_updates_trigger() function.
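That trigger function ships with PostgreSQL (8.4 and later); attaching
it looks like this sketch (table name hypothetical):

  CREATE TRIGGER suppress_redundant_updates
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE PROCEDURE suppress_redundant_updates_trigger();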
Beyond a certain point, starting the query sooner will cause it to
finish later. Really.
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
http://wiki.postgresql.org/wiki/SlowQueryQuestions
That is likely to yield good advice which will help all queries,
but if anything remains slow after the first one is sorted out,
pick another.
-Kevin
...overwhelming
the RAID controller's battery-backed cache. There may be other
things which could cause these symptoms, so I'm not certain that
this will help; but I have seen this as the cause and seen the
suggested changes help.
-Kevin
>> ...where the time is being
>> spent though. It could be ExecutorStart/End, but I have no idea why they
>> should take so long.
> Any ideas on how to troubleshoot this delay?
Is the client which is running the query on the same machine as the
server? If not, what's the ping time between them?
-Kevin
ers.
It might be something else entirely in this case, but it would at
least be worth trying a reduced shared_buffers setting combined
with more aggressive bgwriter settings. I might try something like
the following changes, as an experiment:
shared_buffers = 8GB
bgwriter_lru_maxpages = 1000
bgw
...running ANALYZE will fix
things -- at least for a while.
-Kevin
...make sure the indexes are defined for each partition. It is
not enough to define them on just the parent level.
-Kevin
...to help; but sometimes the issue is an uneven
distribution among values not included in the "most common
values", in which case boosting the target to store more values
and finer-grained information on ranges will be exactly what you
need.
-Kevin
Huan Ruan wrote:
> Kevin Grittner wrote:
>> Frankly, at 12 microseconds per matched pair of rows, I think
>> you're doing OK.
>
> This plan is the good one, I want the indexscan nested loop join and this
> is only achieved after making all these costing factor changes.