That work_mem value could be way too high depending on how much RAM your
server has, which would be an important bit of information for figuring
this out. Also, which Postgres and OS versions are you on?
I'd personally bake an ANALYZE call on that table (or column) into whatever
job is responsible for changing the state of the table that much, if it's
possible to do it as a last step.
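As a sketch of that last step (table and column names here are hypothetical,
not from the thread):

```sql
-- After the bulk state change, refresh statistics on just the
-- column the planner is mis-estimating. Postgres accepts a
-- column list so you don't have to re-analyze the whole table.
ANALYZE my_queue_table (state);
```

Scoping ANALYZE to the one volatile column keeps the extra step cheap enough
to run on every batch.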
> You can create a library of
> reusable views that are small, easy-to-understand and readable. Then
> you build them up into bigger views, and finally query from them. But
> then you end up with lots of (hidden) self-joins.
I will concur with this use case being pretty common, but also something
I will say I've seen count(1) in the wild a ton, as well as at my own
company, from developers who were used to it not making a difference.
There have been a couple of queries in the hot path that I have had to
change from count(1) to count(*) as part of performance tuning, but in
general it's not worth the trouble.
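For anyone comparing the two forms, the queries are equivalent in result but
not in work: count(*) takes the aggregate's row-counting fast path, while
count(1) evaluates the constant and checks it for NULL on every row
(hypothetical table name):

```sql
-- Same answer, different per-row cost:
SELECT count(*) FROM big_table;   -- row-count fast path
SELECT count(1) FROM big_table;   -- constant evaluated per row
```

On most queries the difference is lost in the noise; it only tends to show up
in hot-path aggregations over large tables.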
I've had a similar issue in the past.
I used the md5 hash function and stored it in a UUID column for my
comparisons. Bigger than a bigint, but still much faster than string
comparisons directly for my use case.
UUID works fine for storing md5 hashes and gives you the ability to
piggyback on all the built-in support for that type.
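The trick works because md5() in Postgres returns 32 hex characters, which is
exactly the width of a UUID, so the text casts cleanly into a fixed 16-byte
value (input string here is just an illustration):

```sql
-- 32 hex chars -> 16-byte uuid; comparisons and btree indexes
-- then operate on a fixed-size value instead of long strings.
SELECT md5('some long string to compare')::uuid;
```

Equality lookups and indexing on the uuid column are then cheap, at the cost
of no longer being able to do prefix or pattern matching on the original text.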
Hello all,
Just wondering if there is anything else I can provide to help figure this
out.
One thing I did notice, is there is a discussion about "invisible indexes"
going on, which seems that if it was implemented, would be one way to "fix"
my problem:
https://www.postgresql.org/message-id/flat/e
Have you run an analyze on all your tables after the upgrade to 10? The
estimates are way off.
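For context: pg_upgrade does not carry planner statistics over to the new
cluster, so a database-wide ANALYZE (or the staged equivalent from the shell)
is the usual first step after an upgrade:

```sql
-- Rebuild planner statistics for every table in the database.
ANALYZE;
```

From the shell, `vacuumdb --all --analyze-in-stages` does the same thing
incrementally, producing usable (if coarse) statistics quickly and refining
them in later passes.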
Alright, I don't believe my first two attempts to reply to this thread went
through, likely due to the attachment size. Hoping this time it does...
> > Self contained examples do wonders
> Good point, will work on that and post once I have something usable.
Finally got around to making a self-contained example.
> Self contained examples do wonders
Good point, will work on that and post once I have something usable.
Just wondering if anyone has any thoughts on what I can do to alleviate
this issue?
I'm kind of at a loss as to what to try to tweak for this.
Hey all,
I'm using Postgres 10.3 on a 6-core VM with 16 GB of RAM.
My database schema requires a good bit of temporal data stored in a
few of my tables, and I make use of ranges and exclusion constraints to
keep my data consistent.
I have quite a few queries in my DB which are using a very sub-optimal
index.
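A minimal sketch of the range-plus-exclusion-constraint setup described above
(schema and names are hypothetical, not the poster's actual tables):

```sql
-- btree_gist lets a gist index combine a scalar column with a
-- range, which the exclusion constraint needs.
CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE booking (
    room_id int,
    during  tstzrange,
    -- No two rows may share a room_id with overlapping ranges.
    EXCLUDE USING gist (room_id WITH =, during WITH &&)
);
```

The constraint creates its own gist index, which is one common source of the
planner picking a gist index where a plain btree would serve a query better.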