Hi Tom,
Thanks for your reply, that’s very helpful and informative.

Although there's no way to have any useful pg_statistic stats if you won't do an ANALYZE, the planner nonetheless can see the table's current physical size, and what it normally does is to multiply the last-reported tuple density by the current size.
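That estimation rule can be inspected directly. A minimal sketch, assuming the queue table from the thread: relpages (the current physical size in blocks) is always visible to the planner, while reltuples is whatever the last VACUUM or ANALYZE reported.

```sql
-- Sketch assuming a table named "queue" (name taken from the thread).
-- relpages reflects the current physical size; reltuples is the tuple
-- count last reported by VACUUM or ANALYZE.
SELECT relname, relpages, reltuples,
       reltuples / GREATEST(relpages, 1)::numeric AS est_tuples_per_page
FROM pg_class
WHERE relname = 'queue';

-- After a bulk load, refresh the statistics explicitly:
ANALYZE queue;
```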
David Wheeler writes:
> I'm having performance trouble with a particular set of queries. It goes a bit like this
> 1) queue table is initially empty, and very narrow (1 bigint column)
> 2) we insert ~30 million rows into queue table
> 3) we do a join with queue table to delete from another

On Wed, Jun 27, 2018 at 03:45:26AM +, David Wheeler wrote:
> Hi All,
>
> I’m having performance trouble with a particular set of queries. It goes a bit like this
>
> 1) queue table is initially empty, and very narrow (1 bigint column)
> 2) we insert ~30 million rows into queue table
> 3) we do a join with queue table to delete from another

An OID is a temporary value that is only consistent within a single query execution.
https://www.postgresql.org/docs/current/static/datatype-oid.html
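For context, a minimal sketch showing how an object's OID is obtained from the system catalogs; per the linked documentation, OIDs identify catalog objects and should not be relied on as durable row identifiers in user tables.

```sql
-- Look up a table's OID via the regclass cast (catalog objects have
-- stable OIDs, but user code should not depend on OIDs for row identity).
SELECT 'pg_class'::regclass::oid AS table_oid;
```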

On Thu, Jun 28, 2018 at 12:50 AM, Steve Crawford <scrawf...@pinpointresearch.com> wrote:
>
>
> On Wed, Jun 27, 2018 at 8:31 AM Rambabu V wrote:
>
>> Hi Team,
>>

Hi All,
I’m having performance trouble with a particular set of queries. It goes a bit like this
1) queue table is initially empty, and very narrow (1 bigint column)
2) we insert ~30 million rows into queue table
3) we do a join with queue table to delete from another table (delete from a
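The three steps above can be sketched in SQL. The target table name and join column below are assumptions, since the original message is cut off at this point:

```sql
-- 1) queue table is initially empty, and very narrow (1 bigint column)
CREATE TABLE queue (id bigint);

-- 2) bulk-insert ~30 million rows (generate_series used for illustration)
INSERT INTO queue SELECT generate_series(1, 30000000);

-- Without an explicit ANALYZE here, the planner has no fresh statistics
-- for queue and must extrapolate from the table's physical size.
ANALYZE queue;

-- 3) delete from another table by joining against queue
--    ("target" and the join on id are assumed names)
DELETE FROM target t USING queue q WHERE t.id = q.id;
```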

Hi Laurenz,
You’re right about the table being bloated, the videos.description column is
large. I thought about moving it to a separate table, but having an index only
on the columns used in the query seems to have compensated for that already.
Thank you.
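A minimal sketch of the kind of index described, assuming the query filters videos on a channel_id column (the actual queried columns aren't shown in this excerpt):

```sql
-- Hypothetical: an index on just the queried columns lets index-only
-- scans avoid reading the bloated heap rows that carry the wide
-- description column.
CREATE INDEX videos_channel_id_idx ON videos (channel_id, id);

-- Keeping the visibility map current helps index-only scans stay
-- index-only:
VACUUM videos;
```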
> On Jun 27, 2018, at 10:19 AM,
Roman Kushnir wrote:
> The following basic inner join is taking too much time for me. (I’m using count(videos.id) instead of count(*) because my actual query looks different, but I simplified it here to its essence.)
> I’ve tried following random people's suggestions and adjusting the
>
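The query under discussion is roughly the following; the joined channels table is an assumption, since it isn't shown in this excerpt. Wrapping the query in EXPLAIN (ANALYZE, BUFFERS) is the usual way to see where the time actually goes:

```sql
-- "channels" and the join condition are assumed names for illustration.
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(videos.id)
FROM videos
JOIN channels ON channels.id = videos.channel_id;
```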