> On 4 Mar 2024, at 07:46, Michał Kłeczek wrote:
>
>
>
>> On 3 Mar 2024, at 18:42, Tom Lane wrote:
>>
>> Michał Kłeczek writes:
>>> I’ve performed some more tests and it seems expressions with “extract”
>>> function are not pushed down at all -
>>
>> Yeah :-(. I
> On 3 Mar 2024, at 18:42, Tom Lane wrote:
>
> Michał Kłeczek writes:
>> I’ve performed some more tests and it seems expressions with “extract”
>> function are not pushed down at all -
>
> Yeah :-(. I traced through this, and it seems it's a collation
> problem.
Additionally, if a query was working fine but suddenly takes a
suboptimal plan because of missing stats, do we have any hash value column
in any performance view, associated with the queryid, that we can refer to
to see the difference between past and current plans and identify and fix
such issues quickly?
I am
On Mon, 4 Mar 2024 at 09:42, Laurenz Albe wrote:
>
> The only things that require manual ANALYZE are
>
> 1. partitioned tables (autoanalyze will collect statistics on the
>partitions, but not the partitioned table itself)
>
>
So the partitioned table's stats are nothing but the rolled-up stats
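For example (hypothetical table name), a manual run on the parent is what collects those parent-level statistics:

```sql
-- Autoanalyze covers the individual partitions, but statistics on the
-- partitioned (parent) table itself must be gathered manually:
ANALYZE my_partitioned_table;
```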
Laurenz Albe writes:
> The only things that require manual ANALYZE are
> ...
You missed one important exception: autovacuum/autoanalyze cannot
process temporary tables, because those are not accessible outside
the owning session. So you need to do those manually if it's
important for
On Sat, 2024-03-02 at 22:29, Baziotis, Stefanos wrote:
> Can I maybe get accurate information if the column has an index? In other
> words,
> are there any type of indexes through which I can get the number of distinct
> values or the values themselves, without needing to scan the column?
On Mon, 2024-03-04 at 01:33 +0530, veem v wrote:
> We see in one of the RDS Postgres instances, from pg_stat_user_tables,
> that auto vacuum and auto analyze are happening on the tables without our
> manual intervention.
That's exactly the idea behind autovacuum.
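One way to observe this, for what it's worth, is to check the timestamps that pg_stat_user_tables already exposes:

```sql
-- When did autovacuum / autoanalyze last touch each table?
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autoanalyze DESC NULLS LAST;
```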
> So is auto vacuum analyze is
On Mon, 2024-03-04 at 00:04 +0100, Peter J. Holzer wrote:
> On 2024-02-28 20:30:35 +0100, Dimitrios Apostolou wrote:
> > On Wed, 28 Feb 2024, Laurenz Albe wrote:
> > > On Wed, 2024-02-28 at 12:16 +0100, Dimitrios Apostolou wrote:
> > > > So what is the moral of the story? How to guard against
On 2024-02-28 20:30:35 +0100, Dimitrios Apostolou wrote:
> On Wed, 28 Feb 2024, Laurenz Albe wrote:
> > On Wed, 2024-02-28 at 12:16 +0100, Dimitrios Apostolou wrote:
> > > So what is the moral of the story? How to guard against this?
> >
> > Monitor disk usage ...
>
> It happened *fast*. And it
Hi,
We see in one of the RDS Postgres instances, from pg_stat_user_tables, that
auto vacuum and auto analyze are happening on the tables without our manual
intervention.
So is autovacuum/autoanalyze sufficient to make sure optimal stats and an
unbloated table structure are in place, or should we do it
> On Mar 3, 2024, at 12:00, Christophe Pettus wrote:
> Remember that dropping the NULL constraint afterwards will require a full
> table read (although not a rewrite).
Sorry, badly put: Adding a NOT NULL constraint afterwards will require a full
table read (although not a rewrite).
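In other words, a sketch of the two-step pattern (table and column names made up):

```sql
-- Adding a nullable column with no default: catalog-only, instant.
ALTER TABLE big_table ADD COLUMN new_col text;

-- ... backfill new_col in batches ...

-- Adding NOT NULL afterwards scans the whole table to verify that
-- no existing row violates the constraint, but does not rewrite it:
ALTER TABLE big_table ALTER COLUMN new_col SET NOT NULL;
```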
> On Mar 3, 2024, at 11:40, yudhi s wrote:
> Thanks for the clarification. So adding a column with a volatile
> default (like the current_timestamp function as default) or, say, adding a
> NOT NULL column with some conditional population of existing values will be
> a full table rewrite. In
On Mon, Mar 4, 2024 at 12:43 AM Christophe Pettus wrote:
>
> > On Mar 3, 2024, at 11:06, yudhi s wrote:
> > as the column addition using the traditional "Alter table" command in
> postgres looks to be a full table rewrite
>
> That's not always (or, really, usually) true. Adding a new column in
On Sun, Mar 3, 2024 at 2:06 PM yudhi s wrote:
> Hello,
> We have an application in which the tables will have approx ~200 columns
> in some of the big transaction tables when we will be serving all the
> business use cases. Currently it has ~100 columns to serve current business
> use cases to
> On Mar 3, 2024, at 11:06, yudhi s wrote:
> as the column addition using the traditional "Alter table" command in
> postgres looks to be a full table rewrite
That's not always (or, really, usually) true. Adding a new column in any
recent version of PostgreSQL just alters the system
Hello,
We have an application in which some of the big transaction tables will have
approximately 200 columns when we are serving all the business use cases.
Currently it has ~100 columns to serve current business use cases to start
with. As the team is following an agile approach,
OK, so that means the workaround is to create an extension with year(date) and
month(date) functions that internally call extract.
Not ideal, but workable I guess.
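Something like the following, I suppose (assuming PostgreSQL 14 or later, where extract() returns numeric; the server name `my_remote` and extension name `my_extract_wrappers` are made up, and the functions would have to be packaged as an extension listed in the foreign server's `extensions` option so postgres_fdw considers them shippable):

```sql
CREATE FUNCTION year(d date) RETURNS numeric
    IMMUTABLE PARALLEL SAFE LANGUAGE sql
    AS $$ SELECT extract(YEAR FROM d) $$;

CREATE FUNCTION month(d date) RETURNS numeric
    IMMUTABLE PARALLEL SAFE LANGUAGE sql
    AS $$ SELECT extract(MONTH FROM d) $$;

-- On the foreign server definition, mark the extension as shippable:
-- ALTER SERVER my_remote OPTIONS (ADD extensions 'my_extract_wrappers');
```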
—
Michał
> On 3 Mar 2024, at 18:42, Tom Lane wrote:
>
> Michał Kłeczek writes:
>> I’ve performed
Michał Kłeczek writes:
> I’ve performed some more tests and it seems expressions with “extract”
> function are not pushed down at all -
Yeah :-(. I traced through this, and it seems it's a collation
problem. Internally, that call looks like
extract('year'::text,
Thanks for help.
>> I’ve performed some more tests and it seems expressions with “extract”
>> function are not pushed down at all -
>> the WHERE criteria from the following query are not pushed down either, and
>> the filter is performed locally:
>> SELECT
>> *
>> FROM
>> t1
>> WHERE
On 3/3/24 05:16, Michał Kłeczek wrote:
On 3 Mar 2024, at 10:34, Michał Kłeczek wrote:
Hi,
I have the following foreign table:
CREATE FOREIGN TABLE t1 (
grouping_column text,
date_column date,
whatever_data int
);
The query is:
SELECT
sum(whatever_data)
FROM
t1
GROUP BY
> On 3 Mar 2024, at 10:34, Michał Kłeczek wrote:
>
> Hi,
>
> I have the following foreign table:
>
> CREATE FOREIGN TABLE t1 (
> grouping_column text,
> date_column date,
> whatever_data int
> );
>
> The query is:
>
> SELECT
> sum(whatever_data)
> FROM
> t1
> GROUP BY
>
Hi,
I have the following foreign table:
CREATE FOREIGN TABLE t1 (
grouping_column text,
date_column date,
whatever_data int
);
The query is:
SELECT
sum(whatever_data)
FROM
t1
GROUP BY
grouping_column, extract(YEAR FROM date_column), extract(MONTH FROM
date_column);
From my
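To check whether the grouping ends up on the remote side, EXPLAIN with VERBOSE shows the query that postgres_fdw generates, e.g.:

```sql
EXPLAIN (VERBOSE, COSTS OFF)
SELECT sum(whatever_data)
FROM t1
GROUP BY grouping_column,
         extract(YEAR FROM date_column),
         extract(MONTH FROM date_column);
-- If pushdown works, the plan shows a "Remote SQL:" line whose query
-- contains the GROUP BY; otherwise the grouping is performed locally.
```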