> >
> >> So apart from the limitation mentioned above (which I do not consider
> >> critical), there is only one problem that has not been addressed:
> >> maintaining statistics for GTT.
> >> If all of the following conditions are true:
> >>
> >> 1) GTT are used in joins
> >> 2) There are indexes defined for GTT
> >> 3) The size and histogram of a GTT can vary significantly between
> >> backends.
> >> 4) ANALYZE was explicitly run on the GTT
> >>
> >> then a query execution plan built in one backend will also be used by
> >> other backends, where it may be inefficient.
> >> I also do not consider this problem a "show stopper" for adding GTT
> >> to Postgres.
> > I think that's *definitely* a show stopper.
> Well, if both you and Pavel think that it is really a "show stopper",
> then this problem has to be addressed.
> I am slightly confused by this opinion, because Pavel himself told me
> that 99% of users never create indexes on temp tables or run "analyze"
> on them. And without those, this problem is not a problem at all.
>
>
It's true that 99% of users don't run ANALYZE on temp tables. But the second
fact is that those users have a lot of problems as a result. It's very
similar to wrong statistics on persistent tables: when the data are small,
it is not a problem for users, although from my perspective it's still not
optimal. When the data are not small, the problem can be brutal. Temporary
tables are not an exception. And users and developers are people - we only
hear about the fatal problems. There are lots of unoptimized queries, but
because the problem is not fatal, nobody reports it. And a lot of people
have no idea how fast a database can be. The knowledge of users and app
developers is a sad story.
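To make the point concrete, here is a hypothetical sketch of the scenario
from the quoted conditions above. The table and column names are invented,
and it assumes a GTT implementation where ANALYZE in one backend writes
statistics that other backends see (the design under discussion):

```sql
-- Hypothetical illustration; assumes shared pg_statistic entries for GTT.
-- Backend A fills its copy with a handful of rows and analyzes:
CREATE GLOBAL TEMPORARY TABLE gtt_demo(id int, payload text);
CREATE INDEX ON gtt_demo(id);
INSERT INTO gtt_demo SELECT g, 'x' FROM generate_series(1, 10) g;
ANALYZE gtt_demo;  -- shared statistics now say "about 10 rows"

-- Backend B fills its own copy with a million rows but never re-runs
-- ANALYZE. Joins against gtt_demo are still planned for ~10 rows
-- (e.g. a nested loop over the index), which can be brutally slow here:
INSERT INTO gtt_demo SELECT g, 'x' FROM generate_series(1, 1000000) g;
EXPLAIN SELECT * FROM some_big_table b JOIN gtt_demo USING (id);
```

Whether the plan is actually bad depends on the data, but nothing tells
backend B that its statistics are stale, which is exactly the problem.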

Pavel
