Hi,
This thread seemed to trail off without a resolution. Was anything done?
(See more below.)
On 07/09/15 19:06, Tom Lane wrote:
Andres Freund writes:
On 2015-07-06 11:49:54 -0400, Tom Lane wrote:
Rather than reverting cab9a0656c36739f, which would re-introduce a
different performance problem …
Peter Eisentraut wrote:
Another point: since there appear to be diverging camps about
supertransactional stored procedures vs. autonomous transactions, what
would be the actual use cases of any of these features?
Looping over hundreds of identical schemas, executing DDL statements on
each. We can't …
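For illustration, a minimal plpgsql sketch of that pattern (the 'tenant_%'
schemas and the orders/note names are hypothetical); without an
autonomous-transaction facility the whole loop is one transaction, so a
failure on the last schema rolls back the DDL already applied to the others:

do $$
declare
  s text;
begin
  -- apply the same DDL to every matching schema, all in one transaction
  for s in select nspname from pg_namespace where nspname like 'tenant_%' loop
    execute format('alter table %I.orders add column note text', s);
  end loop;
end;
$$;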
Tom Lane wrote:
> If you roll back a truncate, do you get the expected state?
I did a number of variations on the test below, with and without "on commit
drop", and similar tests where the "create table" is done before the "begin".
After the checkpoint, the number of files in the database directory …
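For reference, one such variation in sketch form, with the "create table"
done before the "begin" (the table name here is made up):

create temp table tt (x integer);
insert into tt values (1), (2);
begin;
truncate tt;
rollback;
select count(*) from tt;  -- expected: 2, i.e. the pre-TRUNCATE contents

Rolling back the TRUNCATE restores the old contents because the truncation
writes into a new relation file, and the rollback simply discards it.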
Alex Hunsaker wrote:
FYI, on my 8.2.13 system, the test created 30001 files which were all
deleted during the commit. On my 8.4.0 system, the test created 60001
files, of which 3 were deleted at commit and 30001 disappeared
later (presumably during a checkpoint?).
Smells like fsm?
Yes
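That would fit the numbers: as of 8.4 each table has a separate free space
map fork on disk, so the same test creates roughly twice as many files. A
quick way to see the per-fork files (a sketch; pg_relation_filepath() exists
from 9.0 on, and the table name is made up):

create temp table t_fsm (x integer);
insert into t_fsm select generate_series(1, 1000);
delete from t_fsm;
vacuum t_fsm;  -- vacuum builds the free space map fork
select pg_relation_filepath('t_fsm');
-- the FSM fork sits beside the main file, with an "_fsm" suffix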
Tom Lane wrote:
Actually, this is easier than I thought, because there is already
bookkeeping being done that (in effect) tracks whether a table has
already been truncated in the current transaction. So we can rely
on that, and with only a very few lines of code added, ensure that
a situation like this …
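The user-visible effect of that bookkeeping can be sketched like this
(pg_relation_filepath() requires 9.0+): the first TRUNCATE in a transaction
assigns a new relation file so the old one can be restored on rollback, but
a second TRUNCATE in the same transaction reuses it:

create temp table t2 (x integer);
begin;
select pg_relation_filepath('t2');
truncate t2;
select pg_relation_filepath('t2');  -- changed: new file, kept for rollback
truncate t2;
select pg_relation_filepath('t2');  -- unchanged: same-transaction reuse
commit;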
Tom Lane wrote:
The attached prototype patch does this
and seems to fix the speed problem nicely. It's not tremendously
well tested, but perhaps you'd like to test? Should work in 8.4.
I'll give it a try and report back (though probably not until tomorrow).
-- todd
Tom Lane wrote:
I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference not a
PG version difference. In particular I think we are probably looking
at a filesystem …
Tom Lane wrote:
So what I'm seeing is entirely explained by the buildup of dead versions
of the temp table's pg_class row --- the index_getnext time is spent
scanning over dead HOT-chain members. It might be possible to avoid
that by special-casing temp tables in TRUNCATE to recycle the existing …
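One way to watch the churn being described (a sketch using the standard
statistics views): every TRUNCATE of the temp table updates its pg_class
row, and each update leaves a dead row version behind until vacuum:

select n_tup_upd, n_dead_tup
from pg_stat_sys_tables
where relname = 'pg_class';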
Hi,
I've noticed that on 8.4.0, commits can take a long time when a temp table is
repeatedly filled and truncated within a loop. A very contrived example is
begin;
create or replace function commit_test_with_truncations()
returns void language 'plpgsql' as $_func_$
declare
  i integer;
begin  -- loop reconstructed from the description above; count is arbitrary
  create temp table t (x integer);
  for i in 1 .. 10000 loop
    insert into t values (i); truncate t;
  end loop;
end; $_func_$;
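Calling the function is quick; the delay shows up at the commit that follows
(a sketch of the reproduction, using psql's \timing):

\timing
select commit_test_with_truncations();
commit;  -- on 8.4.0, this is where the long delay appears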
Peter Eisentraut wrote:
On Monday 19 January 2009 23:22:21 Todd A. Cook wrote:
The docs at
http://developer.postgresql.org/pgdocs/postgres/functions-aggregate.html
don't prohibit using array values with array_agg(), so I assumed that it
would work.
test=> select array_agg(v.a) from …
Hi,
The docs at
http://developer.postgresql.org/pgdocs/postgres/functions-aggregate.html
don't prohibit using array values with array_agg(), so I assumed that it would
work.
However, with CVS HEAD from Friday afternoon, I get
test=> select version() ;
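The query and its output were cut off above; a minimal example of the shape
in question, reconstructed as an assumption from the text (note that
array_agg() over array values was only accepted in later releases; from
PostgreSQL 9.5 it builds a multidimensional array, and all input arrays
must have matching dimensions):

select array_agg(v.a)
from (values (array[1,2]), (array[3,4])) as v(a);
-- on 9.5+ this returns {{1,2},{3,4}}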
Josh Berkus wrote:
Greg, Tom,
But for most users analyze doesn't really have to run as often as
vacuum. One sequential scan per night doesn't seem like that big a deal
to me.
Clearly you don't have any 0.5 TB databases.
Perhaps something like "ANALYZE FULL"? Then only those who need the …