"Todd A. Cook" writes:
> Tom Lane wrote:
>>> If you roll back a truncate, do you get the expected state?
> I did a number of variations on the test below, with and without "on commit
> drop", and similar tests where the "create table" is done before the "begin".
> After the checkpoint, the
Tom Lane wrote:
the function time and the commit time a lot better. But I'm not sure
if the use-case is popular enough to deserve such a hack.
For some OLTP workloads, it makes a lot of sense to spool tuples of
primary key plus new fields into a temp table and then do a single update
or
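A sketch of that spooling pattern might look like the following (the table
and column names here are made up for illustration; the idea is many cheap
single-row inserts into a temp table, then one set-based statement):

```sql
begin;

-- hypothetical spool table, dropped automatically at commit
create temp table updates_spool (
    id      integer primary key,
    new_qty integer
) on commit drop;

-- the application streams in individual rows...
insert into updates_spool values (1, 10), (2, 20), (3, 30);

-- ...and then applies them all in one update instead of N separate ones
update orders o
   set qty = s.new_qty
  from updates_spool s
 where o.id = s.id;

commit;
```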
Tom Lane wrote:
> If you roll back a truncate, do you get the expected state?
I did a number of variations on the test below, with and without "on commit
drop", and similar tests where the "create table" is done before the "begin".
After the checkpoint, the number of files in the database d
On Fri, Aug 7, 2009 at 10:10, Todd A. Cook wrote:
> Alex Hunsaker wrote:
>> With double the number of files maybe something simple like turning on
>> dir_index if you are ext3 will help?
>
> Thanks for the tip. Doing "tune2fs -O +dir_index" didn't seem to make
> a difference, which is kinda ex
"Todd A. Cook" writes:
> Tom Lane wrote:
>> The attached prototype patch does this
>> and seems to fix the speed problem nicely. It's not tremendously
>> well tested, but perhaps you'd like to test? Should work in 8.4.
> With the patch applied, the test only took 35 seconds, and the commit
> wa
Alex Hunsaker wrote:
FYI, on my 8.2.13 system, the test created 30001 files which were all
deleted during the commit. On my 8.4.0 system, the test created 60001
files, of which 3 were deleted at commit and 30001 disappeared
later (presumably during a checkpoint?).
Smells like fsm?
Yes
Tom Lane wrote:
Actually, this is easier than I thought, because there is already
bookkeeping being done that (in effect) tracks whether a table has
already been truncated in the current transaction. So we can rely
on that, and with only a very few lines of code added, ensure that
a situation l
On Thu, Aug 6, 2009 at 11:32, Todd A. Cook wrote:
> Tom Lane wrote:
>>
>> I took a look through the CVS history and verified that there were
>> no post-8.4 commits that looked like they'd affect performance in
>> this area. So I think it's got to be a platform difference not a
>> PG version differ
Tom Lane wrote:
The attached prototype patch does this
and seems to fix the speed problem nicely. It's not tremendously
well tested, but perhaps you'd like to test? Should work in 8.4.
I'll give it a try and report back (though probably not until tomorrow).
-- todd
Tom Lane wrote:
I took a look through the CVS history and verified that there were
no post-8.4 commits that looked like they'd affect performance in
this area. So I think it's got to be a platform difference not a
PG version difference. In particular I think we are probably looking
at a filesy
I wrote:
> As I said, my inclination for improving this area, if someone wanted
> to work on it, would be to find a way to do truncate-in-place on
> temp tables. ISTM that in the case you're showing --- truncate that's
> not within a subtransaction, on a table that's drop-on-commit anyway
> --- we
"Todd A. Cook" writes:
> Tom Lane wrote:
>> I'm not seeing the very long CPU-bound commit phase that Todd is seeing.
> The commit looks CPU-bound when I let the residual I/O from the function
> execution die out before I issue the commit.
Well, mine is CPU-bound too, it just is much shorter rela
Tom Lane wrote:
So what I'm seeing is entirely explained by the buildup of dead versions
of the temp table's pg_class row --- the index_getnext time is spent
scanning over dead HOT-chain members. It might be possible to avoid
that by special-casing temp tables in TRUNCATE to recycle the existin
"Kevin Grittner" writes:
> Tom Lane wrote:
>> "Todd A. Cook" writes:
>>> I've noticed that on 8.4.0, commits can take a long time when a
>>> temp table is repeatedly filled and truncated within a loop.
>> The commit time doesn't seem tremendously out of line, but it looks
>> like there's somet
Tom Lane wrote:
> "Todd A. Cook" writes:
>> I've noticed that on 8.4.0, commits can take a long time when a
>> temp table is repeatedly filled and truncated within a loop.
> The commit time doesn't seem tremendously out of line, but it looks
> like there's something O(N^2)-ish in the function e
"Todd A. Cook" writes:
> I've noticed that on 8.4.0, commits can take a long time when a temp table
> is repeatedly filled and truncated within a loop. A very contrived example is
Hmm. I tweaked the function to allow varying the number of truncates:
regression=# begin;
BEGIN
Time: 1.037 ms
Hi,
I've noticed that on 8.4.0, commits can take a long time when a temp table is
repeatedly filled and truncated within a loop. A very contrived example is
begin;
create or replace function commit_test_with_truncations()
returns void
language 'plpgsql'
as $_func_$
declare
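(The preview cuts off at "declare". A minimal self-contained function in the
same spirit -- a loop that repeatedly fills and truncates a temp table -- might
look like the following; the loop count, table name, and body are guesses for
illustration, not the original test:)

```sql
create or replace function commit_test_with_truncations()
returns void
language 'plpgsql'
as $_func_$
declare
    i integer;
begin
    create temp table t (x integer) on commit drop;
    for i in 1 .. 30000 loop
        insert into t select generate_series(1, 10);
        truncate table t;  -- each truncate allocates a new relfilenode
    end loop;
end;
$_func_$;
```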