Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Seems like a pretty serious (not to say fatal) objection to me. Surely
>> you can fix that.
> OK, suggestions. I know CommandCounterIncrement will not help. Should
> I do more pfree'ing?
No, retail pfree'ing is not a maintainable solution. I was t
Bruce Momjian <[EMAIL PROTECTED]> writes:
> One change in this patch is that because analyze now runs in the outer
> transaction, I can't clear the memory used to support each analyzed
> relation. Not sure if this is an issue.
Seems like a pretty serious (not to say fatal) objection to me. Surely
you can fix that.
Tom Lane wrote:
> I tried to repeat this:
>
> regression=# begin;
> BEGIN
> regression=# create table foo (f1 int);
> CREATE
> regression=# insert into foo [ ... some data ... ]
>
> regression=# analyze foo;
> ERROR: ANALYZE cannot run inside a BEGIN/END block
>
> This seems a tad silly; I can
Lincoln Yeoh <[EMAIL PROTECTED]> writes:
> My limited understanding of current behaviour is the search for a valid
> row's tuple goes from older tuples to newer ones via forward links
No. Each tuple is independently indexed and independently visited.
Given the semantics of MVCC I think that's c
Hi Tom,
(Please correct me where I'm wrong)
Is it possible to reduce the performance impact of dead tuples, especially
when an index is used? Right now performance degrades gradually until we
vacuum (something like a 1/x curve).
My limited understanding of current behaviour is the search for a valid
row's tuple goes from older tuples to newer ones via forward links
"Rod Taylor" <[EMAIL PROTECTED]> writes:
> Since dead, or yet to be visible tuples affect the plan that should be
> taken (until vacuum anyway) are these numbers reflected in the stats
> anywhere?
Analyze just uses SnapshotNow visibility rules, so it sees the same set
of tuples that you would see
I've run into an interesting issue. A very long running transaction
doing data loads is getting quite slow. I really don't want to break
up the transactions (and for now it's ok), but it makes me wonder what
exactly analyze counts.
Since dead, or yet to be visible tuples affect the plan that should be
taken (until vacuum anyway) are these numbers reflected in the stats
anywhere?