On 2014-05-19 13:45:15 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-05-19 11:25:04 -0400, Tom Lane wrote:
> >> No, we'd have two independent entries, each with its own correct refcount.
> >> When the refcount on the no-longer-linked-in-the-hashtable entry goes to
> >> zero, it'd be leaked, same as it's always been.
Andres Freund writes:
> On 2014-05-19 23:40:32 +0200, Tomas Vondra wrote:
>> I was however wondering if this might be related to OOM errors a few
>> local users reported to us. IIRC they've been using temporary tables
>> quite heavily - not sure if that could be related.
> I've significant doubts
On 2014-05-19 23:40:32 +0200, Tomas Vondra wrote:
> On 19.5.2014 22:11, Tom Lane wrote:
> > Tomas Vondra writes:
> > I intentionally didn't do that, first because I have only a limited
> > amount of confidence in the patch, and second because I don't think
> > it matters for anything except CLOBBER_CACHE_RECURSIVELY ...
On 19.5.2014 22:11, Tom Lane wrote:
> Tomas Vondra writes:
>> On 18.5.2014 20:49, Tom Lane wrote:
>>> With both of these things fixed, I'm not seeing any significant memory
>>> bloat during the first parallel group of the regression tests. I don't
>>> think I'll have the patience to let it run much further than that
Tomas Vondra writes:
> On 18.5.2014 20:49, Tom Lane wrote:
>> With both of these things fixed, I'm not seeing any significant memory
>> bloat during the first parallel group of the regression tests. I don't
>> think I'll have the patience to let it run much further than that
>> (the uuid and enum tests are still running after ...
On 18.5.2014 20:49, Tom Lane wrote:
> With both of these things fixed, I'm not seeing any significant memory
> bloat during the first parallel group of the regression tests. I don't
> think I'll have the patience to let it run much further than that
> (the uuid and enum tests are still running after ...
Andres Freund writes:
> On 2014-05-19 11:25:04 -0400, Tom Lane wrote:
>> No, we'd have two independent entries, each with its own correct refcount.
>> When the refcount on the no-longer-linked-in-the-hashtable entry goes to
>> zero, it'd be leaked, same as it's always been. (The refcount presumably ...
On 2014-05-19 11:25:04 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2014-05-18 14:49:10 -0400, Tom Lane wrote:
> >> if (RelationHasReferenceCountZero(oldrel))
> >>     RelationDestroyRelation(oldrel, false);
> >> else
> >>     elog(WARNING, "leaking still-referenced duplicate relation");
Andres Freund writes:
> On 2014-05-18 14:49:10 -0400, Tom Lane wrote:
>> if (RelationHasReferenceCountZero(oldrel))
>>     RelationDestroyRelation(oldrel, false);
>> else
>>     elog(WARNING, "leaking still-referenced duplicate relation");
> If that happens we'd essentially have a too low reference count ...
Hi,
On 2014-05-18 14:49:10 -0400, Tom Lane wrote:
> AFAICT, the inner invocation's Relation should always have zero reference
> count by the time we get back to the outer invocation. Therefore it
> should be possible for RelationCacheInsert to just delete the
> about-to-be-unreachable Relation struct ...
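A minimal sketch of the approach being discussed, assuming the file-local
helpers in src/backend/utils/cache/relcache.c (RelationIdCache,
RelIdCacheEnt, RelationHasReferenceCountZero, RelationDestroyRelation), so
it only makes sense inside that file. The function name
RelationCacheInsertChecked is made up for illustration; the real insert
path is the RelationCacheInsert macro, and this is not the committed fix.

    /*
     * Sketch only: detect a duplicate relcache entry at insert time.
     * If a recursive cache flush already built a Relation for this OID,
     * hash_search(HASH_ENTER) reports the pre-existing entry via "found".
     * Free the older Relation if nothing still holds a reference to it,
     * otherwise warn and deliberately leak it.
     */
    static void
    RelationCacheInsertChecked(Relation relation)
    {
        RelIdCacheEnt *idhentry;
        bool        found;

        idhentry = (RelIdCacheEnt *) hash_search(RelationIdCache,
                                                 (void *) &(relation->rd_id),
                                                 HASH_ENTER, &found);
        if (found)
        {
            Relation    oldrel = idhentry->reldesc;

            if (RelationHasReferenceCountZero(oldrel))
                RelationDestroyRelation(oldrel, false);
            else
                elog(WARNING, "leaking still-referenced duplicate relation");
        }
        idhentry->reldesc = relation;
    }

Whether simply destroying the old entry is safe is exactly the question in
this subthread: it relies on the inner invocation's Relation really having
zero reference count by the time the outer invocation re-inserts its own.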
Andres Freund writes:
> On 2014-05-17 22:55:14 +0200, Tomas Vondra wrote:
>> And another memory context stats for a session executing CREATE INDEX,
>> while having allocated [...] The interesting thing is there are ~11k lines
>> that look exactly like this:
>>
>> pg_namespace_oid_index: 1024 total in 1 blocks; 88 free (0 chunks); 936 used
On 2014-05-17 22:55:14 +0200, Tomas Vondra wrote:
> And another memory context stats for a session executing CREATE INDEX,
> while having allocated [...] The interesting thing is there are ~11k lines
> that look exactly like this:
>
> pg_namespace_oid_index: 1024 total in 1 blocks; 88 free (0 chunks); 936 used
Hi,
On 2014-05-17 22:33:31 +0200, Tomas Vondra wrote:
> Anyway, the main difference between the analyze snapshots seems to be this:
>
> init: CacheMemoryContext: 67100672 total in 17 blocks; ...
> 350MB: CacheMemoryContext: 134209536 total in 25 blocks; ...
> 400MB: CacheMemoryContext: 1929
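For anyone wanting to reproduce dumps like the ones quoted above: lines of
the form "<context name>: N total in B blocks; F free (C chunks); U used"
are what MemoryContextStats() prints, one per memory context, to the
backend's stderr, and the same dump is emitted automatically when an
allocation fails with "out of memory". A small sketch follows; the helper
name dump_cache_memory_stats is made up, and in practice people usually just
attach gdb to the backend and run "call MemoryContextStats(TopMemoryContext)".

    /*
     * Sketch: produce a memory context dump from inside the backend.
     * MemoryContextStats() is declared in utils/memutils.h and writes
     * one summary line per context to stderr.
     */
    #include "postgres.h"
    #include "utils/memutils.h"

    static void
    dump_cache_memory_stats(void)
    {
        /* Just the relcache/catcache area that is ballooning here ... */
        MemoryContextStats(CacheMemoryContext);

        /* ... or the whole context tree, as the out-of-memory path does. */
        MemoryContextStats(TopMemoryContext);
    }

Each relcache entry for an index carries its own index-info context named
after the index, so thousands of identical pg_namespace_oid_index lines in
such a dump suggest thousands of leaked duplicate relcache entries for that
one index.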
On 17.5.2014 22:33, Tomas Vondra wrote:
> On 17.5.2014 21:31, Andres Freund wrote:
>> On 2014-05-17 20:41:37 +0200, Tomas Vondra wrote:
>>> On 17.5.2014 19:55, Tom Lane wrote:
>>>> Tomas Vondra writes:
>>> The tests are already running, and there are a few postgres processes:
>>>
>>> PID VIRT RES %CPU TIME+ COMMAND
On 2014-05-17 20:41:37 +0200, Tomas Vondra wrote:
> On 17.5.2014 19:55, Tom Lane wrote:
> > Tomas Vondra writes:
> The tests are already running, and there are a few postgres processes:
>
> PID VIRT RES %CPU TIME+ COMMAND
> 11478 449m 240m 100.0 112:53.57 postgres: pgbuild regression [
On 17.5.2014 19:55, Tom Lane wrote:
> Tomas Vondra writes:
>> ... then of course the usual 'terminating connection because of crash of
>> another server process' warning. Apparently, it's getting killed by the
>> OOM killer, because it exhausts all the memory assigned to that VM (2GB).
>
> Can you fix things so it runs into its process ulimit ...
Tomas Vondra writes:
> ... then of course the usual 'terminating connection because of crash of
> another server process' warning. Apparently, it's getting killed by the
> OOM killer, because it exhausts all the memory assigned to that VM (2GB).
Can you fix things so it runs into its process ulimit ...
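What "runs into its process ulimit" buys you: with an address-space limit
on the backend (typically "ulimit -v" in the shell that starts the
postmaster), allocations start failing with ENOMEM inside the process, so
PostgreSQL can report "out of memory" and print its memory context
statistics instead of being killed from outside by the OOM killer. A
standalone illustration of the mechanism, not PostgreSQL code; the 512 MB
figure is arbitrary.

    /*
     * Demo: cap the address space, then allocate until malloc fails.
     * With the limit in place the failure is a clean ENOMEM seen by the
     * process itself rather than an external OOM kill.
     */
    #include <sys/resource.h>
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
        struct rlimit rl;

        rl.rlim_cur = 512UL * 1024 * 1024;      /* arbitrary 512 MB cap */
        rl.rlim_max = 512UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0)
        {
            perror("setrlimit");
            return 1;
        }

        for (;;)
        {
            void   *p = malloc(8 * 1024 * 1024);

            if (p == NULL)
            {
                fprintf(stderr, "allocation failed: %s\n", strerror(errno));
                return 0;
            }
            memset(p, 0, 8 * 1024 * 1024);      /* actually touch the memory */
        }
    }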
Hi all,
a few days ago I set up a buildfarm animal, markhor, running the tests
with CLOBBER_CACHE_RECURSIVELY. As the tests run for a very long time,
reporting the results back fails because of a safeguard limit in the
buildfarm server. Anyway, that's being discussed in a different thread.
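For context on the test mode: as far as I know, CLOBBER_CACHE_RECURSIVELY
(like CLOBBER_CACHE_ALWAYS) is a plain compile-time symbol tested in the
cache-invalidation code, so an animal enables it by defining it for the
whole build, for example via CPPFLAGS or in src/include/pg_config_manual.h.
The line below shows the assumed mechanism only; it is not copied from
markhor's configuration.

    /*
     * Assumed setup: with this defined, the backend flushes and rebuilds
     * its system caches at every invalidation opportunity, even recursively
     * while a rebuild is already in progress -- hence the very long
     * regression test runtimes described above.
     */
    #define CLOBBER_CACHE_RECURSIVELY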