On Sat, Jun 20, 2015 at 7:06 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
As far as the
rest of this patch goes, it seems like it could be made less invasive
if the logs got dumped into a subdirectory of tmp_check rather than
adding another top-level directory that has to be cleaned?
On Fri, Jun 19, 2015 at 9:20 AM, Robert Haas robertmh...@gmail.com wrote:
The extraordinary planning time for query 4 is caused by a
completely different problem: SearchCatCache eats up huge amounts of
CPU; its callers are get_attavgwidth and get_typlen. It's not clear
to me why doubling
I tend to agree on that bottom line; having this be inconsistent with psql
does not seem like a win.
I'm not clear on why we'd need a full SQL lexer.
So you don't get fooled by semicolons embedded in string literals or
comments.
I take it we ignore those now? I mean, personally, it
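The point about embedded semicolons can be made concrete. Purely as illustration (this is a hypothetical Python sketch, not the code under discussion), splitting on ';' needs at least enough lexing to skip string literals and comments:

```python
def split_statements(sql):
    """Naive statement splitter that is not fooled by ';' inside
    single-quoted literals, -- line comments, or /* */ comments.
    (Toy sketch: no dollar quoting, no nested block comments.)"""
    stmts, buf, i, n = [], [], 0, len(sql)
    while i < n:
        c = sql[i]
        if c == "'":                      # single-quoted literal
            buf.append(c); i += 1
            while i < n:
                buf.append(sql[i])
                if sql[i] == "'":
                    i += 1
                    break
                i += 1
            continue
        if sql.startswith('--', i):       # line comment runs to newline
            j = sql.find('\n', i)
            j = n if j < 0 else j
            buf.append(sql[i:j]); i = j
            continue
        if sql.startswith('/*', i):       # block comment
            j = sql.find('*/', i + 2)
            j = n if j < 0 else j + 2
            buf.append(sql[i:j]); i = j
            continue
        if c == ';':                      # a real statement boundary
            stmts.append(''.join(buf).strip()); buf = []; i += 1
            continue
        buf.append(c); i += 1
    tail = ''.join(buf).strip()
    if tail:
        stmts.append(tail)
    return stmts
```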
On Wed, Jun 17, 2015 at 10:52 AM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:
Hi,
I'm currently running some tests on a 3TB TPC-H data set, and I tripped
over a pretty bad n_distinct underestimate, causing OOM in HashAgg (which
somehow illustrates the importance of the memory-bounded
Hello Andres,
- Move fsync as early as possible, suggested by Andres Freund?
My opinion is that this should be left out for the nonce.
for the nonce - what does that mean?
Nonce \Nonce\ (n[o^]ns), n. [For the nonce, OE. for the nones, ...
{for the nonce}, i. e. for the present time.
On Sat, Jun 20, 2015 at 12:24 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Has anybody noticed the way castoroides is randomly failing?
SELECT test_shm_mq_pipelined(16384, (select
string_agg(chr(32+(random()*95)::int), '') from generate_series(1,27)),
200, 3);
! PANIC: stuck
Hi,
On 06/20/2015 08:54 AM, Feng Tian wrote:
While better sample/stats is important for choosing a good plan, in
this query, hash agg is really the right plan. If a sort agg is
chosen, the performance will be really really bad. The patch that
Jeff is working on is critical for a decent TPCH
Robert Haas robertmh...@gmail.com writes:
On Sat, Jun 20, 2015 at 10:12 AM, Joel Jacobson j...@trustly.com wrote:
I guess it
primarily depends on how much of the new code would need to be
rewritten, if the collector is optimized/rewritten in the future?
I don't think that's really the
On Sat, Jun 20, 2015 at 10:55 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Sat, Jun 20, 2015 at 10:12 AM, Joel Jacobson j...@trustly.com
wrote:
I guess it
primarily depends on how much of the new code would need to be
rewritten, if the
Prakash Itnal prakash...@gmail.com writes:
Sorry for the late response. The current patch only fixes the scenario-1
listed below. It will not address the scenario-2. Also we need a fix in
unix_latch.c where the remaining sleep time is evaluated, if the latch is woken
by other events (or result=0).
I would like to enhance the postgres_fdw to allow more complete support for
user-defined types.
Right now, postgres_fdw already does a good job of passing user-defined type
data back and forth, which is pretty nice. However, it will not pass functions
or operators that use user-defined types
On Sat, Jun 20, 2015 at 10:12 AM, Joel Jacobson j...@trustly.com wrote:
Is there any chance the project would accept a patch which adds the
pg_stat_*_columns-feature without first optimizing the collector?
I doubt it. It's such a pain point already that massively increasing
the amount of data
On Sat, Jun 20, 2015 at 7:56 AM, Tomas Vondra tomas.von...@2ndquadrant.com
wrote:
Hi,
On 06/20/2015 08:54 AM, Feng Tian wrote:
While better sample/stats is important for choosing a good plan, in
this query, hash agg is really the right plan. If a sort agg is
chosen, the performance will
On 06/20/2015 04:17 PM, Robert Haas wrote:
On Wed, Jun 17, 2015 at 1:52 PM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
I'm currently running some tests on a 3TB TPC-H data set, and I
tripped over a pretty bad n_distinct underestimate, causing OOM in
HashAgg (which somehow illustrates
Paul Ramsey pram...@cleverelephant.ca writes:
... I think the postgres_fdw extension should pass user-defined
functions and operators, but only when it knows those functions and
operators exist at the remote. One way would be to ask the remote what
extensions it has, but the overhead of doing
Simon Riggs si...@2ndquadrant.com writes:
On 20 June 2015 at 18:19, Tom Lane t...@sss.pgh.pa.us wrote:
The key question here is whether filtering functions/operators at the
level of extensions is a good design. It seems to me like a reasonable
compromise between flexibility and ease of use,
On Wed, Jun 17, 2015 at 1:52 PM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
I'm currently running some tests on a 3TB TPC-H data set, and I tripped over
a pretty bad n_distinct underestimate, causing OOM in HashAgg (which somehow
illustrates the importance of the memory-bounded hashagg
Feng Tian wrote:
I have not read Jeff's patch, but here is how I think hash agg should work,
I think you should discuss that in Jeff's thread, not here.
--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Hi,
On 06/20/2015 05:29 PM, Feng Tian wrote:
I have not read Jeff's patch, but here is how I think hash agg should work,
Hash agg scans the lineitem table, performing aggregation in memory. Once
work_mem is exhausted, it writes intermediate state to disk, bucket by
bucket. When the lineitem table is
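I have not read Jeff's patch either; purely as illustration, here is a toy Python sketch of that bucket-by-bucket spill idea. The partitioning and recursion scheme is a guess, not the patch's:

```python
def hash_agg(rows, key, add, init, max_groups=1000, n_parts=4, depth=0):
    """Toy grouped aggregation: once the in-memory hash table is full,
    spill further rows to per-bucket partitions (plain lists stand in
    for spill files), then recurse on each partition."""
    table = {}
    parts = [[] for _ in range(n_parts)]
    for row in rows:
        k = key(row)
        if k in table:
            table[k] = add(table[k], row)
        elif len(table) < max_groups:
            table[k] = add(init, row)
        else:
            # hash table full: route the row to a spill partition by key hash
            parts[(hash(k) + depth) % n_parts].append(row)
    result = dict(table)
    for part in parts:
        if part:
            result.update(hash_agg(part, key, add, init,
                                   max_groups, n_parts, depth + 1))
    return result
```

A key that ever makes it into the in-memory table is never spilled, so no group is split between memory and disk.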
Magnus Hagander mag...@hagander.net writes:
On Sat, Jun 20, 2015 at 10:55 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I dunno that tweaking the format would accomplish much. Where I'd love
to get to is to not have to write the data to disk at all (except at
shutdown). But that seems to require an
Hi,
On 06/20/2015 03:01 AM, Jeff Janes wrote:
Hmmm, that's probably true. OTOH correlated columns are not all that
uncommon (e.g. table storing time-series data etc.), and this blowup
is quite bad ...
True, but we don't know how big a problem the density skew
might be
Hi,
Sorry for the late response. The current patch only fixes the scenario-1
listed below. It will not address the scenario-2. Also we need a fix in
unix_latch.c where the remaining sleep time is evaluated, if the latch is woken
by other events (or result=0). Here too it is possible the latch might go
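The remaining-sleep-time recomputation can be sketched as follows. This is an illustrative Python model, not the unix_latch.c fix itself: the deadline is pinned to a monotonic clock, so a system time change cannot stretch or shrink the total wait:

```python
import time

def wait_with_timeout(poll_once, timeout_s):
    """poll_once(remaining) blocks for up to `remaining` seconds and
    returns True when the awaited event fired, False on any other
    wakeup (the result=0 case in the mail)."""
    deadline = time.monotonic() + timeout_s
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False                  # timed out
        if poll_once(remaining):
            return True                   # the event we were waiting for
        # woken for some other reason: loop and recompute the remainder
```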
Peter Eisentraut pete...@gmx.net writes:
With the recently released Perl 5.22.0, the tests fail thus:
-ERROR: Global symbol $global requires explicit package name at line 3.
-Global symbol $other_global requires explicit package name at line 4.
+ERROR: Global symbol $global requires explicit
On Tue, Jun 16, 2015 at 4:47 AM, Jim Nasby jim.na...@bluetreble.com wrote:
Magnus idea was to first optimize the collector to make it less of a
problem to collect more data. Sounds like a good thing to do, but maybe
more data in it wouldn't be a problem as long as you don't read too
often
On 06/20/2015 11:32 AM, Tom Lane wrote:
Magnus Hagander mag...@hagander.net writes:
On Sat, Jun 20, 2015 at 10:55 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I dunno that tweaking the format would accomplish much. Where I'd love
to get to is to not have to write the data to disk at all (except at
On 20 June 2015 at 18:19, Tom Lane t...@sss.pgh.pa.us wrote:
The key question here is whether filtering functions/operators at the
level of extensions is a good design. It seems to me like a reasonable
compromise between flexibility and ease of use, but others might see it
differently.
I
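To make the compromise concrete, a hypothetical sketch of an extension-level shippability check (the names here are illustrative; the real postgres_fdw code will differ):

```python
def is_shippable(func, builtin_funcs, func_extension, server_extensions):
    """A function or operator is safe to send to the remote only if it
    is built-in, or belongs to an extension the foreign server was
    declared to have.  func_extension maps a function to the extension
    that owns it (absent for plain user functions)."""
    if func in builtin_funcs:
        return True
    ext = func_extension.get(func)
    return ext is not None and ext in server_extensions
```

Plain user-defined functions with no owning extension stay unshippable, which is the conservative default.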
On Fri, Jun 19, 2015 at 15:06, Simon Riggs (si...@2ndquadrant.com)
wrote:
It doesn't say anything about there being only one index buffer per table,
nor do I think it would make sense to do it that way. So ISTM that the
foreground process still has to insert serially into N index
Vik Fearing wrote:
Session 1:
begin;
alter default privileges in schema bug grant all on tables to postgres;
Session 2:
alter default privileges in schema bug grant all on tables to postgres;
hangs
Session 1:
commit;
Session 2:
ERROR: tuple concurrently updated
So it turns out
Alvaro Herrera wrote:
So it turns out we don't have any locking here at all. I don't believe
we have it for all object types, but in most cases it's not as obnoxious
as this one. But at least for relations we have some nice coding in
RangeVarGetRelidExtended and
On Sat, Jun 20, 2015 at 7:01 PM, Tom Lane t...@sss.pgh.pa.us wrote:
If we arranged things so that the processes could use the data in the
DSM directly rather than having to copy it out, we'd presumably save
quite a bit of memory, since the whole structure would be shared
rather than each
On Sat, Jun 20, 2015 at 6:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
One idea would be to advertise a DSM ID in the main shared memory
segment, and have the individual backends read that value and attach
to it. When new stats are generated, the stats
Robert Haas wrote:
If we arranged things so that the processes could use the data in the
DSM directly rather than having to copy it out, we'd presumably save
quite a bit of memory, since the whole structure would be shared
rather than each backend having its own copy. But if the structure
On Sat, Jun 20, 2015 at 11:15 AM, Magnus Hagander mag...@hagander.net wrote:
I've considered both that and to perhaps use a shared memory message queue
to communicate. Basically, have a backend send a request when it needs a
snapshot of the stats data and get a copy back through that method
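That request/response idea can be modeled in a few lines. This is a toy Python sketch (queues stand in for shared memory message queues), not the proposed implementation:

```python
import queue
import threading

def collector(requests, stats):
    # stats collector loop: answer each request with a copy of the
    # current stats; None is a shutdown sentinel
    while True:
        reply_q = requests.get()
        if reply_q is None:
            break
        reply_q.put(dict(stats))          # copy out a consistent snapshot

requests = queue.Queue()
stats = {'xact_commit': 42}
t = threading.Thread(target=collector, args=(requests, stats))
t.start()

reply = queue.Queue()
requests.put(reply)                       # backend sends a request...
snapshot = reply.get()                    # ...and gets a snapshot back
requests.put(None)
t.join()
```

Each backend gets a private copy on demand, so nothing ever has to hit disk except at shutdown.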
On Sat, Jun 20, 2015 at 6:53 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Sat, Jun 20, 2015 at 12:44 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Andres Freund and...@anarazel.de writes:
On 2015-06-19 11:16:18 -0400, Robert Haas wrote:
On Fri, Jun 19, 2015 at 11:07 AM, Tom Lane
Robert Haas robertmh...@gmail.com writes:
One idea would be to advertise a DSM ID in the main shared memory
segment, and have the individual backends read that value and attach
to it. When new stats are generated, the stats collector creates a
new DSM (which might be bigger or smaller than
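The advertise-and-swap scheme can be sketched as follows. A hypothetical Python model (an immutable dict stands in for a DSM segment, a lock-protected slot for the advertised ID in main shared memory):

```python
import threading

class StatsPublisher:
    """Writer builds a fresh snapshot (a 'new DSM') and atomically swaps
    the advertised reference; readers 'attach' by grabbing whatever is
    currently advertised and share it without copying.  In the real
    design, DSM detach of the last reader would reclaim old segments."""
    def __init__(self):
        self._lock = threading.Lock()
        self._current = ({}, 0)           # (snapshot, generation)

    def publish(self, stats):
        with self._lock:                  # the 'advertised DSM ID' swap
            self._current = (dict(stats), self._current[1] + 1)

    def attach(self):
        with self._lock:
            return self._current          # readers share, never copy
```

A reader holding an old snapshot keeps a consistent view; it simply re-attaches to pick up the new generation.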
Robert Haas robertmh...@gmail.com writes:
Meanwhile, here is an updated patch.
I don't care for that patch too much: it seems a bit brute-force, and I'm
quite worried by the assumption that it's okay to destroy each child's
append_rel_list after processing the child. That would fail if any of
On Wed, Jun 17, 2015 at 3:36 AM, Prakash Itnal prakash...@gmail.com wrote:
Currently we have observed that certain postgres child processes, e.g. the
autovacuum worker, are not working as expected if there is a system time
change. So I wanted to know if postgres already supports system time changes
or
Hi,
On 06/21/2015 12:15 AM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
One idea would be to advertise a DSM ID in the main shared memory
segment, and have the individual backends read that value and attach
to it. When new stats are generated, the stats collector creates a
new
Robert Haas robertmh...@gmail.com writes:
On Sat, Jun 20, 2015 at 6:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Hmmm. This sounds attractive, but what happens if we fail to create
a new DSM when needed?
Presumably you keep using the old one and retry later. I mean, out of
memory is out of
On Sat, Jun 20, 2015 at 6:48 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Meanwhile, here is an updated patch.
I don't care for that patch too much: it seems a bit brute-force, and I'm
quite worried by the assumption that it's okay to destroy each child's