On Mon, Jul 18, 2016 at 10:05 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Jan Wieck <j...@wi3ck.info> writes:
> > In the meantime, would it be appropriate to backpatch the double linking
> > of memory context children at this time? I believe it has had plenty of
time, the statement is executed.
But such a change definitely needs more testing and probably won't
qualify for backpatching.
In the meantime, would it be appropriate to backpatch the double linking
of memory context children at this time? I believe it has had plenty of
testing in the 9.6 cycle
on that "we don't have multithreading today."
Regards, Jan
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
--
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info
BTW, here is the email thread about double-linking MemoryContext children
patch, that Kevin at the end committed to master.
https://www.postgresql.org/message-id/55F2D834.8040106%40wi3ck.info
Regards, Jan
On Sat, Jul 16, 2016 at 3:47 PM, Jan Wieck <j...@wi3ck.info> wrote:
>
>
asby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 855-TREBLE2 (855-873-2532) mobile: 512-569-9461
>
>
rees. Those things are
much easier in a trivial database implementation, where an ALTER TABLE is
just trampling over a running transaction.
Regards, Jan
--
Jan Wieck
Senior Postgres Architect
http://pgblog.wi3ck.info
not an academic gripe -- I just exploded production :-D.
>
> merlin
>
>
>
--
Jan Wieck
Senior Postgres Architect
ose blocks in memory. Basically we
are talking about the active portion of your database, shared buffers,
the sum of all process local memory and the complete pg_xlog directory
content fitting into RAM.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 09/18/2015 10:47 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
Attached is a complete rework of the fix from scratch, based on Tom's
suggestion.
The code now maintains a double linked list as suggested, but only uses
it to mark all currently valid entries as invali
On 09/15/2015 12:02 PM, Jan Wieck wrote:
On 09/15/2015 11:54 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
Would it be appropriate to use (Un)RegisterXactCallback() in case of
detecting excessive sequential scanning of that cache and remove all
invalid entries from it at tha
On 09/15/2015 11:54 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
Would it be appropriate to use (Un)RegisterXactCallback() in case of
detecting excessive sequential scanning of that cache and remove all
invalid entries from it at that time?
Another idea is that it's not th
On 09/15/2015 10:58 AM, Tom Lane wrote:
I wrote:
Jan Wieck <j...@wi3ck.info> writes:
I'm able to reproduce that failure with CLOBBER_CACHE_ALWAYS and it
definitely is caused by this commit. Do you want to back it out for the
time being? Kevin is on vacation right now.
I'll take a look
cation right now.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 09/15/2015 09:51 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
On 09/14/2015 09:56 AM, Tom Lane wrote:
Kevin Grittner <kgri...@postgresql.org> writes:
Fix an O(N^2) problem in foreign key references.
Judging from the buildfarm, this patch is broken under
CLOBBER_
to change both places
that are affected, so ResourceOwnerCreate() in resowner.c would
need a line or two added.
ResourceOwnerCreate() sets ResourceOwnerData.nextchild, not
MemoryContextData.nextchild.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
makes MemoryContext children a double linked list
that no longer needs any list traversal to find the position of the
child within the list.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c
index
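The gain from the double linking described above is easy to see in miniature: with only a forward link, unlinking a child context requires walking the parent's child list to find the predecessor (O(N) per delete, hence O(N^2) when many contexts die out of LIFO order), while an added back link makes each unlink O(1). A toy model with hypothetical names, not the actual mcxt.c code:

```python
class Context:
    """Toy stand-in for a memory context node (hypothetical, not mcxt.c)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.firstchild = None
        self.prevchild = None   # the extra back link the patch adds
        self.nextchild = None
        if parent is not None:
            # push onto the front of the parent's child list, fixing both links
            self.nextchild = parent.firstchild
            if parent.firstchild is not None:
                parent.firstchild.prevchild = self
            parent.firstchild = self

    def unlink(self):
        """O(1) removal from the parent's child list; no traversal needed."""
        if self.prevchild is not None:
            self.prevchild.nextchild = self.nextchild
        elif self.parent is not None:
            self.parent.firstchild = self.nextchild
        if self.nextchild is not None:
            self.nextchild.prevchild = self.prevchild
        self.parent = self.prevchild = self.nextchild = None

top = Context("top")
kids = [Context(f"c{i}", top) for i in range(4)]
kids[1].unlink()                      # delete out of LIFO order, still O(1)
remaining = []
node = top.firstchild
while node is not None:
    remaining.append(node.name)
    node = node.nextchild
print(remaining)                      # children were pushed LIFO: c3, c2, c0
```

With a singly linked child list the same `unlink` would have to start at `firstchild` and walk forward until it finds the node, which is exactly the traversal the patch removes.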
On 09/11/2015 09:38 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
The attached script demonstrates an O(N^2) problem we recently became
aware of. The script simply executes a large anonymous code block that
doesn't do anything useful. Usage is
./t1.py [NUM_STMTS]
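A reproduction in the spirit described above just builds one huge anonymous code block out of many trivial statements, so the backend creates (and later destroys) thousands of per-statement structures. A hypothetical reconstruction of the generator, not the original t1.py:

```python
def make_do_block(num_stmts=1000):
    """Build an anonymous DO block of num_stmts trivial statements
    (hypothetical reconstruction of the reproduction script, not t1.py)."""
    body = "\n".join(f"    v := {i};" for i in range(num_stmts))
    return f"DO $$\nDECLARE\n    v int;\nBEGIN\n{body}\nEND;\n$$;"

sql = make_do_block(3)
print(sql)   # a small DO block; scale num_stmts up to observe the
             # quadratic behavior when the block is executed
```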
On 09/11/2015 09:58 AM, Tom Lane wrote:
Jan Wieck <j...@wi3ck.info> writes:
On 09/11/2015 09:38 AM, Tom Lane wrote:
Seems less invasive to fix SPI to delete in the opposite order?
That would assume that there are no other code paths that destroy memory
contexts out of LIFO order.
On 09/08/2015 08:49 AM, Kevin Grittner wrote:
Jan Wieck <j...@wi3ck.info> wrote:
The problem is a cache introduced in commit 45ba4247 that improves
That's a bit off; 45ba424f seems to be what you mean.
Copy paste over paper.
foreign key lookups during bulk updates when the FK
was supposed to improve.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
diff --git a/src/backend/utils/adt/ri_triggers.c b/src/backend/utils/adt/ri_triggers.c
index 61edde9..d7023ce 100644
--- a/src/backend/utils/adt/ri_triggers.c
+++ b/src/backend/utils/adt/ri_triggers.c
@@ -183,6
bottleneck has on 9.5.
Sorry for the noise, Jan
On 06/10/2015 09:18 AM, Jan Wieck wrote:
Hi,
I think I may have found one of the problems, PostgreSQL has on machines
with many NUMA nodes. I am not yet sure what exactly happens on the NUMA
bus, but there seems to be a tipping point at which
of physical-RAM / concurrent-connections depending on the
complexity of your queries. 48GB does not sound reasonable.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
the real solution will probably look much different. But I thought it
would be good to share this and start the discussion about reevaluating
the spinlock code before PGCon.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
diff --git a/src/backend/storage/lmgr/s_lock.c b/src
On 06/10/2015 09:28 AM, Andres Freund wrote:
On 2015-06-10 09:18:56 -0400, Jan Wieck wrote:
On a machine with 8 sockets, 64 cores, Hyperthreaded 128 threads total, a
pgbench -S peaks with 50-60 clients around 85,000 TPS. The throughput then
takes a very sharp dive and reaches around 20,000 TPS
with the assembler xchgb operation used by PostgreSQL
today on x86_64.
Regards, Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 02/05/2015 02:15 PM, Jan Wieck wrote:
On 02/05/2015 01:18 PM, Marko Tiikkaja wrote:
pgcrypto bug
That doesn't look too good, but I can't reproduce it against 9.3.6 either.
Attached is an improved script and the final output from it. I ran it
over night and it did not reproduce
data is encoded. I.e. that the
decryption algorithm cannot really figure out what is wrong and just
sometimes gets a little further in the attempt to decrypt.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
pgcrypto_test.sh
Description: application/shellscript
On 02/05/2015 10:58 AM, Tom Lane wrote:
Jan Wieck j...@wi3ck.info writes:
I have encountered a small instability in the behavior of pgcrypto's
pgp_sym_decrypt() function. Attached is a script that can reproduce the
problem. It may have to be run repeatedly because the symptom occurs
rather
On 02/05/2015 01:18 PM, Marko Tiikkaja wrote:
On 2/5/15 4:48 PM, Jan Wieck wrote:
What the script does is to encode a small string with pgp_sym_encrypt()
and then repeatedly try to decrypt it with different wrong passwords.
The expected error message for that is of course
Wrong key
those be moved back to
where they used to be?
Slony uses the definitions in external tools, like slon and slonik, to
format sequence numbers in log output.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
information for those cases and I believe
that tackling them one at a time in depth will lead to more useful
results than trying to cover a lot but shallow.
My $.02
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
might want to do that function renaming in pg_receivexlog.c as well.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 10/01/2014 01:19 PM, Andres Freund wrote:
On 2014-10-01 13:17:17 -0400, Jan Wieck wrote:
-static void StreamLog();
+static void StreamLogicalLog();
Not related at all to those patches, but for correctness it is better to
add a declaration (void).
Agreed.
Except those small things
On 10/01/2014 01:32 PM, Andres Freund wrote:
On 2014-10-01 13:22:53 -0400, Jan Wieck wrote:
On 10/01/2014 01:19 PM, Andres Freund wrote:
On 2014-10-01 13:17:17 -0400, Jan Wieck wrote:
-static void StreamLog();
+static void StreamLogicalLog();
Not related at all to those patches
.
If there indeed has to be a catversion bump in the process of this, then
I agree with Craig.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
?
But it's another thing if we design specific syntax that encourages such
abuse, as proposed earlier.
The design should explicitly discourage that sort of nonsense.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
found inside of complex code
constructs to check values of local variables. I don't think it is even
a good idea to implement assertions that can query arbitrary data.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
, the probability of
hitting such an extreme value is minuscule, so if you're just doing a
few quick test runs with a small total number of transactions, you won't
hit that.
- Heikki
--
Jan Wieck
Senior Software Engineer
http://slony.info
it by a unique
set of columns. Making PL/pgSQL behave differently on UPDATE than SQL to
enforce that by default was simply a misguided design idea.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
system, you try to make it foolproof.
Guess what, the second you made something foolproof, evolution will
create a dumber fool. This is a race you cannot win.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
try to make them special, in my mind they are not.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 09/06/2014 12:33 PM, Marko Tiikkaja wrote:
On 2014-09-06 6:31 PM, Jan Wieck wrote:
On 09/06/2014 12:17 PM, Marko Tiikkaja wrote:
OK, fine. But that's not what I suggested on the wiki page, and is also
not what I'm arguing for here right now. What the message you referred
to was about
cannot be the default.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
for it.
+1 on the OUT alias.
-1 on the ASSERT as proposed. It would be too easy for application
developers to abuse them to govern business logic and a DBA later
turning off assertions for performance reasons.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
. Sure, all can be
reduced to ASSERT(expr) .. but what is the benefit over a function call
that I am able to do without any change of language, as a plpgsql extension -
there is no need to make any change for too thin a proposal
.marko
--
Jan Wieck
Senior Software Engineer
http://slony.info
(default 2 seconds).
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
on vacation next
week, getting ready for my first day at EnterpriseDB, this may actually
happen.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
SELECT (or other DML
statement).
I think I like the COMMAND CONSTRAINT the best so far.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 09/04/2014 09:31 AM, Pavel Stehule wrote:
2014-09-04 15:24 GMT+02:00 Jan Wieck j...@wi3ck.info
I think I like the COMMAND CONSTRAINT the best so far.
I do not, because if it is not part of SQL, the parser in plpgsql
will be more complex. You have to inject SELECT, UPDATE, INSERT
into the main parser,
not into the PL.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
do it right.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
for the time being
give us enough room to come up with a proper solution for 9.5?
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
such input data would make it fail again, even with lengths.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
/GUC settings.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
of all it is
[DECLARE
declarations]
BEGIN
statements
END;
Second statements includes the whole definition above as a statement
and controls identifier visibility and such compatible to PL/SQL.
You want to lose that? Not such a great idea, IMHO.
Jan
--
Jan Wieck
Senior Software Engineer
http
operate on one row for each
UPDATE command.
Your idea of what an OLTP application is seems flawed.
b) Data warehousing applications where you process multiple rows in
each UPDATE command.
Ditto.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
that pretty and performant?
Because the moment your pretty language is out there, be sure users
will kick your behind whenever they use that pretty stuff on
anything but a toy setup and it spirals their servers into a DOS attack state.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
or plpgsql2. If you
cannot get that same syntax to apply to default client queries, then the
whole idea is bogus because it will confuse developers more than it
helps you with your particular way of thinking.
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
for INTO because it's referring to the INTO. And it's much more obvious
what it could mean there.
For once I completely agree with Andres.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
on the side of the highway for a
decade already.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
mean that. No sarcasm.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
to discourage anyone from trying. Just saying that it doesn't
fit into our existing regression test framework.
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 04/13/14 08:27, Marko Kreen wrote:
On Sat, Apr 12, 2014 at 02:10:13PM -0400, Jan Wieck wrote:
Since it doesn't seem to produce any side effects, I'd think that
making the snapshot unique within txid_current_snapshot() and
filtering duplicates on input should be sufficient and eligible
On 04/13/14 14:22, Jan Wieck wrote:
On 04/13/14 08:27, Marko Kreen wrote:
I think you need to do SET_VARSIZE also here. Alternative is to
move SET_VARSIZE after sort_snapshot().
And it seems the drop-double-txid logic should be added also to
txid_snapshot_recv(). It seems weird to have
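The fix being discussed amounts to sorting the xip array and dropping adjacent duplicates in every path that builds or accepts a snapshot (txid_current_snapshot(), and as noted above txid_snapshot_recv() as well). A toy model of that dedup step, not the actual C code in the backend:

```python
def sort_and_dedup_xip(xip):
    """Sort an xip list and drop adjacent duplicates, modeling the
    drop-double-txid logic discussed above (toy code, not txid.c)."""
    out = []
    for txid in sorted(xip):
        if not out or out[-1] != txid:
            out.append(txid)
    return out

print(sort_and_dedup_xip([100, 102, 100, 101]))  # [100, 101, 102]
```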
On 04/12/14 03:27, Heikki Linnakangas wrote:
On 04/12/2014 12:07 AM, Jan Wieck wrote:
Hackers,
the Slony team has been getting seldom reports of a problem with the
txid_snapshot data type.
The symptom is that a txid_snapshot on output lists the same txid
multiple times in the xip list part
glancing over the code earlier.
Right, that is easy enough and looks like an acceptable fix for back
branches too.
Thanks,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 04/12/14 10:09, Greg Stark wrote:
A pg_restore would start a new xid space from FirstNormalXid which would
obviously not work with any stored txids.
http://www.postgresql.org/docs/9.3/static/app-pgresetxlog.html
Regards,
Jan
--
Jan Wieck
Senior Software Engineer
http://slony.info
On 04/12/14 11:18, Andres Freund wrote:
On 2014-04-12 11:15:09 -0400, Jan Wieck wrote:
On 04/12/14 10:09, Greg Stark wrote:
A pg_restore would start a new xid space from FirstNormalXid which would
obviously not work with any stored txids.
http://www.postgresql.org/docs/9.3/static/app
On 04/12/14 10:03, Andres Freund wrote:
On 2014-04-12 09:47:24 -0400, Tom Lane wrote:
Heikki Linnakangas hlinnakan...@vmware.com writes:
On 04/12/2014 12:07 AM, Jan Wieck wrote:
the Slony team has been getting seldom reports of a problem with the
txid_snapshot data type.
The symptom
change for a back release, but considering
that the _out() generated external representation of an existing binary
datum won't pass the type's _in() function, I argue that such change is
warranted. Especially since this problem could possibly corrupt a dump.
Comments?
Jan
--
Jan Wieck
Senior
On 05/01/13 04:57, Fabien COELHO wrote:
Add --throttle to pgbench
Each client is throttled to the specified rate, which can be expressed in
tps or in time (s, ms, us). Throttling is achieved by scheduling
transactions along a Poisson-distribution.
This is an update of the previous
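Scheduling transactions "along a Poisson-distribution" means drawing exponentially distributed gaps between transaction start times with mean 1/rate. A sketch of that scheduling idea only, not pgbench's actual C implementation:

```python
import random

def poisson_schedule(rate_tps, n, seed=42):
    """Start times for n transactions throttled to rate_tps on average:
    a Poisson process has exponential inter-arrival gaps of mean 1/rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate_tps)   # gap in seconds
        times.append(t)
    return times

times = poisson_schedule(100.0, 10_000)
print(times[-1])   # close to 100 s of schedule at ~100 tps
```

The exponential gaps are what make the load "gentle": short and long pauses mix naturally instead of firing transactions at rigid fixed intervals.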
On 06/19/13 14:34, Fabien COELHO wrote:
The use case of the option is to be able to generate a continuous gentle
load for functional tests, eg in a practice session with students or for
testing features on a laptop.
Why does this need two option formats (-H and --throttle)?
On the latest
On 4/12/2013 1:57 PM, Tom Lane wrote:
Kevin Grittner kgri...@ymail.com writes:
Tom Lane t...@sss.pgh.pa.us wrote:
I think that the minimum appropriate fix here is to revert the hunk
I quoted, ie take out the suppression of stats reporting and analysis.
I'm not sure I understand -- are you
On 4/12/2013 2:08 PM, Alvaro Herrera wrote:
Tom Lane escribió:
Are you saying you intend to revert that whole concept? That'd be
okay with me, I think. Otherwise we need some thought about how to
inform the stats collector what's really happening.
Maybe what we need is to consider table
On 4/18/2013 11:44 AM, Jan Wieck wrote:
Yes, that was the rationale behind it combined with don't change
function call sequences and more all over the place.
function call signatures
--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin
On 12/20/2012 4:47 PM, Dimitri Fontaine wrote:
Tom Lane t...@sss.pgh.pa.us writes:
The reason this fails is that you've got a half-megabyte source string,
and each of the 11000 plans that are due to be created from it saves
its own copy of the source string. Hence, 5500 megabytes needed just
On 12/14/2012 3:20 PM, Robert Haas wrote:
On Fri, Dec 14, 2012 at 2:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
... In more
than ten years of working with PostgreSQL, I've never encountered
where the restriction at issue here prevented a bug. It's only
On 12/12/2012 1:12 PM, Simon Riggs wrote:
Currently, ANALYZE collects data on all columns and stores these
samples in pg_statistic where they can be seen via the view pg_stats.
In some cases we have data that is private and we do not wish others
to see it, such as patient names. This becomes
On 12/12/2012 3:12 PM, Simon Riggs wrote:
On 12 December 2012 19:13, Jan Wieck janwi...@yahoo.com wrote:
On 12/12/2012 1:12 PM, Simon Riggs wrote:
Currently, ANALYZE collects data on all columns and stores these
samples in pg_statistic where they can be seen via the view pg_stats.
In some
On 12/9/2012 2:37 PM, Kevin Grittner wrote:
Jan Wieck wrote:
Based on the discussion and what I feel is a consensus I have
created an updated patch that has no GUC at all. The hard coded
parameters in include/postmaster/autovacuum.h are
AUTOVACUUM_TRUNCATE_LOCK_CHECK_INTERVAL 20 /* ms
I am aware that in the case at hand, the call to make_fn_arguments() is
with the only possible candidate function, so changing COERCE_IMPLICIT
to COERCE_ASSIGNMENT inside of make_fn_arguments() is correct. But I
wonder if this may have any unwanted side effects for other code paths
to
On 12/6/2012 12:45 PM, Robert Haas wrote:
On Wed, Dec 5, 2012 at 10:16 PM, Jan Wieck janwi...@yahoo.com wrote:
That sort of dynamic approach would indeed be interesting. But I fear that
it is going to be complex at best. The amount of time spent in scanning
heavily depends on the visibility map
Kevin and Robert are well aware of most of the below. I just want to put
this out here so other people, who haven't followed the discussion too
closely, may chime in.
Some details on the problem:
First of all, there is a minimum number of 1000 pages that the vacuum
scan must detect as
On 12/5/2012 2:00 PM, Robert Haas wrote:
Maybe it'd be sensible to relate the retry time to the time spent
vacuuming the table. Say, if the amount of time spent retrying
exceeds 10% of the time spent vacuuming the table, with a minimum of
1s and a maximum of 1min, give up. That way, big tables
On 12/4/2012 8:06 AM, Kevin Grittner wrote:
Jan Wieck wrote:
I believe the check interval needs to be decoupled from the
deadlock_timeout again.
OK
This will leave us with 2 GUCs at least.
Hmm. What problems do you see with hard-coding reasonable values?
The question is what
On 12/4/2012 1:51 PM, Kevin Grittner wrote:
Jan Wieck wrote:
[arguments for GUCs]
This is getting confusing. I thought I had already conceded the
case for autovacuum_truncate_lock_try, and you appeared to spend
most of your post arguing for it anyway. I think. It's a little
hard to tell
On 12/3/2012 5:42 PM, Kevin Grittner wrote:
Jan Wieck wrote:
Attached is a new patch that addresses most of the points raised
in discussion before.
1) Most of the configuration variables are derived from
deadlock_timeout now. The check for conflicting lock request
interval is deadlock_timeout
Attached is a new patch that addresses most of the points raised in
discussion before.
1) Most of the configuration variables are derived from deadlock_timeout
now. The check for conflicting lock request interval is
deadlock_timeout/10, clamped to 10ms. The try to acquire exclusive
lock
On 11/28/2012 3:33 PM, Kevin Grittner wrote:
Kevin Grittner wrote:
I still need to review the timing calls; since I'm not familiar
with them, it wasn't immediately obvious to me whether they
were being used correctly. I have no reason to believe that they
aren't, but feel I should check.
On 11/29/2012 9:46 AM, Tom Lane wrote:
Jan Wieck janwi...@yahoo.com writes:
On 11/28/2012 3:33 PM, Kevin Grittner wrote:
Resetting starttime this way seems especially odd.
instr_time is LARGE_INTEGER on Win32 but struct timeval on Unix. Is
starttime = currenttime;
portable if those
On 10/27/2012 2:41 PM, Heikki Linnakangas wrote:
And it's not at all
clear to me that it would perform better than full_page_writes. You're
writing and flushing out roughly the same amount of data AFAICS.
I think this assumption is wrong. full_page_writes=on means we write the
full page
On 10/28/2012 10:50 PM, Peter Geoghegan wrote:
On 28 October 2012 22:35, Jan Wieck janwi...@yahoo.com wrote:
The amount of WAL generated with full_page_writes=on is quite substantial.
For pgbench, for example, the ratio is 20:1. Meaning with full_page_writes you
write 20x the amount you do without
Steven,
On 10/24/2012 10:46 PM, Stephen Frost wrote:
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
This problem has been discussed before. Those familiar with the
subject please skip the next paragraph.
Apologies if this was already thought-of and ruled out for some reason,
but...
Because
On 10/25/2012 9:45 AM, Tom Lane wrote:
Jan Wieck janwi...@yahoo.com writes:
On 10/24/2012 10:46 PM, Stephen Frost wrote:
Would it be possible to use the FSM to figure out if things have changed
since the last scan..? Does that scan update the FSM, which would then
be updated by another
On 10/25/2012 10:12 AM, Stephen Frost wrote:
Jan,
* Jan Wieck (janwi...@yahoo.com) wrote:
The problem case this patch is dealing with is rolling window tables
that experienced some bloat. The typical example is a log table,
that has new data constantly added and the oldest data constantly
This problem has been discussed before. Those familiar with the subject
please skip the next paragraph.
When autovacuum finds a substantial amount of empty pages at the end of
a relation, it attempts to truncate it in lazy_truncate_heap(). Because
all the scanning had been done in parallel to
Here is the patch for it.
Jan
--
Anyone who trades liberty for security deserves neither
liberty nor security. -- Benjamin Franklin
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index c9253a9..9f880f0 100644
*** a/src/backend/commands/vacuumlazy.c
---
On 10/15/2012 4:43 PM, Simon Riggs wrote:
Jan spoke at length at PgCon, for all to hear, that what we are
building is a much better way than the trigger logging approach Slony
uses. I don't take that as carte blanche for approval of everything
being done, but its going in the right direction
On 10/15/2012 3:25 PM, Andres Freund wrote:
On Monday, October 15, 2012 09:18:57 PM Peter Geoghegan wrote:
On 15 October 2012 19:19, Bruce Momjian br...@momjian.us wrote:
I think Robert is right that if Slony can't use the API, it is unlikely
any other replication system could use it.
I