I posted this on this mailing list before: at Jane Street we have developed
very fast code to get timing information based on the TSC, if available. It's
all OCaml but well documented and mostly just calls to C functions, so it
should be easy to port to C, and we release it under a very liberal license,
so
On Thu, May 15, 2014 at 8:19 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
I posted this on this mailing list before: at Jane Street we have developed
very fast code to get timing information based on the TSC, if available. It's
all OCaml but well documented and mostly just calls to C
A new version of the selectivity estimation patch is attached. I am adding
it to CommitFest 2014-06. The previous version was reviewed by
Andreas Karlsson on the previous CommitFest with the GiST support patch.
The new version includes join selectivity estimation.
Join selectivity is calculated in 4
Hi All,
I am running dbt2 with the latest PostgreSQL kit, pg9.4. I tried everything again
after setting up a whole new machine with Ubuntu. Still facing the
same error.
I ran *dbt2-pgsql-build-db -w 1*
but, after some time, I faced this error
On 05/14/2014 06:06 PM, Noah Misch wrote:
On Wed, May 14, 2014 at 05:51:24PM +0300, Heikki Linnakangas wrote:
On 05/14/2014 05:37 PM, Noah Misch wrote:
On Wed, May 14, 2014 at 03:15:38PM +0300, Heikki Linnakangas wrote:
On 05/09/2014 02:56 AM, Noah Misch wrote:
MinGW:
On Thu, May 15, 2014 at 8:19 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
I posted this on this mailing list before: at Jane Street we have developed
very fast code to get timing information based on the TSC, if available. It's
all OCaml but well documented and mostly just calls to C
On Thu, May 15, 2014 at 11:31 AM, Greg Stark st...@mit.edu wrote:
On Thu, May 15, 2014 at 8:19 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
I posted this on this mailing list before: at Jane Street we have
developed
very fast code to get timing information based on the TSC if
On 2014-05-15 12:04:25 +0100, Benedikt Grundmann wrote:
On Thu, May 15, 2014 at 11:31 AM, Greg Stark st...@mit.edu wrote:
On Thu, May 15, 2014 at 8:19 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
I posted this on this mailing list before: at Jane Street we have
developed
Hi,
On 2014-05-13 18:58:11 -0400, Tom Lane wrote:
Anyway it looks like clock_gettime() might be worth using on Linux
just for the more precise output. It doesn't seem to exist on OS X
though, and I have no idea about elsewhere.
Agreed that using clock_gettime() would be a good idea. I'd say
On 05/14/2014 08:49 PM, Euler Taveira wrote:
While updating the pt-br translation I noticed that some sentences could be
improved. I also fixed some style glitches. A set of patches is attached.
Thanks, applied.
- Heikki
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To
On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
Well, for what it's worth, I've encountered systems where setting
effective_cache_size too low resulted in bad query plans, but I've
never encountered the reverse situation.
I agree with that.
Though that misses my point,
On Thu, May 15, 2014 at 6:20 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Ok, I committed #undefs. I don't have a Mingw(-w64) environment to test
with, so let's see if the buildfarm likes it.
There does not seem to be a buildfarm machine using MinGW-w64... Btw,
I tested latest master on
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know what other concurrent activity was happening. Seems we need
concurrent activity detection before auto-tuning work_mem and
On 05/15/2014 04:15 PM, Michael Paquier wrote:
On Thu, May 15, 2014 at 6:20 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
Ok, I committed #undefs. I don't have a Mingw(-w64) environment to test
with, so let's see if the buildfarm likes it.
There does not seem to be a buildfarm machine
On 05/06/2014 02:44 PM, Andres Freund wrote:
On 2014-05-05 13:41:00 +0300, Heikki Linnakangas wrote:
+/*
+ * Exit hook to unlock the global transaction entry we're working on.
+ */
+static void
+AtProcExit_Twophase(int code, Datum arg)
+{
+ /* same logic as abort */
+
On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know what other concurrent activity was happening. Seems we need
concurrent
On Thu, May 15, 2014 at 11:24 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know
On 2014-05-15 17:21:28 +0300, Heikki Linnakangas wrote:
Is it guaranteed that all paths have called LWLockReleaseAll()
before calling the proc exit hooks? Otherwise we might end up waiting
for ourselves...
Hmm. AbortTransaction() will release locks before we get here, but the
From the department of punishing good deeds ... in commit 2dc4f011fd
I added setvbuf() calls to initdb to ensure that output to stdout and
stderr would appear in a consistent order regardless of whether the
output was going to a terminal or a file. The buildfarm shows that
on several (but not
On 04/14/2014 11:55 AM, Marko Kreen wrote:
On Sun, Apr 13, 2014 at 05:46:20PM -0400, Jan Wieck wrote:
On 04/13/14 14:22, Jan Wieck wrote:
On 04/13/14 08:27, Marko Kreen wrote:
I think you need to do SET_VARSIZE also here. Alternative is to
move SET_VARSIZE after sort_snapshot().
And it
Tom Lane wrote:
It might also be reasonable to create a wrapper macro along the lines of
PG_STD_IO_BUFFERING() that would encapsulate the whole sequence
setvbuf(stdout, NULL, _IOLBF, 0);
setvbuf(stderr, NULL, _IONBF, 0);
Or maybe we should have separate macros for those two calls.
On Thu, May 15, 2014 at 11:36:51PM +0900, Amit Langote wrote:
No, all memory allocation is per-process, except for shared memory. We
probably need a way to record our large local memory allocations in
PGPROC that other backends can see; same for effective cache size
assumptions we make.
Alvaro Herrera alvhe...@2ndquadrant.com writes:
Tom Lane wrote:
It might also be reasonable to create a wrapper macro along the lines of
PG_STD_IO_BUFFERING() that would encapsulate the whole sequence
setvbuf(stdout, NULL, _IOLBF, 0);
setvbuf(stderr, NULL, _IONBF, 0);
Or maybe we should have
Hi all,
today I got a few errors like these (this one is from last week, though):
Status Line: 493 snapshot too old: Wed May 7 04:36:57 2014 GMT
Content:
snapshot to old: Wed May 7 04:36:57 2014 GMT
on the new buildfarm animals. I believe it was my mistake (incorrectly
configured
Spotted while testing pg_recvlogical:
1. Set up pg_recvlogical to receive:
./pg_recvlogical -S fooslot -d postgres --create
./pg_recvlogical -S fooslot -d postgres --start -f -
2. In another terminal, with psql:
create table foo (id int4);
begin;
insert into foo values (4);
alter table foo
On 2014-05-13 17:43:47 +0300, Heikki Linnakangas wrote:
On 05/13/2014 04:35 PM, Andres Freund wrote:
On 2014-05-13 16:31:25 +0300, Heikki Linnakangas wrote:
Another thing I noticed is that when the output goes to a file, the file
isn't re-opened immediately on SIGHUP. Only after receiving
On 05/15/2014 07:57 PM, Heikki Linnakangas wrote:
Spotted while testing pg_recvlogical:
1. Set up pg_recvlogical to receive:
./pg_recvlogical -S fooslot -d postgres --create
./pg_recvlogical -S fooslot -d postgres --start -f -
2. In another terminal, with psql:
create table foo (id int4);
On 05/15/2014 07:59 PM, Andres Freund wrote:
On 2014-05-13 17:43:47 +0300, Heikki Linnakangas wrote:
On 05/13/2014 04:35 PM, Andres Freund wrote:
On 2014-05-13 16:31:25 +0300, Heikki Linnakangas wrote:
Another thing I noticed is that when the output goes to a file, the file
isn't re-opened
Hi,
On 2014-05-15 20:07:23 +0300, Heikki Linnakangas wrote:
On 05/15/2014 07:57 PM, Heikki Linnakangas wrote:
Spotted while testing pg_recvlogical:
1. Set up pg_recvlogical to receive:
./pg_recvlogical -S fooslot -d postgres --create
./pg_recvlogical -S fooslot -d postgres --start -f -
On 05/15/2014 12:43 PM, Tomas Vondra wrote:
Hi all,
today I got a few errors like these (this one is from last week, though):
Status Line: 493 snapshot too old: Wed May 7 04:36:57 2014 GMT
Content:
snapshot to old: Wed May 7 04:36:57 2014 GMT
on the new buildfarm animals. I
On 2014-05-15 20:07:23 +0300, Heikki Linnakangas wrote:
Ok, so the immediate cause was quick to find: when decoding a
commit-prepared WAL record, we have to use the XID from the record content
(patch attached). The XID in the record header is the XID of the transaction
doing the COMMIT
On 2014-05-15 19:46:57 +0200, Andres Freund wrote:
Attached patch fixes things, but I want to add some regression tests
before commit.
And now actually attached. Will send a patch with regression tests later
tonight or tomorrow. Need to eat first...
Greetings,
Andres Freund
--
Andres
Hello
2014-05-15 15:04 GMT+02:00 Sergey Muraviov sergey.k.murav...@gmail.com:
Hi.
Please review the new patch.
This version works perfectly.
Regards
Pavel
PS
Issues which were described by Tom and Pavel were relevant to single-line
headers.
So I've added appropriate regression tests
Now that popen and pclose don't throw thousands of warnings when compiling
mingw builds, some other warnings stand out.
parallel.c: In function 'pgpipe':
parallel.c:1332:2: warning: overflow in implicit constant conversion
[-Woverflow]
parallel.c:1386:3: warning: overflow in implicit constant
On Mon, May 12, 2014 at 06:01:59PM +0300, Heikki Linnakangas wrote:
Some of the stuff in here will influence whether your freezing
replacement patch gets in. Do you plan to further pursue that one?
Not sure. I got to the point where it seemed to work, but I got a
bit of cold feet
On Mon, May 12, 2014 at 07:16:48PM -0400, Tom Lane wrote:
Christoph Berg c...@df7cb.de writes:
84df54b22e8035addc7108abd9ff6995e8c49264 introduced timestamp
constructors. In the regression tests, various time zones are tested,
including America/Metlakatla. Now, if you configure using
On Wed, May 14, 2014 at 8:46 PM, Jeff Janes jeff.ja...@gmail.com wrote:
+1. I can't think of many things we might do that would be more
important.
Can anyone guess how likely this approach is to make it into 9.5? I've been
pondering some incremental improvements over what we have now, but
Bruce Momjian br...@momjian.us writes:
On Mon, May 12, 2014 at 07:16:48PM -0400, Tom Lane wrote:
I agree, that seems an entirely gratuitous choice of zone. It does
seem like a good idea to test a zone that has a nonintegral offset
from GMT, but we can get that from almost anywhere as long as
On Tue, May 13, 2014 at 09:55:26AM -0400, Alvaro Herrera wrote:
Christoph Berg wrote:
Of course, Wikipedia has something to say about this:
http://en.wikipedia.org/wiki/Timekeeping_on_Mars
Nice.
I especially like MTC, Mars Time Coordinated. But whatever scheme gets
chosen, it won't
On Tue, May 13, 2014 at 06:58:11PM -0400, Tom Lane wrote:
A recent question from Tim Kane prompted me to measure the overhead
costs of EXPLAIN ANALYZE, which I'd not checked in awhile. Things
are far worse than I thought. On my current server (by no means
lavish hardware: Xeon E5-2609
On Thu, May 15, 2014 at 8:06 AM, Bruce Momjian br...@momjian.us wrote:
On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
Well, for what it's worth, I've encountered systems where setting
effective_cache_size too low resulted in bad query plans, but I've
never encountered the
On Thu, May 15, 2014 at 10:38 AM, Andres Freund and...@2ndquadrant.com wrote:
shrug. async.c and namespace.c do the same, and it hasn't been a
problem.
Well, it doesn't seem unreasonable to have C code using
PG_ENSURE_ERROR_CLEANUP/PG_END_ENSURE_ERROR_CLEANUP around a 2pc commit
to me.
In testing 9.4 with some long running tests, I noticed that autovacuum
launcher/worker sometimes goes a bit nuts. It vacuums the same database
repeatedly without respect to the nap time.
As far as I can tell, the behavior is the same in older versions, but I
haven't tested that.
This is my
On Thu, May 15, 2014 at 02:47:21PM -0400, Tom Lane wrote:
Bruce Momjian br...@momjian.us writes:
On Mon, May 12, 2014 at 07:16:48PM -0400, Tom Lane wrote:
I agree, that seems an entirely gratuitous choice of zone. It does
seem like a good idea to test a zone that has a nonintegral offset
On 05/15/2014 08:46 PM, Andres Freund wrote:
On 2014-05-15 20:07:23 +0300, Heikki Linnakangas wrote:
Ok, so the immediate cause was quick to find: when decoding a
commit-prepared WAL record, we have to use the XID from the record content
(patch attached). The XID in the record header is the XID
On Thu, May 15, 2014 at 2:34 PM, Bruce Momjian br...@momjian.us wrote:
On Mon, May 12, 2014 at 06:01:59PM +0300, Heikki Linnakangas wrote:
Some of the stuff in here will influence whether your freezing
replacement patch gets in. Do you plan to further pursue that one?
Not sure. I got to
Jeff Janes wrote:
If you have a database with a large table in it that has just passed
autovacuum_freeze_max_age, all future workers will be funnelled into that
database until the wrap-around completes. But only one of those workers
can actually vacuum the one table which is holding back the
Andres Freund andres at anarazel.de writes:
Hi,
Some comments about the patch:
* Coding Style:
* multiline comments have both /* and */ on their own lines.
* I think several places indent by two tabs.
* Spaces around operators
* ...
* Many of the new comments would enjoy a bit
On 15 May 2014, 19:46, Andrew Dunstan wrote:
On 05/15/2014 12:43 PM, Tomas Vondra wrote:
Hi all,
today I got a few errors like these (this one is from last week,
though):
Status Line: 493 snapshot too old: Wed May 7 04:36:57 2014 GMT
Content:
snapshot to old: Wed May
On 2014-05-15 22:30:53 +0300, Heikki Linnakangas wrote:
On 05/15/2014 08:46 PM, Andres Freund wrote:
On 2014-05-15 20:07:23 +0300, Heikki Linnakangas wrote:
How very weird. The reason for this is that
RecordTransactionCommitPrepared() forgets to fill a couple of fields in
xl_xact_commit. Any
On 2014-05-15 15:40:06 -0400, Robert Haas wrote:
On Thu, May 15, 2014 at 2:34 PM, Bruce Momjian br...@momjian.us wrote:
On Mon, May 12, 2014 at 06:01:59PM +0300, Heikki Linnakangas wrote:
Some of the stuff in here will influence whether your freezing
replacement patch gets in. Do you
On 05/15/2014 03:57 PM, Tomas Vondra wrote:
How long does a CLOBBER_CACHE_RECURSIVELY run take? days or weeks seems
kinda nuts.
I don't know. According to this comment from cache/inval.c, it's expected
to be way slower (~100x) compared to CLOBBER_CACHE_ALWAYS.
/*
* Test code to force cache
On Thu, May 15, 2014 at 10:06:32PM +0200, Andres Freund wrote:
If the larger clog size is a show-stopper (and I'm not sure I have an
intelligent opinion on that just yet), one way to get around the
problem would be to summarize CLOG entries after-the-fact. Once an
XID precedes the xmin of
Andrew Dunstan and...@dunslane.net writes:
Incidentally, should the CLOBBER_CACHE_ALWAYS machines also be defining
CLOBBER_FREED_MEMORY?
The former does need the latter or it's not very thorough. However,
CLOBBER_FREED_MEMORY is defined automatically by --enable-cassert,
so you shouldn't need
On 2014-05-15 16:13:49 -0400, Bruce Momjian wrote:
On Thu, May 15, 2014 at 10:06:32PM +0200, Andres Freund wrote:
If the larger clog size is a show-stopper (and I'm not sure I have an
intelligent opinion on that just yet), one way to get around the
problem would be to summarize CLOG
On 05/15/2014 07:46 PM, Andrew Dunstan wrote:
On 05/15/2014 12:43 PM, Tomas Vondra wrote:
Hi all,
today I got a few errors like these (this one is from last week,
though):
Status Line: 493 snapshot too old: Wed May 7 04:36:57 2014 GMT
Content:
snapshot to old: Wed May 7
On 05/15/2014 04:30 PM, Stefan Kaltenbrunner wrote:
On 05/15/2014 07:46 PM, Andrew Dunstan wrote:
On 05/15/2014 12:43 PM, Tomas Vondra wrote:
Hi all,
today I got a few errors like these (this one is from last week,
though):
Status Line: 493 snapshot too old: Wed May 7 04:36:57 2014
On 2014-05-15 22:30:53 +0300, Heikki Linnakangas wrote:
Attached patch fixes things, but I want to add some regression tests
before commit.
Looks good to me.
Attached are two patches. One for the uninitialized dbId/tsId issue; one
for the decoding bug. The former should be backpatched.
Should
Hi,
I have some preliminary tests for the pg_recvlogical binary using the
infrastructure Peter added. I am wondering if somebody has a good idea
about how to make the tests more meaningful. Currently all that's tested
are simple commands, not the main functionality, namely the actual
streaming of
I was watching a very large recursive CTE get built today and this CTE
involves on the order of a dozen or so loops joining the initial
table against existing tables. It struck me that every time through
the loop the tables were sorted and then joined, and that it would be
much more efficient if
Andres Freund wrote:
On 2014-05-15 15:40:06 -0400, Robert Haas wrote:
On Thu, May 15, 2014 at 2:34 PM, Bruce Momjian br...@momjian.us wrote:
If the larger clog size is a show-stopper (and I'm not sure I have an
intelligent opinion on that just yet), one way to get around the
problem
On 2014-05-15 17:37:14 -0400, Alvaro Herrera wrote:
Andres Freund wrote:
On 2014-05-15 15:40:06 -0400, Robert Haas wrote:
On Thu, May 15, 2014 at 2:34 PM, Bruce Momjian br...@momjian.us wrote:
If the larger clog size is a show-stopper (and I'm not sure I have an
intelligent opinion
On Wed, May 14, 2014 at 8:26 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, May 11, 2014 at 12:47 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
On 11 May 2014 11:18, Andres Freund and...@2ndquadrant.com wrote:
I don't know. I'd find UPDATE/DELETE
Jon Nelson-14 wrote
I was watching a very large recursive CTE get built today and this CTE
involves on the order of a dozen or so loops joining the initial
table against existing tables. It struck me that every time through
the loop the tables were sorted and then joined, and that it would be
On 15-05-2014 18:09, Andres Freund wrote:
I have some preliminary tests for the pg_recvlogical binary using the
infrastructure Peter added. I am wondering if somebody has a good idea
about how to make the tests more meaningful. Currently all that's tested
are simple commands, not the main
On 2014-05-15 18:52:45 -0300, Euler Taveira wrote:
On 15-05-2014 18:09, Andres Freund wrote:
I have some preliminary tests for the pg_recvlogical binary using the
infrastructure Peter added. I am wondering if somebody has a good idea
about how to make the tests more meaningful. Currently
Andres Freund wrote:
On 2014-05-15 17:37:14 -0400, Alvaro Herrera wrote:
Andres Freund wrote:
On 2014-05-15 15:40:06 -0400, Robert Haas wrote:
On Thu, May 15, 2014 at 2:34 PM, Bruce Momjian br...@momjian.us wrote:
If the larger clog size is a show-stopper (and I'm not sure I have
Sorry, I've forgotten the report.
The failure of the label test comes from a specification change in the mcs policy.
Previously, it was applied to all domains including unconfined_t, but now
it is applied only to domains with the mcsconstrained attribute.
This regression test run
On Thu, May 15, 2014 at 4:50 PM, David G Johnston
david.g.johns...@gmail.com wrote:
Jon Nelson-14 wrote
I was watching a very large recursive CTE get built today and this CTE
involves on the order of a dozen or so loops joining the initial
table against existing tables. It struck me that
Hi All,
I am using CentOS 6 and after all configuration, I ran the below command
*dbt2-run-workload -a pgsql -d 120 -w 1 -o /tmp/result -c 10*
*Error:*
Stage 3. Processing of results...
Killing client...
waiting for server to shut down done
server stopped
Traceback (most recent call last):
On 05/15/2014 06:37 PM, Rohit Goyal wrote:
Hi All,
I am using CentOS 6 and after all configuration, I ran the below command
*dbt2-run-workload -a pgsql -d 120 -w 1 -o /tmp/result -c 10
*
*Error:*
Stage 3. Processing of results...
Killing client...
waiting for server to shut down done
On Thu, May 15, 2014 at 3:47 PM, Andrew Dunstan and...@dunslane.net wrote:
Do these questions about running dbt2 even belong on pgsql-hackers? They
seem to me to be usage questions that belong on pgsql-general.
I agree.
Anyway, perhaps the OP will have more luck with OLTPBenchmark, which
has
On 15.5.2014 22:56, Andrew Dunstan wrote:
On 05/15/2014 04:30 PM, Stefan Kaltenbrunner wrote:
well I'm not sure about misconfigured but both my personal
buildfarm members and pginfra run ones (like gaibasaurus) got errors
complaining about snapshot too old in the past for long running
On 15.5.2014 22:07, Andrew Dunstan wrote:
Yes, I've seen that. Frankly, a test that takes something like 500
hours is a bit crazy.
Maybe. It certainly is not a test people will use during development.
But if it can detect some hard-to-find errors in the code, that might
possibly lead to
Hi,
I'm thinking of an extension to trigger functionality like this:
CREATE TRIGGER trigger_name
AFTER event
ON table
CONCURRENTLY EXECUTE PROCEDURE trigger_fc
This would call the trigger after the end of the transaction.
The following is a use-case, please tell me if I'm doing it
On 2014-05-04 08:46:07 -0400, Bruce Momjian wrote:
I have completed the initial version of the 9.4 release notes. You can
view them here:
http://www.postgresql.org/docs/devel/static/release-9-4.html
I will be adding additional markup in the next few days.
Feedback expected and
Andrew Dunstan wrote
On 05/15/2014 06:37 PM, Rohit Goyal wrote:
Hi All,
I am using CentOS 6 and after all configuration, I ran the below command
*dbt2-run-workload -a pgsql -d 120 -w 1 -o /tmp/result -c 10
*
*Error:*
Stage 3. Processing of results...
Killing client...
waiting for server
Andres Freund-3 wrote
On 2014-05-04 08:46:07 -0400, Bruce Momjian wrote:
I have completed the initial version of the 9.4 release notes. You can
view them here:
http://www.postgresql.org/docs/devel/static/release-9-4.html
I will be adding additional markup in the next few days.
Blagoj Petrushev wrote
Hi,
I'm thinking of an extension to trigger functionality like this:
CREATE TRIGGER trigger_name
AFTER event
ON table
CONCURRENTLY EXECUTE PROCEDURE trigger_fc
This would call the trigger after the end of the transaction.
The following is a
Mysteriously, commit 6b2a1445ec8a631060c4cbff3f172bf31d3379b9 has broken
the PDF build (openjade + pdfjadetex) in the 9.0 branch only. The error
is
[256.0.28
! pdfTeX error (ext4): \pdfendlink ended up in different nesting level than \pdfstartlink.
\AtBegShi@Output ...ipout \box
Peter Eisentraut pete...@gmx.net writes:
Mysteriously, commit 6b2a1445ec8a631060c4cbff3f172bf31d3379b9 has broken
the PDF build (openjade + pdfjadetex) in the 9.0 branch only. The error
is
[256.0.28
! pdfTeX error (ext4): \pdfendlink ended up in different nesting level than \pdfstartlink.
I wrote:
Yeah. This is caused by a hyperlink whose displayed text crosses a page
boundary. The only known fix is to change the text enough so the link
no longer runs across a page boundary. Unfortunately, pdfTeX is pretty
unhelpful about identifying exactly where the problem is. I seem to
Hi all,
Is there some reason not to show the tablespace size in the \db+ psql
command?
Regards,
--
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
Timbira: http://www.timbira.com.br
Blog sobre TI: http://fabriziomello.blogspot.com
Perfil Linkedin:
Hi all
There's a bug in the dynamic bgworkers code that I think needs fixing
before release. TL;DR: BGW_NO_RESTART workers are restarted after
postmaster crash, attached patch changes that.
The case that's triggering the issue is where a static bgworker is
registering a new dynamic bgworker to
On 05/16/2014 08:06 AM, Blagoj Petrushev wrote:
Hi,
I'm thinking of an extension to trigger functionality like this:
CREATE TRIGGER trigger_name
AFTER event
ON table
CONCURRENTLY EXECUTE PROCEDURE trigger_fc
This would call the trigger after the end of the transaction.
If