Re: [HACKERS] enhanced error fields
Hello

2012/12/29 Stephen Frost :
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
>> it is a problem of this patch or not consistent constraints implementation ?
>
> Not sure, but I don't think it matters. You can blame the constraint
> implementation, but that doesn't change my feelings about what we need
> before we can accept a patch like this. Providing something which works
> only part of the time and then doesn't work for very unclear reasons
> isn't a good idea. Perhaps we need to fix the constraint implementation
> and perhaps we need to fix the error information being returned, or most
> likely we have to fix both, it doesn't change that we need to do
> something more than just ignore this problem.

So we have to solve this issue first. Please, can you summarize what strange/unexpected messages the current constraint implementation raises, and where?

One question: when we fix the constraints, maybe we can reuse some of this infrastructure for the enhanced error fields.

What about a partial commit now - just the necessary infrastructure, without modifying other code? I think there is agreement on the new fields: column_name, table_name, schema_name, constraint_name and constraint_schema.

Regards

Pavel

> Thanks,
>
> Stephen

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Enabling Checksums
On 12/19/12 6:30 PM, Jeff Davis wrote:
> The idea is to prevent interference from the bgwriter or autovacuum.
> Also, I turn off fsync so that it's measuring the calculation overhead,
> not the effort of actually writing to disk.

With my test server issues sorted, what I did was set up a single 7200RPM drive with a battery-backed write cache card. That way fsync doesn't bottleneck things. I too realized that limit had to be cracked before anything useful could be done. Having the BBWC card is a bit better than fsync=off, because we'll get something more like the production workload out of it. I/O will be realistic, but limited to what one drive can pull off.

> Without checksums, it takes about 1000ms. With checksums, about 2350ms.
> I also tested with checksums but without the CHECKPOINT commands above,
> and it was also 1000ms.

I think we need to use lower checkpoint_segments to try and trigger more checkpoints. My 10 minute pgbench-tools runs will normally have at most 3 checkpoints. I would think something like 10 would be more useful, to make sure we're spending enough time seeing extra WAL writes.

> This test is more plausible than the other two, so it's more likely to
> be a real problem. So, the biggest cost of checksums is, by far, the
> extra full-page images in WAL, which matches our expectations.

What I've done with pgbench-tools is actually measure the amount of WAL from the start to the end of the test run. To analyze it you need to scale it a bit; computing "wal bytes / commit" seems to work. pgbench-tools also launches vmstat and iostat in a way that it's possible to graph the values later. The interesting results I'm seeing are when the disk is about 80% busy and when it's 100% busy.
--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com
Re: [HACKERS] buffer assertion tripping under repeat pgbench load
On 12/26/12 7:23 PM, Greg Stark wrote:
> It's also possible it's a bad cpu, not bad memory. If it affects
> decrement or increment in particular it's possible that the pattern of
> usage on LocalRefCount is particularly prone to triggering it.

This looks to be the winning answer. It turns out that under extended multi-hour loads at high concurrency, something related to CPU overheating was occasionally flipping a bit. One round of compressed air for all the fans/vents, a little tweaking of the fan controls, and now the system goes >24 hours with no problems. Sorry about all the noise over this.

I do think the improved warning messages that came out of the diagnosis ideas are useful. The reworked code must slow down the checking by a few cycles, but if you care about performance these assertions are tacked onto the biggest pig around. I added the patch to the January CF as "Improve buffer refcount leak warning messages".

The sample I showed with the patch submission was a simulated one. Here's the output from the last crash before resolving the issue, where the assertion really triggered:

WARNING: buffer refcount leak: [170583] (rel=base/16384/16578, blockNum=302295, flags=0x106, refcount=0 1073741824)
WARNING: buffers with non-zero refcount is 1
TRAP: FailedAssertion("!(RefCountErrors == 0)", File: "bufmgr.c", Line: 1712)

--
Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.com
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
On Sunday, January 13, 2013 12:41 AM Tom Lane wrote:
> Boszormenyi Zoltan writes:
>> No, I mean the reaper(SIGNAL_ARGS) function in
>> src/backend/postmaster/postmaster.c where your patch has this:
>> *** a/src/backend/postmaster/postmaster.c
>> --- b/src/backend/postmaster/postmaster.c
>> ***
>> *** 2552,2557 reaper(SIGNAL_ARGS)
>
> I have not been following this thread, but I happened to see this bit,
> and I have to say that I am utterly dismayed that anything like this is
> even on the table. This screams overdesign. We do not need a global
> lock file, much less postmaster-side cleanup. All that's needed is a
> suitable convention about temp file names that can be written and then
> atomically renamed into place. If we're unlucky enough to crash before
> a temp file can be renamed into place, it'll just sit there harmlessly.

I can think of the below ways to generate the temp file name:

1. Generate a session-specific temp file name during the operation. This would be similar to what OpenTemporaryFileInTablespace() currently does to generate a temp file name.
2. Generate a global temp file name. For this we need to maintain a global counter.
3. Always generate the temp file with the same name, "postgresql.auto.conf.tmp", since only one session at a time is allowed to operate.

With any of these approaches, there is a chance that temp files will remain behind if the server crashes during command execution, which can lead to a collision of temp file names the next time a user tries to use the SET persistent command. An appropriate error will be raised and the user will need to delete that file manually.

>>>> I just noticed that you are using "%m" in format strings twice.
>>>> man 3 printf says:
>>>> m (Glibc extension.) Print output of strerror(errno). No argument is
>>>> required.
>>>> This doesn't work anywhere else, only on GLIBC systems, which means
>>>> Linux. I also like the brevity of this extension but PostgreSQL is a
>>>> portable system. You should use %s and strerror(errno) as an argument
>>>> explicitly.
>>> %m is used at other places in the code as well.
> As far as that goes, %m is appropriate in elog/ereport (which contain
> special support for it), but not anywhere else.

In the patch, it's used in ereport, so I assume it is safe and the patch doesn't need any change for %m.

With Regards,
Amit Kapila.
Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
Heikki Linnakangas writes:
> When I compile with gcc -O0, I get one warning with this:
> datetime.c: In function DateTimeParseError:
> datetime.c:3575:1: warning: noreturn function does return [enabled by default]
> That suggests that the compiler didn't correctly deduce that
> ereport(ERROR, ...) doesn't return. With -O1, the warning goes away.

Yeah, I am seeing this too. It appears to be endemic to the local-variable approach, ie if we have

	const int elevel_ = (elevel);
	...
	(elevel_ >= ERROR) ? pg_unreachable() : (void) 0

then we do not get the desired results at -O0, which is not terribly surprising --- I'd not really expect the compiler to propagate the value of elevel_ when not optimizing. If we don't use a local variable, we don't get the warning, which I take to mean that gcc will fold "ERROR >= ERROR" to true even at -O0, and that it does this early enough to conclude that unreachability holds.

I experimented with some variant ways of phrasing the macro, but the only thing that worked at -O0 required __builtin_constant_p, which rather defeats the purpose of making this accessible to non-gcc compilers.

If we go with the local-variable approach, we could probably suppress this warning by putting an abort() call at the bottom of DateTimeParseError. It seems a tad ugly though, and what's a bigger deal is that if the compiler is unable to deduce unreachability at -O0 then we are probably going to be fighting such bogus warnings for all time to come. Note also that an abort() (much less a pg_unreachable()) would not do anything positive like give us a compile warning if we mistakenly added a case that could fall through. On the other hand, if there's only one such case in our tree today, maybe I'm worrying too much.

			regards, tom lane
[HACKERS] Re: logical changeset generation v3 - comparison to Postgres-R change set format
[Catching up on old threads.]

On Sat, Nov 17, 2012 at 03:40:49PM +0100, Hannu Krosing wrote:
> On 11/17/2012 03:00 PM, Markus Wanner wrote:
>> On 11/17/2012 02:30 PM, Hannu Krosing wrote:
>>> Is it possible to replicate UPDATEs and DELETEs without a primary key
>>> in PostgreSQL-R
>> No. There must be some way to logically identify the tuple.
> It can be done as selecting on _all_ attributes and updating/deleting
> just the first matching row
>
> create cursor ...
> select from t ... where t.* = ()
> fetch one ...
> delete where current of ...
>
> This is on distant (round 3 or 4) roadmap for this work, just was
> interested if you had found any better way of doing this :)

That only works if every attribute's type has a notion of equality ("xml" does not). The equality operator may have a name other than "=", and an operator named "=" may exist with semantics other than equality ("box" is affected). Code attempting this replication strategy should select an equality operator the way typcache.c does.
[HACKERS] Latex longtable format
I have received several earnest requests over the years for LaTeX 'longtable' output, and I have just implemented it based on a sample LaTeX longtable output file. I have called it 'latex-longtable' and implemented all the behaviors of ordinary latex mode. One feature is that in latex-longtable mode, 'tableattr' allows control over the column widths --- that seemed to be very important to the users.

One requested change I made to the ordinary latex output was to suppress the line under the table title if border = 0 (the default is border = 1).

Patch and sample output attached. I would like to apply this for PG 9.3.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com

+ It's impossible for everything to be true. +

diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
new file mode 100644
index c41593c..932c7ca
*** a/doc/src/sgml/ref/psql-ref.sgml
--- b/doc/src/sgml/ref/psql-ref.sgml
*** lo_import 152801
*** 1979,1985 ****
  Sets the output format to one of unaligned, aligned, wrapped, html,
! latex, or troff-ms.
  Unique abbreviations are allowed. (That would mean one letter is enough.)
--- 1979,1986 ----
  Sets the output format to one of unaligned, aligned, wrapped, html,
! latex, latex-longtable,
! or troff-ms.
  Unique abbreviations are allowed. (That would mean one letter is enough.)
*** lo_import 152801
*** 2005,2016 ****
! The html, latex, and
  troff-ms formats put out tables that are intended to be included in
  documents using the respective mark-up language. They are not complete
  documents! (This might not be
! so dramatic in HTML, but in LaTeX you must
! have a complete document wrapper.)
--- 2006,2019 ----
! The html, latex,
! latex-longtable, and
  troff-ms formats put out tables that are intended to be included in
  documents using the respective mark-up language. They are not complete
  documents! (This might not be
! so dramatic in HTML, but in
! LaTeX you must have a complete
! document wrapper.)
*** lo_import 152801
*** 2141,2149 ****
  tableattr (or T)
! Specifies attributes to be placed inside the
! HTML table tag in
! html output format. This could for
  example be cellpadding or bgcolor. Note that you
  probably don't want to specify border here, as that is already
--- 2144,2151 ----
  tableattr (or T)
! In HTML format, this specifies attributes
! to be placed inside the table tag. This could for
  example be cellpadding or bgcolor. Note that you
  probably don't want to specify border here, as that is already
*** lo_import 152801
*** 2152,2157 ****
--- 2154,2165 ----
  value is given, the table attributes are unset.
+
+ In latex-longtable format, this controls
+ the proportional width of each column. It is specified as a
+ space-separated list of values, e.g. '0.2 0.2 0.6'.
+ Unspecified output columns will use the last specified value.
+
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
new file mode 100644
index 59f8b03..740884f
*** a/src/bin/psql/command.c
--- b/src/bin/psql/command.c
*** _align2string(enum printFormat in)
*** 2164,2169 ****
--- 2164,2172 ----
  		case PRINT_LATEX:
  			return "latex";
  			break;
+ 		case PRINT_LATEX_LONGTABLE:
+ 			return "latex-longtable";
+ 			break;
  		case PRINT_TROFF_MS:
  			return "troff-ms";
  			break;
*** do_pset(const char *param, const char *v
*** 2197,2202 ****
--- 2200,2207 ----
  		popt->topt.format = PRINT_HTML;
  	else if (pg_strncasecmp("latex", value, vallen) == 0)
  		popt->topt.format = PRINT_LATEX;
+ 	else if (pg_strncasecmp("latex-longtable", value, vallen) == 0)
+ 		popt->topt.format = PRINT_LATEX_LONGTABLE;
  	else if (pg_strncasecmp("troff-ms", value, vallen) == 0)
  		popt->topt.format = PRINT_TROFF_MS;
  	else
diff --git a/src/bin/psql/print.c b/src/bin/psql/print.c
new file mode 100644
index 466c255..6839d6c
*** a/src/bin/psql/print.c
--- b/src/bin/psql/print.c
*** print_latex_text(const printTableContent
*** 1656,1662 ****
  			fputc('}', fout);
  	}
  	fputs(" \n", fout);
! 	fputs("\\hline\n", fout);
  }
  }
--- 1656,1663 ----
  			fputc('}', fout);
  	}
Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
Andres Freund writes:
> On 2013-01-12 13:16:56 -0500, Tom Lane wrote:
>> However, using a do-block with a local variable is definitely something
>> worth considering. I'm getting less enamored of the __builtin_constant_p
>> idea after finding out that the gcc boys seem to have curious ideas
>> about what its semantics ought to be:
>> https://bugzilla.redhat.com/show_bug.cgi?id=894515
> I wonder whether __builtin_choose_expr is any better?

Right offhand I don't see how that helps us. AFAICS, __builtin_choose_expr is the same as x?y:z except that y and z are not required to have the same datatype, which is okay because they require the value of x to be determinable at compile time. So we couldn't just write __builtin_choose_expr((elevel) >= ERROR, ...). That would fail if elevel wasn't compile-time-constant.

We could conceivably do

	__builtin_choose_expr(__builtin_constant_p(elevel) && (elevel) >= ERROR, ...)

with the idea of making real sure that that expression reduces to a compile time constant ... but stacking two nonstandard constructs on each other seems to me to probably increase our exposure to gcc's definitional randomness rather than reduce it. I mean, if __builtin_constant_p can't already be trusted to act like a constant, why should we trust that __builtin_choose_expr doesn't also have a curious definition of constant-ness?

			regards, tom lane
Re: [HACKERS] Porting to Haiku
Andrew Dunstan writes:
> On 01/12/2013 04:07 PM, Tom Lane wrote:
>> ... I would
>> definitely advise that you not run the buildfarm under the same userid
>> as any live server, so that no accidental damage is possible.)
> No, I'm never going to make it unsafe to run the buildfarm alongside a
> live server. I think you've misunderstood my intentions.

Oh, the intention is clear enough. But there are always bugs. I don't actually run any buildfarm critters myself, but if I did you can be sure they'd be under their own userid that couldn't possibly break anything else.

			regards, tom lane
Re: [HACKERS] Porting to Haiku
On 01/12/2013 04:07 PM, Tom Lane wrote:
> "Mark Hellegers" writes:
>> I have only one server available running Haiku. Can I also run a normal
>> Postgresql installation on that same machine? If so, I will be able to
>> run the build multiple times a day.
>
> I believe that works at the moment, although there were discussions just
> yesterday about whether we really wanted to support it. (The point being
> that the buildfarm script would have to be careful not to kill the live
> postmaster when cleaning up after a test failure. I would definitely
> advise that you not run the buildfarm under the same userid as any live
> server, so that no accidental damage is possible.)

No, I'm never going to make it unsafe to run the buildfarm alongside a live server. I think you've misunderstood my intentions. My main development machine at any time has from about three to six postmasters running, completely undisturbed by the buildfarm animal that fires every hour alongside them.

> Keep in mind though that the buildfarm run tends to suck a lot of cycles
> when it's going --- you might not like the response-time hit you'll
> probably see on the live server, unless this is a nice beefy machine.

If this is an issue, it's best to run the script when load is least important, like 2:00am. It's designed to run from cron.

cheers

andrew
Re: [HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
Andres Freund writes:
>>> It does *not* combine elog_start and elog_finish into one function if
>>> varargs are available although that brings a rather measurable
>>> size/performance benefit.
>> Since you've apparently already done the measurement: how much?
>> It would be a bit tedious to support two different infrastructures for
>> elog(), but if it's a big enough win maybe we should.
> Imo its pretty definitely a big enough win. So big I have a hard time
> believing it can be true without negative effects somewhere else.

Well, actually there's a pretty serious negative effect here, which is that when it's implemented this way it's impossible to save errno for %m before the elog argument list is evaluated. So I think this is a no-go. We've always had the contract that functions in the argument list could stomp on errno without care.

If we switch to a do-while macro expansion it'd be possible to do something like

	do {
		int		save_errno = errno;
		int		elevel = whatever;

		elog_internal(save_errno, elevel, fmt, __VA_ARGS__);
	} while (0);

but this would almost certainly result in more code bloat not less, since call sites would now be responsible for fetching errno.

			regards, tom lane
Re: [HACKERS] Porting to Haiku
On 01/12/2013 10:07 PM, Tom Lane wrote:
> "Mark Hellegers" writes:
>> I have only one server available running Haiku. Can I also run a normal
>> Postgresql installation on that same machine? If so, I will be able to
>> run the build multiple times a day.
>
> I believe that works at the moment, although there were discussions just
> yesterday about whether we really wanted to support it. (The point
> being that the buildfarm script would have to be careful not to kill the
> live postmaster when cleaning up after a test failure. I would
> definitely advise that you not run the buildfarm under the same userid
> as any live server, so that no accidental damage is possible.)

iirc Haiku is very strange in that regard - it is basically a single-user operating system, which I think makes it basically useless as a server and a horror from a security pov.

Stefan
Re: [HACKERS] Porting to Haiku
"Mark Hellegers" writes:
> I have only one server available running Haiku. Can I also run a normal
> Postgresql installation on that same machine? If so, I will be able to
> run the build multiple times a day.

I believe that works at the moment, although there were discussions just yesterday about whether we really wanted to support it. (The point being that the buildfarm script would have to be careful not to kill the live postmaster when cleaning up after a test failure. I would definitely advise that you not run the buildfarm under the same userid as any live server, so that no accidental damage is possible.)

Keep in mind though that the buildfarm run tends to suck a lot of cycles when it's going --- you might not like the response-time hit you'll probably see on the live server, unless this is a nice beefy machine.

			regards, tom lane
Re: [HACKERS] Porting to Haiku
> "Mark Hellegers" writes:
>>> You might want to look into whether you can get the buildfarm script
>>> to run before you go too far.
>> I will check that this weekend. Thanks.
>> Does this buildfarm member need to run continuously?
>
> Once a day is sufficient if that's all you can manage, though several
> times a day is nicer.

I managed to run the buildfarm script. I needed to make a small change to the perl script, as on Haiku everyone is "root", but other than that it runs fine. I have only one server available running Haiku. Can I also run a normal Postgresql installation on that same machine? If so, I will be able to run the build multiple times a day.

>> I just did a quick check of my changes and the changes seem minor.
>
> Yeah, as Andrew remarked, it shouldn't be that hard given that the code
> used to run on BeOS. You should check the commits around where that
> support was removed, if you didn't already.

Yes, I have already done that. I think I don't need all the changes that were done then, but we will see.

>> I also found (I think) two places where the implementation of a
>> function contained slightly different parameters from the declaration,
>> which breaks on Haiku as an int is not the same as int32 (see
>> src/backend/catalog/dependency.c, function deleteOneObject, for one of
>> those).
>
> This is the sort of thing that gets noticed much quicker if there's a
> buildfarm member that complains about it ...

I understand. I will start working on reapplying my patches to master. Git is still new to me, but I'm making progress.

Kind regards,

Mark

--
Spangalese for beginners:
`Lipsanae bully booly.'
`The Lemming is driving recklessly.'
Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
Heikki Linnakangas writes:
> On 12.01.2013 20:42, Andres Freund wrote:
>>> I don't care for that too much in detail -- if errstart were to return
>>> false (it shouldn't, but if it did) this would be utterly broken,
> With the current ereport(), we'll call abort() if errstart returns false
> and elevel >= ERROR. That's even worse than making error reporting
> calls that elog.c isn't prepared for.

No it isn't: you'd get a clean and easily interpretable abort. I am not sure exactly what would happen if we plow forward with calling elog.c functions after errstart returns false, but it wouldn't be good, and debugging a field report of such behavior wouldn't be easy.

This is actually a disadvantage of the proposal to replace the abort() calls with __builtin_unreachable(), too. The gcc boys interpret the semantics of that as "if control reaches here, the behavior is undefined" --- and their notion of undefined generally seems to amount to "we will screw you as hard as we can". For example, they have no problem with using the assumption that control won't reach there to make deductions while optimizing the rest of the function. If it ever did happen, I would not want to have to debug it. The same goes for __attribute__((noreturn)), BTW.

>> Yea, I didn't really like it all that much either - but anyway, I have
>> yet to find *any* version with a local variable that doesn't lead to a
>> 200kb size increase :(.
> Hmm, that's strange. Assuming the optimizer optimizes away the paths in
> the macro that are never taken when elevel is a constant, I would expect
> the resulting binary to be smaller than what we have now.

I was wondering about that too, but haven't tried it locally yet. It could be that Andres was looking at an optimization level in which a register still got allocated for the local variable.
> Here's what I got with and without my patch on my laptop:
> -rwxr-xr-x 1 heikki heikki 24767237 tammi 12 21:27 postgres-do-while-mypatch
> -rwxr-xr-x 1 heikki heikki 24623081 tammi 12 21:28 postgres-unpatched
> But when I build without --enable-debug, the situation reverses:
> -rwxr-xr-x 1 heikki heikki 5961309 tammi 12 21:48 postgres-do-while-mypatch
> -rwxr-xr-x 1 heikki heikki 5986961 tammi 12 21:49 postgres-unpatched

Yes, I noticed that too: these patches make the debug overhead grow substantially for some reason. It's better to look at the output of "size" rather than the executable's total file size. That way you don't need to rebuild without debug to see reality. (I guess you could also "strip" the executables for comparison purposes.)

			regards, tom lane
Re: [HACKERS] "pg_ctl promote" exit status
On Tue, Oct 23, 2012 at 12:29:11PM -0400, Robert Haas wrote:
> On Tue, Oct 23, 2012 at 6:39 AM, Dhruv Ahuja wrote:
>> The "pg_ctl promote" command returns an exit code of 1 when the server
>> is not in standby mode, and the same exit code of 1 when the server
>> isn't started at all. The only difference at the time being is the
>> string output at the time, which FYI are...
>>
>> pg_ctl: cannot promote server; server is not in standby mode
>>
>> ...and...
>>
>> pg_ctl: PID file "/var/lib/pgsql/9.1/data/postmaster.pid" does not exist
>> Is server running?
>>
>> ...respectively.
>>
>> I am in the process of developing a clustering solution around luci
>> and rgmanager (in Red Hat EL 6) and for the time being, am basing it
>> off the string output. Maybe each different exit reason should have a
>> unique exit code, whatever my logic and approach to solving this
>> problem be?
>
> That doesn't seem like a bad idea. Got a patch?

The Linux Standard Base Core Specification 3.1 says this should return '3'. [1]

[1] http://refspecs.freestandards.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-generic/iniscrptact.html

--
Mr. Aaron W. Swenson
Gentoo Linux Developer
Email    : titanof...@gentoo.org
GnuPG FP : 2C00 7719 4F85 FB07 A49C 0E31 5713 AA03 D1BB FDA0
GnuPG ID : D1BBFDA0

diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c
index e412d71..6743849 100644
--- a/src/bin/pg_ctl/pg_ctl.c
+++ b/src/bin/pg_ctl/pg_ctl.c
@@ -900,7 +900,13 @@ do_stop(void)
 	{
 		write_stderr(_("%s: PID file \"%s\" does not exist\n"), progname, pid_file);
 		write_stderr(_("Is server running?\n"));
-		exit(1);
+/*
+ * The Linux Standard Base Core Specification 3.1 says this should return
+ * '3'
+ * http://refspecs.freestandards.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-ge
+ * neric/iniscrptact.html
+ */
+		exit(3);
 	}
 	else if (pid < 0)			/* standalone backend, not postmaster */
 	{
@@ -1076,7 +1082,13 @@ do_reload(void)
 	{
 		write_stderr(_("%s: PID file \"%s\" does not exist\n"), progname, pid_file);
 		write_stderr(_("Is server running?\n"));
-		exit(1);
+/*
+ * The Linux Standard Base Core Specification 3.1 says this should return
+ * '3'
+ * http://refspecs.freestandards.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-ge
+ * neric/iniscrptact.html
+ */
+		exit(3);
 	}
 	else if (pid < 0)			/* standalone backend, not postmaster */
 	{
@@ -1116,7 +1128,13 @@ do_promote(void)
 	{
 		write_stderr(_("%s: PID file \"%s\" does not exist\n"), progname, pid_file);
 		write_stderr(_("Is server running?\n"));
-		exit(1);
+/*
+ * The Linux Standard Base Core Specification 3.1 says this should return
+ * '3'
+ * http://refspecs.freestandards.org/LSB_3.1.1/LSB-Core-generic/LSB-Core-ge
+ * neric/iniscrptact.html
+ */
+		exit(3);
 	}
 	else if (pid < 0)			/* standalone backend, not postmaster */
 	{
Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
On 12.01.2013 20:42, Andres Freund wrote:
> On 2013-01-12 13:16:56 -0500, Tom Lane wrote:
>> Heikki Linnakangas writes:
>>> Here's one more construct to consider:
>>>
>>> #define ereport_domain(elevel, domain, rest) \
>>> 	do { \
>>> 		const int elevel_ = elevel; \
>>> 		if (errstart(elevel_, __FILE__, __LINE__, PG_FUNCNAME_MACRO, domain) || elevel_ >= ERROR) \
>>> 		{ \
>>> 			(void) rest; \
>>> 			if (elevel_ >= ERROR) \
>>> 				errfinish_noreturn(1); \
>>> 			else \
>>> 				errfinish(1); \
>>> 		} \
>>> 	} while(0)
>>
>> I don't care for that too much in detail -- if errstart were to return
>> false (it shouldn't, but if it did) this would be utterly broken,
>> because we'd be making error reporting calls that elog.c wouldn't be
>> prepared to accept. We should stick to the technique of doing the
>> ereport as today and then adding something that tells the compiler we
>> don't expect to get here; preferably something that emits no added code.

With the current ereport(), we'll call abort() if errstart returns false and elevel >= ERROR. That's even worse than making error reporting calls that elog.c isn't prepared for. If we take that risk seriously, with bogus error reporting calls we at least have a chance of catching it and reporting an error.

> Yea, I didn't really like it all that much either - but anyway, I have
> yet to find *any* version with a local variable that doesn't lead to a
> 200kb size increase :(.

Hmm, that's strange. Assuming the optimizer optimizes away the paths in the macro that are never taken when elevel is a constant, I would expect the resulting binary to be smaller than what we have now. All those useless abort() calls must take up some space.
Here's what I got with and without my patch on my laptop:

-rwxr-xr-x 1 heikki heikki 24767237 tammi 12 21:27 postgres-do-while-mypatch
-rwxr-xr-x 1 heikki heikki 24623081 tammi 12 21:28 postgres-unpatched

But when I build without --enable-debug, the situation reverses:

-rwxr-xr-x 1 heikki heikki 5961309 tammi 12 21:48 postgres-do-while-mypatch
-rwxr-xr-x 1 heikki heikki 5986961 tammi 12 21:49 postgres-unpatched

I suspect you're also just seeing bloat caused by more debugging symbols, which won't have any effect at runtime. It would be nice to have smaller debug-enabled builds, too, of course, but it's not that important.

> However, using a do-block with a local variable is definitely something
> worth considering. I'm getting less enamored of the __builtin_constant_p
> idea after finding out that the gcc boys seem to have curious ideas
> about what its semantics ought to be:
> https://bugzilla.redhat.com/show_bug.cgi?id=894515

I wonder whether __builtin_choose_expr is any better?

I forgot to mention (and it wasn't visible in the snippet I posted) the reason I used __attribute__((noreturn)). We already use that attribute elsewhere, so this optimization wouldn't require anything more from the compiler than what we already take advantage of. I think that noreturn is more widely supported than these other constructs. Also, if I'm reading the MSDN docs correctly, while MSVC doesn't understand __attribute__((noreturn)) directly, it has an equivalent syntax, __declspec(noreturn). So we could easily support MSVC with this approach.

Attached is the complete patch I used for the above tests.
- Heikki diff --git a/src/backend/bootstrap/bootscanner.l b/src/backend/bootstrap/bootscanner.l index bdd7dcb..b97bee4 100644 --- a/src/backend/bootstrap/bootscanner.l +++ b/src/backend/bootstrap/bootscanner.l @@ -42,8 +42,13 @@ /* Avoid exit() on fatal scanner errors (a bit ugly -- see yy_fatal_error) */ #undef fprintf -#define fprintf(file, fmt, msg) ereport(ERROR, (errmsg_internal("%s", msg))) +#define fprintf(file, fmt, msg) fprintf_to_ereport(fmt, msg) +static void +fprintf_to_ereport(char *fmt, const char *msg) +{ + ereport(ERROR, (errmsg_internal("%s", msg))); +} static int yyline = 1; /* line number for error reporting */ diff --git a/src/backend/parser/scan.l b/src/backend/parser/scan.l index 622cb1f..877345b 100644 --- a/src/backend/parser/scan.l +++ b/src/backend/parser/scan.l @@ -42,7 +42,14 @@ /* Avoid exit() on fatal scanner errors (a bit ugly -- see yy_fatal_error) */ #undef fprintf -#define fprintf(file, fmt, msg) ereport(ERROR, (errmsg_internal("%s", msg))) +#define fprintf(file, fmt, msg) fprintf_to_ereport(fmt, msg) + +static void +fprintf_to_ereport(char *fmt, const char *msg) +{ + ereport(ERROR, (errmsg_internal("%s", msg))); +} + /* * GUC variables. This is a DIRECT violation of the warning given at the diff --git a/src/backend/replication/repl_scanner.l b/src/backend/replication/repl_scanner.l index 9ccf02a..b62625b 100644 --- a/src/backend/replication/repl_scanner.l +++ b/src/backend/replication/repl_scanner.l @@ -19,7 +19,13 @@ /* Avoid exit() on fatal scanner errors (a
Re: [HACKERS] 9.2.1 & index-only scans : abnormal heap fetches after VACUUM FULL
Amit Kapila wrote: > On Thursday, January 10, 2013 6:09 AM Josh Berkus wrote: >> Surely VACUUM FULL should rebuild the visibility map, and make >> tuples in the new relation all-visible, no? Certainly it seems odd to me that VACUUM FULL leaves the table in a less well-maintained state in terms of visibility than a "normal" vacuum. VACUUM FULL should not need to be followed by another VACUUM. > I think it cannot made all visible. I don't think all tuples in the relation are necessarily visible to all transactions, but the ones which are should probably be flagged that way. > How about if any transaction in SSI mode is started before Vacuum > Full, should it see all tuples. There are no differences between visibility rules for serializable transactions (SSI) and repeatable read transactions. It should be based on whether any snapshots exist which can still see the tuple. -Kevin
Re: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review]
Boszormenyi Zoltan writes: > No, I mean the reaper(SIGNAL_ARGS) function in > src/backend/postmaster/postmaster.c where your patch has this: > *** a/src/backend/postmaster/postmaster.c > --- b/src/backend/postmaster/postmaster.c > *** > *** 2552,2557 reaper(SIGNAL_ARGS) > --- 2552,2569 > continue; > } > + /* Delete the postgresql.auto.conf.lock file if > exists */ > + { > + char LockFileName[MAXPGPATH]; > + > + strlcpy(LockFileName, ConfigFileName, > sizeof(LockFileName)); > + get_parent_directory(LockFileName); > + join_path_components(LockFileName, > LockFileName, > AutoConfigLockFilename); > + canonicalize_path(LockFileName); > + > + unlink(LockFileName); > + } > + > /* > * Startup succeeded, commence normal operations > */ I have not been following this thread, but I happened to see this bit, and I have to say that I am utterly dismayed that anything like this is even on the table. This screams overdesign. We do not need a global lock file, much less postmaster-side cleanup. All that's needed is a suitable convention about temp file names that can be written and then atomically renamed into place. If we're unlucky enough to crash before a temp file can be renamed into place, it'll just sit there harmlessly. >>> I just noticed that you are using "%m" in format strings twice. man 3 >>> printf says: >>> m (Glibc extension.) Print output of strerror(errno). No argument is >>> required. >>> This doesn't work anywhere else, only on GLIBC systems, it means Linux. >>> I also like the brevity of this extension but PostgreSQL is a portable >>> system. >>> You should use %s and strerror(errno) as argument explicitly. >> %m is used at other places in code as well. As far as that goes, %m is appropriate in elog/ereport (which contain special support for it), but not anywhere else. regards, tom lane
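Tom's write-temp-then-rename convention can be sketched in a few lines of portable C. This is only an illustration of the idea, not code from any patch; the helper name and file names are invented. POSIX rename() within one filesystem is atomic, so readers see either the old file or the complete new one, and a crash before the rename just leaves a stray temp file behind.

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Illustrative sketch: write new contents to a temp file in the same
 * directory, then atomically rename() it over the target.  No lock
 * file and no postmaster-side cleanup are needed. */
static int
write_file_atomically(const char *path, const char *contents)
{
	char		tmppath[1024];
	FILE	   *fp;

	/* Temp file must live on the same filesystem for rename() to be atomic. */
	snprintf(tmppath, sizeof(tmppath), "%s.tmp", path);

	fp = fopen(tmppath, "w");
	if (fp == NULL)
		return -1;
	if (fputs(contents, fp) == EOF || fflush(fp) == EOF)
	{
		fclose(fp);
		return -1;
	}
	/* A production implementation would fsync() the file (and the
	 * directory) here before renaming. */
	fclose(fp);

	/* Atomic replacement: concurrent readers never see a torn file. */
	return rename(tmppath, path);
}
```

If we crash before the rename(), the `.tmp` file simply sits there harmlessly, exactly as described above.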
Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
On 2013-01-12 13:16:56 -0500, Tom Lane wrote: > Heikki Linnakangas writes: > > On 11.01.2013 23:49, Tom Lane wrote: > >> Hm ... well, that's a point. I may be putting too much weight on the > >> fact that any such bug for elevel would be a new one that never existed > >> before. What do others think? > > > It sure would be nice to avoid multiple evaluation. At least in xlog.c > > we use emode_for_corrupt_record() to pass the elevel, and it does have a > > side-effect. > > Um. I had forgotten about that example, but its existence is sufficient > to reinforce my opinion that we must not risk double evaluation of the > elevel argument. Unfortunately I agree :( > > Here's one more construct to consider: > > > #define ereport_domain(elevel, domain, rest) \ > > do { \ > > const int elevel_ = elevel; \ > > if (errstart(elevel_,__FILE__, __LINE__, PG_FUNCNAME_MACRO, > > domain) || > > elevel_>= ERROR) \ > > { \ > > (void) rest; \ > > if (elevel_>= ERROR) \ > > errfinish_noreturn(1); \ > > else \ > > errfinish(1); \ > > } \ > > } while(0) > > I don't care for that too much in detail -- if errstart were to return > false (it shouldn't, but if it did) this would be utterly broken, > because we'd be making error reporting calls that elog.c wouldn't be > prepared to accept. We should stick to the technique of doing the > ereport as today and then adding something that tells the compiler we > don't expect to get here; preferably something that emits no added code. Yea, I didn't really like it all that much either - but anyway, I have yet to find *any* version with a local variable that doesn't lead to 200kb size increase :(. > However, using a do-block with a local variable is definitely something > worth considering. 
I'm getting less enamored of the __builtin_constant_p > idea after finding out that the gcc boys seem to have curious ideas > about what its semantics ought to be: > https://bugzilla.redhat.com/show_bug.cgi?id=894515 I wonder whether __builtin_choose_expr is any better? Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
Heikki Linnakangas writes: > On 11.01.2013 23:49, Tom Lane wrote: >> Hm ... well, that's a point. I may be putting too much weight on the >> fact that any such bug for elevel would be a new one that never existed >> before. What do others think? > It sure would be nice to avoid multiple evaluation. At least in xlog.c > we use emode_for_corrupt_record() to pass the elevel, and it does have a > side-effect. Um. I had forgotten about that example, but its existence is sufficient to reinforce my opinion that we must not risk double evaluation of the elevel argument. > Here's one more construct to consider: > #define ereport_domain(elevel, domain, rest) \ > do { \ > const int elevel_ = elevel; \ > if (errstart(elevel_,__FILE__, __LINE__, PG_FUNCNAME_MACRO, > domain) || > elevel_>= ERROR) \ > { \ > (void) rest; \ > if (elevel_>= ERROR) \ > errfinish_noreturn(1); \ > else \ > errfinish(1); \ > } \ > } while(0) I don't care for that too much in detail -- if errstart were to return false (it shouldn't, but if it did) this would be utterly broken, because we'd be making error reporting calls that elog.c wouldn't be prepared to accept. We should stick to the technique of doing the ereport as today and then adding something that tells the compiler we don't expect to get here; preferably something that emits no added code. However, using a do-block with a local variable is definitely something worth considering. I'm getting less enamored of the __builtin_constant_p idea after finding out that the gcc boys seem to have curious ideas about what its semantics ought to be: https://bugzilla.redhat.com/show_bug.cgi?id=894515 regards, tom lane
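The single-evaluation property being argued for can be shown with a stripped-down stand-in macro (everything here — the macro name, the levels, the counters — is invented for the demo; this is not the real ereport()). Binding the argument to a local const inside the do/while block guarantees the level expression runs exactly once, even though the macro body mentions it more than once:

```c
#include <assert.h>

static int eval_count = 0;
static int warn_count = 0;
static int fatal_count = 0;

/* Hypothetical sketch of the local-variable trick: (elevel) is
 * evaluated once into elevel_, so side effects in the argument
 * expression cannot fire twice. */
#define REPORT(elevel) \
	do { \
		const int elevel_ = (elevel); \
		if (elevel_ >= 20) \
			fatal_count++; \
		else if (elevel_ >= 10) \
			warn_count++; \
	} while (0)

/* Stand-in for emode_for_corrupt_record(): a level provider with a
 * side effect, like the xlog.c case mentioned in the thread. */
static int
emode_with_side_effect(void)
{
	eval_count++;
	return 10;
}
```

With a naive macro that pasted `(elevel)` into each comparison, calling `REPORT(emode_with_side_effect())` would bump `eval_count` twice; with the local const it is bumped exactly once.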
[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
On 2013-01-12 12:26:37 +0200, Heikki Linnakangas wrote: > On 11.01.2013 23:49, Tom Lane wrote: > >Andres Freund writes: > >>On 2013-01-11 16:28:13 -0500, Tom Lane wrote: > >>>I'm not very satisfied with that answer. Right now, Peter's > >>>patch has added a class of bugs that never existed before 9.3, and yours > >>>would add more. It might well be that those classes are empty ... but > >>>*we can't know that*. I don't think that keeping a new optimization for > >>>non-gcc compilers is worth that risk. Postgres is already full of > >>>gcc-only optimizations, anyway. > > > >>ISTM that ereport() already has so strange behaviour wrt evaluation of > >>arguments (i.e doing it only when the message is really logged) that its > >>doesn't seem to add a real additional risk. > > > >Hm ... well, that's a point. I may be putting too much weight on the > >fact that any such bug for elevel would be a new one that never existed > >before. What do others think? > > It sure would be nice to avoid multiple evaluation. At least in xlog.c we > use emode_for_corrupt_record() to pass the elevel, and it does have a > side-effect. It's quite harmless in practice, you'll get some extra DEBUG1 > messages in the log, but still. > > Here's one more construct to consider: > > #define ereport_domain(elevel, domain, rest) \ > do { \ > const int elevel_ = elevel; \ > if (errstart(elevel_,__FILE__, __LINE__, PG_FUNCNAME_MACRO, > domain) || > elevel_>= ERROR) \ > { \ > (void) rest; \ > if (elevel_>= ERROR) \ > errfinish_noreturn(1); \ > else \ > errfinish(1); \ > } \ > } while(0) > > extern void errfinish(int dummy,...); > extern void errfinish_noreturn(int dummy,...) __attribute__((noreturn)); I am afraid that that would bloat the code again as it would have to put that if() into each callsite whereas a variant that ends up using __builtin_unreachable() doesn't. So I think we should use __builtin_unreachable() while falling back to abort() as before. 
But that's really more a detail than anything fundamental about the approach. I'll play a bit. > With do-while, ereport() would no longer be an expression. There only place > where that's a problem in our codebase is in scan.l, bootscanner.l and > repl_scanner.l, which do this: I agree that would be easy to fix, so no problem there. Greetings, Andres Freund -- Andres Freund http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
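The fallback Andres describes can be sketched as a small compatibility macro (the name `unreachable_hint` is illustrative, not PostgreSQL's actual macro): on GCC 4.5+ the compiler is told the path is dead via __builtin_unreachable(), so the optimizer can drop it instead of emitting an abort() at every error site; elsewhere it degrades to abort() as before.

```c
#include <stdlib.h>
#include <assert.h>

/* __builtin_unreachable() first appeared in GCC 4.5; fall back to
 * abort() on older or non-GCC compilers. */
#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 5))
#define unreachable_hint() __builtin_unreachable()
#else
#define unreachable_hint() abort()
#endif

/* Example use: the if-chain covers every int, so the tail is dead.
 * With the hint, the compiler neither warns about falling off the end
 * nor needs a dummy return value on the dead path. */
static int
sign_name_len(int x)
{
	if (x < 0)
		return 8;		/* strlen("negative") */
	if (x > 0)
		return 8;		/* strlen("positive") */
	if (x == 0)
		return 4;		/* strlen("zero") */
	unreachable_hint();
}
```

This is the same shape as the "noreturn function does return" warning Heikki saw at -O0: the hint (or a noreturn errfinish) is what lets the compiler prove control never reaches the end.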
[HACKERS] Hash twice
Lock code says it calculates "hash value once and then pass around as needed". But it actually calculates it twice for new locks. Trivial patch attached to make it follow the comments in LockTagHashCode and save a few cycles. -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services hash_once.v1.patch Description: Binary data
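The "hash once, pass it around" pattern the patch restores looks roughly like this (the tag struct and hash function below are toy stand-ins, not PostgreSQL's LockTagHashCode): the caller hashes the tag a single time and hands the cached value to every routine that needs it, rather than each routine rehashing the same tag.

```c
#include <assert.h>
#include <stdint.h>

/* Toy lock tag; the real one identifies a lockable object. */
typedef struct LockTag
{
	uint32_t	dbid;
	uint32_t	relid;
} LockTag;

static int	hash_calls = 0;		/* instrumented to show single evaluation */

static uint32_t
locktag_hash(const LockTag *tag)
{
	hash_calls++;
	/* Knuth multiplicative hash, purely illustrative. */
	return (tag->dbid * 2654435761u) ^ tag->relid;
}

/* Consumers take the precomputed hashcode as a parameter instead of
 * the tag, so the hash is never recomputed. */
static uint32_t
pick_partition(uint32_t hashcode)
{
	return hashcode % 16;
}

static uint32_t
pick_bucket(uint32_t hashcode)
{
	return hashcode % 1024;
}
```

Computing the hash twice is cheap per call but sits on a hot path, hence the "save a few cycles" framing above.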
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On Saturday, January 12, 2013 4:36 PM Simon Riggs wrote: On 11 January 2013 15:57, Simon Riggs wrote: I've moved this to the next CF. I'm planning to review this one first. >> >>> Thank you. > >> Just reviewing the patch now, making more sense with comments added. > Making more sense, but not yet making complete sense. > I'd like you to revisit the patch comments since some of them are > completely unreadable. I will once again review all the comments and make them more meaningful. With Regards, Amit Kapila.
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On Saturday, January 12, 2013 3:45 PM Simon Riggs wrote: On 12 January 2013 03:50, Amit kapila wrote: > On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote: > On 11 January 2013 18:11, Amit kapila wrote: > > Can we identify which columns have changed? i.e. 1st, 3rd and 12th > columns? As per current algorithm, we can't as it is based on offsets. What I mean to say is that the basic idea to reconstruct tuple during recovery is copy data from old tuple offset-wise (offsets stored in encoded tuple) and use new data (modified column data) from encoded tuple directly. So we don't need exact column numbers. > >>> Another patch is going through next CF related to reassembling changes >>> from WAL records. > >>> To do that efficiently, we would want to store a bitmap showing which >>> columns had changed in each update. Would that be an easy addition, or >>> is that blocked by some aspect of the current design? > >> I don't think it should be a problem, as it can go in current way of WAL >> tuple construction as >> we do in this patch when old and new buf are different. This >> differentiation is done in >> log_heap_update. > >> IMO, for now we can avoid this optimization (way we have done incase >> updated tuple is not on same page) >> for the bitmap storing patch and later we can evaluate if we can do this >> optimization for >> the feature of that patch. > Yes, we can simply disable this feature. But that is just bad planning > and we should give some thought to having new features play nicely > together. > I would like to work out how to modify this so it can work with wal > decoding enabled. I know we can do this, I want to look at how, > because we know we're going to do it. I am sure this can be done, as for WAL decoding we mainly need new values and column numbers. So if we include a bitmap in the WAL tuple and teach the WAL decoding method how to decode this new tuple format, it can be done. 
However, it will need changes in the algorithm for both patches, and that could be a risk to one or both of them. I am open to discussing how both can work together, but IMHO at this moment (as this will be the last CF) it would be a little risky. If there is some way to address this scenario with minor modifications, I will be happy to see both working together. With Regards, Amit Kapila.
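The offset-wise reconstruction idea described above can be illustrated with a toy delta format (this is an illustration of the concept, not the patch's actual WAL encoding): the delta is a list of operations that either copy a byte range from the old tuple or supply literal bytes for a modified column, so the WAL record never has to carry the full new image.

```c
#include <assert.h>
#include <string.h>

/* One delta operation: copy from the old tuple, or emit literal bytes. */
typedef struct DeltaOp
{
	int			is_copy;	/* 1 = copy from old tuple, 0 = literal */
	int			offset;		/* source offset in old tuple (copy ops only) */
	int			len;		/* number of bytes */
	const char *literal;	/* literal bytes (literal ops only) */
} DeltaOp;

/* Recovery-side reconstruction: walk the ops, pulling unchanged ranges
 * from the old tuple and changed data from the delta itself.  Returns
 * the length of the reconstructed tuple. */
static int
apply_delta(const char *oldtup, const DeltaOp *ops, int nops, char *newtup)
{
	int			out = 0;

	for (int i = 0; i < nops; i++)
	{
		if (ops[i].is_copy)
			memcpy(newtup + out, oldtup + ops[i].offset, ops[i].len);
		else
			memcpy(newtup + out, ops[i].literal, ops[i].len);
		out += ops[i].len;
	}
	return out;
}
```

This also shows why exact column numbers aren't needed for recovery — offsets suffice — and conversely why a decoding consumer that wants to reassemble an UPDATE statement would want an extra bitmap of changed columns stored alongside.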
[HACKERS] fix SQL example syntax in file comment
Hi, Here's a trivial patch that fixes a comment in execProcNode.c For archeological interest, that comment dates back to when it was written in POSTQUEL... The cleanup in a9b1ff4c1d699c8aa615397d47bb3071275c64ef changed RETRIEVE to SELECT, but forgot to add a FROM clause :) Cheers, Jan diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index 4c1189e..76dd62f 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -32,6 +32,7 @@ * the number of employees in that department. So we have the query: * *select DEPT.no_emps, EMP.age + *from DEPT, EMP *where EMP.name = DEPT.mgr and * DEPT.name = "shoe" *
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On 11 January 2013 15:57, Simon Riggs wrote: >>> I've moved this to the next CF. I'm planning to review this one first. >> >> Thank you. > > Just reviewing the patch now, making more sense with comments added. Making more sense, but not yet making complete sense. I'd like you to revisit the patch comments since some of them are completely unreadable. Examples "Frames the original tuple which needs to be inserted into the heap by decoding the WAL tuplewith the help of old Heap tuple." "The delta tuples for update WAL is to eliminate copying the entire the new record to WAL for the update operation." I don't mind rewording the odd line here and there, that's just normal editing, but this needs extensive work in terms of number of places requiring change and the level of change at each place. That's not easy for me to do when I'm trying to understand the patch in the first place. My own written English isn't that great, so please read some of the other comments in other parts of the code so you can see the level of clarity that's needed in PostgreSQL. Copying chunks of text from other comments doesn't help much either, especially when you miss out parts of the explanation. You refer to a "history tag" but don't define it that well, and don't explain why it might sometimes be 3 bytes, or what that means. pg_lzcompress doesn't call it that either, which is confusing. If you use a concept from elsewhere you should either use the same name, or if you rename it, rename it in both places. /* * Do only the delta encode when the update is going to the same page and * buffer doesn't need a backup block in case of full-pagewrite is on. */ if ((oldbuf == newbuf) && !XLogCheckBufferNeedsBackup(newbuf)) The comment above says nothing. I can see that oldbuf and newbuf must be the same and the call to XLogCheckBufferNeedsBackup is clear because the function is already well named. 
What I'd expect to see here is a discussion of why this test is being applied and maybe why it is applied here. Such an important test deserves a long discussion, perhaps 10-20 lines of comment. Thanks -- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
[HACKERS] Re: [PATCH] unified frontend support for pg_malloc et al and palloc/pfree emulation (was xlogreader-v4)
On 11.01.2013 23:49, Tom Lane wrote: Andres Freund writes: On 2013-01-11 16:28:13 -0500, Tom Lane wrote: I'm not very satisfied with that answer. Right now, Peter's patch has added a class of bugs that never existed before 9.3, and yours would add more. It might well be that those classes are empty ... but *we can't know that*. I don't think that keeping a new optimization for non-gcc compilers is worth that risk. Postgres is already full of gcc-only optimizations, anyway. ISTM that ereport() already has so strange behaviour wrt evaluation of arguments (i.e doing it only when the message is really logged) that its doesn't seem to add a real additional risk. Hm ... well, that's a point. I may be putting too much weight on the fact that any such bug for elevel would be a new one that never existed before. What do others think? It sure would be nice to avoid multiple evaluation. At least in xlog.c we use emode_for_corrupt_record() to pass the elevel, and it does have a side-effect. It's quite harmless in practice, you'll get some extra DEBUG1 messages in the log, but still. Here's one more construct to consider: #define ereport_domain(elevel, domain, rest) \ do { \ const int elevel_ = elevel; \ if (errstart(elevel_,__FILE__, __LINE__, PG_FUNCNAME_MACRO, domain) || elevel_>= ERROR) \ { \ (void) rest; \ if (elevel_>= ERROR) \ errfinish_noreturn(1); \ else \ errfinish(1); \ } \ } while(0) extern void errfinish(int dummy,...); extern void errfinish_noreturn(int dummy,...) __attribute__((noreturn)); With do-while, ereport() would no longer be an expression. The only place where that's a problem in our codebase is in scan.l, bootscanner.l and repl_scanner.l, which do this: #undef fprintf #define fprintf(file, fmt, msg) ereport(ERROR, (errmsg_internal("%s", msg))) and flex does this: (void) fprintf( stderr, "%s\n", msg ); but that's easy to work around by creating a wrapper fprintf function in those .l files. 
When I compile with gcc -O0, I get one warning with this: datetime.c: In function ‘DateTimeParseError’: datetime.c:3575:1: warning: ‘noreturn’ function does return [enabled by default] That suggests that the compiler didn't correctly deduce that ereport(ERROR, ...) doesn't return. With -O1, the warning goes away. - Heikki
Re: [HACKERS] Performance Improvement by reducing WAL for Update Operation
On 12 January 2013 03:50, Amit kapila wrote: > On Saturday, January 12, 2013 12:23 AM Simon Riggs wrote: > On 11 January 2013 18:11, Amit kapila wrote: > Can we identify which columns have changed? i.e. 1st, 3rd and 12th columns? >>> As per current algorithm, we can't as it is based on offsets. >>> What I mean to say is that the basic idea to reconstruct tuple during >>> recovery >>> is copy data from old tuple offset-wise (offsets stored in encoded tuple) >>> and use new data (modified column data) >>> from encoded tuple directly. So we don't need exact column numbers. > >> Another patch is going through next CF related to reassembling changes >> from WAL records. > >> To do that efficiently, we would want to store a bitmap showing which >> columns had changed in each update. Would that be an easy addition, or >> is that blocked by some aspect of the current design? > > I don't think it should be a problem, as it can go in current way of WAL > tuple construction as > we do in this patch when old and new buf are different. This > differentiation is done in > log_heap_update. > > IMO, for now we can avoid this optimization (way we have done incase > updated tuple is not on same page) > for the bitmap storing patch and later we can evaluate if we can do this > optimization for > the feature of that patch. Yes, we can simply disable this feature. But that is just bad planning and we should give some thought to having new features play nicely together. I would like to work out how to modify this so it can work with wal decoding enabled. I know we can do this, I want to look at how, because we know we're going to do it. >> The idea would be that we could re-construct an UPDATE statement that >> would perform exactly the same change, yet without needing to refer to >> a base tuple. > > I understood, that such a functionality would be needed by logical > replication. Yes, though the features being added are to allow decoding of WAL for any purpose. 
-- Simon Riggs http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services