[HACKERS] Incorrect information in src/backend/optimizer/README
ISTM the README contains incorrect information and requires revision to reflect the current code. Please find attached a patch. ISTM the patch needs to be applied to all active branches. Thanks, Best regards, Etsuro Fujita optimizer-readme.patch Description: Binary data -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
[HACKERS] insert throws error when year field len > 4 for timestamptz datatype
Hi, While working on something I came across this issue. Consider the following test:

postgres=# select version();
 version
------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4devel on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit
(1 row)

postgres=# create table test ( a timestamptz);
CREATE TABLE

-- Date with year 1000
postgres=# insert into test values ( 'Sat Mar 11 23:58:48 1000 IST');
INSERT 0 1

-- Now try with year 1; it returns an error
postgres=# insert into test values ( 'Sat Mar 11 23:58:48 1 IST');
ERROR:  invalid input syntax for type timestamp with time zone: "Sat Mar 11 23:58:48 1 IST"
LINE 1: insert into test values ( 'Sat Mar 11 23:58:48 1 IST');

Here the error comes from the timestamptz_in() -> datefields_to_timestamp() -> DecodeDateTime() stack. Looking more closely at DecodeDateTime(), the error occurs while trying to decode the year field, which is 1 in our test. For the year field ftype is DTK_NUMBER, and under DTK_NUMBER this case drops into the following condition:

else if (flen > 4)
{
    dterr = DecodeNumberField(flen, field[i], fmask,
                              &tmask, tm, fsec, &is2digits);
    if (dterr < 0)
        return dterr;
}

because flen in our case is 5 (1). As per the comment above DecodeNumberField(), it interprets a numeric string as a concatenated date or time field. So ideally we should enter DecodeNumberField() only with (fmask & DTK_DATE_M) == 0 or (fmask & DTK_TIME_M) == 0, right? So I tried the same, and after that the test works fine. Another fix could be to modify DecodeNumberField() to only check for the date and time when (fmask & DTK_DATE_M) == 0 and (fmask & DTK_TIME_M) == 0, and if DecodeNumberField() returns an error, then call DecodeNumber() to check the year possibility. But I didn't try that.

Results after the fix:

postgres=# select * from test;
              a
------------------------------
 1000-03-12 03:52:16+05:53:28
 1-03-12 03:28:48+05:30
(2 rows)

PFA patch; please share your input/suggestions.
(With the patch, make check runs fine without additional failures.) Regards, Rushabh Lathia www.EnterpriseDB.com timestamptz_fix.patch Description: Binary data
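The guard condition proposed above can be sketched as follows. This is a toy illustration of the idea, not the actual datetime.c code; the mask values are stand-ins for the real DTK_DATE_M / DTK_TIME_M bitmasks in src/include/utils/datetime.h:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the DTK_* field-mask bits. */
#define DTK_DATE_M 0x01
#define DTK_TIME_M 0x02

/*
 * A numeric field longer than four digits should only be handed to
 * DecodeNumberField() -- which interprets it as a concatenated date or
 * time -- when the date or time portion is still undecoded.  Otherwise
 * it can still be a plain (long) year and should go through
 * DecodeNumber() instead.
 */
static bool
try_number_field(int flen, int fmask)
{
    return flen > 4 &&
           ((fmask & DTK_DATE_M) == 0 || (fmask & DTK_TIME_M) == 0);
}
```

With both the date and time already decoded, a long numeric field falls through to year decoding, which is exactly the behavior the patch is after.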
Re: [HACKERS] Regarding BGworkers
On Wed, Aug 14, 2013 at 10:10 AM, Robert Haas robertmh...@gmail.com wrote: I think Alvaro's suggestion is better. It's shorter, and makes clear that at most one will be started. OK cool. Here are patches for 9.3 and master respecting those comments. Regards, -- Michael 20130814_bgworker_refactor_93_v2.patch Description: Binary data 20130814_bgworker_refactor_master_v2.patch Description: Binary data
Re: [HACKERS] timeline signedness
On Tue, Aug 13, 2013 at 1:31 PM, Peter Eisentraut pete...@gmx.net wrote: On Wed, 2013-08-07 at 21:55 -0400, Peter Eisentraut wrote: WAL timelines are unsigned 32-bit integers everywhere, except the replication parser (replication/repl_gram.y and replication/repl_scanner.l) treats them as signed 32-bit integers. It's obviously a corner case, but it would be prudent to be correct about this. It should be easy to fix in those grammar files. Here is a patch to fix this. Looks fine to me, and looks like the correct thing to do. -- Magnus Hagander Me: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [HACKERS] timeline signedness
On 13.08.2013 14:31, Peter Eisentraut wrote: On Wed, 2013-08-07 at 21:55 -0400, Peter Eisentraut wrote: WAL timelines are unsigned 32-bit integers everywhere, except the replication parser (replication/repl_gram.y and replication/repl_scanner.l) treats them as signed 32-bit integers. It's obviously a corner case, but it would be prudent to be correct about this. It should be easy to fix in those grammar files. +1 Here is a patch to fix this. If I'm reading this correctly, timeline 0 no longer throws an error with this patch. - Heikki
Re: [HACKERS] StrategyGetBuffer optimization, take 2
Performance testing this patch is a real bugaboo for me; the VMs I have to work with are too unstable to give useful results :-(. Need to scrounge up a donor box somewhere...

On Tue, Aug 13, 2013 at 12:26 AM, Amit Kapila amit.kapil...@gmail.com wrote: Merlin Moncure wrote: On Wed, Aug 7, 2013 at 11:52 PM, Amit Kapila amit(dot)kapila(at)huawei(dot)com wrote:

-----Original Message----- From: pgsql-hackers-owner(at)postgresql(dot)org [mailto:pgsql-hackers-owner(at)postgresql(dot)org] On Behalf Of Merlin Moncure Sent: Thursday, August 08, 2013 12:09 AM To: Andres Freund Cc: PostgreSQL-development; Jeff Janes Subject: Re: [HACKERS] StrategyGetBuffer optimization, take 2

On Wed, Aug 7, 2013 at 12:07 PM, Andres Freund andres(at)2ndquadrant(dot)com wrote: On 2013-08-07 09:40:24 -0500, Merlin Moncure wrote: I have some very strong evidence that the problem is coming out of the buffer allocator. Exhibit A is that vlad's presentation of the problem was on a read-only load (if not the allocator lock, then what?). Exhibit B is that lowering shared_buffers to 2GB seems to have (so far, 5 days in) fixed the issue. This problem shows up on fast machines with fast storage and lots of cores. So what I think is happening is that usage_count starts creeping up faster than it gets cleared by the sweep with very large buffer settings, which in turn causes the 'problem' buffers to be analyzed for eviction more often.

Yes, one idea which was discussed previously is to not increase the usage count every time a buffer is pinned. I am also working on some optimizations in a similar area, which you can refer to here: http://www.postgresql.org/message-id/006e01ce926c$c7768680$56639380$@kapila@huawei.com

yup -- just took a quick look at your proposed patch. You're attacking the 'freelist' side of buffer allocation where my stripped-down patch addresses issues with the clock sweep. I think this is a good idea but more than I wanted to get into personally.
Good news is that both patches should essentially bolt on together AFAICT.

True, I also think so, as both are trying to reduce contention in the same area.

I propose we do a bit of consolidation of performance testing efforts and run tests with patch A, B, and A+B in various scenarios. I have a 16-core VM (4GB RAM) that I can test with, and want to start with, say, a 2GB database / 1GB shared_buffers high-concurrency test and see how it burns in. What do you think?

I think this can mainly benefit with large data and shared buffers (> 10G); last year I also ran a few tests with similar ideas but didn't gain much with smaller shared buffers.

Are you at a point where we can run some tests?

Not now, but I will try to run them before/during the next CF.

With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com
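The usage_count behavior being discussed can be sketched with a toy clock-sweep model. This is an illustration of the general algorithm, not PostgreSQL's StrategyGetBuffer() itself; the constants and function names are made up for the sketch:

```c
#include <assert.h>

/* Toy clock sweep: each buffer has a usage_count bumped on pin and
 * decremented as the sweep hand passes; a buffer is evictable once its
 * count reaches zero.  When counts rise faster than the sweep decays
 * them (the suspicion with very large shared_buffers), the sweep must
 * inspect many buffers before finding a victim. */
#define NBUFFERS  4
#define MAX_USAGE 5

static int usage_count[NBUFFERS];
static int hand;

static void
pin_buffer(int buf)
{
    if (usage_count[buf] < MAX_USAGE)
        usage_count[buf]++;
}

/* Returns the victim buffer; reports how many buffers were inspected. */
static int
clock_sweep(int *inspected)
{
    *inspected = 0;
    for (;;)
    {
        int buf = hand;

        hand = (hand + 1) % NBUFFERS;
        (*inspected)++;
        if (usage_count[buf] == 0)
            return buf;         /* evictable */
        usage_count[buf]--;     /* decay and keep sweeping */
    }
}

/* Pin buffer 0 repeatedly, then sweep: buffer 0 is skipped (count
 * decremented), and buffer 1 is chosen on the second inspection. */
static int
demo_sweep(void)
{
    int inspected;
    int i;

    for (i = 0; i < 5; i++)
        pin_buffer(0);          /* hot buffer */
    return clock_sweep(&inspected) * 10 + inspected;
}
```

The contention both patches target comes from every backend racing through this shared state under a single lock while hunting for a zero-count buffer.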
Re: [HACKERS] CREATE MATERIALIZED VIEW .. FOR UPDATE
Kevin Grittner kgri...@ymail.com wrote: Alvaro Herrera alvhe...@2ndquadrant.com wrote: Does the combination in $SUBJECT make sense? I don't think so; I don't know what it would mean. Oh, I see -- it's part of the SELECT statement, causing a row-level lock on each row as it is accessed. It is currently allowed, I will take a look. Now that I'm reading it correctly, it seems within the realm of possibility that someone might have a legitimate use for this. I don't see any reason to jump through any hoops to prevent anyone from doing it. It is worth noting that both CREATE VIEW and CREATE TABLE AS allow the same thing. Thanks for pointing this out, though. -- Kevin Grittner EDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
Re: [HACKERS] StrategyGetBuffer optimization, take 2
On 8/14/13 8:30 AM, Merlin Moncure wrote: Performance testing this patch is a real bugaboo for me; the VMs I have to work with are too unstable to give useful results :-(. Need to scrounge up a donor box somewhere...

I offered a server or two to the community a while ago but I don't think anything was ever resolved. It wouldn't have a massive number of cores (only 24 IIRC), but it would have a lot of memory (definitely over 192G; maybe more). The community just needs to decide it wants it and where it gets shipped to...

-- Jim Nasby, Lead Data Architect (512) 569-9461 (primary) (512) 579-9024 (backup)
Re: [HACKERS] System catalog vacuum issues
On 8/14/13 12:31 AM, Vlad Arkhipov wrote: I used to use VACUUM FULL periodically to resolve the issue, but the problem arises again in 2-3 months. Here is the statistics (from pgstattuple). I ran VACUUM FULL on 2013-08-07.

How much non-temporary DDL do you do? It's possible that you end up with a tuple at the end of the table for a non-temporary object. One of those would stay valid for quite some time, and if you're unlucky then you'll end up with another long-lived row farther down the table, etc, etc. Depending on how frequently you're creating temp objects, autovac might not be able to keep up. Assuming that a manual vacuum doesn't take too long, it might be a good idea to cron a manual vacuum (NOT FULL) of that table once a minute.

dcdb=# select date, relpages, reltuples, table_len, tuple_count, tuple_percent, dead_tuple_count, dead_tuple_len, free_space, free_percent, autovacuum_count from public.table_statistics where relname = 'pg_attribute' order by date;

    date    | relpages | reltuples | table_len | tuple_count | tuple_percent | dead_tuple_count | dead_tuple_len | free_space | free_percent | autovacuum_count
------------+----------+-----------+-----------+-------------+---------------+------------------+----------------+------------+--------------+------------------
 2013-08-08 |    39029 |    109096 | 319725568 |       37950 |          1.66 |            52540 |        7355600 |  296440048 |        92.72 |             6359
 2013-08-09 |    12382 |     95848 | 101433344 |       38232 |          5.28 |            57443 |        8042020 |   83862864 |        82.68 |             6711
 2013-08-10 |    11365 |    105073 |  93102080 |       37789 |          5.68 |            65599 |        9183860 |   74483104 |           80 |             7002
 2013-08-12 |     9447 |     95289 |  77389824 |       37811 |          6.84 |            57154 |        8001560 |   60479736 |        78.15 |             7161
 2013-08-13 |    47841 |     82877 | 391913472 |       38536 |          1.38 |            30461 |        4264540 |  369093756 |        94.18 |             7347
 2013-08-14 |    70265 |    104926 | 575610880 |       38838 |          0.94 |            34649 |        4850860 |  546449480 |        94.93 |             7398
(6 rows)

Autovacuum is running on this table; however, it keeps growing.

On 08/06/2013 09:35 PM, Tom Lane wrote: Vlad Arkhipov arhi...@dc.baikal.ru writes: On 08/06/2013 04:26 PM, Sergey Konoplev wrote: What does pgstattuple show on this table?
dcdb=# select * from pgstattuple('pg_catalog.pg_attribute');

 table_len  | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
------------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
 6363938816 |       48786 |   6830040 |          0.11 |          1459439 |      204321460 |               3.21 | 5939017376 |        93.32
(1 row)

So the problem isn't so much that you have lots of dead tuples, it's that the file is full of free space. I suspect the key issue is that autovacuum is unable to truncate the file because of too many concurrent accesses. There was a fix in 9.2.3 that was meant to ameliorate that problem, but maybe that's not getting the job done for you. Or maybe the bloat we're looking at is left over from when you were running earlier 9.2.x releases; in which case a one-time VACUUM FULL should fix it. regards, tom lane

-- Jim C. Nasby, Data Architect j...@nasby.net 512.569.9461 (cell) http://jim.nasby.net
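A quick sanity check on the pgstattuple numbers above (a trivial sketch; the helper is made up, the arithmetic is just how free_percent is defined):

```c
#include <assert.h>

/* free_percent is simply free_space / table_len * 100, which is why the
 * table reads as ~93% empty even though dead tuples account for only
 * ~3% -- the bloat is untruncated free space, not dead rows. */
static double
free_percent(double free_space, double table_len)
{
    return 100.0 * free_space / table_len;
}
```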
Re: [HACKERS] Incorrect information in src/backend/optimizer/README
Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes: ISTM the README contains incorrect information and requires revision to reflect the current code. Please find attached a patch. ISTM the patch needs to be applied to all active branches. This patch isn't an improvement, since grouping_planner doesn't have anything to do with constant quals at all. In modern releases, that's dealt with by create_gating_plan inside create_plan(). I remember looking at this text when I redid query_planner's API a few days ago and electing not to change it just then. I'm a bit inclined to just remove both references to constant quals from this overview, since they're now taken care of at much lower levels --- query_planner itself has nothing to do with that processing. Alternatively, if we do want to discuss constant quals somewhere in here, maybe it needs to be in a separate subsection. I'm not sure we need anything though. Constant quals are now handled just like other quals up till the last moment in createplan.c, where they're split out and put in a separate gating Result node just above where they would otherwise have been. regards, tom lane
Re: [HACKERS] insert throws error when year field len > 4 for timestamptz datatype
Rushabh Lathia rushabh.lat...@gmail.com writes: PFA patch and share your input/suggestions. I think this needs review. Please add it to the next commitfest. regards, tom lane
Re: [HACKERS] LATERAL quals revisited
Antonin Houska antonin.hou...@gmail.com writes: On 07/04/2013 06:11 PM, Antonin Houska wrote: On 07/03/2013 08:32 PM, Tom Lane wrote: Another possibility would be to keep the optimization, but disable it in queries that use LATERAL. I don't much care for that though --- seems too Rube Goldbergish, and in any case I have a lot less faith in the whole concept now than I had before I started digging into this issue. I constructed a query that triggers the optimization - see attachment with comments. Thanks for poking at this. EXPLAIN shows the same plan with or without the ph_may_need optimization, but that might be data problem (my tables are empty). Yeah, I didn't have much luck getting a different plan even with data in the tables. What you'd need for this to be important would be for a join order that's precluded without the ph_may_need logic to be significantly better than the join orders that are still allowed. While that's certainly within the realm of possibility, the difficulty of triggering the case at all reinforces my feeling that this optimization isn't worth bothering with. For the moment I'm just going to take it out. regards, tom lane
Re: [HACKERS] System catalog vacuum issues
On Tue, Aug 13, 2013 at 10:31 PM, Vlad Arkhipov arhi...@dc.baikal.ru wrote: I used to use VACUUM FULL periodically to resolve the issue, but the problem arises again in 2-3 months. Here is the statistics (from pgstattuple). I ran VACUUM FULL on 2013-08-07.

    date    | relpages | reltuples | table_len | tuple_count | tuple_percent | dead_tuple_count | dead_tuple_len | free_space | free_percent | autovacuum_count
------------+----------+-----------+-----------+-------------+---------------+------------------+----------------+------------+--------------+------------------
 2013-08-08 |    39029 |    109096 | 319725568 |       37950 |          1.66 |            52540 |        7355600 |  296440048 |        92.72 |

Are you sure you did VACUUM FULL pg_attribute on Aug 7? Could you please confirm that free_percent rose from 0 to 92% in one day? Do you have some processes that intensively create tables or columns and then delete them, or create them in a transaction and roll the transaction back?

-- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA http://www.linkedin.com/in/grayhemp +1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979 gray...@gmail.com
Re: [HACKERS] 9.4 regression
On Wed, Aug 7, 2013 at 07:04:43PM +0100, Thom Brown wrote: On 7 August 2013 18:49, Jon Nelson jnelson+pg...@jamponi.net wrote: On Wed, Aug 7, 2013 at 12:42 PM, Thom Brown t...@linux.com wrote:

for i in 1 2 5 10 100; do ./test_fallocate foo $i 1; done
method: classic. 1 open/close iterations, 1 rewrite in 0.6380s
method: posix_fallocate. 1 open/close iterations, 1 rewrite in 0.3204s
method: glibc emulation. 1 open/close iterations, 1 rewrite in 0.6274s
method: classic. 2 open/close iterations, 1 rewrite in 1.2908s
method: posix_fallocate. 2 open/close iterations, 1 rewrite in 0.6596s
method: glibc emulation. 2 open/close iterations, 1 rewrite in 1.2666s
method: classic. 5 open/close iterations, 1 rewrite in 3.1419s
method: posix_fallocate. 5 open/close iterations, 1 rewrite in 1.5930s
method: glibc emulation. 5 open/close iterations, 1 rewrite in 3.1516s
method: classic. 10 open/close iterations, 1 rewrite in 6.2912s
method: posix_fallocate. 10 open/close iterations, 1 rewrite in 3.2626s
method: glibc emulation. 10 open/close iterations, 1 rewrite in 6.3667s
method: classic. 100 open/close iterations, 1 rewrite in 67.4174s
method: posix_fallocate. 100 open/close iterations, 1 rewrite in 37.8788s
method: glibc emulation. 100 open/close iterations, 1 rewrite in 55.0714s

OK. That's interesting, and a good data point. One thing you could try is manually disabling the use of posix_fallocate in 269e78. After running ./configure --stuff-here, edit src/include/pg_config.h and comment out the following line (on or around line 374):

#define HAVE_POSIX_FALLOCATE 1

*then* build PostgreSQL and see if the performance hit is still there.

Okay, done that. The TPS increases again: 2308.807568 / 2554.264572 / 2563.190204. And I did run ./configure... before removing the line, and built it after the change.

Where are we on this issue?

-- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true.
+
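The two preallocation strategies being benchmarked can be sketched as follows. This is a minimal illustration, not the test_fallocate program from the thread; the segment size and helper names are made up (WAL segments are actually 16 MB):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define SEG_SIZE (1024 * 1024)  /* 1 MB for the sketch */

/* "classic" method: extend the file by writing zero-filled blocks. */
static long
classic_prealloc(int fd)
{
    char block[8192];
    long written;

    memset(block, 0, sizeof(block));
    for (written = 0; written < SEG_SIZE; written += sizeof(block))
        if (write(fd, block, sizeof(block)) != (ssize_t) sizeof(block))
            return -1;
    return SEG_SIZE;
}

/* posix_fallocate method: ask the filesystem to reserve the space. */
static long
fallocate_prealloc(int fd)
{
    if (posix_fallocate(fd, 0, SEG_SIZE) != 0)
        return -1;
    return SEG_SIZE;
}

/* Run one method on a fresh temp file and return the resulting size. */
static long
try_method(long (*method)(int))
{
    char tmpl[] = "/tmp/preallocXXXXXX";
    int fd = mkstemp(tmpl);
    long size = -1;

    if (fd < 0)
        return -1;
    if (method(fd) == SEG_SIZE)
    {
        struct stat st;
        if (fstat(fd, &st) == 0)
            size = (long) st.st_size;
    }
    close(fd);
    unlink(tmpl);
    return size;
}
```

Both end up with a file of the same size; the question in the thread is purely about the speed of getting there and any after-effects on later I/O.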
Re: [HACKERS] [BUGS] BUG #8335: trim() un-document behaviour
On Mon, Aug 12, 2013 at 11:31:38PM -0400, Bruce Momjian wrote: On Mon, Aug 12, 2013 at 05:19:30PM -0400, Bruce Momjian wrote: Attached are docs that add the new syntax, and mention it is non-standard; you can see the output here: http://momjian.us/tmp/pgsql/functions-string.html#FUNCTIONS-STRING-SQL We do document three syntaxes for substring() in the same table, one row for each, so there is precedent for doing this. Attached is an updated patch with a proper example. I could move the extra syntax into the description of the existing trim entry instead. Patch applied to head. I did not apply this to 9.3 in case we change our minds about documenting this. -- Bruce Momjian br...@momjian.us http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
Re: [HACKERS] danger of stats_temp_directory = /dev/shm
Jeff, Before 9.3, it would delete one specific file from a potentially shared directory. In 9.3 it deletes the entire contents of a potentially shared directory. That is a massive expansion in the surface area for unintentional deletion. If we will disallow using shared directories before the time 9.3 is released, that would fix it one way, but I don't know if that is the plan or not. I can't see doing that. I can see adding the requirement for 9.3, and then documenting it though. -- Josh Berkus PostgreSQL Experts Inc. http://pgexperts.com
Re: [HACKERS] TODO request: multi-dimensional arrays in PL/pythonU
On Tue, 2013-08-13 at 14:30 -0700, Josh Berkus wrote: Currently PL/python has 1 dimension hardcoded for returning arrays:

create or replace function nparr () returns float[][]
language plpythonu as $f$
from numpy import array
x = ((1.0,2.0),(3.0,4.0),(5.0,6.0),)
return x
$f$;

There is no way to know how many dimensions the function expects to get back. (float[][] doesn't actually mean anything.) So when converting the return value back to SQL, you'd have to guess: is the first element convertible to float (how do you know?); if not, does it support the sequence protocol; and if yes, try to construct a multidimensional array. What if the first element is a float but the second is not? It would be useful to have a solution for that, but it would need to be more principled than what I just wrote.
Re: [HACKERS] danger of stats_temp_directory = /dev/shm
Josh Berkus j...@agliodbs.com writes: Before 9.3, it would delete one specific file from a potentially shared directory. In 9.3 it deletes the entire contents of a potentially shared directory. That is a massive expansion in the surface area for unintentional deletion. If we will disallow using shared directories before the time 9.3 is released, that would fix it one way, but I don't know if that is the plan or not. I can't see doing that. I can see adding the requirement for 9.3, and then documenting it though. I think we should change 9.3 to be restrictive about ownership/permissions on the stats_temp_directory (ie, require owner = postgres user, permissions = 0700, same as for the $PGDATA directory). I agree that back-patching such a change to the older branches is probably not a good plan. I can't quite parse what you say above, so I'm not sure if you're fully agreeing with that position or not. In addition to that, it might be a good idea to do what the comment in the code suggests, namely do more than zero checking on each file name to try to make sure it looks like a stats temp file name that we'd generate before we delete it. The ownership/permissions test wouldn't be enough to prevent you from pointing at, say, ~postgres and thereby losing some files you'd rather not. regards, tom lane
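The restriction proposed above (owner = server user, mode 0700, as for $PGDATA) can be sketched like this. The function name is made up; this illustrates the check, not the actual patch:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Accept a stats_temp_directory only if it exists, is a directory,
 * is owned by the effective user, and has permissions exactly 0700 --
 * mirroring the checks applied to the data directory at startup. */
static int
stats_temp_dir_is_safe(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0 || !S_ISDIR(st.st_mode))
        return 0;
    return st.st_uid == geteuid() && (st.st_mode & 0777) == 0700;
}

/* Demo: mkdtemp() creates a directory with mode 0700, which passes;
 * loosening it to 0755 makes the check fail. */
static int
demo(void)
{
    char tmpl[] = "/tmp/stats_tmp_XXXXXX";
    int ok_before, ok_after;

    if (mkdtemp(tmpl) == NULL)
        return -1;
    ok_before = stats_temp_dir_is_safe(tmpl);
    chmod(tmpl, 0755);
    ok_after = stats_temp_dir_is_safe(tmpl);
    rmdir(tmpl);
    return ok_before * 10 + ok_after;
}
```

As Tom notes, this alone would not protect a directory like ~postgres that happens to satisfy the check, which is why the per-filename check is suggested in addition.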
Re: [HACKERS] danger of stats_temp_directory = /dev/shm
Tom Lane wrote: I think we should change 9.3 to be restrictive about ownership/permissions on the stats_temp_directory (ie, require owner = postgres user, permissions = 0700, same as for the $PGDATA directory). I agree that back-patching such a change to the older branches is probably not a good plan. I can't quite parse what you say above, so I'm not sure if you're fully agreeing with that position or not. In addition to that, it might be a good idea to do what the comment in the code suggests, namely do more than zero checking on each file name to try to make sure it looks like a stats temp file name that we'd generate before we delete it. The ownership/permissions test wouldn't be enough to prevent you from pointing at, say, ~postgres and thereby losing some files you'd rather not.

I will look into this.

-- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training Services
Re: [HACKERS] Incorrect information in src/backend/optimizer/README
From: Tom Lane [mailto:t...@sss.pgh.pa.us] Etsuro Fujita fujita.ets...@lab.ntt.co.jp writes: ISTM the README contains incorrect information and requires revision to reflect the current code. Please find attached a patch. ISTM the patch needs to be applied to all active branches. This patch isn't an improvement, since grouping_planner doesn't have anything to do with constant quals at all. In modern releases, that's dealt with by create_gating_plan inside create_plan().

OK

I remember looking at this text when I redid query_planner's API a few days ago and electing not to change it just then. I'm a bit inclined to just remove both references to constant quals from this overview, since they're now taken care of at much lower levels --- query_planner itself has nothing to do with that processing. Alternatively, if we do want to discuss constant quals somewhere in here, maybe it needs to be in a separate subsection. I'm not sure we need anything though. Constant quals are now handled just like other quals up till the last moment in createplan.c, where they're split out and put in a separate gating Result node just above where they would otherwise have been.

+1 for removing both of the references from the overview. Thanks, Best regards, Etsuro Fujita
Re: [HACKERS] timeline signedness
On Wed, 2013-08-14 at 16:20 +0300, Heikki Linnakangas wrote: On 13.08.2013 14:31, Peter Eisentraut wrote: On Wed, 2013-08-07 at 21:55 -0400, Peter Eisentraut wrote: WAL timelines are unsigned 32-bit integers everywhere, except the replication parser (replication/repl_gram.y and replication/repl_scanner.l) treats them as signed 32-bit integers. It's obviously a corner case, but it would be prudent to be correct about this. It should be easy to fix in those grammar files. +1 Here is a patch to fix this. If I'm reading this correctly, timeline 0 no longer throws an error with this patch. Fixed that.
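The corner case being fixed can be illustrated as follows. A TimeLineID is a uint32, so IDs above INT32_MAX (2147483647) are valid and a parser must not squeeze them through a signed 32-bit conversion; and, per Heikki's comment, timeline 0 must still be rejected. A sketch (the helper names are made up; the real fix is in repl_gram.y / repl_scanner.l):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a timeline ID as the full unsigned 32-bit range.  A signed
 * 32-bit path would reject or wrap values above 2147483647, which are
 * perfectly legal timeline IDs. */
static uint32_t
parse_timeline(const char *s)
{
    return (uint32_t) strtoul(s, NULL, 10);
}

/* Timeline 0 is invalid and must still raise an error after the fix. */
static int
timeline_is_valid(uint32_t tli)
{
    return tli != 0;
}
```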
Re: [HACKERS] updatable/deletable terminology
On Tue, 2013-08-13 at 09:27 +0100, Dean Rasheed wrote: So on balance I think you're right, and it would be better to simply say:

ERROR: cannot insert into view one
DETAIL: Views that do not select from a single table or view are not automatically updatable.
HINT: You need an INSTEAD OF INSERT trigger or an unconditional ON INSERT DO INSTEAD rule.

I made adjustments similar to this.
Re: [HACKERS] StrategyGetBuffer optimization, take 2
On Wed, Aug 14, 2013 at 7:00 PM, Merlin Moncure mmonc...@gmail.com wrote: Performance testing this patch is a real bugaboo for me; the VMs I have to work with are too unstable to give useful results :-(. Need to scrounge up a donor box somewhere...

While doing performance tests in this area, I always had a feeling that the OS layer (the scheduler that flushes OS buffers) had a role to play here, so I used to reboot the machine between tests; of course, taking performance data in this fashion is tedious. I think it is good to run tests on a machine with a configuration similar to what Jim Nasby mentioned in his mail below.

With Regards, Amit Kapila. EnterpriseDB: http://www.enterprisedb.com