[GENERAL] Casting composite types
Using 8.1:

# create table foo (a integer, b integer);
# create table baz (b integer, c integer);
# insert into foo values (8,9);
# insert into baz values (9,1);

# select * from foo;
 a | b
---+---
 8 | 9
(1 row)

# select * from baz;
 b | c
---+---
 9 | 1
(1 row)

# create view foobaz as select foo.*, baz.c from foo join baz using (b);

# select * from foobaz;
 a | b | c
---+---+---
 8 | 9 | 1
(1 row)

So far so good. I have many functions that take the composite type foo, and I therefore wish to be able to cast a foobaz into a foo by taking only the columns in foo (i.e. a and b). But of course there's no cast defined:

# select foobaz::foo from foobaz;
ERROR:  cannot cast type foobaz to foo

# select foo(foobaz) from foobaz;
ERROR:  function foo(foobaz) does not exist
HINT:  No function matches the given name and argument types. You may need to add explicit type casts.

Ideally, I'd love to avoid writing a separate function for each (foo, baz) pair of types, as I have many of each. In any case, I want to avoid specifying the columns of foo in the code of foo(foobaz), so that the function doesn't break when I alter the foo table. The best I've got for the latter is:

CREATE OR REPLACE FUNCTION foo(foobaz) RETURNS foo AS $$
    my ($foobaz) = @_;
    my $foo = {};
    my $row = spi_fetchrow(spi_query("SELECT * FROM foo LIMIT 1"));
    for (keys %$row) { $foo->{$_} = $foobaz->{$_} }
    return $foo;
$$ LANGUAGE plperlu;

which feels very cumbersome, but works, provided foo is not empty:

# select foo(foobaz) from foobaz;
  foo
-------
 (8,9)
(1 row)

Am I missing an obvious trick or syntax here for such an 'autocast'? Or have I just been corrupted by Perl to take types too lightly?

Thanks

Julian

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
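One way to drop the "provided foo is not empty" caveat is to take foo's column list from the catalog rather than from a live row. The following is only a sketch in the same plperlu style, untested on 8.1: it uses spi_exec_query instead of spi_query/spi_fetchrow, and the catalog lookup assumes a single, non-schema-qualified table named foo:

```sql
CREATE OR REPLACE FUNCTION foo(foobaz) RETURNS foo AS $$
    my ($foobaz) = @_;
    my $foo = {};
    # Read foo's column names from pg_attribute, so an empty foo no longer matters.
    my $rv = spi_exec_query(
        "SELECT attname FROM pg_attribute att"
      . " JOIN pg_class c ON c.oid = att.attrelid"
      . " WHERE c.relname = 'foo' AND att.attnum > 0 AND NOT att.attisdropped");
    for my $r (@{$rv->{rows}}) {
        $foo->{$r->{attname}} = $foobaz->{$r->{attname}};
    }
    return $foo;
$$ LANGUAGE plperlu;
```

It still needs one function per pair of types, of course, since the name 'foo' is wired into the query.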
[GENERAL] last analyze time in 8.1?
Is there any way of getting at the last time a table was analyzed (by autovacuum) in 8.1, or is that only recorded (in pg_stat_*_tables) since 8.2?

TIA

Julian

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
[GENERAL] Timezones -- what comes out does not go in?
The timezone on our boxes is set to Etc/UCT (possibly a distro default). (psql 8.1.4)

=> select to_char(current_timestamp, 'HH24:MI:SS TZ');
   to_char
--------------
 09:05:48 UCT
(1 row)

=> select '09:05:48 UCT'::time;
ERROR:  invalid input syntax for type time: "09:05:48 UCT"

UTC works, of course:

=> select '09:05:48 UTC'::time;
   time
----------
 09:05:48
(1 row)

Is this behaviour expected? Desirable?

Thanks

Julian
Re: [GENERAL] Timezones -- what comes out does not go in?
From: Tom Lane [EMAIL PROTECTED]

> This works in CVS HEAD, thanks to some hard work by Joachim Wieland.

One of these days I'll find an issue *before* you folks have patched it. :-)

Thanks

Julian
[GENERAL] INSERT OR UPDATE
I'm keeping config information for an application in a series of related tables. I'd like a command that INSERTs data if it's new, or UPDATEs it if the key is duplicated.

Copying the config info from one database to another virgin installation is easy, of course. I can just use pg_dump on the original db1 to create the DDL and run it into the new database db2. The problem comes when I've made changes in db1 that I want to copy to db2, but db2 may itself have additional entries not present in db1. I'd like to update the entries in db2 that have equivalent entries in db1 with their db1 values, without destroying the rest of the data in db2.

Here's an example. db1 is created as follows:

create table a (
    a_id text PRIMARY KEY,
    param_b text,
    param_c text
);

create table d (
    d_id text PRIMARY KEY,
    a_id text references a(a_id) ON UPDATE CASCADE,
    param_e text,
    param_f text
);

insert into a values ('a1', 'b1', 'c1');
insert into a values ('a2', 'b2', 'c2');
insert into d values ('d1', 'a1', 'e1', 'f1');
insert into d values ('d2', 'a2', 'e2', 'f2');
insert into d values ('d3', 'a1', 'e3', 'f3');

The databases have identical schema:

$ pg_dump -s db1 | psql db2

and provided db2 is empty, I can just copy the contents across:

$ pg_dump -ad db1 > db1.config
$ psql db2 < db1.config

So far so good.
But now db2 gets some additional entries:

db2=# insert into a values ('a3', 'b3', 'c3');
db2=# insert into d values ('d4', 'a1', 'e4', 'f4');
db2=# insert into d values ('d5', 'a2', 'e5', 'f5');

Meanwhile, I make some config changes in db1:

db1=# update a set param_b = 'b1 new' where a_id = 'a1';
db1=# update d set param_e = 'e1 new' where d_id = 'd1';
db1=# update d set param_f = 'f2 new' where d_id = 'd2';

So:

$ pg_dump -ad db1 > db1.config

and db1.config now contains:

INSERT INTO a VALUES ('a2', 'b2', 'c2');
INSERT INTO a VALUES ('a1', 'b1 new', 'c1');
INSERT INTO d VALUES ('d3', 'a1', 'e3', 'f3');
INSERT INTO d VALUES ('d1', 'a1', 'e1 new', 'f1');
INSERT INTO d VALUES ('d2', 'a2', 'e2', 'f2 new');

I want to update the data in db2 to reflect the values in db1. I can't truncate the tables in db2 because I'd lose the additional entries (a3, d4, d5). But of course when I do...

$ psql db2 < db1.config

...I get...

ERROR:  duplicate key violates unique constraint "a_pkey"
ERROR:  duplicate key violates unique constraint "d_pkey"

...and the a1, d1, d2 rows are not updated to match db1. What I'd really like is to be able to do:

INSERT OR UPDATE INTO a VALUES ('a2', 'b2', 'c2');
INSERT OR UPDATE INTO a VALUES ('a1', 'b1 new', 'c1');
INSERT OR UPDATE INTO d VALUES ('d3', 'a1', 'e3', 'f3');
INSERT OR UPDATE INTO d VALUES ('d1', 'a1', 'e1 new', 'f1');
INSERT OR UPDATE INTO d VALUES ('d2', 'a2', 'e2', 'f2 new');

so that the rest of the row is treated as an UPDATE if the primary key is a duplicate.

Of course I can write something at the application level to examine each row and take appropriate action. But it feels like this may be a commonly encountered problem for which there may be a database-level solution. Am I missing something obvious?

Thanks

Julian Scarfe
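Pending something like INSERT OR UPDATE, one database-level workaround is to load the dump into a scratch table and merge it set-wise: UPDATE the rows whose keys already exist, then INSERT the rest. A sketch for table a (the staging table name a_src is my own invention; the same pattern applies to d, and it assumes nothing else is writing to a concurrently):

```sql
BEGIN;

CREATE TEMP TABLE a_src AS SELECT * FROM a LIMIT 0;
-- ...run the INSERTs from db1.config against a_src instead of a...

-- Rows present in both: take db1's values.
UPDATE a
SET param_b = s.param_b, param_c = s.param_c
FROM a_src s
WHERE a.a_id = s.a_id;

-- Rows only in db1: plain inserts.
INSERT INTO a
SELECT s.* FROM a_src s
WHERE NOT EXISTS (SELECT 1 FROM a WHERE a.a_id = s.a_id);

COMMIT;
```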
Re: [GENERAL] SQL Question
From: Alex [EMAIL PROTECTED]

> - How can i select only the newest record for each ProdId ?
>
>   100 | 2005-04-01
>   200 | 2005-04-01

DISTINCT ON was made for this, and on the similar tables I have it performs rather more efficiently than using a subquery:

select distinct on (ProdId) ProdId, LastUpdate
from produpdate
order by ProdId, LastUpdate desc;

> - How can i select to retrieve the last 2 dates in record
>
>   100 | 2005-04-01 | 2005-03-01
>   200 | 2005-04-01 | 2005-03-01

To get the previous one, my first thought is something like:

select distinct on (ProdId) ProdId, LastUpdate
from produpdate p1
where LastUpdate < (
    select max(LastUpdate) from produpdate p2
    where p2.ProdId = p1.ProdId
)
order by ProdId, LastUpdate desc;

but there may be a much more efficient way of getting the nth result in general.

Julian Scarfe
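For the nth result in general, a version-independent (if not necessarily fast) sketch is a correlated count: keep a row when fewer than n rows for the same ProdId have a later LastUpdate. For the last two dates:

```sql
SELECT ProdId, LastUpdate
FROM produpdate p1
WHERE (SELECT count(*)
       FROM produpdate p2
       WHERE p2.ProdId = p1.ProdId
         AND p2.LastUpdate > p1.LastUpdate) < 2
ORDER BY ProdId, LastUpdate DESC;
```

Note this returns up to two rows per ProdId rather than pivoting the dates into one row, and the correlated subquery is roughly quadratic on large tables.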
Re: [GENERAL] Index use with left join
Julian Scarfe [EMAIL PROTECTED] writes:
>> Does the planner realise that the intersection, Query 6, will still return 150 rows, or does it assume independence of the filters in some way and estimate 20,000*(150/20,000)*(396/20,000)?

From: Tom Lane [EMAIL PROTECTED]
> It assumes independence of the conditions --- which is why having two of them reduced the rowcount estimate so much. There are some limited cases in which it can recognize redundant conditions, but offhand I think that only works for scalar inequalities (like x < 5 AND x < 6).

Even that's smarter than I dared hope for!

>> Any general suggestions for workarounds?

> Not much, other than trying to avoid redundant conditions. Did you look into the state of the PostGIS work on geometric statistics?

No, though PostGIS is clearly the way forward for my needs in the medium/long term. PostGIS stores bounding boxes for its geometric features. Operators like && and @ work as intersect and containment for the bounding boxes, while Intersects() and Contains() use more exact but presumably computationally expensive functions. I don't yet know how these, GiST indexes and the planner get along together. But I imagine the issue I've come across is one of the, if not the, most important ones in spatially enabled databases.

Thanks again

Julian
[GENERAL] Index use with left join
..1308.44 rows=10105 width=25) (actual time=180.078..180.927 rows=1391 loops=1)
         Sort Key: (a.ident)::text
         ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=0.170..83.442 rows=10105 loops=1)
 Total runtime: 917.066 ms

Again, so far, nothing obviously unusual.

6) Now I combine the filters in 4 & 5 (as I did from 1 & 2 to get 3, which performed in a similar time to 1):

explain analyze select n.ref, n.code, a.ident, a.name
from n left outer join a on (a.ident = n.code)
where bbox && box(point(-0.032, 0.873), point(0.017, 0.908))
and box(q_node, q_node) @ box(point(-0.032, 0.873), point(0.017, 0.908));

                              QUERY PLAN
---------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..851.06 rows=8 width=45) (actual time=11.662..7919.946 rows=150 loops=1)
   Join Filter: (("inner".ident)::text = "outer".code)
   ->  Index Scan using n_bbox on n  (cost=0.00..88.44 rows=1 width=20) (actual time=0.107..10.256 rows=150 loops=1)
         Index Cond: (bbox && '(0.017,0.908),(-0.032,0.873)'::box)
         Filter: (box(q_node, q_node) @ '(0.017,0.908),(-0.032,0.873)'::box)
   ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=0.006..18.044 rows=10105 loops=150)
 Total runtime: 7920.684 ms

Whoa! Instead of a performance similar to query 4, it chooses a different strategy and takes 40 times as long. (Both tables just analyzed.) By brute force:

set enable_nestloop to off;
explain analyze select n.ref, n.code, a.ident, a.name
from n left outer join a on (a.ident = n.code)
where bbox && box(point(-0.032, 0.873), point(0.017, 0.908))
and box(q_node, q_node) @ box(point(-0.032, 0.873), point(0.017, 0.908));

                              QUERY PLAN
---------------------------------------------------------------------
 Merge Left Join  (cost=1371.62..1447.50 rows=8 width=45) (actual time=177.273..179.341 rows=150 loops=1)
   Merge Cond: ("outer".code = "inner"."?column3?")
   ->  Sort  (cost=88.45..88.45 rows=1 width=20) (actual time=8.452..8.538 rows=150 loops=1)
         Sort Key: n.code
         ->  Index Scan using n_bbox on n  (cost=0.00..88.44 rows=1 width=20) (actual time=0.109..7.031 rows=150 loops=1)
               Index Cond: (bbox && '(0.017,0.908),(-0.032,0.873)'::box)
               Filter: (box(q_node, q_node) @ '(0.017,0.908),(-0.032,0.873)'::box)
   ->  Sort  (cost=1283.17..1308.44 rows=10105 width=25) (actual time=165.520..166.348 rows=1391 loops=1)
         Sort Key: (a.ident)::text
         ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=0.042..69.560 rows=10105 loops=1)
 Total runtime: 182.275 ms

What's happening here, please? How am I misleading the planner? Is it because the index is rtree? Yes, I should consider PostGIS for spatial stuff, but I've got what I've got :-).

TIA

Julian Scarfe
Re: [GENERAL] Index use with left join
From: Tom Lane [EMAIL PROTECTED]

> The problem is that it's underestimating the number of rows pulled from the n table (1 vs actual 150), which makes a simple nestloop join look like the way to go. That error comes from the fact that we don't really have any statistical estimation for geometric conditions :-(. Some of the PostGIS hackers have been working on such, I believe, but I'm not sure how far they've gotten.

Thanks Tom. I can see the poor estimation of the geometric filter is a significant factor. Reviewing it, I wondered if the coarse/fine nature of the filters was also an issue.

Query 4, with the filter on the index, selects 396 of the ~20,000 rows of the table (having estimated 22). Query 5, with the filter requiring a seq scan, selects 150 of the ~20,000 rows of the table (having estimated 5), all of which were returned by Query 4. Does the planner realise that the intersection, Query 6, will still return 150 rows, or does it assume independence of the filters in some way and estimate 20,000*(150/20,000)*(396/20,000)?

I guess I can test that by trying it with a non-geometric column and a btree index:

7) Filter requiring a sequential scan:

explain analyze select n.ref, n.code, a.ident, a.name
from n left outer join a on (a.ident = n.code)
where code ~~ 'EGT%';

                              QUERY PLAN
---------------------------------------------------------------------
 Merge Left Join  (cost=10361.53..10441.90 rows=419 width=45) (actual time=731.175..732.615 rows=248 loops=1)
   Merge Cond: ("outer".code = "inner"."?column3?")
   ->  Sort  (cost=9078.36..9078.49 rows=53 width=20) (actual time=547.154..547.300 rows=248 loops=1)
         Sort Key: n.code
         ->  Seq Scan on n  (cost=0.00..9076.84 rows=53 width=20) (actual time=260.558..543.587 rows=248 loops=1)
               Filter: (code ~~ 'EGT%'::text)
   ->  Sort  (cost=1283.17..1308.44 rows=10105 width=25) (actual time=180.359..181.149 rows=1292 loops=1)
         Sort Key: (a.ident)::text
         ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=0.125..83.844 rows=10105 loops=1)
 Total runtime: 735.613 ms

8) Filtered using the index, but returning a subset of the 419 rows from Query 7:

explain analyze select n.ref, n.code, a.ident, a.name
from n left outer join a on (a.ident = n.code)
where code = 'EGTT';

                              QUERY PLAN
---------------------------------------------------------------------
 Merge Left Join  (cost=1283.17..1572.15 rows=411 width=45) (actual time=451.609..510.507 rows=226 loops=1)
   Merge Cond: ("outer".code = "inner"."?column3?")
   ->  Index Scan using n_pkey on n  (cost=0.00..208.82 rows=52 width=20) (actual time=17.301..73.840 rows=226 loops=1)
         Index Cond: (code = 'EGTT'::text)
   ->  Sort  (cost=1283.17..1308.44 rows=10105 width=25) (actual time=430.231..431.032 rows=1279 loops=1)
         Sort Key: (a.ident)::text
         ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=5.743..321.326 rows=10105 loops=1)
 Total runtime: 514.103 ms

9) Filtered on both:

explain analyze select n.ref, n.code, a.ident, a.name
from n left outer join a on (a.ident = n.code)
where code ~~ 'EGT%' and code = 'EGTT';

                              QUERY PLAN
---------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..971.58 rows=8 width=45) (actual time=53.634..1.285 rows=226 loops=1)
   Join Filter: (("inner".ident)::text = "outer".code)
   ->  Index Scan using n_pkey on n  (cost=0.00..208.95 rows=1 width=20) (actual time=0.288..21.137 rows=226 loops=1)
         Index Cond: (code = 'EGTT'::text)
         Filter: (code ~~ 'EGT%'::text)
   ->  Seq Scan on a  (cost=0.00..611.05 rows=10105 width=25) (actual time=0.008..18.741 rows=10105 loops=226)
 Total runtime: 12223.328 ms

Similar problem.
Of course Query 9 is concocted and unrealistic, but it is representative of the coarse/fine filter problem, where I select a set of rows using an approximate filter (e.g. bounding box in the geometric case) with an index, and then use a second, exact but computationally expensive filter to keep only those rows that I really want.

Any general suggestions for workarounds?

Julian
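For the coarse/fine case, one workaround I know of is to keep only the approximate, indexable condition visible to the planner and apply the exact condition outside a subquery fenced with OFFSET 0 (which stops the subquery being flattened, so the row estimate comes from the bounding-box filter alone). A sketch against the earlier queries; whether it actually wins on 8.1 is something to check with explain analyze:

```sql
SELECT ref, code, ident, name
FROM (
    SELECT n.ref, n.code, n.q_node, a.ident, a.name
    FROM n LEFT OUTER JOIN a ON (a.ident = n.code)
    WHERE bbox && box(point(-0.032, 0.873), point(0.017, 0.908))
    OFFSET 0  -- planner fence: keeps the exact filter out of the estimate
) sub
WHERE box(q_node, q_node) @ box(point(-0.032, 0.873), point(0.017, 0.908));
```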
Re: [GENERAL] truncate/create slowness
> It's possible you could get out of this by vacuum full and then reindex each catalog, but it might be easier to dump and reload the database ...

I've got a similar issue, but caused by neglect rather than anything to do with pg_autovacuum.

Do you have any rules of thumb for deciding when a pg_dumpall/restore is likely to be faster than a vacuum full? Or, perhaps more straightforwardly, how would you expect the time required for a vacuum full to scale with pages used and rows in the table?

Thanks

Julian Scarfe
[GENERAL] Clustering system tables
I've got a database (7.4) whose system tables have been long neglected. Instead of the 100 or so pages I'd expect for 4000 rows after VACUUM, I've got 24,000 pages and a mere 1.4 million unused item pointers.

If it were an ordinary table, I'd CLUSTER it, as from experience that would be vastly quicker than a VACUUM FULL. But CLUSTER responds with:

ERROR:  pg_attribute is a system catalog

Is VACUUM FULL my only option to compact the table?

Julian Scarfe
Re: [GENERAL] postgresql 8.0 advantages
From: Tom Lane [EMAIL PROTECTED]

> There are some speed improvements in 8.0 too, though that wasn't the main focus of the release cycle. For instance, Lonni Friedman's nearby thread reports on a case where 8.0 consistently finds a much better query plan for a complex query than prior releases did, because we fixed some issues in the GEQO planner module.

Could you give an example or two of the sorts of queries for which performance is improved under 8.0 compared with 7.4, please Tom?

Thanks

Julian Scarfe
Re: [GENERAL] pgpool 2.5b2 released
> After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Julian Scarfe) belched out:
>> So all I'm looking for is a way for pgpool to shout if it detects a failure. That could initiate the investigation of the other criteria required for failover.
>
> _There_ lies the one change that is needed. Given that, some outside 'oracle' can be used to decide if it's appropriate to do a FAILOVER. It's quite important for this not to be deeply embedded in pgpool...

I think we have a consensus on that. So what's the most sensible mechanism for the shout? Since I posted my original question, I've realized that pgpool notes a failure of either master or slave in its log. Would we want something more proactive?

Julian
Re: [GENERAL] pgpool 2.5b2 released
From: Joshua D. Drake [EMAIL PROTECTED]

>> Since pgpool has this capability, how about including a hook that allows a script to be run when pgpool detects a problem with the master? That would allow action to be taken to investigate further and, if required, switchover or failover and promote the slave to master.
>
> Because it is not the place of a connection management software to failover. Failover should happen only if:
>
> 1. You have a specific set of criteria that is matched via a network/system management software.
> 2. You do it yourself.

I'm not suggesting that it's the place of pgpool to *force* a failover. I am suggesting that one of the criteria that is likely to be useful is the inability to connect to the master, and that's something that pgpool, apparently, detects. It seems unnecessary to use completely different failure-detection mechanisms for failover from those used for connection management. So all I'm looking for is a way for pgpool to shout if it detects a failure. That could initiate the investigation of the other criteria required for failover.

> The last thing in the world you need is to fail over to a slave because somebody accidently tripped over a network cord.

In our application, that's *exactly* what we need. We have a database that receives data in a fairly continuous stream. If the datastream cannot be written to the database, the database becomes worse than useless quite rapidly. We need the ability to switchover or failover to another node as master as soon as possible, to allow the datastream to be written to the other node. We'll rebuild the failed master later, if necessary. But if the failover doesn't happen promptly, we might as well rebuild the whole cluster.

Julian Scarfe
Re: [GENERAL] pgpool 2.5b2 released
>> Does it attempt any interaction with Slony when it detects a failure of the master? It would seem a pity to have pgpool watching the pair to detect failure but having to have a separate watcher process to tell Slony to failover.
>
> If pgpool detects PostgreSQL failure, Slony should detect it as well, no?

Others are much better placed to answer this, but my understanding is that Slony does not do so:

http://gborg.postgresql.org/project/slony1/genpage.php?howto_overview

"What Slony-I is not: Slony-I is not a network management system. Slony-I does not have any functionality within it to detect a node failure, or automatically promote a node to a master or other data origin. Slony-I is not multi-master; it's not a connection broker, and it doesn't make you coffee and toast in the morning."

> pgpool 2.5 has the capabilty to perform periodical health checking to PostgreSQL.

Since pgpool has this capability, how about including a hook that allows a script to be run when pgpool detects a problem with the master? That would allow action to be taken to investigate further and, if required, switchover or failover and promote the slave to master.

Julian Scarfe
Re: [GENERAL] pgpool 2.5b2 released
From: Tatsuo Ishii [EMAIL PROTECTED]

> Pgpool 2.5b2 supports master slave mode which can cope with master/slave replication softwares such as Slony-I. In this mode pgpool sends non SELECT queries to master only. SELECTs are load balanced by pgpool.

Sounds good!

Does it attempt any interaction with Slony when it detects a failure of the master? It would seem a pity to have pgpool watching the pair to detect failure, but having to have a separate watcher process to tell Slony to fail over.

Julian Scarfe
Re: [GENERAL] Index bloat in 7.2
From: Christopher Browne [EMAIL PROTECTED]

> The empty pages not reclaimed problem is something that did indeed get fixed in the post-7.2 days. I _think_ it was 7.4, but it might have been 7.3. In short, 7.4.x is indeed a good resolution to your issue.

From: Tom Lane [EMAIL PROTECTED]

> That's exactly what I'd expect ...

Thanks both. So it sounds like:

a) the issue is controllable with a regular (and in our case, just occasional) reindex, without any long-term negative consequences;

b) only a dump-restore major version upgrade (which we'll do next time we can take the system out for long enough) will avoid the issue.

Julian
Re: [GENERAL] Index bloat in 7.2
On Mon, Dec 06, 2004 at 08:48:04AM -, Julian Scarfe wrote:
>> b) Only a dump-restore major version upgrade (which we'll do next time we can take the system out for long enough) will avoid the issue.

On 6 Dec 2004, at 16:18, Alvaro Herrera wrote:
> "Long enough" could be a minutes or seconds issue if you use Slony-I, I've heard ... (Of course you'd still need to fix your apps, which may take somewhat longer than that.)

A good point Alvaro, but I don't think Slony-I is compatible with 7.2, which is the version I'm starting from. For upgrades from 7.3 onwards, it's certainly a route worth considering.

Julian
[GENERAL] Index bloat in 7.2
I've got a box running 7.2.1 (yes, I know :-() in which an index on a rapidly turning-over (and regularly vacuumed) table is growing steadily in size. The index in question is on a timestamp field that is just set to now() on entry of the row, to enable the query that clears out old data to an archive to run efficiently.

Reindexing shrinks it back to a reasonable size. Other indexes reach an equilibrium size and stay there. The behaviour is fine on a system running 7.4.x: the index stays at a sensible number of pages.

Is this likely to be related to a known issue with 7.2 that got fixed, or have I got potentially more serious problems?

Thanks

Julian Scarfe
[GENERAL] How can I change a cast from explicit only to implicit?
In 7.2.x:

template1=# select point('1'::text, '2'::text);
 point
-------
 (1,2)
(1 row)

but in 7.4.x:

template1=# select point('1'::text, '2'::text);
ERROR:  function point(text, text) does not exist
HINT:  No function matches the given name and argument types. You may need to add explicit type casts.

The relevant entry in the list of casts:

                 List of casts
 Source type |   Target type    | Function | Implicit?
-------------+------------------+----------+-----------
 ...
 text        | double precision | float8   | no

OK, so to make the cast work without explicit casts in the SQL, I need the text to float8 (or another suitable numeric type) cast to be implicit. But:

template1=# create cast (text as float8) with function float8(text) as implicit;
ERROR:  cast from type text to type float8 already exists

template1=# drop cast (text as float8);
ERROR:  cannot drop cast from text to double precision because it is required by the database system

So how can I force a built-in cast to become implicit?

Thanks

Julian Scarfe
Re: [GENERAL] How can I change a cast from explicit only to implicit?
From: Tom Lane [EMAIL PROTECTED]

>> So how can I force a built-in cast to become implicit?
>
> If you're intent on doing that, you can change its entry in pg_cast. But I think you'd be far better off to fix your application. Implicit casts across type categories have a habit of kicking in when you least expected it, causing the backend to adopt surprising and unpleasant interpretations of straightforward-looking queries. If you check the pgsql-bugs archives you will find some of the examples that prompted us to change this cast to non-implicit...

It's a fair point, Tom. Having developed the code with 7.2, we are looking for the simplest way to recreate backward-compatible behaviour in 7.4. Sure enough, we'll flag the code for revision next time it gets an overhaul, but a single change to the database would be preferable in the short term. I think the current default of these casts being explicit-only is sensible.

Thanks

Julian
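For the record, the pg_cast change Tom describes would look something like this (a sketch only; it needs superuser rights, and all his caveats about implicit cross-category casts apply in full). castcontext is 'e' for explicit-only, 'a' for assignment, 'i' for implicit:

```sql
-- Make the built-in text -> float8 cast implicit (use with caution).
UPDATE pg_cast
SET castcontext = 'i'
WHERE castsource = 'text'::regtype
  AND casttarget = 'float8'::regtype;
```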
Re: [GENERAL] Simulating an outer join
Bruce Momjian wrote:

> I am just writing the EXISTS section from my book. I don't think it matters what fields are returned from an EXISTS subquery. If I am wrong, someone please let me know.

Celko also writes (in his chapter on EXISTS in "SQL for Smarties"):

"In general the SELECT * option should perform better than the actual column. It lets the query optimizer decide which column to use. If a column has an index on it, then simply seeing a pointer to the index is enough to determine that something exists."

Obviously you're in a much better position than me to judge whether that's the case in pgsql! But it might be worth a test.

Julian Scarfe
[GENERAL] Making points into paths
I'd like to take a set of points and link them into a path. But I can't see a single operator/function that creates a path from points! ;-( It seems like a fairly fundamental operation, unlike some of Postgres's delightfully rich set of geometric datatypes, operators and functions.

It doesn't look too hard to write an external function that appends a point to a path, but am I missing something obvious?

Thanks

Julian Scarfe
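In the absence of a point-aggregating function, one workaround sketch is to go through path's text input syntax, which is '[(x1,y1),(x2,y2),...]' for an open path. This assumes a hypothetical table pts(p point, seq integer) supplying the ordering, and array support of roughly 7.4 vintage:

```sql
-- Build the path's text form from the points' text forms, then cast.
SELECT ('[' || array_to_string(ARRAY(
           SELECT p::text FROM pts ORDER BY seq), ',') || ']')::path;
```

Crude, but it stays in SQL; an external C function appending point-by-point would avoid the round-trip through text.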