Re: [SQL] Failing query...
Shaun <[EMAIL PROTECTED]> writes:
> SELECT a.auction_id, a.user_id, c.other_names,
>c.surname, c.email, a.reserve, a.close_time, a.short_desc,
>a.long_desc, a.start_time,
>(COALESCE((select MAX(bid) from bid where auction_id = a.auction_id
>group by auction_id), 0)) as max_bid
> FROM Auction a, Customer c
> WHERE a.user_id = c.user_id
> AND a.auction_id = 754;

Sub-selects inside COALESCE don't work :-(.  This was just fixed about
a week ago --- it will be in 7.1.  In the meantime you might try it
the other way round:

    (select COALESCE(MAX(bid), 0) from bid
     where auction_id = a.auction_id) as max_bid

			regards, tom lane
Re: [SQL] Tuple size limit.
Christopher Sawtell <[EMAIL PROTECTED]> writes: > I understand that the 7.1 release currently in CVS does not have this > limitation. So I'd like to know if this 7.1 release is imminent; > i.e. < ~2 to 3 months? If it's not out in <3 months, you won't be the only person who's very unhappy ;-). I don't want to see us hold up TOAST that long, even if it means not having some of the other features originally planned for 7.1... regards, tom lane
Re: [SQL] protecting a field after creation
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > Is there a way (outside of RULEs and TRIGGERs) to make a field > read-only once it is INSERTed or assigned its default value? I'm > thinking, for example, of the "created" column that I add to most > tables, holding the row's creation timestamp. An on-update trigger seems like a very simple solution here. You could either copy the old value into the new, or raise an error if they are different, depending on what you want. regards, tom lane
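For reference, a minimal sketch of the copy-the-old-value approach (the table
name mytab and the column name created are illustrative):

create function protect_created() returns opaque as '
begin
    -- silently preserve the original creation timestamp on every update
    new.created := old.created;
    return new;
end;
' language 'plpgsql';

create trigger mytab_protect_created before update on mytab
    for each row execute procedure protect_created();

The raise-an-error variant would instead compare old.created with new.created
and RAISE EXCEPTION when they differ.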
Re: [SQL] Speed or configuration
"Franz J Fortuny" <[EMAIL PROTECTED]> writes: > [ this query is slow: ] > select xx1,xx2,xx3 from tableX > where field1 in > (select field1 from tableY where > field2=NNN and field3=NNN2 and field4=NNN4) What version of Postgres are you using, and what does EXPLAIN show as the query plan for this query? How many tableY rows is the sub- query likely to produce, and how many matches do you expect to get from tableX? The Postgres planner is not currently very smart about optimizing sub-SELECTs. We hope to do something about that by 7.2 or so, but in the meantime you should look at transforming the query to some other form. You might find that EXISTS will help: select xx1,xx2,xx3 from tableX where exists (select 1 from tableY where field1=tableX.field1 and field2=NNN and field3=NNN3 and field4=NNN4) However this is unlikely to help much unless you change the index structure for tableY. Perhaps you could write it as a join: select xx1,xx2,xx3 from tableX, tableY where tableX.field1=tableY.field1 and field2=NNN and field3=NNN3 and field4=NNN4 although this won't give quite the same results if there can be multiple matching rows in tableY for a tableX row. (DISTINCT might help if so.) regards, tom lane
Re: [SQL] Copying data with triggers
Keith Wong <[EMAIL PROTECTED]> writes:
> create function tp_info () returns opaque as '
> begin
> -- insert into audit table
> insert into AudInfo (info_id, some_data, aud_operation_type) values
> (new.info_id, new.some_data, ''i'');
> return new;
> end;
> ' language 'plpgsql';

> create trigger tp_info before insert on Info
> for each row execute procedure tp_info();

> This doesn't work however. A parse error occurs in the first line.
> I suspect that I cannot insert a record into another table in a trigger.

You certainly can --- I have apps that do it all the time.

I suspect maybe the plpgsql parser gets confused by SQL-style comments?
Your example seems to work fine if I leave out the comment.

			regards, tom lane
Re: [SQL] using INTERSECT and UNION in IN clause
Alex Guryanow <[EMAIL PROTECTED]> writes: > Is it possible to use INTERSECT and UNION keywords in subqueries? No, not at the moment. This is one of many things we hope to fix when we redesign querytrees (currently planned for 7.2 cycle). regards, tom lane
Re: [SQL] tip: weird parse error for pl/pgsql
Keith Wong <[EMAIL PROTECTED]> writes:
> It seems that pl/pgsql has a problem parsing Window style
> new line characters.

Ah-hah, good catch!  I have fixed this bug for 7.1.  If you want to
patch your local copy, the critical changes are:

*** src/pl/plpgsql/src/scan.l.orig	Thu Jun 22 19:08:34 2000
--- src/pl/plpgsql/src/scan.l	Tue Aug 22 10:59:28 2000
***************
*** 143,155 ****
   * Ignore whitespaces but remember this happened
   * --
   */
! [ \t\n]+		{ plpgsql_SpaceScanned = 1; }

  /* --
   * Eat up comments
   * --
   */
! --[^\n]*		;

  \/\*			{ start_lineno = yylineno;
  			  BEGIN IN_COMMENT;
  			}
--- 146,158 ----
   * Ignore whitespaces but remember this happened
   * --
   */
! [ \t\r\n]+		{ plpgsql_SpaceScanned = 1; }

  /* --
   * Eat up comments
   * --
   */
! --[^\r\n]*		;

  \/\*			{ start_lineno = yylineno;
  			  BEGIN IN_COMMENT;
  			}

			regards, tom lane
Re: [SQL] Null function parameters
"Graham Vickrage" <[EMAIL PROTECTED]> writes: > However my understanding was that if the default value is SQL NULL then any > values passed into the function that are null would be treated as 'NULL'. Not sure what you think you meant by that, but a null is a null. If you declared the table column as NOT NULL then Postgres is doing exactly what it should. You may wish to code the insert along the lines of INSERT INTO table VALUES (..., COALESCE($1, suitable-default), ...) COALESCE is a handy notation for "value1 unless it's NULL, in which case value2". regards, tom lane
Re: [SQL] Continuous inserts...
Jan Wieck <[EMAIL PROTECTED]> writes: > I haven't looked at the code, but pg_class only has a boolean > telling if a class has rules or not. Could it be that adding > more rules (or dropping just a few instead of all) doesn't > update the pg_class tuple, thus the syscache for the table > isn't invalidated and other backends continue to use the old > information instead of rescanning pg_rewrite? This is done correctly in current sources --- see setRelhasrulesInRelation(). However I recall having dorked with that code not long ago, and I forget what it looked like before. Perhaps 7.0.* is broken in this respect? Would think people would have noticed, though. regards, tom lane
Re: [SQL] Null function parameters
[EMAIL PROTECTED] (Anatoly K. Lasareff) writes: > I'm afraid no. My question is: if I pass one null argument into function > then all other arguments, which are not null, become null inside > function body. Yes --- and not only that, but the function's result will be taken to be null whether you want it to be or not. This has been gone over *many* times before on this mail list, so I didn't think I needed to repeat it. This will be fixed in 7.1 (is already fixed in current sources). regards, tom lane
Re: [SQL] Using SETOF in plpgsql function
Jan Wieck <[EMAIL PROTECTED]> writes: > hlefebvre wrote: >> I'd like to return a set of integer in an pl/pgsql function. How can I >> do that ? > You can't. Not with PL/pgSQL nor with any other PL or C. The > problem is nested deeper and requires the planned querytree > redesign to get solved. Not really. Coincidentally enough, I am just in the middle of removing execQual.c's hard-wired assumption that only SQL-language functions can return sets. (This is a side effect of fixing the function manager so that SQL functions can be called in all contexts, eg used as index functions.) If you want to fix plpgsql so that it retains state and can produce multiple elements of a set over repeated calls, the same way that SQL functions do, then it could be done today. We may well want to rip out that whole approach to set functions later when we redo querytrees, but the real limitation so far has been bogus assumptions in the function-call API, not querytrees. regards, tom lane
Re: [SQL] Re: Date of creation and of change
Andreas Tille <[EMAIL PROTECTED]> writes: > On Wed, 23 Aug 2000, hlefebvre wrote: >> create table mytable( CreateDate timestamp default timestamp('now'), >> ); > I've done a pg_dump and there this line was transformed to: > "createdat" timestamp DEFAULT '23.08.2000 15:35:16.00 CEST'::"timestamp", This approach does not work in 7.0 (I think it did work in some prior releases, but not recently). The recommended method is shown in the FAQ: 4.22) How do I create a column that will default to the current time? Use now(): CREATE TABLE test (x int, modtime timestamp default now() ); regards, tom lane
Re: [SQL] Create table in functions
Andreas Tille <[EMAIL PROTECTED]> writes:
> web=# create function pHelpTable( )
> web-# returns int
> web-# As '
> web'# Begin
> web'# Create Table Temp_NumTable ( Num int ) ;
> web'#
> web'# return 17 ;
> web'# End; '
> web-# language 'plpgsql' ;
> CREATE

The majority of utility commands don't work in plpgsql functions,
because no one had gotten 'round to writing querytree copy routines for
them.  Ian Turner finally did all the gruntwork for that a few weeks
ago, so this does work in current sources and will be in 7.1.

I believe you could make this work in 7.0 by using an SQL function
instead of plpgsql, if that helps any...

			regards, tom lane
Re: [SQL] Create table in functions
"Hiroshi Inoue" <[EMAIL PROTECTED]> writes: > Hmm,Andreas's original function seems to contain other statements. > If the function contains DML statements for the table Temp_Num_Table, > it wouldn't work properly. i.e 1st call would work but 2nd call woudn't. Are you thinking about plpgsql's caching of query plans (and specifically the table OIDs stored in those plans) or is there another issue here? We do need to think about invalidating cached query plans when updates happen... regards, tom lane
Re: [SQL] Regular expression query
Rodger Donaldson <[EMAIL PROTECTED]> writes: > SELECT url > FROM sites > WHERE url ~ url || '\\s+' > While this concatenation works with the LIKE directive (ie LIKE url || '%'), > postgresql barfs on it in a regexp with the error: > ERROR: Unable to identify an operator '||' for types 'bool' and 'unknown' > You will have to retype this query using an explicit cast LIKE and ~ do not have the same precedence. See http://www.postgresql.org/docs/postgres/operators.htm. ~ and || actually fall in the same category ("all other") and therefore are grouped left-to-right; so you're getting (url ~ url) || '...'. > The other aspect of this is that it seems that postgresql's regexp engine > doesn't understand some expected regexps; I've tried both escaped and > unescaped versions of, eg \w, \s, \n and so on a pg seems to ignore them. The regexp package we currently use implements POSIX 1003.2 regexps (see src/backend/regex/re_format.7). I believe there is an item on the TODO list about upgrading the regexp parser to something more modern ... feel free to hop on that project if it's bugging you ... regards, tom lane
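For what it's worth, forcing the intended grouping with parentheses should
avoid the operator error, and a POSIX character class can stand in for the
unsupported \s (a sketch against the same sites table):

SELECT url
FROM sites
WHERE url ~ (url || '[[:space:]]+');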
Re: [SQL] Re: Date of creation and of change
Andreas Tille <[EMAIL PROTECTED]> writes: >> NEW.ChangedAt := timestamp(''now''); > This avoids the error message, but doesn't have any effect to the value > of ChangedAt. It just remains the same as CreatedAt :-(. I think you are getting burnt by premature constant folding --- see nearby discussion of how to define a column default that gives the time of insertion. You need to write this as NEW.ChangedAt := now(); to prevent the system from reducing timestamp('now') to a constant when the function is first executed. regards, tom lane
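Spelled out as a complete trigger, that assignment looks roughly like this
(the table name mytab is illustrative; the column name comes from the
discussion above):

create function set_changedat() returns opaque as '
begin
    -- stamp the row at update time, not at function-definition time
    new.ChangedAt := now();
    return new;
end;
' language 'plpgsql';

create trigger mytab_set_changedat before update on mytab
    for each row execute procedure set_changedat();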
Re: [SQL] shared memory leak in 7.0.2?
[EMAIL PROTECTED] writes: > i've been running 7.0.2 for the last month or so, and I've had to > reboot my redhat linux box twice to clear up a shared memory leak > issue. Essentially with the DB running for about 2 weeks with large > amounts of usage, eventually the OS runs out of shared memory and the > db crashes and fails to restart. The only way to get the db back > online is to reboot. I haven't seen this reported before. Are you sure Postgres deserves the blame, rather than some other package? Postgres' use of shared memory is fixed for the life of a postmaster, so unless you're constantly restarting the postmaster I don't see how we could be leaking shmem. However, rather than speculate, let's get some hard facts. Try using "ipcs -m -a" to keep track of shared mem allocations, and see what usage is creeping up over time. regards, tom lane
Re: [SQL] shared memory leak in 7.0.2?
Pierre Padovani <[EMAIL PROTECTED]> writes: > Here is the current output of that ipc command: Looks pretty reasonable --- the three shmem segments look to be about what postgres should be using (I take it you've got -B set to 350 or so). > If postgres were to crash for some reason. Would the shared memory be > left in never never land? In theory the postmaster should release those segments when it's shut down, or failing that reuse 'em when it's restarted. We have heard occasional reports of this failing to happen on some platforms; but again, unless you're restarting the postmaster on a regular basis that doesn't seem like it'd be the issue. Anyway, keep an eye on things with ipcs and we'll see what's going on... regards, tom lane
Re: [SQL] pg_attribute_relid_attnam_index problem after log disk filled up
Palle Girgensohn <[EMAIL PROTECTED]> writes: > What's this? What to do? Shall I bother? > pp=> vacuum; > NOTICE: Index pg_attribute_relid_attnam_index: NUMBER OF > INDEX' TUPLES (3094) IS NOT THE SAME AS HEAP' (3093) Under 6.5, if vacuum doesn't get rid of the message then I think the only way to do so is to dump and reload the database. (I'd strongly recommend updating to 7.0.2 while you do that.) The message in itself is not catastrophic, but it does raise the question of what other damage there might be. Updating would be a good idea anyway, considering all the bugs that got fixed between 6.5 and 7.0. > The problem started when the log partition (where > postmaster logging output is written) was full, and some > backends crashed. Did they actually crash, or just hang up waiting for space to become available for the log file? That really shouldn't have been much of a problem, AFAICS. regards, tom lane
Re: [SQL] How do you detect row version changes in PostgreSQL?
"Craig Manley" <[EMAIL PROTECTED]> writes: > Some databases have a hidden field that contains a row version key. > Everytime an update is performed on a row the value of the version field > changes. Does PostgreSQL have a similar hidden field? See xmin (and also cmin if you need to keep track of multiple changes intra-transaction). It might not be defined quite the way you want, but if you're not picky about what a "version number" is, it'll do. regards, tom lane
Re: [SQL] performance on insert/update
Jerome Raupach <[EMAIL PROTECTED]> writes:
> I have a big problem of performance, please help me.

You could code the update so it only evaluates the view once, rather
than twice per table2 row as you now have it:

UPDATE table2
SET nb=table2.nb+view1.nb, time=table2.time+view1.time
FROM view1
WHERE view1.f1=table2.f1 and view1.date=table2.date;

Subselects are nice, but they're not cheap.  Joins are faster.

The view itself also seems to be pretty inefficiently coded, although
I'm not sure you can do much better given that GROUP BY doesn't really
work in views right now.  Hopefully by 7.2 there will be a better way
to do the view.

			regards, tom lane
Re: [SQL] Problems with complex queries ...
"J. Fernando Moyano" <[EMAIL PROTECTED]> writes: > I try this on my system: (Postgres 6.5.2, Linux) > "select n_lote from pedidos except select rp.n_lote from relpedidos rp, > relfacturas rf where rp.n_lote=rf.n_lote group by rp.n_lote having > sum(rp.cantidad)=sum(rf.cantidad)" > and I get this result: > ERROR: rewrite: comparision of 2 aggregate columns not supported I think this is fixed in 7.0. regards, tom lane
Re: [SQL] Optimizing huge inserts/copy's
Webb Sprague <[EMAIL PROTECTED]> writes: > I am experimenting with this too. If I have any > indexes at all, the copy's get VERY SLOW as the table > gets big. Delete ALL your indexes, do your copy's, > and then create your indexes again. Do you have a lot of equal index keys in the data you're inserting? I've recently been swatting some performance problems in the btree index code for the case of large numbers of equal keys. regards, tom lane
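A sketch of that recipe (the table, index, and file names are illustrative):

DROP INDEX bigtable_keycol_idx;

COPY bigtable FROM '/tmp/bigtable.dat';

CREATE INDEX bigtable_keycol_idx ON bigtable (keycol);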
Re: [SQL] Order by in stored functions
Andreas Tille <[EMAIL PROTECTED]> writes: > I've got the following error message: > ERROR: function declared to return varchar returns multiple values in > final retrieve This is a bug that has already been fixed in current sources: the check that makes sure your select produces only one varchar column is mistakenly counting the hidden IdSort column that's needed to sort by. I don't know of any good workaround in 7.0, short of patching the erroneous code. Have you thought about using a view, rather than a function returning set? regards, tom lane
Re: [SQL] Operator Precedence problem?
Philip Warner <[EMAIL PROTECTED]> writes:
> Which makes me think that the precedence of 'or' is not what I
> expected.

OR is certainly lower-precedence than AND --- this is hard-wired in the
grammar and not subject to change across databases.  It's also required
by SQL92:

         <search condition> ::=
                <boolean term>
              | <search condition> OR <boolean term>

         <boolean term> ::=
                <boolean factor>
              | <boolean term> AND <boolean factor>

         <boolean factor> ::=
              [ NOT ] <boolean test>

         <boolean test> ::=
              <boolean primary> [ IS [ NOT ] <truth value> ]

         <truth value> ::= TRUE | FALSE | UNKNOWN

         <boolean primary> ::=
                <predicate>
              | <left paren> <search condition> <right paren>

BTW, I notice that we do not correctly implement the IS tests.  The
parser turns them into " = 't'::bool" and so on, which is wrong because
it will yield NULL for NULL input, which is contrary to the spec for
these tests.  We need specialized functions comparable to the ones for
IS NULL (in fact, IS UNKNOWN should be equivalent to IS NULL except for
requiring a boolean input, AFAICT).

			regards, tom lane
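The difference only shows up for NULL input.  A quick illustration of the
spec-required behavior versus the current translation (results as described
above, not verified output):

-- per SQL92, IS TRUE never yields NULL:
SELECT NULL::bool IS TRUE;        -- should be false
-- but the parser's rewrite does:
SELECT NULL::bool = 't'::bool;    -- yields NULL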
Re: [SQL] Creating an aggregate function
Roberto Mello <[EMAIL PROTECTED]> writes:
> and our output was:

>  date_id | n_sessions_day |  members  | non_members
> ---------+----------------+-----------+-------------
>        1 |            500 |        30 |   136394044
>        2 |           2000 | 136394612 |   136394612

In current sources I get the expected results if I write the CREATE
AGGREGATE commands with

	initcond1 = '0');

Although the parser will accept unquoted integers as initcond inputs,
it looks like something downstream is mishandling them, resulting in an
invalid initial value entered into the pg_aggregate entry for the
aggregate.  I'll look into fixing that for 7.1, but in the meantime try
putting quotes around the initial values.

BTW, your agg_if_member function is not very good, because if it gets
more than one input row with member_p = 1 then the result is
order-dependent.  You don't want that.

			regards, tom lane
[SQL] Re: [BUGS] "ORDER BY" issue - is this a bug?
Max Pyziur <[EMAIL PROTECTED]> writes: > I've come across what I consider an odd sorting anomaly in 7.0.2-2 where running > the following query: > select headline from headlines where headline like 'Alb%' order by 1 ; > The results in the 7.0.2-2 install come back case-insensitive and > oblivious to punctuation. That's pretty bizarre (not to say difficult to believe). What LOCALE setting are you running the postmaster in? regards, tom lane
[SQL] Re: [BUGS] "ORDER BY" issue - is this a bug?
Max Pyziur <[EMAIL PROTECTED]> writes: >> That's pretty bizarre (not to say difficult to believe). What LOCALE >> setting are you running the postmaster in? > On none of the installations - the two 6.5.x and the 7.0.2-2 one - I > don't have any locale set. I get (what I think are) correct results > with the first two. > Does 7.0.2-2 require the setting of locale. AFAIK its behavior should be the same as 6.5 for LOCALE issues. That's why I suspect an environment difference. I can assure you there is no code in the backend that will do case-insensitive, punctuation-insensitive comparisons --- much less any to do so without request. I'm betting that either this is your error, or the strcmp() library function is doing it; and as far as I've heard, only LOCALE environment variables might affect the behavior of strcmp(). It also seems possible that no sort is happening at all (which would be a planner bug), and the ordering you're getting is just whatever happens to be in the underlying table. Does EXPLAIN show that the query is being done with an explicit sort? regards, tom lane
Re: [SQL] Porting from mysql to psql (UNIX_TIMESTAMP()?)
Zlatko Calusic <[EMAIL PROTECTED]> writes:
> Is there any similar functionality (returning unixish number of
> seconds since 1970 from the timestamp field) in PostgreSQL?

Sure.  You can use date_part, or cast to abstime and thence to integer:

regression=# select now();
          now
------------------------
 2000-09-09 12:55:50-04
(1 row)

regression=# select date_part('epoch',now());
 date_part
-----------
 968518563
(1 row)

regression=# select now()::abstime::int4;
 ?column?
-----------
 968518585
(1 row)

To go the other way (integer seconds to timestamp), use the cast method
in reverse:

regression=# select 968518585 :: int4 :: abstime :: timestamp;
        ?column?
------------------------
 2000-09-09 12:56:25-04
(1 row)

(there's probably a cleaner way to do this, but that works ...)

			regards, tom lane
Re: [SQL] Isolation and Concurrency in PG functions?
Roberto Mello <[EMAIL PROTECTED]> writes:
> I am porting some VERY big functions to PG for a data warehousing system
> (that is GPL'd BTW) and it seems that the results of one portion of the
> function (e.g. a create table or a series of inserts) are invisible to
> the other parts which obviously causes the function (and following
> functions) to fail and is completely driving me nuts because I see the
> results when I do the queries interactively.

?? How are you defining these functions, exactly?

In SQL or PLPGSQL functions, successive queries certainly do see each
others' results, for example

regression=# create table foot (f1 int);
CREATE
regression=# create function foo(int) returns int as
regression-# 'insert into foot values($1);
regression'# select count(*) from foot' language 'sql';
CREATE
regression=# select foo(1);
 foo
-----
   1
(1 row)

regression=# select foo(1);
 foo
-----
   2
(1 row)

If you are coding at the C level you may need to call
CommandCounterIncrement() between queries.

			regards, tom lane
Re: [SQL] Optimizing Multiply Joins ???
Meszaros Attila <[EMAIL PROTECTED]> writes: > Until we join no more than 10 tables the response time is below 0.2 s. > joining the 11th table comes with a dramatic change: response time > usually grows up to 5-7 s, That's interesting; apparently the regular optimizer is faster than the GEQO optimizer for your style of query. Try increasing the GEQO threshold (pg_option "geqo_rels") to more than 11. > I have examined the output of explain in all 3 cases, and I have > the feeling that the planner simply forgets the best solutions > in 2nd and 3rd case. The GEQO planner does not guarantee to find an optimal solution, it just does a random search through a limited number of possible solutions and uses the best one it happened across. > But I think (hope) we could have a solution, because all of our > complex joins are following foreign keys. Actually, as the 7.1 code currently stands, a query that uses explicit JOIN operators like yours does will always be implemented in exactly the given join order, with no searching. I haven't quite decided if that's a bug or a feature ... regards, tom lane
Re: [SQL] Optimizing Multiply Joins ???
Meszaros Attila <[EMAIL PROTECTED]> writes: > Changing this option in a psql session with 'set' has really helped > > But it seems, geqo_rels option is not parsed from the pg_option > file. My mistake. In 7.0 the GEQO options are set in a file "pg_geqo" in the $PGDATA directory. (There should be a prototype file "pg_geqo.sample" there for you to copy and edit.) Peter Eisentraut has cleaned up the option handling for 7.1 so that GEQO options are handled like all the others... regards, tom lane
Re: [SQL] Optimizing Multiply Joins ???
Meszaros Attila <[EMAIL PROTECTED]> writes:
>> Actually, as the 7.1 code currently stands, a query that uses explicit
>> JOIN operators like yours does will always be implemented in exactly
>> the given join order, with no searching.  I haven't quite decided if
>> that's a bug or a feature ...

> Do you mean a "linear binary tree" like this is executed?

>           /\
>          /\ f
>         /\ e
>        /\ d
>       /\ c
>      a  b

If that's what you write, yes.  You can parenthesize the JOIN clauses
any way you like, though, and the 7.1 planner will follow that
structure.  For example

SELECT ... FROM (a CROSS JOIN b) CROSS JOIN (c CROSS JOIN d) WHERE ...

is semantically the same as FROM a,b,c,d, but given this JOIN form the
planner will only consider plans that join a to b and join c to d and
finally join those results.  With the "FROM a,b,c,d" form it will do a
search through all possible join orders, same as before.  Also, you can
mix styles:

SELECT ... FROM a, b, c CROSS JOIN d WHERE ...

which forces c and d to be joined first, but lets the planner have its
head about what to do next.

> I'm not sure which version of standards allows to bracket joins,
> but I know sybase accepts the above form.

SQL92 says

         <joined table> ::=
                <cross join>
              | <qualified join>
              | <left paren> <joined table> <right paren>

so anything that claims to accept SQL92 had better allow parentheses
around JOIN expressions...

			regards, tom lane
Re: [SQL] Optimizing Multiply Joins ???
Meszaros Attila <[EMAIL PROTECTED]> writes: > Can I test this feature in the current snapshot? Sure. But see my message to pghackers on Tuesday for notes about what's not working yet in the JOIN support. regards, tom lane
Re: [SQL] Error with DISTINCT
Jerome Raupach <[EMAIL PROTECTED]> writes:
> testdb=#SELECT DISTINCT table_2.f1, table_1.f2, '2000-08-22' AS
> testdb-#date FROM table_1, table_2 WHERE table_1.f1 = table_2.f1;
> ERROR:  Unable to identify an ordering operator '<' for type 'unknown'
> Use an explicit ordering operator or modify the query
> Is it a bug ?

No, I don't think so.  The system has no way to intuit what datatype
you consider '2000-08-22' to be.

SELECT DISTINCT table_2.f1, table_1.f2, '2000-08-22'::date ...

would work.

			regards, tom lane
Re: [SQL] installing pgaccess
Keith Wong <[EMAIL PROTECTED]> writes: > Anybody know how to compile pgaccess from postgres source files? I think it's driven by configure --with-tcl. regards, tom lane
Re: [SQL] Does optimizer know about 'constant' expressions?
Philip Warner <[EMAIL PROTECTED]> writes: > It seems that the optimizer does not know about (or calculate values of) > constant expressions when determining strategy. Datetime calculations are not considered constant-foldable. See prior discussions, eg thread "Constant propagation and similar issues" in pghackers on 11-Sep. regards, tom lane
Re: [SQL] sum of agreggates in one SELECT?
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > Now I would like to sum() all results from the quantity column and > return it with one SELECT statement. Is that possible? You can't have aggregates of aggregates in a single SELECT structure; that's just not in the SQL execution model. The way around this is to write multiple levels of SELECT, using either selection from a grouped/aggregated view or subselect-in-FROM. Unfortunately Postgres doesn't have either of those features --- yet. They might be in 7.1 if I spend less time answering email and more time coding... regards, tom lane
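For reference, once subselect-in-FROM is available the two-level form would
look something like this (the orders/order_lines names are made up):

SELECT sum(line_total)
FROM (SELECT order_id, sum(quantity) AS line_total
      FROM order_lines
      GROUP BY order_id) AS per_order;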
Re: [SQL] Repeatable reads
"Michael Richards" <[EMAIL PROTECTED]> writes: > It appears that postgres 7.0 does not support repeatable read for > transaction isolation. Is this planned? If so, when? ?? Maybe you need to do SET TRANSACTION ISOLATION LEVEL? regards, tom lane
Re: [SQL] no ORDER BY in subselects?
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > Aren't ORDER BY clauses allowed in subselects? No. This is per SQL92... regards, tom lane
Re: [SQL] ERROR: replace_vars_with_subplan_refs (!?)
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > How should I interpret that error? > ERROR: replace_vars_with_subplan_refs: variable not in subplan target list Looks like a bug to me. I think this may already be fixed in current sources, but not sure. Could I trouble you for the relevant table declarations, so I can try the example without a lot of guessing? regards, tom lane
Re: [SQL] ERROR: replace_vars_with_subplan_refs (!?)
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > On Wed, Sep 20, 2000 at 10:43:59AM -0400, Tom Lane wrote: >> Louis-David Mitterrand <[EMAIL PROTECTED]> writes: >>>> How should I interpret that error? >>>> ERROR: replace_vars_with_subplan_refs: variable not in subplan target list >> >> Looks like a bug to me. I think this may already be fixed in current >> sources, but not sure. Could I trouble you for the relevant table >> declarations, so I can try the example without a lot of guessing? > Please find attached the full dump. OK, this is indeed fixed in current sources. I think you are running into the same problem you reported in June, namely that subselects appearing in the targetlist of an Append plan are misprocessed in 7.0.*. Append is mainly used for handling inherited queries, so it's the combination of inheritance and subselect in targetlist that's needed to trigger the bug. This probably also explains the other report you filed this morning; once the subselect is messed up, all sorts of things start to go wrong :-( I think you could back-patch the fix into 7.0.* just by dropping rev 1.64 of setrefs.c into the 7.0 source tree --- see http://www.postgresql.org/cgi/cvswebtest.cgi/pgsql/src/backend/optimizer/plan/setrefs.c for that version. regards, tom lane
Re: [SQL] C functions and int8?
Forest Wilkinson <[EMAIL PROTECTED]> writes:
> That's an int8 meaning "eight bit integer".  I want to work with an int8
> meaning "64 bit integer", as described in the docs:
> http://www.postgresql.org/users-lounge/docs/7.0/user/datatype.htm#AEN942
> So how do I return one of these suckers from a C function?

Emulate the code in src/backend/utils/adt/int8.c.  Currently this
involves palloc'ing an int8, setting it, and returning a pointer to it.
For instance, int8 addition is

int64 *
int8pl(int64 *val1, int64 *val2)
{
	int64	   *result = palloc(sizeof(int64));

	if ((!PointerIsValid(val1)) || (!PointerIsValid(val2)))
		return NULL;

	*result = *val1 + *val2;

	return result;
}

In 7.1 it'll be a lot cleaner (IMNSHO anyway ;-)):

Datum
int8pl(PG_FUNCTION_ARGS)
{
	int64		val1 = PG_GETARG_INT64(0);
	int64		val2 = PG_GETARG_INT64(1);

	PG_RETURN_INT64(val1 + val2);
}

which actually does about the same things under the hood, but you
don't have to sully your hands with 'em ...

			regards, tom lane
Re: [SQL] sql query not using indexes
Stephan Szabo <[EMAIL PROTECTED]> writes: >> Ok I agree with you on the real database there are 127,300 rows and there >> are certanly a great number of rows > 'AAA'. But, supose I make a query >> select * from table where code > 'AAA' limit 10. it will read the entire >> table only to give me the first 10 while in release 6.5 it will fetch the >> index for the first 10 in a very fast manner, indeed the 6.5 release >> resolves in 1 second while the 7.0 release resolves in 10-20 sec. > Hmm, I believe Tom Lane was doing alot of stuff with the optimizer and > limit but I don't remember if that was before or after the 7.0 release. > It might be worth trying on current sources to see if that goes back to > an index scan. No, it'll still do a seqscan. 6.5 was in fact too ready to use indexscans; the current code may have overcorrected a shade, but I think it's closer to reality than 6.5 was. As Hiroshi already commented, the difference in results suggests that the desired data is very nonuniformly scattered in the table. 7.0 computes cost estimates on the assumption that the target data is uniformly scattered. For a sufficiently nonselective WHERE condition (ie, one that the planner thinks will match a large fraction of the table's rows) it looks better to do a seqscan and pick up the matching rows than to follow the index pointers. Adding a LIMIT doesn't change this equation. I like Hiroshi's recommendation: add an ORDER BY to help favor the indexscan. regards, tom lane
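Concretely, the suggestion amounts to something like this (sketch; the table
name is illustrative and an index on code is assumed):

SELECT * FROM tab
WHERE code > 'AAA'
ORDER BY code
LIMIT 10;

The ORDER BY tells the planner that index order is directly useful, which
tilts the cost comparison back toward the indexscan.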
Re: [SQL] how to store a query, that results in a table
Paul Wehr <[EMAIL PROTECTED]> writes:
> create view complex_view as select blah, . , effective_date.date
> from tablea, tableb, tablec, effective_date
> where tablea.foo=tableb.foo
>
> and effective_date.date between tablec.start_date=tablec.end_date
> ;

> insert into effective_date values ('09/23/2000'); -- may "fail" if date is
> already in the table, but if it is, who cares?

> select *
> from complex_view
> where date='09/23/2000';

Er, why don't you just do

select * from complex_view
where '09/23/2000' between start_date and end_date;

I don't see what the effective_date table is buying ...

			regards, tom lane
Re: [SQL] missing "day(s)" in interval (was: Convert from Seconds-Since-Epoch to Timestamp)
Louis-David Mitterrand <[EMAIL PROTECTED]> writes:
> By the way, is it normal that the "day" word doesn't appear in the interval?

> auction=# select interval(now()::abstime::int4);
>          interval
> -----------------------------
>  30 years 9 mons 3 15:42:09
> (1 row)

Looks like a bug to me ... Thomas, do you agree?

Curiously, it appears that the interval input converter will accept
this with or without the 'days' keyword included.

			regards, tom lane
[SQL] Re: [HACKERS] RFC - change of behaviour of pg_get_userbyid & pg_get_viewdef?
Philip Warner <[EMAIL PROTECTED]> writes: > pg_get_viewdef > returns 'Not a view' when passed a non-existant or non-view table > it also signals errors when the underlying metadata can not be found. > The proposal is to return NULL in the above cases - in the final case, > probably also generate a NOTICE. I don't believe it's practical to trap errors and return a NULL for broken views. Moreover, I do not think it's a good idea to respond to client errors (invalid view name) the same as database problems (broken views). So, I agree with the part of the proposal that says to return NULL instead of 'Not a view' when there is no view by the given name, but I do not agree with trying to suppress errors due to metadata problems. regards, tom lane
Re: [SQL] SQL functions not locking properly?
Forest Wilkinson <[EMAIL PROTECTED]> writes:
> session2<< select nextid('myid');
> (session2 blocks until session1 completes its transaction)
> session1<< commit;
> session1> COMMIT
> (session2 resumes)
> session2> nextid
> session2> --------
> session2>      0
> session2> (1 row)
> What gives???  I expected the second call to nextid() to return 2!

Hmm.  If you do the same commands without wrapping them in an SQL
function, they operate as expected.  I'll bet there's some nasty
interaction between the FOR UPDATE support and the way that SQL
functions twiddle the current-command counter.  Sigh, another bug.

You might try it in plpgsql to see if that language has the same bug
(and please report back the results!).  If so, the only workaround
right now is not to use a function, or possibly to code it in C using
the SPI interface.

I'm up to my armpits in subselect-in-FROM right now, but will put this
on my to-do list.  Will look at it in a week or two if no one else has
fixed it before then...

			regards, tom lane
Re: [SQL] Subqueries in from clause?
Meszaros Attila <[EMAIL PROTECTED]> writes: > How far is the above subject from beeing implemented? I'm looking at it right now ... no promises yet, though. regards, tom lane
Re: [SQL] Date problem
>> Ok. Let's work. I posted a mail before explaining a strange
>> situation in my Postgresql: when I use date_part() function to split
>> day, month and year of a date type column, it returns one day before.
>> In other words, '2000-01-01' returns day: 31, month:12, year: 1999.

> No problem here. Or on PG-ACCESS.

There's a known (and fixed for 7.1) bug of this sort that occurs on
spring daylight-savings-time transition days: the date-to-timestamp
conversion function picks the wrong timezone offset to interpret
"midnight" in, yielding a timestamp for 11PM of the prior date.  On the
fall DST date, you get a timestamp for 1AM.  For example, in USA zones:

play=> select '2000-04-02'::date::timestamp;
        ?column?
------------------------
 2000-04-01 23:00:00-05
(1 row)

play=> select '2000-10-29'::date::timestamp;
        ?column?
------------------------
 2000-10-29 01:00:00-04
(1 row)

Not sure how this would apply to 2000-01-01, though.  What timezone are
you in, anyway?

			regards, tom lane
Re: [SQL] Q: spinlock on Alpha? (PG7.0.2)
"Emils Klotins" <[EMAIL PROTECTED]> writes: > RedHat Linux 6.2 on Alphaserver DS10 (egcs-1.1.2, glibc-2.1.3, > libstdc++-2.9.0). > Postgresql-7.0.2 source > Compiles and installs just fine. However, both the regular initdb and > parallel regression testing's initdb stage fails with a core file and > message: > FATAL: s_lock (2030d360) at spin.c:116, stuck spinlock. Aborting. > FATAL: s_lock (2030d360) at spin.c:116, stuck spinlock. Aborting. I was just fooling with PG on a RedHat Alpha box that DEC kindly loaned to the project. It looks like the above problem is caused by compiler optimization; although src/template/linux_alpha optimistically sets CFLAGS=-O2, I had to back off to -O1 to avoid that same spinlock complaint, and I couldn't get 7.0.* to pass regression tests with anything but -O0. (And even there, there were a bunch of failures in the datetime-related tests; it looks like our abstime datatype breaks rather badly on this platform.) I haven't had time yet to try current sources on that box, but I'm optimistic that the new function manager will solve a lot of portability problems on Alphas. Still, I don't understand why -O2 breaks spinlocks --- maybe egcs is misoptimizing around the inline assembly code of tas() ? regards, tom lane
Re: [SQL] Something I'd like to try...
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Can someone comment on this?
>>
>> I just noticed that postgres doesn't totally support
>> column aliases on UPDATE statements, for example

The SQL92 spec very clearly does not allow an alias on the target
table:

         13.10  <update statement: searched>

         <update statement: searched> ::=
              UPDATE <table name>
                SET <set clause list>
                [ WHERE <search condition> ]

While I'm willing to consider variations from the spec that add
significant functionality, this proposed addition adds no functionality
worth noticing.  It'd just be another way to trip yourself up when
moving across DBMSes.

			regards, tom lane
Re: [SQL] Something I'd like to try...
Barry Lind <[EMAIL PROTECTED]> writes:
> I have a table that stores a hierarchy.  Sometimes in an update I want
> to join back to the same table to get other information related to
> children or parent rows.  In Oracle I can do this using the alias, but
> in Postgresql I cannot.

> update table_foo f1
>    set column_b = 1
>  where exists (select column_a from table_foo f2
>                 where f2.parent_foo_id = f1.foo_id);

What's wrong with the spec-compliant

update table_foo
   set column_b = 1
 where exists (select column_a from table_foo f2
                where f2.parent_foo_id = table_foo.foo_id);

?

			regards, tom lane
Re: [SQL] OID Perfomance - Object-Relational databases
Josh Berkus <[EMAIL PROTECTED]> writes: > 1. Is there a performance loss on searches and joins when I use the OID > as a liniking field as opposed to a SERIAL column? > 2. Can I define my own index on the OIDs of a table? There is absolutely no magic about OIDs, except that the system insists on adding one to every row you store. In particular, they don't offer any magic fast way to find a tuple. If you want fast retrieval by OID in a particular table then you *MUST* define an index on the OID column, like so: CREATE TABLE foo ( ... ); CREATE INDEX foo_oid_index ON foo (oid); The performance of an index on OID will be indistinguishable from the performance of an index on an int4 or serial column. By and large I'd recommend using a serial column in preference to OIDs, though, for two reasons: 1. dump/restore is more practical that way (don't have to worry about saving/reloading OIDs). 2. counter overflow problems hit you only per-table, not per-installation. regards, tom lane
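For comparison, the serial route is just (a sketch; the table is made up):

CREATE TABLE foo (
    id    serial,    -- int4 with a sequence default and an implicit unique index
    data  text
);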
Re: [SQL] pesky select query
"Ingram, Bryan" <[EMAIL PROTECTED]> writes: > What I'd like to do is produce a query that will > list each app_code and the count of distinct states for that app_code on one > line. I don't care what the state_code is, I just need to know the number > of distinct state_codes per app_code. > So far I've tried variations of select distincts with counts() and group > by's on various fields. I briefly considered trying to do this with an > outer join, but I'm not sure it'd work. I'm still using 6.5.x so I'd have to > simulate it with a union all/subquery. For that reason I thought'd I'd ask > here first to see if there is a better way. Update to 7.0.* and do select app_code, count(distinct state_code) from tab group by app_code; regards, tom lane
Re: [SQL] OID Perfomance - Object-Relational databases
Michael Fork <[EMAIL PROTECTED]> writes: > Just a thought, but couldn't you create a sequence that is referenced by > multiple tables, I was going to suggest exactly that. It's not quite as simple as a "serial" column declaration, but you can split the use of an ID sequence generator over just as many tables as you need to have unique IDs across. That way you don't have a problem at dump/reload time, and you don't exhaust your ID space any faster than you must. regards, tom lane
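A sketch of that arrangement (the sequence and table names are illustrative):

CREATE SEQUENCE global_id_seq;

CREATE TABLE customers (
    id   int4 DEFAULT nextval('global_id_seq') PRIMARY KEY,
    name text
);

CREATE TABLE orders (
    id     int4 DEFAULT nextval('global_id_seq') PRIMARY KEY,
    amount numeric
);

-- ids never collide across the two tables, since both draw from one sequence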
Re: [SQL] OID Perfomance - Object-Relational databases
Josh Berkus <[EMAIL PROTECTED]> writes: > SO I'm concerned about the problems you mentioned above. pg_dump has a > -o option; are there problems with this? And how liekly are counter > overflow problems? The trouble with pg_dump -o is that after reload, the OID generator will be set to max(any OID in the dumped data). So a dump & reload doesn't do anything to postpone OID-wraparound Ragnarok. As for the likelihood of overflow, figure 4G / tuple creation rate for your installation (not database, but whole installation controlled by one postmaster). Unless your installation has just one active table, per-table sequence values look like a better bet. BTW, there *is* talk of providing an 8-byte-OID option, but I'm not holding my breath for it. regards, tom lane
Re: [SQL] Q: performance on some selects (7.0.2)?
"Emils Klotins" <[EMAIL PROTECTED]> writes: > Sort (cost=171.93..171.93 rows=1 width=56) > -> Nested Loop (cost=0.00..171.92 rows=1 width=56) > -> Nested Loop (cost=0.00..169.95 rows=1 width=36) > -> Seq Scan on articles_groups x (cost=0.00..12.10 > rows=1 width=4) > -> Seq Scan on articles a (cost=0.00..135.55 rows=636 > width=32) > -> Seq Scan on newscategories c (cost=0.00..1.43 rows=43 > width=20) Given the small estimated costs and row counts, I wonder whether you've ever vacuumed articles_groups and newscategories. The plan is not too unreasonable if the planner is right about how many rows will be matched in each of those tables --- but if you've never done a vacuum then the planner has no accurate stats to work with, so its guesses are likely way off. The plan is clearly handicapped by the lack of indexes on article.id and articles_groups.groupid, also. You seem to have indexes on all the wrong columns of articles :-( ... each of those indexes costs you on updates, but will it ever be useful in a query? regards, tom lane
Re: [SQL] Typecast a user-defined type?
Mark Volpe <[EMAIL PROTECTED]> writes: > someone give me a hint, or point me to the appropriate material on how to get > "value::text" to work with my new type? Write a function "text(yourtype) RETURNS text", and the parser will use it for type coercions from yourtype to text. In general any type coercion can be provided this way --- a function of a single argument, having the same name as its return type, represents a coercion path. regards, tom lane
Re: [SQL] Object syntax
"Josh Berkus" <[EMAIL PROTECTED]> writes: > What you seem to be telling us is that, other than > inheritance, PGSQL doesn't really support OODB functionality > at this time. Is that an accurate summary assessment? What's your definition of "OODB functionality"? That's the kind of term that can mean different things to different people... regards, tom lane
Re: [SQL] Granting of permissions on tables
"Saltsgaver, Scott" <[EMAIL PROTECTED]> writes: > Is this a bug or desired behavior? I would imagine since I owned the tables > and then granted permissions to another user, I wouldn't lose my > permissions. It's a bug, or at least a misfeature. As long as you haven't done any explicit grants or revokes, 7.0 uses an implicit access control list that grants all privileges to the owner and none to anyone else. However, the moment you do any explicit grant/revoke, that implicit ACL entry for the owner isn't used anymore. You have to explicitly grant rights to yourself again :-(. You don't need superuser help to do this, you just have to do GRANT ALL ON table TO yourself as the table owner. But it's stupid to have to do that when it's supposed to be the default condition. Fixed for 7.1. regards, tom lane
[SQL] Re: [HACKERS] Counting bool flags in a complex query
Quite awhile ago, Michael Richards <[EMAIL PROTECTED]> wrote:
> It looks like the order by is only being applied to the original select,
> not the unioned select. Some authority should check on it, but by thought
> it that a union does not necessarily maintain the order, so the entire
> select should be applied to the order.

Just FYI, I have committed code for 7.1 that allows ORDER BY to work
correctly for a UNION'd query.  A limitation is that you can only do
ordering on columns that are outputs of the UNION:

regression=# select q1 from int8_tbl union select q2 from int8_tbl order by 1;
        q1
-------------------
 -4567890123456789
               123
               456
  4567890123456789
(4 rows)

regression=# select q1 from int8_tbl union select q2 from int8_tbl order by int8_tbl.q1+1;
ERROR:  ORDER BY on a UNION/INTERSECT/EXCEPT result must be on one of the result columns

In the general case of an arbitrary ORDER BY expression, it's not clear
how to transpose it into each UNION source select anyway.  It could be
made to work for expressions using only the output columns, but since
ORDER BY expressions are not standard SQL I'm not in a big hurry to
make that happen...

			regards, tom lane
Re: [SQL] Granting of permissions on tables
"Saltsgaver, Scott" <[EMAIL PROTECTED]> writes: > After I ran into this condition, the first thing I tried was to grant > permissions back to myself. PostgreSQL shot me down with a permission > denied error. So I had to log is as the superuser and then grant > permissions to myself. Are you sure about that? What version are you running? I get play=> select version(); version -- PostgreSQL 7.0.2 on hppa2.0-hp-hpux10.20, compiled by gcc 2.95.2 (1 row) play=> select usename, usesuper from pg_user; usename | usesuper --+-- postgres | t tgl | f tree | f (3 rows) play=> select current_user; getpgusername --- tgl (1 row) play=> create table bar (f1 int); CREATE play=> select * from bar; f1 (0 rows) play=> grant all on bar to tree; CHANGE play=> select * from bar; ERROR: bar: Permission denied. play=> grant all on bar to tgl; CHANGE play=> select * from bar; f1 (0 rows) play=> AFAICT from both experiment and looking at the sources, a table owner is allowed to change the table's access permissions whether or not he's currently got any permissions granted to himself; ie, changing permissions is not a grantable/revokable right, it's just checked on the basis of who you are. regards, tom lane
Re: [SQL] Re: [HACKERS] Counting bool flags in a complex query
"Josh Berkus" <[EMAIL PROTECTED]> writes: >> Just FYI, I have committed code for 7.1 that allows ORDER BY to work >> correctly for a UNION'd query. A limitation is that you can only do >> ordering on columns that are outputs of the UNION: > As far as I know, that limitation is standard to all SQL > that supports UNION; the relational calculus (I'm told) is > impossible otherwise. It's not very reasonable to imagine ordering on arbitrary expressions; how would you interpret the expression in each sub-SELECT? But it's reasonable to imagine ordering on expressions that use only the output columns of the UNION-type query: SELECT q1, q2 FROM tbl1 UNION SELECT ... ORDER BY q1+q2; However, I didn't try to implement this yet. > So ... we keep hearing about all the fantastic fixes in 7.1. > When will a stable build show up? :-) How stable is stable? I'd say it's plenty stable enough for beta testing now, even though we're not putting out formal beta releases quite yet. You could grab a nightly snapshot off the FTP server if you want to try it. (Beware that you will most likely have to do another initdb before beta, so loading lots and lots of data into a snapshot installation is probably a waste of time.) regards, tom lane
Re: [SQL] Can I use subselect as a function parameter?
Philip Warner <[EMAIL PROTECTED]> writes: > The short answer is that subselect in CHECK is not currently supported, and > is not planned for 7.1. We should, however, try to make 7.1 deliver a more helpful error message ;-). I've put a note about it on my todo list. regards, tom lane
Re: [SQL] Standard syntax?
"Franz J Fortuny" <[EMAIL PROTECTED]> writes: > I have been using (with success) this SQL statement in PostgreSQL: > select col1, > case when col2 = true then > col3 > else > col4 > end as colw, > colM > where etc. > The above syntax, however, does not work for Interbase (6.01). CASE expressions are specified in SQL92, but they're labeled as an "intermediate SQL" feature rather than an "entry SQL" feature. So I'm not surprised that some other DBMSes don't have them. Entry SQL is a pretty impoverished subset (no VARCHAR type, to take a random example), so nearly everyone implements at least some intermediate- and full-SQL features. But exactly which ones is highly variable. regards, tom lane
Re: [SQL] Variable-length Types
Josh Berkus <[EMAIL PROTECTED]> writes: > I'm a bit confused on custom type declarations, actually. I'd like to > create a custom type for American phone numbers, such that: > ... > 1. Can I reference a custom function (phoneformat) in a type definition? Of course. The input and output converters for the new type would be the natural place to do the reformatting. You'd probably make the input converter do error checking, insertion of default area code, and reduction to a pure digit string, and then make the output converter insert the fluff data like parentheses and dashes. However, building a new type for this seems like overkill, because you'd also have to supply a set of functions and operators for the type. It would be a lot less work just to provide a normalization function interpret_phone_no(text) returns text which could be invoked explicitly, eg insert into tab values (..., interpret_phone_no('5551212'), ...) or implicitly in ON INSERT and ON UPDATE triggers for the table. > 3. What sort of trouble am I going to get into trying to pull data from > a custom type into an external interface (i.e. PHP4)? Good question --- the interface code might or might not have a sensible default behavior for types it doesn't recognize. regards, tom lane
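A rough sketch of such a normalization function in plpgsql (the default area
code, the 10-digit assumption, and the error handling are all arbitrary
choices, not anything from the original message):

create function interpret_phone_no(text) returns text as '
declare
    raw    alias for $1;
    digits text := '''';
    c      text;
    i      int4;
begin
    -- keep only the digit characters
    for i in 1 .. length(raw) loop
        c := substr(raw, i, 1);
        if c >= ''0'' and c <= ''9'' then
            digits := digits || c;
        end if;
    end loop;
    -- assume a local number and prepend a default area code
    if length(digits) = 7 then
        digits := ''212'' || digits;
    end if;
    if length(digits) <> 10 then
        raise exception ''bad phone number %'', raw;
    end if;
    return digits;
end;
' language 'plpgsql';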
Re: [SQL] problem with select where like ']'
hubert depesz lubaczewski <[EMAIL PROTECTED]> writes: > but i'm just wondering why like ']' doesn't work. What LOCALE setting are you running the postmaster in? ']' is not a special character as far as LIKE is concerned, but I suspect you may be seeing another variant of the problems that LIKE index optimization has with peculiar collation rules. You can find plenty of discussion of this in the mailing list archives :-( If there is an index on the field you are doing LIKE on, try dropping the index to see if that makes the problem go away. Another possibility is that you have been careless about always starting the postmaster with the same LOCALE setting, in which case the index may actually be corrupt (out of order) due to different records having been inserted with different ideas about what the sort ordering should be. In that case, dropping and recreating the index should help. regards, tom lane
Re: [SQL] problem with select where like ']'
Joseph Shraibman <[EMAIL PROTECTED]> writes: > WHAT mailing list archives? > They aren't linked to anywhere on www.postgresql.org that I can find. Hmm. My bookmark is http://www.postgresql.org/lists/mailing-list.html Dunno how to get there from the site toplevel... regards, tom lane
Re: [SQL] Function that returns a tuple.
Josh Berkus <[EMAIL PROTECTED]> writes:
>> How can I write a function in sql or pl-pgsql that returns a set of
>> tuples ?
> I've had the same question myself for the past month,

This is easy enough in terms of what you do in the function, but it's
not as useful as you might think because of the weird POSTQUEL-derived
syntax for calling such a function.

The key concept is that a table name is also a type name for the struct
type of its rows.  So for example,

create table tab1 (f1 int, f2 text);

create function find_tab(int) returns setof tab1 as
'select * from tab1 where f1 = $1' language 'sql';

insert into tab1 values (1,'one');
insert into tab1 values (1,'another one');
insert into tab1 values (2, 'two');

The problem is that struct types aren't really supported by the
expression evaluator; the only thing you can usefully do with them is
select out one column.  So this doesn't work very well:

select find_tab(1);
  ?column?
------------
 1074362792
 1074362792
(2 rows)

The obvious way to do the selecting of a column is

select find_tab(1).f2;
ERROR:  parser: parse error at or near "."

but for some reason the parser won't accept that.  (Perhaps this could
be fixed, I haven't looked into it.)  The only way to invoke such a
function at present is to apply it to a column from some other table
and write it as a POSTQUEL dot-function:

select tab2.fld1.find_tab.f2 from tab2;

which is the POSTQUEL notation for what a saner person would call

select find_tab(tab2.fld1).f2 from tab2;

Bizarre syntax aside, the real problem with this whole approach is that
if you've got a function returning tuple(s), you probably want to be
able to get at the tuple elements and do things with them, *without*
re-evaluating the function for each such use.  So I think what we
really want to do is to allow functions returning tuple sets to be
elements of a FROM clause:

select f1 + 1, f2 from find_tab(33);

This has been speculated about but no one's looked at what it would
take to make it work.

			regards, tom lane
Re: [SQL] substr
Jeff MacDonald <[EMAIL PROTECTED]> writes: > i noticed that substr behaves a bit different in pgsql than perl > ie select foo from table where substr(foo,1,1) = 'X'; > just wondering on the reasoning for this offset ? Larry Wall and the SQL92 authors didn't talk to each other... We are implementing SQL around here, not Perl, so we have to follow the SQL spec's definition of substr(). regards, tom lane
Re: [SQL] knowing which columns have beend UPDATEd inside a TRIGGER?
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > Is there a way to know which columns are being UPDATEd or INSERTEd from > inside a trigger, either in C or pl/pgsql? Huh? An INSERT always inserts all columns, by definition. Some of them might be null and/or equal to their default values, but they're all there. For an UPDATE, you could check to see whether old.col = new.col. This would miss the case where an UPDATE command is explicitly setting a column to the same value it already had; dunno if you care or not. regards, tom lane
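For the UPDATE case, a sketch of that old-versus-new comparison inside a
trigger (every table and column name here is made up):

create function log_price_change() returns opaque as '
begin
    -- note: this comparison is NULL (treated as false) when either value is NULL
    if old.price <> new.price then
        insert into price_audit (item_id, old_price, new_price, changed_at)
        values (old.id, old.price, new.price, now());
    end if;
    return new;
end;
' language 'plpgsql';

create trigger items_log_price before update on items
    for each row execute procedure log_price_change();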
Re: [SQL] Alternate Database Locations
"Brian C. Doyle" <[EMAIL PROTECTED]> writes: > I am still trying to find out how to get multiple postmasters running on > different ports at the same time. Does anyone have any clue how to do that? Uh, you just do it. Start each postmaster in a different data directory and with a different port number (-D and -P switches) and you're set. Unless you run out of shared memory or some such, in which case some tweaking of kernel parameters is called for... regards, tom lane
Re: [SQL] Surprising sequence scan when function call used
"Will Fitzgerald" <[EMAIL PROTECTED]> writes: > vdsq=> explain select * from login where login.login = lower('foo'); > Seq Scan on login (cost=0.00..1361.86 rows=609 width=62) 7.0 is a little bit stupid about cross-data-type comparisons (lower() yields text, not char(n)). This example works OK in current sources, but until 7.1 comes out you'll need to write something like where login.login = lower('foo')::char; Or change the login field to type text... regards, tom lane
[SQL] Re: [HACKERS] Conditional query plans.
"Michael Richards" <[EMAIL PROTECTED]> writes: > The second and probably less optimal plan would be to create a hash > of these 25 answers and do a sequential scan on users updating rows > where id is found in that hash. Given the presence of the "materialize" nodes, I don't think this query plan is quite as nonoptimal as you think, especially for ~25 rows out of the subplan. It's a linear search over a 25-entry table for each outer row, but so what? With hundreds or thousands of rows out of the subquery, it'd be nice to have a smarter table lookup method, agreed, but here it hardly matters. Something that's been on the todo list for a long time is to try to convert WHERE foo IN (SELECT ...) queries into some kind of join, instead of a subselect. With that approach we'd be able to use merge or hash strategies to match up inner and outer rows, which'd work a lot better when there are large numbers of rows involved. It might actually happen for 7.2... regards, tom lane
Re: [SQL] plperl
Jie Liang <[EMAIL PROTECTED]> writes: > su-2.04# make > "../../../src/Makefile.global", line 135: Need an operator > "../../../src/Makefile.global", line 139: Missing dependency operator Hmm, is "make" on your machine GNU make? If not try "gmake". regards, tom lane
Re: [SQL] Alternate Database Locations
"Brian C. Doyle" <[EMAIL PROTECTED]> writes: > createdb user1 -D 'PGDATA2' > it creates but it is putting it in /home/user/database > the first alt location i set up!!! Um, are you sure createdb is connecting to the right postmaster? When you use multiple postmasters you have to keep a close eye on clients' PGPORT settings ... If it is the right postmaster but wrong interpretation of PGDATA2, the odds are that you started that postmaster with the wrong value of PGDATA2. Remember that it's the postmaster's environment, not the client's, in which alternate-DB environment variables are expanded. (initlocation is an exception because it doesn't contact the postmaster) regards, tom lane
Re: [SQL] Alternate Database Locations
"Brian C. Doyle" <[EMAIL PROTECTED]> writes: > Okay I am still doing something wrong here > I set PGDATA2=/home/user1/database > export PGDATA2 > then I start postmaster > postmaster -d PGDATA2 -i -p 5431 & I think you are confusing the "alternate database directory" stuff with having a separate installation (primary directory). To set up a second postmaster, you first have to initdb its installation directory: initdb -D /home/user1/database then start the postmaster like so: postmaster -D /home/user1/database -p whatever ... Instead of -D (note upper case), you can alternatively set env variable PGDATA for these two programs. The stuff in the manual about alternate database directories is to allow a *single* postmaster to manage databases located somewhere other than underneath its installation directory. It's got nothing at all to do with starting additional postmasters. regards, tom lane
Re: [SQL] Query Problem
Josh Berkus <[EMAIL PROTECTED]> writes:
> Thus, I need to select:
> SELECT Data FROM Table A
> WHERE CaseID NOT IN (
>     SELECT CaseID FROM Table_B, Table_C
>     WHERE Table_B.GroupID = TableC.GroupID
>     AND TableC.AccountID = 11)
> The problem is, since Table_B and Table_C are large (10,000 records +)
> this exclusion query takes several *minutes* to run.

I don't think there is any good way to make this fast in current sources. A partial workaround is to use a temp table:

SELECT CaseID INTO TEMP TABLE mycaseids
    FROM Table_B, Table_C
    WHERE Table_B.GroupID = Table_C.GroupID
    AND Table_C.AccountID = 11;

CREATE INDEX mycaseids_idx ON mycaseids(caseid);  -- critical!

SELECT Data FROM TableA upper
    WHERE NOT EXISTS
        (select 1 from mycaseids where caseid = upper.caseid);

You'd need to check with EXPLAIN, but the EXISTS subplan should make use of the index to probe the temp table, so you get one index lookup per outer tuple. Better than a complete scan of the subselect outputs, which is what you'll get with the NOT IN style.

In 7.1 it'll be possible to do this with an outer join, which should be a lot quicker:

SELECT Data FROM TableA
    LEFT JOIN
        (SELECT CaseID FROM Table_B, Table_C
         WHERE Table_B.GroupID = Table_C.GroupID
         AND Table_C.AccountID = 11) subselect
    ON (tablea.caseid = subselect.caseid)
    WHERE subselect.caseid IS NULL;

ie, do the outer join and then discard the successfully-matched rows.

Further down the pike, we have plans to make the system smart enough to transform IN and NOT IN constructs into join-like queries automatically. Right now, though, they're best rewritten into something else when performance is important. regards, tom lane
Re: [SQL] HELP! ... pg_locale ???
Sandis Jerics <[EMAIL PROTECTED]> writes: > As result, now all queries, written inside the php code on multiply lines, > returns the following: > ERROR: parser: parse error at or near " " At a guess, you're having trouble with newline representations (Unix convention is \n only, DOS/Windows convention is \r\n, and then there's Macintosh which likes \r only). We've been working to change Postgres to accept all of these choices, but depending on which version of which tool you are using, you may need to toe the Unix line faithfully. The above message looks a lot like something spitting up on a stray \r. Dunno what your admin did to make the problem appear where you hadn't had it before... regards, tom lane
Re: [SQL] except on nulls?
Daniel Kalchev <[EMAIL PROTECTED]> writes:
> [ EXCEPT behaves oddly in the presence of NULLs ]

Yup, it does, because it's implemented like NOT IN, and NOT IN on a set containing nulls can never return 'true', only 'false' or 'unknown'. For example, 1 NOT IN (1,2,NULL) is clearly FALSE. But 3 NOT IN (1,2,NULL) is not clearly either true or false --- the null is effectively "I don't know what this value is", and so it's unknown whether 3 is equal to it or not. The SQL92 spec mandates that this NOT IN result be 'unknown' (NULL), which is then treated like 'false' by EXCEPT. Net result: nulls in EXCEPT's right-hand set cause its output set to be empty.

While this behavior is all according to spec for IN/NOT IN, it's *not* according to spec for EXCEPT, because the spec defines UNION/INTERSECT/EXCEPT in terms of a different concept, of rows being "distinct" or "not distinct". NULLs are distinct from non-NULLs and so a null row behaves the way you'd expect.

UNION/INTERSECT/EXCEPT are reimplemented for 7.1 in a way that behaves according to spec. There's no simple patch for 7.0.* unfortunately.

> (but I sort of think this worked before...)

Could be. Before 7.0, IN/NOT IN were not up to spec on NULL handling either, so EXCEPT probably worked differently in this case then.

> ERROR: Unable to identify an operator '<>' for types '_text' and '_text'
> You will have to retype this query using an explicit cast

There are no comparison operators for array types ... regards, tom lane
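A minimal demonstration of the behavior described above:

create temp table lhs (v int);
insert into lhs values (3);

create temp table rhs (v int);
insert into rhs values (1);
insert into rhs values (NULL);

-- 3 <> NULL is unknown, so NOT IN never becomes true: no rows out
select * from lhs where v not in (select v from rhs);

-- EXCEPT is implemented the same way in 7.0, so this is empty too,
-- even though per spec it should return the row containing 3
select v from lhs except select v from rhs;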
Re: Re(2): Re(2): [SQL] Large Object dump ?
"pgsql-sql" <[EMAIL PROTECTED]> writes: > Exported 1131 large objects. > NOTICE: LockReleaseAll: xid loop detected, giving up Pre-7.0 LockReleaseAll() contained an entirely arbitrary assumption that it should never encounter a situation where there were more than 1000 locks held by one transaction :-(. So it chokes when you access more than 1000 LOs in the same transaction. Dunno about your other issue, but clearly your application is failing to report whatever error message was returned when the transaction was aborted... regards, tom lane
Re: [SQL] Subqueries in from clause
Pierre Habraken <[EMAIL PROTECTED]> writes: > It looks like if subqueries in from clause are not supported by > PostgreSQL. Am I right ? If yes, are there any plans to provide this > feature soon ? Already there in current sources for 7.1 ... regards, tom lane
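A sketch of the 7.1 syntax, with hypothetical table and column names:

SELECT s.customer_id, s.total
FROM (SELECT customer_id, sum(amount) AS total
      FROM orders
      GROUP BY customer_id) AS s
WHERE s.total > 100;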
Re: [SQL] UNION in views
Roberto Mello <[EMAIL PROTECTED]> writes: > I was wondering if UNIONs in VIEWS will be supported too. Already there for 7.1. regards, tom lane
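A sketch of what that allows, with hypothetical table names:

CREATE VIEW all_contacts AS
    SELECT name, email FROM customers
    UNION
    SELECT name, email FROM suppliers;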
Re: [SQL] how to index a numeric(12,2) column?
Forest Wilkinson <[EMAIL PROTECTED]> writes: > I'd like to create an index on a column whose type is NUMERIC(12,2). > There appears to be no default operator class for the numeric type. Uh, what version are you using? Works fine for me in 7.0.2. regards, tom lane
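For reference, the straightforward form that works in 7.0.x (hypothetical table and column names):

create table prices (item_id int, amount numeric(12,2));
create index prices_amount_idx on prices (amount);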
Re: [SQL] how to index a numeric(12,2) column?
Forest Wilkinson <[EMAIL PROTECTED]> writes: > Sorry; I hit send before adding that information. I'm using postgres > 6.5.3 on i386 Red Hat 6.1. Time to update, then. > It allows me to create an index on a NUMERIC(12,2) field using the > int8_ops class, but I'm wondering if this might have some undesirable > hidden side effects. Yes, and they won't be too hidden either: it won't work :-( Current sources check for that sort of type mismatch, but 6.5 failed to do so. regards, tom lane
Re: [SQL] shared memory problem
<[EMAIL PROTECTED]> writes: > /usr/local/pgsql/bin/postmaster -B 1000 -o "-S 2000" -S -D > /usr/local/pgsql/data > and it says: > IpcMemoryCreate: shmget failed (Invalid argument) key=5432001, > size=8899584,permission=600 > This type of error is usually caused by an improper shared memory or System V > IPC semaphore configuration. For more information Most likely your kernel isn't set to allow shared memory blocks as large as 8M. regards, tom lane
Re: [SQL] alter table add column implementation undesirable?
Forest Wilkinson <[EMAIL PROTECTED]> writes: > A coworker told me that the postgres implementation of ALTER TABLE ADD > COLUMN creates an inefficient database. Dunno where he got that idea. There are some problems lurking in ADD COLUMN when used on a table with inheritance children --- the new column is added to the children too, as it should be, but in an unexpected column position, which causes trouble for pg_dump (a dump and reload will do the wrong thing). Perhaps what you heard is a garbled report of that issue. regards, tom lane
Re: [SQL] SELECT FROM t1 WHERE id IN (SELECT id FROM t2 UNION SELECT id FROM t3)
Christophe Boyanique <[EMAIL PROTECTED]> writes: > SELECT idProduct FROM products > WHERE idProduct IN > (SELECT id FROM orders WHERE ts>'2000-10-01 17:04:00' > UNION SELECT id FROM preorders WHERE ts>'2000-10-01 17:04:00') > ORDER by name; > and I've got a parse error near UNION or SELECT depending of the Current releases don't handle UNION in sub-selects. 7.1 will. In the meantime, consider doing the UNION into a temp table and then using that for the IN operator. regards, tom lane
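A sketch of the temp-table workaround, using the tables from the query above; two INSERTs stand in for the UNION, which is fine for IN purposes since duplicates don't matter there (the id column type is assumed to be int):

CREATE TEMP TABLE recent_ids (id int);

INSERT INTO recent_ids
    SELECT id FROM orders WHERE ts > '2000-10-01 17:04:00';
INSERT INTO recent_ids
    SELECT id FROM preorders WHERE ts > '2000-10-01 17:04:00';

SELECT idProduct FROM products
WHERE idProduct IN (SELECT id FROM recent_ids)
ORDER BY name;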
Re: [SQL] Re: [GENERAL] Problem with coalesce..
Stephan Szabo <[EMAIL PROTECTED]> writes: > There are still some contexts in which subqueries in expressions are > wierd. The example below appears to work in current sources however. >> SELECT COALESCE ((SELECT CURVALUE FROM TABLEKEYS WHERE TABLENAME = 'BUYER'),1) >> returns a "UNKNOWN expression type 501" AFAICT it works in 7.0.* as well. Are you sure this was 7.0.2, and not 6.5.something? regards, tom lane
Re: [SQL] shared memory problem
Michael Teter <[EMAIL PROTECTED]> writes: > Perhaps there's some issue with the shared memory not > being freed quickly enough by the kernel? Never heard of that before. There is an issue with time delays on release of port addresses --- sometimes you may get a failure to bind() to a port address that was just released by a previous incarnation of the postmaster. But I haven't heard of such a thing for shared memory or semaphores. regards, tom lane
Re: [SQL] Trigger cant find function
"Graham Vickrage" <[EMAIL PROTECTED]> writes: > ERROR: CreateTrigger: function get_prod_cost_price() does not exist > It is clear that it does exist so why does the trigger creation code not > find it? Because the code is looking for a function of no arguments, which yours is not. The method for dealing with arguments passed to triggers is, um, arcane --- I think you look in an implicitly declared array named TGARG, or something like that. You don't receive them as normal function arguments, anyway. regards, tom lane
Re: [SQL] Using Array-Values in subselect
Stephan Szabo <[EMAIL PROTECTED]> writes: > If you look in contrib of the source, there is a set of > array operators(functions) including element in set. > That'll probably do what you want (you don't do an in > actually, it'll be like ) The array stuff is pretty simplistic at the moment, and could be taken a lot further if there were someone who wanted to work on it. (hint hint) regards, tom lane
Re: [SQL] Index problem
Patrik Kudo <[EMAIL PROTECTED]> writes: > select * from elever where userid like 'walth%'; <-- Not OK! > Droping and recreating the index solves the problem, but that's not > good enough for me since the problem has reoccured on a different > machine with a different database. vacuum and vacuum analyze does not > report any problem with the table. > Both times the problem occured with userid's starting with a > "w". Postgres is running with a Swedish locale, and on FreeBSD this > means that "w" and "v" (among a number of other letter) are treated > equally when collating/sorting. I suppose this could be part of the > problem. > Is this a known problem? If so, is it fixed in 7.0.2? I've not seen > this happen on any of our postgres 7.0.2 systems, but as I can't even > reproduce it on the 6.5.3, that's no guarantee it's fixed... Hmm. I can think of two known issues that may be relevant. First off, there are indeed known problems with LIKE index optimization in non-ASCII locales. I believe that 7.0 fixes the problems for locales that just have a non-ASCII sort order of individual characters, but we've since realized that it can still do the wrong thing in locales where there are special rules for handling multi-character sequences. I don't know the Swedish rules offhand, so don't know if that's a problem for you. However, a LIKE optimization problem would not be fixed by dropping and recreating the index. This makes me think you are being bit by the other issue: if you compile with LOCALE support then it is critical that the postmaster *always* be started with the same LOCALE settings (at least the same LC_COLLATE value). If LOCALE varies that means your text sort ordering varies, which means that indexes on text columns may appear out-of-order, which causes index searches to miss entries they should have found. If you insert entries under different LOCALE settings then you may wind up with an index that is not consistent with *any* single LOCALE; the only cure for that is to drop and rebuild the index. Unfortunately, it's way too easy to get bit by this bug. The most common error is to start the postmaster by hand from a shell account whose LOCALE environment is different from what's supplied when the postmaster is started from a boot-time script. Best bet is to set the correct LOCALE values in a wrapper script that you use in both cases. We have talked about fixing this by saving the active LOCALE variables at initdb time, and having the postmaster adopt those values whenever it's started. But it hasn't got done yet. (Peter, would it be easy to make GUC handle this? There'd need to be some way to cause guc.c to do a putenv() ...) regards, tom lane
Re: [SQL] Index problem
Peter Eisentraut <[EMAIL PROTECTED]> writes: >> But it hasn't got done yet. (Peter, would it be >> easy to make GUC handle this? There'd need to be some way to cause >> guc.c to do a putenv() ...) > I thought pg_control was the place for this. Any idea how long a locale > string can legally be? Doesn't look too hard beyond that. That would work too, I guess. > Then again, perhaps this should be a per-database thing, much like the > encoding. The postmaster doesn't do any collating, so it should be safe > to have different locale in different backends. -- Unless someone puts an > index on pg_database.datname. :-) There already is an index on pg_group.groname, so I think we'd better be conservative and require the same locale across the whole installation. Hmm ... doesn't that mean that per-database encoding is a bogus concept too!? regards, tom lane
Re: [SQL] Why is this doing a seq scan?
Hmm. Have you VACUUM ANALYZED the tables? If so, what do you get from these queries:

select attname,attdisbursion,s.*
from pg_statistic s, pg_attribute a, pg_class c
where starelid = c.oid and attrelid = c.oid and staattnum = attnum
and relname = 'zips';

select attname,attdisbursion,s.*
from pg_statistic s, pg_attribute a, pg_class c
where starelid = c.oid and attrelid = c.oid and staattnum = attnum
and relname = 'atms';

Also it would be useful to see the full declarations of the tables and their indexes; I'm wondering what datatype the zip columns are, for example. regards, tom lane
Re: [SQL] Why is this doing a seq scan?
"Ingram, Bryan" <[EMAIL PROTECTED]> writes: >> Also it would be useful to see the full declarations of the tables >> and their indexes; I'm wondering what datatype the zip columns are, >> for example. > Table= atms > | zip | text | > Table= zips > | zip | varchar()| Ah, there's your problem --- the planner is not very smart about optimizing cross-datatype comparisons. Make these columns both text, or both varchar, and I'll bet you get a more intelligent plan. Current sources (7.1-to-be) are a little smarter than 7.0 about cross-data-type joins, but they still don't get this case right. I have a TODO item about that, but I dunno if it'll get done before 7.1 ... regards, tom lane
Re: [SQL] Why is this doing a seq scan?
I said: > Ah, there's your problem --- the planner is not very smart about > optimizing cross-datatype comparisons. Make these columns both text, > or both varchar, and I'll bet you get a more intelligent plan. After a little further thought, I realize that the planner may be handicapped by not realizing it can do a merge or hash join across datatypes, but even without that problem, this is not going to be a fast query. What you've got is select ... from atms x, zips y where x.zip = y.zip order by 1 limit 3; and there just isn't any way to process this without forming the full join product --- ie, the thing will sit there and form a join tuple for *every* valid combination of ATM and ZIP in your database, and then compute the distance to the target point for every one of those ATMs, and then sort that result, and finally give you only the top three rows. A smarter kind of join isn't going to help all that much; to make this fast, you need to be filtering using the really selective condition (distance to the target point) *before* you do the join. If you are sufficiently interested in the speed of this query to want to maintain a specialized index for it, I'd suggest looking at an r-tree index on the location data, and then using a WHERE condition on the r-tree index to prefilter the rows before you join. r-trees only work on boxes and polygons AFAICT --- what would work nicely is to store a "box" of very small dimensions surrounding the location of each ATM, index that column, and then use a WHERE test for overlap between that box column and a box surrounding the target point out to as far as you think is likely to be interesting. This gives you back a fairly small number of candidate ATMs for which you compute the exact distance to the target, sort, and limit. Not sure that you need to join to zips at all if you do it this way. regards, tom lane
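A rough sketch of the r-tree idea (the column names and coordinates are hypothetical, and the box size around the target point is made up):

-- store each ATM's location as a degenerate box so it can be r-tree indexed
ALTER TABLE atms ADD COLUMN loc box;
UPDATE atms SET loc = box(point(longitude, latitude),
                          point(longitude, latitude));
CREATE INDEX atms_loc_rtree ON atms USING rtree (loc);

-- prefilter with the overlap operator against a box around the target point,
-- then compute exact distances, sort, and limit on the few survivors
SELECT *
FROM atms
WHERE loc && box(point(-71.16, 42.26), point(-70.96, 42.46));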