Re: [GENERAL] utf8 errors
Hi,

FYI, I had the exact same problem earlier this week, while building a new Debian Stable (Wheezy) server, where the PostgreSQL version is 9.1.9-1, for a database containing accented characters. The steps were:

- pg_dump of a database encoded in LATIN9 on the old machine, which uses the fr_FR@euro locale
- use iconv to convert the dump file to UTF-8 on the new machine, where the locale is fr_FR.UTF-8
- edit the dump file, changing: SET client_encoding = 'LATIN9'; to: SET client_encoding = 'UTF-8';
- recreate the db on the new machine with the dump file

The database is used in a mod_perl application accessed via a browser, similar to the one in my sig. While accented characters coming from the Perl code were fine, all those coming out of the database would appear garbled (like: @Å ), and update queries were impossible, generating the same error message as the OP:

ERROR: invalid byte sequence for encoding UTF8: 0x9c

When using ssh, I had to manually change my client encoding to UTF-8 (my workstation uses LATIN9) for the data to appear correctly on the screen.

The machine had to go into production, so I finally gave up on UTF-8 and used LATIN9 as the locale.

I tried reproducing the problem with 9.1 on a stock Debian Squeeze machine using backports. On this machine, accented characters would appear garbled, but update queries were possible.

--
Salutations, Vincent Veyron
http://marica.fr/
Gestion des contrats, des contentieux juridiques et des sinistres d'assurance

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
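[Editorial note: the error above means a byte reached the server that is not valid UTF-8. As a hedged illustration (the byte value is taken from the error message, not from the poster's actual data), 0x9c cannot start a UTF-8 sequence, decodes to a printable letter under Windows-1252, and is a control character under ISO-8859-15 (LATIN9) — which is why a mislabeled client_encoding produces exactly this failure:]

```python
# Sketch: why a lone 0x9c byte triggers
# "invalid byte sequence for encoding UTF8: 0x9c".
raw = b"\x9c"

try:
    raw.decode("utf-8")
except UnicodeDecodeError as e:
    print("UTF-8 rejects it:", e.reason)   # invalid start byte

# The same byte is a printable letter in Windows-1252...
print(raw.decode("cp1252"))                # œ

# ...and a non-printable control character in ISO-8859-15 (LATIN9).
print(hex(ord(raw.decode("iso-8859-15"))))  # 0x9c
```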
Re: [GENERAL] utf8 errors
Hello

2013/6/28 Vincent Veyron vv.li...@wanadoo.fr:
> FYI, I had the exact same problem earlier this week, while building a new
> Debian Stable (Wheezy) server where the PostgreSQL version is 9.1.9-1, for
> a database containing accented characters.
> [...]
> The machine had to go into production, so I finally gave up on UTF-8 and
> used LATIN9 as the locale.

There is a similar issue in the Perl DBI driver with UTF8 strings - it does some artificial intelligence and tries to do some UTF transformations.

Pavel

> I tried reproducing the problem with 9.1 on a stock Debian Squeeze machine
> using backports. On this machine, accented characters would appear
> garbled, but update queries were possible.
Re: [GENERAL] utf8 errors
I forgot to mention that the machines use an amd64 processor.
Re: [GENERAL] utf8 errors
On Jun 28, 2013, at 8:10, Vincent Veyron vv.li...@wanadoo.fr wrote:
> Hi,
> FYI, I had the exact same problem earlier this week, while building a new
> Debian Stable (Wheezy) server where the PostgreSQL version is 9.1.9-1, for
> a database containing accented characters.

You probably had a rather different problem, as you are actually dealing with accented characters in your data. The OP was dealing with integers, which tend to not have accented characters in them.

I suggest that you create a separate thread for your issue, as they're probably not related.

Alban Hertroys
--
If you can't see the forest for the trees, cut the trees and you'll find there is no forest.
Re: [GENERAL] CASE Statement - Order of expression processing
> But in the following expression:
>
> template1=# SELECT CASE WHEN (SELECT 0)=0 THEN 0 ELSE 1/0 END;
> ERROR: division by zero
>
> (Just to be sure, SELECT (SELECT 0)=0; returns true.)
>
> It seems that when the CASE WHEN expression is a query, the evaluation
> order changes. According to the documentation, this behaviour is wrong.

Just to keep you updated: we have updated the documentation to alert people to this behaviour:
http://www.postgresql.org/docs/devel/static/functions-conditional.html#FUNCTIONS-CASE

There were considerations to change the behaviour, but that would mean that query execution time suffers in many cases. It was decided that the problem occurs only in rather artificial queries, and that it would not be worth changing the normally useful behaviour of constant folding during query planning.

Yours,
Laurenz Albe
Re: [GENERAL] utf8 errors
On Jun 26, 2013, at 16:58, Alban Hertroys haram...@gmail.com wrote:
> On 26 June 2013 11:03, Jiří Pavlovský jir...@gmail.com wrote:
>> On 26.6.2013 10:58, Albe Laurenz wrote:
>>> Jiří Pavlovský wrote:
>>>> I have a win32 application.
>>>> LOG: statement: INSERT INTO recipients (DealID, Contactid) VALUES (29009, 9387)
>>>> ERROR: invalid byte sequence for encoding UTF8: 0x9c
>>>> But the query is clean ascii and it doesn't even contain the mentioned
>>>> character. My database is in UNICODE, client encoding is utf8.
>>>
>>> Could you run the log message through od -c on a UNIX machine and post
>>> the result? Maybe there are some weird invisible bytes in there.
>>
>> Hi, I've already tried that before posting. See below for results.
>
> Is the message in the log the same as the message that postgres receives?

>> 000   I   N   S   E   R   T       I   N   T   O       r   e   c   i
>> 020   p   i   e   n   t   s       (   D   e   a   l   I   D   ,
>> 040       C   o   n   t   a   c   t   i   d   )
>> 060
>> 100   V   A   L   U   E   S

What bytes are in the above between the closing brace and VALUES? Is that really white-space? Did you perhaps intentionally put white-space in between there?

I just tested my theory that there may be garbage characters in your query string tripping the encoding error before a parse error:

postgres=> \i /usr/bin/at
psql:/usr/bin/at:15: ERROR: invalid byte sequence for encoding UTF8: 0x80

(/usr/bin/at is a UNIX command executable; for this case it works as binary data)

Alban Hertroys
--
If you can't see the forest for the trees, cut the trees and you'll find there is no forest.
[GENERAL] Re: [HACKERS] Frontend/backend protocol improvements proposal (request).
Dmitriy Igrishin wrote:
>> Since there can be only one unnamed prepared statement per session, there
>> should be only one such object per connection. It should not get
>> deallocated; maybe it could be private to the connection, which only
>> offers a parseUnnamed and executeUnnamed method.
> More precisely, there can be only one uniquely named prepared statement
> (named or unnamed) per session.
> Could you provide a signature of parseUnnamed and executeUnnamed please?
> I don't clearly understand this approach.

I'm just brainstorming. I'm thinking of something like

    void Connection::prepareUnnamed(const char *query,
                                    int nParams,
                                    const Oid *paramTypes);

and

    Result Connection::executeUnnamed(int nParams,
                                      const char * const *paramValues,
                                      const int *paramLengths,
                                      const int *paramFormats,
                                      int resultFormat);

But I'm not saying that this is the perfect solution.

>> If you really want your users to be able to set prepared statement names,
>> you'd have to warn them to be careful to avoid the problem of name
>> collision -- you'd hand the burden to them. That's of course also a
>> possible way, but I thought you wanted to avoid that.
> The mentioned burden is already handled by the backend, which throws a
> duplicate_prepared_statement (42P05) error.

I mean the problem that you create a prepared statement, then issue DEALLOCATE stmt_name, create a new prepared statement with the same name, and then use the first prepared statement.

>>> Prepared_statement* pst1 = connection->describe(name);
>>> Prepared_statement* pst2 = connection->describe(name);
>>> // pst2 points to the same remote object
>> That seems like bad design to me. I wouldn't allow different objects
>> pointing to the same prepared statement. What is the benefit? Shouldn't
>> the model represent reality?
> Well, then the C and C++ languages are badly designed too, because they
> allow the user to have as many pointers to the same object as they like
> (need) :-)

That's a different thing, because all these pointers contain the same value. So if pst1 and pst2 represent the same object, I'd like pst1 == pst2 to be true.

> Really, I don't see bad design here. Describing a prepared statement
> multiple times will result in allocating several independent descriptors.

... but for the same prepared statement.

> (As with, for example, performing two SELECTs, which will result in
> allocating several independent results by libpq.)

But those would be two different statements to PostgreSQL, even if the query strings are identical.

Mind you, I'm not saying that I am the person who decides what is good taste and what is not; I'm just sharing my sentiments.

>> Of course an error during DEALLOCATE should be ignored in that case.
>> It's hard to conceive of a case where deallocation fails, but the
>> connection is fine. And if the connection is closed, the statement will
>> be deallocated anyway.
> Why should this error be ignored? I believe that this should be decided by
> the user. As a library author I don't know (and cannot know) how to react
> to such errors in the end applications.

Again, I would say that that is a matter of taste. I just cannot think of a case where this would be important.

> Btw, by reason 2) there are no transaction RAII classes as in some other
> libraries, because the ROLLBACK command should be executed in the
> destructor and may throw.

I tend to believe that such errors could also be ignored. If ROLLBACK (or anything else) throws an error, the transaction will get rolled back anyway.

> Perhaps, but, again, I don't know how the user will prefer to react. So, I
> prefer just to throw and allow the user to decide.

Agreed, it's a matter of taste.

Yours,
Laurenz Albe
Re: [GENERAL] utf8 errors
On 28.6.2013 9:09, Alban Hertroys wrote:
> What bytes are in the above between the closing brace and VALUES? Is that
> really white-space? Did you perhaps intentionally put white-space in
> between there?
>
> I just tested my theory that there may be garbage characters in your query
> string tripping the encoding error before a parse error:
>
> postgres=> \i /usr/bin/at
> psql:/usr/bin/at:15: ERROR: invalid byte sequence for encoding UTF8: 0x80
>
> (/usr/bin/at is a UNIX command executable; for this case it works as
> binary data)

Hi,

I've already found the problem - as could have been expected, it was due to a bug in my code. And the offending query was not the one above; it was the next one, which did not get logged. So, actually, you are right.

Thanks,
--
Jiří Pavlovský
Re: [GENERAL] auto_explain FDW
David Greco wrote:
> In my development environment, I am using the auto_explain module to help
> debug queries the developers complain about being slow. I am also using
> oracle_fdw to perform queries against some Oracle servers. These queries
> are generally very slow, and the application allows them to be. The
> trouble is that it appears auto_explain kicks in on the query to try and
> explain it when it takes longer than the configured threshold. In this
> particular case, the Oracle user is very locked down and cannot actually
> perform an explain. Therefore an error gets raised to the client.
>
> I would suggest one of two things - either make the error that gets raised
> simply be a notice/warning, or preferably just add an option to
> auto_explain to enable/disable its operation on queries involving foreign
> servers.

I'm reluctant to change oracle_fdw to not throw an error if it doesn't have the permission to explain the query when you ask it to --- for one, what should it return in that case?

I'd say that the solution in this case would be to temporarily allow the user to query the necessary Oracle catalogs. If you debug in a production scenario, you'll have to make compromises (similar to granting the PLUSTRACE role if you want to use AUTOTRACE with SQL*Plus).

Yours,
Laurenz Albe
[GENERAL] Why are there no inequality scans for ctid?
Hi,

while working on removing bloat from some table, I had to use lots of logic simply because there are no (indexable) inequality scans for ctids.

Is it because no one thought about adding them, or are there some more fundamental issues?

I could imagine that things like:

select * from table where ctid @ '123'

could return all rows from the 123rd page, or I could:

select * from table where ctid >= '(123,0)' and ctid < '(124,0)';

Having such operators work would greatly improve bloat-reducing options.

Best regards,

depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
Re: [GENERAL] Why are there no inequality scans for ctid?
On Fri, Jun 28, 2013 at 2:07 PM, hubert depesz lubaczewski dep...@depesz.com wrote:
> while working on removing bloat from some table, I had to use lots of
> logic simply because there are no (indexable) inequality scans for ctids.
> [...]
> Having such operators work would greatly improve bloat-reducing options.

How would this be helpful for general use cases? Querying on tids on a specific page doesn't seem too useful for any other case than the one you mentioned above, and IMHO it seems to be the job of vacuum. I may be missing something here though.

Regards,
Atri
l'apprenant
Re: [GENERAL] Why are there no inequality scans for ctid?
On Fri, Jun 28, 2013 at 02:21:10PM +0530, Atri Sharma wrote:
> How would this be helpful for general use cases? Querying on tids on a
> specific page doesn't seem too useful for any other case than the one you
> mentioned above, and IMHO it seems to be the job of vacuum. I may be
> missing something here though.

Vacuum doesn't move rows around (as far as I can tell by running vacuum ~100 times on a bloated table).

As for the general case - sure, it's not really useful aside from bloat removal, but I think that bloat removal is important enough to warrant some help from Pg.

Best regards,

depesz
Re: [GENERAL] utf8 errors
On Friday, June 28, 2013 at 08:15 +0200, Pavel Stehule wrote:
> there is a similar issue in the Perl DBI driver with UTF8 strings - it
> does some artificial intelligence and tries to do some UTF
> transformations.

Hi Pavel,

I glanced over it, but dismissed it as the problem also appeared in my ssh sessions. I'll look again and open another thread if needed, as Alban suggested.

Thank you.

--
Salutations, Vincent Veyron
http://marica.fr/
[GENERAL] Postgres case insensitive searches
Hello,

Greetings. What is the best way of doing case insensitive searches in Postgres using LIKE?

- ILIKE does not use indexes.
- Function-based indexes are not as fast as required.
- CITEXT still takes 600 ms - 1 second on 2.2 million rows; it does not use an index.
- Collation index creation with POSIX does not really work.
- GIST/GIN indexes are faster when using LIKE, but not case insensitive.

Is there a better way of resolving these case insensitive searches with fast retrieval?

Thanks and Regards
Radha Krishna
Re: [GENERAL] auto_explain FDW
> -----Original Message-----
> From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
> Sent: Friday, June 28, 2013 4:05 AM
> To: David Greco; pgsql-general@postgresql.org
> Subject: RE: auto_explain FDW
>
> I'm reluctant to change oracle_fdw to not throw an error if it doesn't
> have the permission to explain the query when you ask it to --- for one,
> what should it return in that case?
>
> I'd say that the solution in this case would be to temporarily allow the
> user to query the necessary Oracle catalogs. If you debug in a production
> scenario, you'll have to make compromises (similar to granting the
> PLUSTRACE role if you want to use AUTOTRACE with SQL*Plus).

I'm inclined to agree. The problem with granting the user the permissions in Oracle is that we simply do not have control over this. The Oracle database is maintained by a separate company. I have requested the rights, but it is up to their DBAs' discretion.

Therefore, I'd suggest adding an option to auto_explain to enable or disable it for foreign servers, or perhaps a separate threshold setting for them.
[GENERAL] How to REMOVE an on delete cascade?
Hi. Hard to find this command in the documentation - how should I alter a table to REMOVE the on delete cascade constraint from a table? Thanks.
Re: [GENERAL] Postgres case insensitive searches
bhanu udaya wrote:
> What is the best way of doing case insensitive searches in postgres using
> Like.

       Table "laurenz.t"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 val    | text    | not null
Indexes:
    "t_pkey" PRIMARY KEY, btree (id)

CREATE INDEX t_val_ci_ind ON t (upper(val) text_pattern_ops);

ANALYZE t;

EXPLAIN SELECT id FROM t WHERE upper(val) LIKE 'AB%';

                               QUERY PLAN
--------------------------------------------------------------------------
 Index Scan using t_val_ci_ind on t  (cost=0.01..8.28 rows=1 width=4)
   Index Cond: ((upper(val) ~>=~ 'AB'::text) AND (upper(val) ~<~ 'AC'::text))
   Filter: (upper(val) ~~ 'AB%'::text)
(3 rows)

Yours,
Laurenz Albe
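[Editorial note: the ~>=~/~<~ conditions in the plan show how the planner rewrites the prefix pattern as a range scan over the uppercased text, which is what makes the btree index usable. A small sketch of the same idea in Python, with plain string comparison standing in for the byte-wise ordering that text_pattern_ops provides (the increment of the last character is simplified for illustration):]

```python
# Sketch: a prefix search "LIKE 'AB%'" on upper(val) is equivalent to the
# half-open range 'AB' <= upper(val) < 'AC', which a btree index can scan.
def matches_prefix_via_range(value: str, prefix: str) -> bool:
    upper = value.upper()
    lo = prefix
    hi = prefix[:-1] + chr(ord(prefix[-1]) + 1)  # 'AB' -> 'AC' (simplified)
    return lo <= upper < hi

print(matches_prefix_via_range("abacus", "AB"))  # True
print(matches_prefix_via_range("acorn", "AB"))   # False
```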
[GENERAL] AFTER triggers and constraints
Came across an interesting situation as part of our Oracle to PostgreSQL migration. In Oracle, it appears that immediate constraints are checked after the entire statement is run, including any AFTER ROW triggers. In Postgres, they are applied before the AFTER ROW triggers. In some of our AFTER ROW triggers, we had logic and deletes that would satisfy the constraint. In Postgres, these are causing problems.

Excerpt from ISO SQL-92, section 4.10.1: "If the constraint mode is immediate, then the constraint is effectively checked at the end of each SQL-statement."

Since the trigger is defined as AFTER ROW, versus AFTER STATEMENT, I believe the trigger should be considered part of the statement, and therefore the constraint should not be checked until after the row triggers have run. Any thoughts?

Here is a simplified example:

CREATE TABLE demo.parent (
    id integer PRIMARY KEY
);

CREATE TABLE demo.child (
    id integer PRIMARY KEY,
    parent_id integer
);

ALTER TABLE demo.child
    ADD CONSTRAINT parent_fk FOREIGN KEY (parent_id)
    REFERENCES demo.parent (id)
    ON DELETE NO ACTION
    DEFERRABLE INITIALLY IMMEDIATE;

CREATE OR REPLACE FUNCTION demo.parent_delete_trg_fnc() RETURNS trigger AS
$BODY$
BEGIN
    DELETE FROM demo.child WHERE parent_id = OLD.id;
    RETURN OLD;
END;
$BODY$ LANGUAGE 'plpgsql';

CREATE TRIGGER parent_ar_trg AFTER DELETE ON demo.parent
    FOR EACH ROW EXECUTE PROCEDURE demo.parent_delete_trg_fnc();

INSERT INTO demo.parent VALUES (1);
INSERT INTO demo.child VALUES (1, 1);

DELETE FROM demo.parent WHERE id = 1;

The last delete statement will throw a referential integrity error. In Oracle, same example, it does not, as the trigger deletes the child.
Re: [GENERAL] How to REMOVE an on delete cascade?
Phoenix Kiula wrote:
> Hi. Hard to find this command in the documentation - how should I alter a
> table to REMOVE the on delete cascade constraint from a table? Thanks.

Unless you want to mess with the catalogs directly, I believe that you have to create a new constraint and delete the old one, like:

       Table "laurenz.b"
 Column |  Type   | Modifiers
--------+---------+-----------
 b_id   | integer | not null
 a_id   | integer | not null
Indexes:
    "b_pkey" PRIMARY KEY, btree (b_id)
    "b_a_id_ind" btree (a_id)
Foreign-key constraints:
    "b_a_id_fkey" FOREIGN KEY (a_id) REFERENCES a(a_id) ON DELETE CASCADE

ALTER TABLE b ADD CONSTRAINT scratch FOREIGN KEY (a_id) REFERENCES a(a_id);
ALTER TABLE b DROP CONSTRAINT b_a_id_fkey;
ALTER TABLE b RENAME CONSTRAINT scratch TO b_a_id_fkey;

Yours,
Laurenz Albe
Re: [GENERAL] Why are there no inequality scans for ctid?
Re: hubert depesz lubaczewski 2013-06-28 20130628085246.ga25...@depesz.com
> Vacuum doesn't move rows around (as far as I can tell by running vacuum
> ~100 times on a bloated table).
>
> As for the general case - sure, it's not really useful aside from bloat
> removal, but I think that bloat removal is important enough to warrant
> some help from Pg.

It would also be useful for querying broken tables, where you

SELECT * FROM badtable WHERE ctid < '(123,0)';

to avoid dying on a bad block.

Christoph
--
c...@df7cb.de | http://www.df7cb.de/
Re: [GENERAL] AFTER triggers and constraints
On Fri, Jun 28, 2013 at 8:45 AM, David Greco david_gr...@harte-hanks.com wrote:
> The last delete statement will throw a referential integrity error. In
> Oracle, same example, it does not, as the trigger deletes the child.

Not sure about your real case, but why not just make the FK ON DELETE CASCADE and get rid of your trigger entirely?

Alternatively, what if you make your constraint INITIALLY DEFERRED?
Re: [GENERAL] AFTER triggers and constraints
> From: Vick Khera [mailto:vi...@khera.org]
> Sent: Friday, June 28, 2013 9:35 AM
>
> Not sure about your real case, but why not just make the FK on delete
> cascade and get rid of your trigger entirely? Alternatively, what if you
> make your constraint initially deferred?

The actual use case is a bit different and complicated. When the constraint is initially deferred, it works as expected, and that is how I will work around the issue.

But my point is: is this how it is SUPPOSED to work? It's not clear to me yet that it is. I would expect the statement to include the after row triggers (but not the after statement triggers).
Re: [GENERAL] AFTER triggers and constraints
David Greco david_gr...@harte-hanks.com writes:
> Since the trigger is defined as AFTER ROW, versus AFTER STATEMENT, I
> believe the trigger should be considered part of the statement, therefore
> the constraint should not be checked until after the row triggers have
> run. Any thoughts?

Not sure that this is terribly well documented, but you can arrange for your triggers to fire before the FK-enforcement triggers. Triggers on the same table and event type fire in alphabetical (in ASCII) order, so just choose a name that's before the FK triggers, which if memory serves have names starting with "RI_". So for instance

    CREATE TRIGGER "Parent_ar_trg" ...

would have worked the way you want.

regards, tom lane
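[Editorial note: the trick above relies on byte-wise (ASCII) ordering of trigger names, in which an uppercase letter sorts before "R" while the original all-lowercase name sorts after it. A quick sketch — the RI trigger name used here is a made-up stand-in; the real internal names carry numeric suffixes:]

```python
# ASCII ordering of trigger names decides firing order for the same event.
# "RI_ConstraintTrigger_a_12345" is a hypothetical stand-in for the
# internal FK-enforcement trigger name.
triggers = ["parent_ar_trg", "RI_ConstraintTrigger_a_12345", "Parent_ar_trg"]
print(sorted(triggers))
# The quoted, capitalized "Parent_ar_trg" ('P' = 0x50) sorts before
# "RI_..." ('R' = 0x52) and fires first; the lowercase "parent_ar_trg"
# ('p' = 0x70) sorts after it and fires last.
```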
Re: [GENERAL] AFTER triggers and constraints
> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Friday, June 28, 2013 10:10 AM
>
> Not sure that this is terribly well documented, but you can arrange for
> your triggers to fire before the FK-enforcement triggers. Triggers on the
> same table and event type fire in alphabetical (in ASCII) order, so just
> choose a name that's before the FK triggers, which if memory serves have
> names starting with "RI_". So for instance
>
>     CREATE TRIGGER "Parent_ar_trg" ...
>
> would have worked the way you want.

Thanks Tom. Yes, renaming the trigger does in fact work. Any thoughts on the theory of this behavior? I.e., is this ANSI compliant? Or should there be a mechanism in place that guarantees the FK-enforcement trigger runs after all others?
Re: [GENERAL] AFTER triggers and constraints
On 28 June 2013 16:09, David Greco david_gr...@harte-hanks.com wrote:
> Yes, renaming the trigger does in fact work. Any thoughts on the theory of
> this behavior? I.e., is this ANSI compliant? Or should there be a
> mechanism in place that guarantees the FK-enforcement trigger runs after
> all others?

Hmm, it doesn't conform to the SQL standard, which clarifies that with NOTE 31, p.66 in 4.17.2, though it doesn't specifically mention triggers. We claim conformance to the standard on this.

You can change the name of the constraint that implements the FKs in the DDL, but you can't change the names of the underlying triggers except by doing that directly, which doesn't seem that useful.

Should we have a parameter to define the precedence of RI checks? We could hoik out the triggers and execute them last, or leave them as they are, depending upon the setting.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [GENERAL] AFTER triggers and constraints
Simon Riggs si...@2ndquadrant.com writes:
> We claim conformance to the standard on this.

Not really. The fact that we do RI actions via triggers is already not what the spec envisions. As an example, it's well known that you can subvert RI actions entirely by installing triggers on the target table that make the RI actions into no-ops. It would be difficult to justify that behavior by reference to the standard, but we leave it like that because there are effects you really couldn't get if RI actions were somehow lower-level than triggers. (Simple example: if you have a business rule that updates on a table should update a last-modified timestamp column, you might wish that updates caused by an ON UPDATE CASCADE action did that too.)

> Should we have a parameter to define precedence of RI checks?

That seems like a recipe for breaking things. Apps already have the ability to control whether their triggers fire before or after the RI triggers; changing the rule for trigger firing order is going to break anybody who's depending on that.

I'm inclined to leave well enough alone here --- especially given that, AFAIR, this is the first complaint of this sort in the fifteen years or so that PG's RI actions have worked this way.

regards, tom lane
[GENERAL] Cleaning up a text import
Hi,

I imported some text using the Quantum GIS dxf2postgis tool, and it somehow became distorted through the import. What should have been imported was "TK-208". What I got was "%%UTK-208%%U". Perhaps I did something wrong while using dxf2postgis?

Otherwise, I can trim the text using:

select trim(both '% U' from '%%UTK-208%%U')

However, I would need to know what it is that needs to be trimmed from future imports, which isn't always possible. I would appreciate any suggestions on how to resolve this.

Many thanks in advance.

Bob
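[Editorial note: %%U is the AutoCAD/DXF inline formatting code that toggles underlining, so the visible text is intact and the codes can be stripped mechanically rather than guessed per import. A hedged sketch in Python, assuming only %%-letter codes appear — extend the pattern if other DXF codes show up:]

```python
import re

# DXF text may embed %%-formatting codes (e.g. %%U toggles underline).
# Strip any "%%" followed by a letter, leaving the visible text.
def strip_dxf_codes(text: str) -> str:
    return re.sub(r"%%[A-Za-z]", "", text)

print(strip_dxf_codes("%%UTK-208%%U"))  # TK-208
```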
Re: [GENERAL] AFTER triggers and constraints
On 28 June 2013 17:17, Tom Lane t...@sss.pgh.pa.us wrote:
> Not really. The fact that we do RI actions via triggers is already not
> what the spec envisions. [...]

I'm certainly happy with the way our RI works, for those reasons and others. This was just a matter of altering the precedence, since applications written to the standard won't work right - not about altering the level at which RI acts.

>> Should we have a parameter to define precedence of RI checks?
>
> That seems like a recipe for breaking things. Apps already have the
> ability to control whether their triggers fire before or after the RI
> triggers; changing the rule for trigger firing order is going to break
> anybody who's depending on that.
>
> I'm inclined to leave well enough alone here --- especially given that,
> AFAIR, this is the first complaint of this sort in the fifteen years or so
> that PG's RI actions have worked this way.

It won't break anything, because it would be a parameter, not a change in default behaviour. If you're completely set against this then I'll add a note to our conformance statement.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
[GENERAL] Application locking
We want to make sure no two examiners are working on the same case at the same time, where the cases are found by searching on certain criteria with limit 1 to get the next case. A naive approach would be (in a stored procedure):

    next_case_id := null;

    select id into next_case_id
      from cases c
     where unfinished = true
       and not exists (select 1 from table_lock
                        where table_name = 'case'
                          and row_id = c.id)
     limit 1;

    if found then
        insert into table_lock (table_name, row_id)
        values ('case', next_case_id);
    end if;

    return next_case_id;

I suspect it would be possible for two users to get the same case locked that way. Yes?

If so, would adding for update to the initial select prevent a second caller to block on their select until the first caller had written out the lock, effectively preventing two callers from locking the same case?

If not, can we do better by bundling it all into one statement?

    with nc as (
        select c.id clm_id
          from cases c
         where unfinished = true
           and not exists (select 1 from table_lock
                            where table_name = 'case'
                              and row_id = c.id)
         limit 1
    ), ic as (
        insert into rdf (iasid, s, p, oint)
        select 42, user_id, 'started-editing', clm_id from nc
        returning oint locked
    )
    select locked from ic limit 1 into locked_id;
    return locked_id;

If I am all wet, is there a reliable way to achieve this?

Thx, kt

--
Kenneth Tilton
Director of Software Development
MCNA Dental Plans
200 West Cypress Creek Road, Suite 500
Fort Lauderdale, FL 33309
954-730-7131 X181 (Office) / 954-628-3347 (Fax) / 1-800-494-6262 X181 (Toll Free)
ktil...@mcna.net glip...@mcna.net (Email)
www.mcna.net (Website)

CONFIDENTIALITY NOTICE: This electronic mail may contain information that is privileged, confidential, and/or otherwise protected from disclosure to anyone other than its intended recipient(s). Any dissemination or use of this electronic mail or its contents by persons other than the intended recipient(s) is strictly prohibited.
If you have received this communication in error, please notify the sender immediately by reply e-mail so that we may correct our internal records. Please then delete the original message. Thank you.
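The poster's suspicion is right: the check and the insert are two separate steps, so two sessions can both pass the check before either inserts. The following is a toy Python model, not PostgreSQL itself: the set stands in for the table_lock table, and the mutex stands in for the row lock that SELECT ... FOR UPDATE would hold until commit. All names here are hypothetical.

```python
import threading

table_lock = set()            # ("case", row_id) pairs already claimed
case_mutex = threading.Lock() # stands in for the FOR UPDATE row lock

def naive_check(case_id):
    # Step 1 of the naive procedure: does the case look unclaimed?
    return ("case", case_id) not in table_lock

def naive_insert(case_id):
    # Step 2: claim it. Nothing stops two sessions from both doing
    # step 1 before either does step 2.
    table_lock.add(("case", case_id))

# Interleave two sessions by hand to expose the race:
a_free = naive_check(7)   # session A: case 7 looks free
b_free = naive_check(7)   # session B: A hasn't inserted yet, so still free
naive_insert(7)           # A claims it
naive_insert(7)           # B claims it too -- double assignment
race_happened = a_free and b_free

def locked_claim(case_id):
    # Holding the lock across check+insert (as FOR UPDATE holds the row
    # lock until the transaction ends) makes the pair atomic.
    with case_mutex:
        if ("case", case_id) not in table_lock:
            table_lock.add(("case", case_id))
            return True
        return False

first = locked_claim(8)    # claims case 8
second = locked_claim(8)   # sees the existing claim, backs off
print(race_happened, first, second)
```

The analogue in the database is to make the second caller's check wait until the first caller's transaction (which wrote the table_lock row) commits, so the second check sees the claim.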
Re: [GENERAL] Application locking
Sorry, big typo below:

On Fri, Jun 28, 2013 at 1:16 PM, Kenneth Tilton ktil...@mcna.net wrote:

> We want to make sure no two examiners are working on the same case at
> the same time [...] I suspect it would be possible for two users to
> get the same case locked that way. Yes?
>
> If so, would adding for update to the initial select prevent a second
> caller to block on their select until the first caller had written out
> the lock, effectively preventing two callers from locking the same
> case?

Change "prevent" to "cause":

If so, would adding for update to the initial select cause a second caller to block on their select until the first caller had written out the lock, effectively preventing two callers from locking the same case?

-kt
Re: [GENERAL] How to REMOVE an on delete cascade?
You can do all that in a single SQL command:

    ALTER TABLE b
        DROP CONSTRAINT b_a_id_fkey,
        ADD CONSTRAINT b_a_id FOREIGN KEY (a_id) REFERENCES a(a_id);

On Fri, Jun 28, 2013 at 5:55 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:

> Phoenix Kiula wrote:
>> Hi. Hard to find this command in the documentation - how should I
>> alter a table to REMOVE the on delete cascade constraint from a
>> table? Thanks.
>
> Unless you want to mess with the catalogs directly, I believe that you
> have to create a new constraint and delete the old one, like:
>
>             Table laurenz.b
>      Column |  Type   | Modifiers
>     --------+---------+-----------
>      b_id   | integer | not null
>      a_id   | integer | not null
>     Indexes:
>         "b_pkey" PRIMARY KEY, btree (b_id)
>         "b_a_id_ind" btree (a_id)
>     Foreign-key constraints:
>         "b_a_id_fkey" FOREIGN KEY (a_id) REFERENCES a(a_id) ON DELETE CASCADE
>
>     ALTER TABLE b ADD CONSTRAINT scratch FOREIGN KEY (a_id) REFERENCES a(a_id);
>     ALTER TABLE b DROP CONSTRAINT b_a_id_fkey;
>     ALTER TABLE b RENAME CONSTRAINT scratch TO b_a_id_fkey;
>
> Yours,
> Laurenz Albe

--
Regards,
Richard Broersma Jr.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres case insensitive searches
Thanks. But I do not want to convert into upper case and show the result. For example, if I have records as below:

    id  type
    1.  abcd
    2.  Abcdef
    3.  ABcdefg
    4.  aaadf

the query

    select * from table where type like 'ab%';

should report the first three records above. Is there a way the database itself can be made case-insensitive with the UTF8 character set? I tried with character type collation POSIX, but it did not really help.

Thanks and Regards
Radha Krishna

From: laurenz.a...@wien.gv.at
To: udayabhanu1...@hotmail.com; pgsql-general@postgresql.org
Subject: RE: Postgres case insensitive searches
Date: Fri, 28 Jun 2013 12:32:00 +0000

> bhanu udaya wrote:
>> What is the best way of doing case insensitive searches in postgres
>> using Like.
>
>          Table laurenz.t
>      Column |  Type   | Modifiers
>     --------+---------+-----------
>      id     | integer | not null
>      val    | text    | not null
>     Indexes:
>         "t_pkey" PRIMARY KEY, btree (id)
>
>     CREATE INDEX t_val_ci_ind ON t (upper(val) text_pattern_ops);
>     ANALYZE t;
>
>     EXPLAIN SELECT id FROM t WHERE upper(val) LIKE 'AB%';
>
>                                   QUERY PLAN
>     --------------------------------------------------------------------------
>      Index Scan using t_val_ci_ind on t  (cost=0.01..8.28 rows=1 width=4)
>        Index Cond: ((upper(val) ~>=~ 'AB'::text) AND (upper(val) ~<~ 'AC'::text))
>        Filter: (upper(val) ~~ 'AB%'::text)
>     (3 rows)
>
> Yours,
> Laurenz Albe
Re: [GENERAL] Postgres case insensitive searches
On 6/28/2013 6:59 PM, bhanu udaya wrote:

> select * from table where type like 'ab%'. It should get all above 3
> records. Is there a way the database itself can be made
> case-insensitive with UTF8 character set. I tried with character type
> collation POSIX, but it did not really help.

use ILIKE

--
john r pierce                                      37N 122W
somewhere on the middle of the left coast
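Outside the database, the behaviour the poster is after (and what ILIKE 'ab%' provides) amounts to case-folding both sides before the prefix comparison. A minimal Python sketch over the sample rows from the question, purely to illustrate the semantics:

```python
rows = ["abcd", "Abcdef", "ABcdefg", "aaadf"]

def ilike_prefix(values, prefix):
    # Fold both sides to one case before comparing, which is what
    # ILIKE 'ab%' (or WHERE upper(val) LIKE 'AB%') effectively does.
    p = prefix.upper()
    return [v for v in values if v.upper().startswith(p)]

print(ilike_prefix(rows, "ab"))  # → ['abcd', 'Abcdef', 'ABcdefg']
```

Note that a bare ILIKE cannot use an ordinary btree index; the upper(val) expression index with text_pattern_ops shown in Laurenz's reply is what makes the case-folded prefix search indexable.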
Re: [GENERAL] Cleaning up a text import
On 06/28/2013 09:36 AM, Bob Pawley wrote:

> Hi
>
> I imported some text using the Quantum GIS dxf2postgis tool, and it
> somehow became distorted during the import. What should have been
> imported was TK-208. What I got was %%UTK-208%%U. Perhaps I did
> something wrong while using dxf2postgis?
>
> Otherwise, I can trim the text using:
>
>     select trim(both '%U' from '%%UTK-208%%U');
>
> However, I would need to know what it is that needs to be trimmed from
> future imports, which isn't always possible. I would appreciate any
> suggestions on how to resolve this.

Have not used dxf2postgis, but at a guess the %%U markers are used to denote Unicode? Maybe look in the dxf2postgis docs to see what it has to say about encoding?

> Many thanks in advance.
>
> Bob

--
Adrian Klaver
adrian.kla...@gmail.com
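One further possibility, offered as an assumption rather than a confirmed diagnosis: DXF text entities can carry AutoCAD-style %%<letter> inline format toggles (e.g. %%u for underline), which would explain a matched %%U pair surviving the import. If so, a targeted substitution is safer than trim(), which only strips characters from the string ends and would also eat a legitimate leading or trailing U or %. A Python sketch of the idea (a regexp_replace() call in SQL could do the same):

```python
import re

def strip_dxf_codes(text):
    # Remove assumed AutoCAD-style %%<letter> inline format toggles
    # anywhere in the string, not just at the ends.
    return re.sub(r'%%[A-Za-z]', '', text)

print(strip_dxf_codes('%%UTK-208%%U'))  # → 'TK-208'
```

Unlike the trim() approach, this needs no advance knowledge of which characters surround the payload, only the %%<letter> pattern itself.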