[GENERAL] Convert string to UNICODE: migration from 7.4 to 9.1
Hi,

I'm migrating a PG 7.4 database (encoded in LATIN1) to PG 9.1. The upgrade is just fine, except for a problem with a conversion to UNICODE for which I have not been able to find a solution yet: I have a function under PG 7.4 that returns an XML structure, with the content encoded in UNICODE. The function looks like:

CREATE OR REPLACE FUNCTION get_xml() RETURNS text AS $$
DECLARE
    output text;
BEGIN
    -- We fill the output variable with xml
    ...
    -- We return it in unicode
    RETURN convert(output, 'LATIN1', 'UTF8');
END;
$$ LANGUAGE 'plpgsql';

After migrating to PG 9.1, I noticed that convert now requires a bytea, and not a text. I tried different things:

- Change the output variable to a bytea
- Use RETURN convert(convert_to(output, 'LATIN1'), 'LATIN1', 'UTF8');
- Encode the database in UTF8 instead of LATIN1

... but no output is similar to what I had under PG 7.4. More precisely, under PG 7.4 I had something like (notice the name Hélène converted into Hélène):

<?xml version="1.0" encoding="UTF-8"?>
<DispatchAuftrag>
  <Versicherungsnehmer>
    <Name>Martelli</Name>
    <Vorname>Hélène</Vorname>
    <Strasse>rue des Comptes</Strasse>
    <Land>Suisse</Land>
    <PLZ>123456</PLZ>
    <Ort>Fribourg</Ort>
    <Email>.</Email>
    <TelMobil>.</TelMobil>
    <TelPrivat>.</TelPrivat>
    <TelGeschaeft>.</TelGeschaeft>
    <Fax>.</Fax>
  </Versicherungsnehmer>
</DispatchAuftrag>

And now I get something like:

<?xml version="1.0" encoding="UTF-8"?>\012<DispatchAuftrag>\012 </Versicherungsnehmer>\012 <Geschaedigter>\012<Name>Etat du Valais</Name>\012<Vorname></Vorname>\012<Strasse>Indivis / Centre entretien Autoroute</Strasse>\012<Land>Suisse</Land>\012 <PLZ>1906</PLZ>\012<Ort>Charrat</Ort>\012<Email></Email>\012 <TelMobil></TelMobil>\012<TelPrivat>027 747 61 00</TelPrivat>\012 <TelGeschaeft>.</TelGeschaeft>\012<Fax></Fax>\012 </Geschaedigter>\012 <Schadendaten>\012<SchadenDatum>2005-01-23</SchadenDatum>\012 <SchadenNr>JR/41123-208/JPS</SchadenNr>\012 <GeschaetzteSchadenhoehe></GeschaetzteSchadenhoehe>\012 <SchadenAmFz>Dommages aux installations routi\303\250res</SchadenAmFz>\012 <Bemerkung></Bemerkung>\012 </Schadendaten>\012 <Fahrzeugstandort>\012 <Name></Name>\012<Vorname></Vorname>\012<Strasse></Strasse>\012 <Land></Land>\012<PLZ></PLZ>\012<Ort></Ort>\012 <Telefon></Telefon>\012<Fax></Fax>\012<Email></Email>\012 </Fahrzeugstandort>\012</DispatchAuftrag>\012

Newlines don't seem to be handled properly, and I'm unable to find out how to change that. UTF8 encoding is not good either. Any idea how to correct that?

Thanks!

Philippe

--
Attik System          web: http://www.attiksystem.ch
Philippe Lang         phone: +41 26 422 13 75
rte de la Fonderie 2  gsm: +41 79 351 49 94
1700 Fribourg         twitter: @philippelang
                      pgp: http://keyserver.pgp.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
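(For what it's worth, a sketch of one way to adapt such a function on 9.1, under the assumption that the goal is simply to hand UTF-8 data to the client: in 9.1 a text value always carries the database encoding, and convert_to() produces the raw bytes. Function and element names here are illustrative.)

```sql
-- Sketch: return the XML as UTF-8 bytes explicitly (bytea), since in
-- 9.1 "text" is always in the database encoding and convert() now
-- works on bytea only.
CREATE OR REPLACE FUNCTION get_xml_utf8() RETURNS bytea AS $$
DECLARE
    output text;
BEGIN
    output := '<?xml version="1.0" encoding="UTF-8"?><Vorname>Hélène</Vorname>';
    RETURN convert_to(output, 'UTF8');   -- text -> bytea encoded as UTF-8
END;
$$ LANGUAGE plpgsql;
```

Alternatively, keeping the function as text and issuing SET client_encoding = 'UTF8' in the client lets the server do the re-encoding transparently.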
[GENERAL] Non-reentrant plperlu function concurrent access
Hi,

I have a non-reentrant plperlu function, which does no database modification. It basically stores input data into a file, calls a unix shell command, and reads the result back from another file.

I don't really care about database isolation here, phantom reads and such. It is not likely to be a problem. What could be a problem is if another call to this function is fired while one is already running.

In this specific case, I could solve the problem by generating random input and output filenames, but I would prefer a more general solution, like using some sort of mutex for the function. What is the best way to do that under PostgreSQL? Although not designed for this (if I understand correctly), would a serializable isolation level help here?

Best regards!
Re: [GENERAL] Non-reentrant plperlu function concurrent access
> [...] In this specific case, I could solve the problem by generating random input and output filenames, but I would prefer a more general solution, like using some sort of mutex for the function. What is the best way to do that under PostgreSQL? [...]

I answer to my own post, sorry... Maybe the PostgreSQL functions pg_try_advisory_lock_shared and pg_advisory_unlock_shared are the solution?
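For the mutex use case described above, the exclusive (non-shared) advisory lock functions are the usual fit, since shared locks do not exclude each other. A minimal sketch, with an arbitrary lock key and illustrative names:

```sql
-- Sketch: serialize all calls with a session-level exclusive advisory
-- lock. 12345 is an arbitrary application-chosen key.
CREATE OR REPLACE FUNCTION run_exclusive() RETURNS void AS $$
BEGIN
    PERFORM pg_advisory_lock(12345);   -- blocks until no other session holds it
    -- ... write the input file, call the shell command, read the result ...
    PERFORM pg_advisory_unlock(12345);
END;
$$ LANGUAGE plpgsql;
```

One caveat: session-level advisory locks are not released on error, only on explicit unlock or disconnect, so an exception inside the body leaves the lock held by the session.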
[GENERAL] Sanity check on view
Hi,

I've got a view which is supposed to be called with a WHERE clause, like:

SELECT * FROM data_view WHERE od_id = '1234';

I'd like to make sure it is called correctly: not all od_id values should be permitted. I came up with this solution:

-- FUNCTION: get_data_view
CREATE OR REPLACE FUNCTION get_data_view(integer) RETURNS SETOF data_view AS '
DECLARE
    data RECORD;
BEGIN
    FOR data IN SELECT * FROM data_view WHERE od_id = $1 LOOP
        IF data.foo != ''bar'' THEN
            RAISE EXCEPTION ''blablabla'';
        END IF;
        RETURN NEXT data;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql' VOLATILE;

Is there maybe a better alternative? I tried to raise an exception from SQL directly, in a CASE... WHEN..., but it didn't work...

Thanks!
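One alternative sketch: a tiny helper function lets plain SQL raise the exception, so the check can live in the query itself. The helper name and the error message are illustrative:

```sql
-- Sketch: a boolean "assert" usable inside a WHERE clause or CASE.
CREATE OR REPLACE FUNCTION assert_ok(cond boolean, msg text) RETURNS boolean AS $$
BEGIN
    IF NOT cond THEN
        RAISE EXCEPTION '%', msg;
    END IF;
    RETURN true;
END;
$$ LANGUAGE plpgsql;

-- The view is then queried through the assertion:
SELECT * FROM data_view
WHERE od_id = 1234
  AND assert_ok(foo = 'bar', 'blablabla');
```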
Re: [GENERAL] Avoiding surrogate keys
Hi,

I think nobody mentioned Object-Relational mappers. If you intend to use one (or think you may be using one in the future), surrogate keys are more straightforward, if not necessary.

Best regards,

> Original message
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On behalf of Thom Brown
> Sent: Wednesday, 21 April 2010 21:01
> To: PGSQL Mailing List
> Subject: [GENERAL] Avoiding surrogate keys
>
> I think I know what I plan to do, but want to throw this out there to see if there are differing points of view.
>
> I have a mailing list table, and 2 of the columns contain values which have to be from a list. These are country and status. There are 237 possible countries and 3 possible statuses. Now I know some people would assign a sequence ID (surrogate key) to the country and status values, and have them looked up in separate tables to get the textual value, but I think I'll still have those tables, just without an ID column, so 1 column for both the countries and statuses tables. This means storing the proper value in the main table. So instead of
>
>     name, email, country, status
>     'mr smith', 'emailaddr...@example.com', 44, 2
>     'mrs jones', 'm...@emailcompany.com', 21, 1
>
> I'd have
>
>     name, email, country, status
>     'mr smith', 'emailaddr...@example.com', 'China', 'Registered'
>     'mrs jones', 'm...@emailcompany.com', 'Belgium', 'Unconfirmed'
>
> The values of course would be constrained by foreign key lookup to their associated tables.
>
> Are there any serious downsides to this? If so, what would you recommend?
>
> Thanks
>
> Thom
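For concreteness, a sketch of the natural-key layout Thom describes (single-column lookup tables, text foreign keys); ON UPDATE CASCADE is one way to keep renames cheap:

```sql
CREATE TABLE countries (name text PRIMARY KEY);
CREATE TABLE statuses  (name text PRIMARY KEY);

CREATE TABLE mailing_list (
    name    text,
    email   text,
    country text REFERENCES countries (name) ON UPDATE CASCADE,
    status  text REFERENCES statuses  (name) ON UPDATE CASCADE
);
```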
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> Did you analyse the table?

Hi,

Yes, I did.

> Can you show us an explain analyse?

Here is the explain with my initial index:

create index long_transformation2_index on indexed_table (this_is_a_long_transformation(data2));

Seq Scan on indexed_table  (cost=0.00..26791.00 rows=3 width=12) (actual time=0.158..5379.933 rows=49740 loops=1)
  Filter: (data1 > this_is_a_long_transformation(data2))
Total runtime: 5606.855 ms

> What I notice off-hand is that you don't appear to have an index on data1, so Postgres doesn't know for which rows that is > some_immutable_function(data2).

I tried adding an index on data1:

create index long_transformation1_index on indexed_table (data1);
create index long_transformation2_index on indexed_table (this_is_a_long_transformation(data2));

But I still have a sequential scan:

Seq Scan on indexed_table  (cost=0.00..26791.00 rows=3 width=12) (actual time=0.199..5284.322 rows=49739 loops=1)
  Filter: (data1 > this_is_a_long_transformation(data2))
Total runtime: 5513.676 ms
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> On Tue, Jan 19, 2010 at 10:22 AM, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:
> > On 19 Jan 2010, at 8:26, A. Kretschmer wrote:
> > > The planner doesn't know the actual parameter for the function, so he picked out the wrong plan. You can force the planner to re-planning with a dynamical statement within the function and EXECUTE that.
> >
> > There are no queries in his function, I think you misread his post.
>
> Perhaps this might work?
>
>     create index transform_index on indexed_table(serial) where data1 > this_is_a_long_transformation(data2);

Hi,

It does! With your index alone:

Bitmap Heap Scan on indexed_table  (cost=815.17..10106.08 rows=3 width=12) (actual time=7.796..236.722 rows=50116 loops=1)
  Recheck Cond: (data1 > this_is_a_long_transformation(data2))
  ->  Bitmap Index Scan on transform_index  (cost=0.00..806.84 rows=3 width=0) (actual time=7.665..7.665 rows=50116 loops=1)
Total runtime: 459.380 ms

That works just fine, but is there maybe a way of creating a slightly more generic index? If I change the > with a < in the query, the index of course cannot be used. According to the documentation, the answer seems to be no...

Thanks, and best regards.
Re: [GENERAL] Index on immutable function call
Alban Hertroys wrote:
> > Hi,
> >
> > It does! With your index alone:
> >
> > Bitmap Heap Scan on indexed_table  (cost=815.17..10106.08 rows=3 width=12) (actual time=7.796..236.722 rows=50116 loops=1)
> >   Recheck Cond: (data1 > this_is_a_long_transformation(data2))
> >   ->  Bitmap Index Scan on transform_index  (cost=0.00..806.84 rows=3 width=0) (actual time=7.665..7.665 rows=50116 loops=1)
> > Total runtime: 459.380 ms
> >
> > That works just fine, but is there maybe a way of creating a slightly more generic index? If I change the > with a < in the query, the index of course cannot be used. According to the documentation, the answer seems to be no...
>
> I have to say I'm a bit surprised this works, as the database still needs to check all these rows for existence. Apparently the index is sufficiently selective with your database tuning parameters.
>
> That said, if this works then a combined index on (data1, this_is_a_long_transformation(data2)) will probably also work and give you the flexibility you need.

I have tried a combined index:

create index long_transformation_index on indexed_table (data1, this_is_a_long_transformation(data2));

Unfortunately, it does not work:

Seq Scan on indexed_table  (cost=0.00..26791.00 rows=3 width=12) (actual time=0.327..5805.199 rows=49959 loops=1)
  Filter: (data1 > this_is_a_long_transformation(data2))
Total runtime: 6340.772 ms
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> In response to Philippe Lang:
> > My goal is to make query...
> >
> >     select * from indexed_table WHERE data1 > this_is_a_long_transformation(data2);
> >
> > ... as fast as
> >
> >     select * from indexed_table WHERE data1 > data2;
> >
> > ... with the help of the index long_transformation_index.
>
> Around 50% of your data comply with your where-condition, I think this is the reason why the planner decides for a seq-scan.

I tried to insert data differently, with a strong dissymmetry:

-- insert data into table
insert into indexed_table
select i,
       cast((select random() * 10 * i) as integer),
       cast((select random() * 2000 * i) as integer)
from generate_series(1, 10) as i;

OR

-- insert data into table
insert into indexed_table
select i,
       cast((select random() * 2000 * i) as integer),
       cast((select random() * 10 * i) as integer)
from generate_series(1, 10) as i;

I use 3 indexes, just in case:
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> In response to Philippe Lang:
> > My goal is to make query...
> >
> >     select * from indexed_table WHERE data1 > this_is_a_long_transformation(data2);
> >
> > ... as fast as
> >
> >     select * from indexed_table WHERE data1 > data2;
> >
> > ... with the help of the index long_transformation_index.
>
> Around 50% of your data comply with your where-condition, I think this is the reason why the planner decides for a seq-scan.
>
> Andreas
> --
> Andreas Kretschmer
> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)
> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99

Sorry... I tried to insert data differently, with a strong dissymmetry:

-- insert data into table
insert into indexed_table
select i,
       cast((select random() * 10 * i) as integer),
       cast((select random() * 2000 * i) as integer)
from generate_series(1, 10) as i;

OR

-- insert data into table
insert into indexed_table
select i,
       cast((select random() * 2000 * i) as integer),
       cast((select random() * 10 * i) as integer)
from generate_series(1, 10) as i;

I use 3 indexes, just in case:

create index long_transformation1_index on indexed_table (data1);
create index long_transformation2_index on indexed_table (this_is_a_long_transformation(data2));
create index long_transformation3_index on indexed_table (data1, this_is_a_long_transformation(data2));

But in every case, I end up with a sequential scan. The partial index is for the moment the only working solution...
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> Hello Philippe,
>
> if you always select data1 > this_is_a_long_transformation(data2) you could use the following index:
>
>     create index long_transformation_index_2 on indexed_table ( ( data1 > this_is_a_long_transformation(data2) ) );
>
>     Index Scan using long_transformation_index_2 on indexed_table  (cost=0.25..2450.96 rows=3 width=12)
>       Index Cond: ((data1 > this_is_a_long_transformation(data2)) = true)
>       Filter: (data1 > this_is_a_long_transformation(data2))

Hi Timo,

Thanks, that was certainly what I was searching for... I tried your solution, but it's slower than the partial index:

1) Index

create index long_transformation4_index on indexed_table ( ( data1 > this_is_a_long_transformation(data2) ) );

Index Scan using long_transformation4_index on indexed_table  (cost=0.25..3466.51 rows=3 width=12) (actual time=0.252..3125.308 rows=50281 loops=1)
  Index Cond: ((data1 > this_is_a_long_transformation(data2)) = true)
  Filter: (data1 > this_is_a_long_transformation(data2))
Total runtime: 3505.435 ms

2) Partial index

create index transform_index on indexed_table(id) where data1 > this_is_a_long_transformation(data2);

Bitmap Heap Scan on indexed_table  (cost=815.09..10106.01 rows=3 width=12) (actual time=7.477..237.331 rows=50101 loops=1)
  Recheck Cond: (data1 > this_is_a_long_transformation(data2))
  ->  Bitmap Index Scan on transform_index  (cost=0.00..806.76 rows=3 width=0) (actual time=7.339..7.339 rows=50101 loops=1)
Total runtime: 459.657 ms

I guess it's because the partial index is smaller?
Re: [GENERAL] Index on immutable function call
pgsql-general-ow...@postgresql.org wrote:
> On Tue, Jan 19, 2010 at 07:40:00AM +0100, Philippe Lang wrote:
> > I'm trying to figure out how to use an index on an immutable function call in order to speed up queries. [..] Unfortunately, PostgreSQL does not use the index at all.
>
> Yup, an index isn't going to be very useful in what you're doing. It's arranged the wrong way around---you want something in the table itself to compare against.
>
> I'd just have another column in the table that stores the result of the function call (use a trigger to keep it up to date) and then create an index on this column if you want/need.

Hi,

I'm already doing caching with triggers, and it works great. The only problem is that it costs extra machinery: a column and triggers. I was wondering this morning if a proper index could replace this. It looks like the answer is yes, but I'm worried about stability: you are never sure in advance how the planner will behave exactly...

Best regards,

Philippe
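For reference, a sketch of the trigger-maintained cache column mentioned above (column, function and trigger names are illustrative):

```sql
ALTER TABLE indexed_table ADD COLUMN data2_transformed integer;

CREATE OR REPLACE FUNCTION cache_transformation() RETURNS trigger AS $$
BEGIN
    NEW.data2_transformed := this_is_a_long_transformation(NEW.data2);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cache_transformation_trg
    BEFORE INSERT OR UPDATE ON indexed_table
    FOR EACH ROW EXECUTE PROCEDURE cache_transformation();

-- The query then compares two plain columns, and an ordinary index applies:
CREATE INDEX data2_transformed_index ON indexed_table (data2_transformed);
-- SELECT * FROM indexed_table WHERE data1 > data2_transformed;
```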
[GENERAL] Index on immutable function call
Hi,

I'm trying to figure out how to use an index on an immutable function call in order to speed up queries. I came up with this small test:

--create database foo;
--drop table indexed_table;

create table indexed_table (
    id serial primary key,
    data1 integer,
    data2 integer
);

create or replace function this_is_a_long_transformation(d integer)
returns integer as $$
declare
    l integer;
begin
    -- wait
    l = 0;
    while l < 100 loop
        l = l + 1;
    end loop;
    -- return same value
    return d;
end
$$ language plpgsql immutable;

-- insert data into table
insert into indexed_table
select i,
       cast((select random() * 1000 * i) as integer),
       cast((select random() * 1000 * i) as integer)
from generate_series(1, 10) as i;

-- create index
create index long_transformation_index on indexed_table (this_is_a_long_transformation(data2));

--select * from indexed_table WHERE data1 > data2;
select * from indexed_table WHERE data1 > this_is_a_long_transformation(data2);

My goal is to make query...

select * from indexed_table WHERE data1 > this_is_a_long_transformation(data2);

... as fast as

select * from indexed_table WHERE data1 > data2;

... with the help of the index long_transformation_index. Unfortunately, PostgreSQL does not use the index at all. What am I doing wrong? I use the default query tuning options of PostgreSQL 8.3.7.

Best regards,
[GENERAL] Dependency tracking tool
Hello,

In order to have a global map of the dependencies between the functions, views and tables in a PG database, I'd like to have some sort of dependency tracking tool that would show, for each object:

- which other objects depend on this object
- on which other objects this object depends

This would for example show that a table T1 has a trigger that depends on a trigger function F1, which itself depends on another function F2 that updates table T2; or that function F2 has a dependent trigger function F1 linked to a table T1, and that it updates table T2, etc...

Is it correct to say that this hierarchy does not exist inside PostgreSQL, and that it is necessary to parse the INFORMATION_SCHEMA.* tables to get this information? Because of the function overloading allowed in PG, I guess it is not an easy job.

Does such a tool exist maybe?

Best regards,

Philippe
Re: [GENERAL] Dependency tracking tool
Tom wrote:
> > In order to have a global map of the dependencies of the functions, views, tables in a PG database, I'd like to have some sort of dependency tracking tool... snip
> > Is it correct to say that this hierarchy does not exist inside PostgreSQL, and that it is necessary to parse the INFORMATION_SCHEMA.* tables to get this information?
>
> The information_schema views do not expose that information at all.

The information schema exposes at least the function definitions, through INFORMATION_SCHEMA.routines.routine_definition. My idea was to parse the function definitions in order to build dependencies between the functions. I'm not sure how difficult it is, especially with overloaded functions, which require more than a simple pattern search inside the function definition in order to be distinguished...

Best regards,
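For view/table dependencies at least, the system catalog pg_depend does expose the hierarchy (through each view's rewrite rule); function bodies, as noted, are not tracked there. A sketch:

```sql
-- Sketch: list which relations each view depends on, via pg_depend
-- entries attached to the views' rewrite rules.
SELECT DISTINCT
       dependent.relname  AS view,
       referenced.relname AS depends_on
FROM pg_depend d
JOIN pg_rewrite r        ON d.classid = 'pg_rewrite'::regclass
                        AND d.objid = r.oid
JOIN pg_class dependent  ON dependent.oid = r.ev_class
JOIN pg_class referenced ON d.refclassid = 'pg_class'::regclass
                        AND d.refobjid = referenced.oid
WHERE dependent.relname <> referenced.relname;
```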
[GENERAL] Start With... Connect By?
Hi,

Now that PostgreSQL 8.4 has a WITH RECURSIVE operator (thanks! :)), is there a chance we can one day see START WITH... CONNECT BY in PostgreSQL, or is that something 100% Oracle-specific?

Best regards,

Philippe Lang
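For comparison, the WITH RECURSIVE spelling of a typical START WITH ... CONNECT BY hierarchy query, on a hypothetical employees table:

```sql
WITH RECURSIVE subordinates AS (
    SELECT id, manager_id, name, 1 AS level
    FROM employees
    WHERE manager_id IS NULL                      -- START WITH manager_id IS NULL
    UNION ALL
    SELECT e.id, e.manager_id, e.name, s.level + 1
    FROM employees e
    JOIN subordinates s ON e.manager_id = s.id    -- CONNECT BY PRIOR id = manager_id
)
SELECT * FROM subordinates;
```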
[GENERAL] Query 4-5 times slower after ANALYZE
Hi,

I'm using PostgreSQL 8.3.6 under FreeBSD 7.1. After a fresh restore of a customer dump (running version 8.2.7 at the moment), a rather big query executes in about 30 seconds. As soon as I run ANALYZE, it is instantly 4-5 times slower. I could check that multiple times.

Here is the EXPLAIN ANALYZE before the ANALYZE:

http://www.attiksystem.ch/postgresql/query_slower_after_analyze/before.txt

And here the EXPLAIN ANALYZE after the ANALYZE:

http://www.attiksystem.ch/postgresql/query_slower_after_analyze/after.txt

Any idea what could be turned on/off in order not to have this slowdown after the ANALYZE?

Best regards,

Philippe Lang
Re: [GENERAL] Query 4-5 times slower after ANALYZE
pgsql-general-ow...@postgresql.org wrote:
> In response to Philippe Lang philippe.l...@attiksystem.ch:
> > I'm using PostgreSQL 8.3.6 under FreeBSD 7.1. After a fresh restore of a customer dump (running version 8.2.7 at the moment), a rather big query executes in about 30 seconds. As soon as I run ANALYZE, it is instantly 4-5 times slower. [...]
>
> I opened one of those links figuring I'd take a few minutes to see if I could muster up some advice ... and just started laughing ... definitely not the type of query that one can even understand in just a few minutes!
>
> Anyway, the real reason I posted -- I doubt if anyone will be able to make sense of a query plan that complex without the actual query, so you'll probably want to post it as well. :)

What? I thought you would read that like Neo was reading the Matrix... :)

Bill, Gregory, Tomas and Sebastian, thanks for your answers.

I have tried everything you mentioned:

- increase seq_page_cost
- increase geqo_threshold
- increase join_collapse_limit
- increase from_collapse_limit

But it did not help (except disabling sequential scans completely), and for a reason I think I understand better now. Part of the query looks like:

SELECT c.id AS customer_id,
       c.name AS customer_name,
       d.id AS document_id,
       d.number AS document_number,
       d.vref AS document_vref,
       dt.name AS type,
       d.creation_date AS value_date
FROM documents AS d
LEFT JOIN payment_terms AS pt ON d.payment_term_id = pt.id
INNER JOIN reminder_levels AS rl ON d.reminder_level_id = rl.id
INNER JOIN document_types AS dt ON d.document_type_id = dt.id
INNER JOIN projects AS p ON d.project_id = p.id
INNER JOIN customers AS c ON p.customer_id = c.id
WHERE d.reminder = 1
  AND solde_po(CURRENT_DATE, c.id) > 0
  AND d.creation_date <= CURRENT_DATE

The heavy part here is the solde_po call (at the end), which takes up most of the CPU time. That's why scanning the customers table takes up so much time. I imagine a small change in the way this table is scanned can have enormous effects on the overall execution time, like when a sequential scan is preferred over an index scan. Does that sound correct?

A small question here: solde_po is an SQL function (not PLPGSQL). Is it inlined in the parent query before the whole query execution plan is calculated? Or are they treated completely separately?

Philippe

P.S. Thanks for the link to explain.depesz.com! Great tool!
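On the inlining question: as far as I know, a SQL function can be inlined into the calling query only when it is a single SELECT and is not declared VOLATILE (among other conditions). A sketch of the shape that qualifies; the body and signature here are purely illustrative, the real solde_po may differ:

```sql
CREATE OR REPLACE FUNCTION solde_po(d date, cust integer) RETURNS numeric AS $$
    -- single SELECT, STABLE: a candidate for inlining into the caller
    SELECT coalesce(sum(amount), 0)
    FROM open_items                      -- illustrative table
    WHERE customer_id = cust AND due_date <= d;
$$ LANGUAGE sql STABLE;
```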
[GENERAL] tablelog
Hi,

I'm using tablelog (http://pgfoundry.org/projects/tablelog/) on an old FreeBSD 6 / PostgreSQL 7.4 server, and I'm really happy with it. It has always worked great.

I saw this morning that the project used to be accepted for a while in the Debian package repository, but was removed last year: http://www.mail-archive.com/debian-bugs-clo...@lists.debian.org/msg199924.html. It looks like it is not really maintained anymore, but maybe it does not need to be? The Debian maintainers have a particularly bad opinion of this package: "Unfortunately development and package maintaining died shortly after. So I ask for removal of this (buggy) package."

Is anyone using it with PostgreSQL 8.3? Or is there an alternative to this? While we are talking about this, is a development like Oracle Flashback queries planned maybe?
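As a rough alternative to tablelog, a hand-rolled audit table plus a generic row trigger covers the logging side (names illustrative; the row-to-text cast assumes a reasonably recent server, and Flashback-style querying would still have to be built on top):

```sql
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    table_name text        NOT NULL,
    operation  text        NOT NULL,
    row_data   text        NOT NULL
);

CREATE OR REPLACE FUNCTION audit_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
        RETURN OLD;
    ELSE
        INSERT INTO audit_log (table_name, operation, row_data)
        VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;
```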
[GENERAL] Killing an offending process without restarting Postgresql safe?
Hi,

I added an index on a PostgreSQL 7.4 database this morning (about 50 concurrent users), and then deleted it. I noticed the delete took too much time to complete, so I logged on the server and did a "ps -afxu". I saw one process stuck in a "DROP INDEX waiting" state, and several others in a "SELECT waiting" state.

I killed the "DROP INDEX waiting" process, and all the "SELECT waiting" processes disappeared. Then I dropped the index again, and it succeeded this time. I never restarted the database server at any time.

Is that safe, or is it a good thing to restart the server in such cases?

Best regards,

Philippe Lang
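For what it's worth, later versions offer a SQL-level way to do the same thing without touching the shell (pg_cancel_backend sends SIGINT from 8.0 on; pg_terminate_backend is built in from 8.4). A sketch against a pre-9.2 pg_stat_activity:

```sql
-- Find the offending backend (procpid/current_query are the pre-9.2 names):
SELECT procpid, current_query
FROM pg_stat_activity
WHERE current_query LIKE '%DROP INDEX%';

-- Cancel its current query; 12345 stands for the procpid found above:
SELECT pg_cancel_backend(12345);
```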
[GENERAL] Role attribute for user email?
Hi,

Is there a way to create new attributes for a login role, like an email attribute? I have to keep some information on all the users that connect to the PG server, and their attribute list looks like an elegant place to store it. But I cannot find a way to add new attributes... Is that possible?

Thanks,

Philippe Lang
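Roles have no user-definable attributes, but two common workarounds are a comment attached to the role, or a mapping table keyed on the role name (names here are illustrative):

```sql
-- Free-form comment attached to the role:
COMMENT ON ROLE alice IS 'alice@example.com';

-- Or a proper table, queryable by the connected user:
CREATE TABLE role_info (
    rolname text PRIMARY KEY,
    email   text
);
INSERT INTO role_info VALUES ('alice', 'alice@example.com');

SELECT email FROM role_info WHERE rolname = current_user;
```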
[GENERAL] contrib/xml2 with 7.4
Hi,

Is there any chance I can use the contrib/xml2 package with a 7.4 database, or does it need features only available with version 8 and later? I'd prefer not upgrading the database for the moment, and would like to use XSLT functions in PostgreSQL.

Thanks

Philippe
[GENERAL] Execution plan caching
Hi,

I have recently used the MS SQL Server 2005 database, and found out that there is no mechanism by which an execution plan can be reused between two successive calls to a view. This is only true for stored procedures. Is that also true with the PostgreSQL engine?

Philippe
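In PostgreSQL, plan reuse is per-session and explicit: a view is expanded and planned at every execution, while PREPARE (or a prepared statement created by a driver or by plpgsql) keeps a plan around. A sketch, on an illustrative orders table:

```sql
PREPARE get_order (integer) AS
    SELECT * FROM orders WHERE id = $1;

EXECUTE get_order(42);   -- reuses the plan built by PREPARE
EXECUTE get_order(43);
```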
[GENERAL] ERROR: must be superuser to alter superusers
Hi,

I had the very bad idea of removing the superuser flag from the pgsql role. Now each time I try to set it back, with:

ALTER ROLE pgsql SUPERUSER;

... I get:

ERROR: must be superuser to alter superusers
SQL state: 42501

Since there is no superuser in the database engine anymore, what option do I have? I tried:

CREATE ROLE pgsql2 SUPERUSER;

... but I get:

ERROR: must be superuser to create superusers
SQL state: 42501

Thanks,

Philippe Lang
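The standard way out of this situation is the standalone (single-user) backend, which skips the superuser check. A sketch, with an illustrative data directory:

```
# stop the server first, then (on 8.2 and later):
postgres --single -D /usr/local/pgsql/data postgres

backend> ALTER ROLE pgsql SUPERUSER;
```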
[GENERAL] How to prevent modifications in a tree of rows, based on a condition?
Hi, I have a database which can be simplified in the following way, with three tables: an order has multiple order lines, and an order line has multiple line steps. I'd like to prevent any kind of modification (insert, update, delete) in an order (and its lines, and steps) if all the steps in the lines of the order are checked. If that condition is not true, a modification is accepted. A brute-force solution is to write 3 x 3 triggers and code that properly, but I suspect there is something more elegant to do with PostgreSQL? Does anyone have an idea maybe? The rule system? Thanks for your ideas. Philippe Lang

---

CREATE TABLE public.orders (
    id int4 PRIMARY KEY,
    value int4
) WITHOUT OIDS;

CREATE TABLE public.order_lines (
    id int4 PRIMARY KEY,
    value int4,
    order_id int4 REFERENCES public.orders
) WITHOUT OIDS;

CREATE TABLE public.line_steps (
    id int4 PRIMARY KEY,
    value int4,
    checked bool,
    order_line_id int4 REFERENCES public.order_lines
) WITHOUT OIDS;

-- Insert values
INSERT INTO orders VALUES (1, 1);
INSERT INTO order_lines VALUES (1, 1, 1);
INSERT INTO line_steps VALUES (1, 1, true, 1);
INSERT INTO line_steps VALUES (2, 2, true, 1);
INSERT INTO line_steps VALUES (3, 3, true, 1);
INSERT INTO order_lines VALUES (2, 2, 1);
INSERT INTO line_steps VALUES (4, 1, true, 2);
INSERT INTO line_steps VALUES (5, 2, true, 2);
INSERT INTO line_steps VALUES (6, 3, true, 2);
INSERT INTO order_lines VALUES (3, 3, 1);
INSERT INTO line_steps VALUES (7, 1, true, 3);
INSERT INTO line_steps VALUES (8, 2, true, 3);
INSERT INTO line_steps VALUES (9, 3, true, 3);

-- Modifications that should be forbidden
UPDATE orders SET value = 123 WHERE id = 1;
UPDATE order_lines SET value = 123 WHERE id = 1;
UPDATE line_steps SET value = 123 WHERE id = 1;

-- We show the final data
SELECT o.id AS order_id, o.value AS order_value,
       ol.id AS order_line_id, ol.value AS order_line_value,
       ls.id AS line_steps_id, ls.value AS line_step_value,
       ls.checked AS "check"
FROM orders AS o
INNER JOIN order_lines AS ol ON o.id = ol.order_id
INNER JOIN line_steps AS ls ON ol.id = ls.order_line_id
ORDER BY o.id, ol.id, ls.id;
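One way to cut the work down from nine hand-written checks is a single helper function that decides whether an order is locked, called from thin guard triggers. An untested sketch under the schema above; the function and trigger names are made up for the example, and only the line_steps trigger is shown (orders and order_lines get analogous ones that derive the order id more directly):

```sql
-- Helper: raise an error if the given order is fully checked (and thus locked).
CREATE OR REPLACE FUNCTION assert_order_open(p_order_id int4)
RETURNS void AS $$
BEGIN
    -- An order is locked once it has steps and every one of them is checked.
    IF EXISTS (SELECT 1 FROM order_lines ol
                 JOIN line_steps ls ON ls.order_line_id = ol.id
                WHERE ol.order_id = p_order_id)
       AND NOT EXISTS (SELECT 1 FROM order_lines ol
                         JOIN line_steps ls ON ls.order_line_id = ol.id
                        WHERE ol.order_id = p_order_id
                          AND NOT ls.checked) THEN
        RAISE EXCEPTION 'order % is fully checked: no modification allowed', p_order_id;
    END IF;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- Guard trigger for line_steps; orders/order_lines would look the same,
-- minus the lookup through order_lines.
CREATE OR REPLACE FUNCTION line_steps_guard() RETURNS trigger AS $$
DECLARE
    v_line_id int4;
BEGIN
    -- OLD is not defined on INSERT, NEW is not defined on DELETE
    IF TG_OP = 'INSERT' THEN
        v_line_id := NEW.order_line_id;
    ELSE
        v_line_id := OLD.order_line_id;
    END IF;
    PERFORM assert_order_open((SELECT order_id FROM order_lines WHERE id = v_line_id));
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER line_steps_guard BEFORE INSERT OR UPDATE OR DELETE
ON line_steps FOR EACH ROW EXECUTE PROCEDURE line_steps_guard();
```

You still need one trigger per table, but the locking rule lives in one place.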
Re: [GENERAL] Postgresql 8.1: plperl code works with LATIN1, fails with UTF8
[EMAIL PROTECTED] wrote: If you can't upgrade to 8.2 then you might be able to work around the problem by creating the function as plperlu and adding 'use utf8;'. After upgrading to 8.2.1, everything runs just fine. Thanks a lot for your help, Regards, Philippe Lang
[GENERAL] Postgresql 8.1: plperl code works with LATIN1, fails with UTF8
) {
    $output{'larg_maconnerie'} = $fab->{rows}[$j]->{'larg'};
} else {
    $output{'larg_maconnerie'} = '';
};

# Hauteur de maçonnerie
for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Hauteur de maçonnerie') and ($j < $fab_nrows); $j = $j + 1) {};
if ($j < $fab_nrows) {
    $output{'haut_maconnerie'} = $fab->{rows}[$j]->{'haut'};
} else {
    $output{'haut_maconnerie'} = '';
};

# Largeur de vantail
if (defined($vtxg) and defined($vtxd)) {
    # Vantaux asymétriques
    $output{'larg_vtx'} = $vtxg . '/' . $vtxd;
} else {
    # Vantaux symétriques
    for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Largeur de vantail') and ($j < $fab_nrows); $j = $j + 1) {};
    if ($j < $fab_nrows) {
        $output{'larg_vtx'} = $fab->{rows}[$j]->{'larg'};
    } else {
        $output{'larg_vtx'} = '';
    };
}

# Hauteur de vantail
for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Hauteur de vantail') and ($j < $fab_nrows); $j = $j + 1) {};
if ($j < $fab_nrows) {
    $output{'haut_vtx'} = $fab->{rows}[$j]->{'haut'};
} else {
    $output{'haut_vtx'} = '';
};

# Type d'ouverture
$output{'ouv'} = $lignes->{rows}[$lignes_rn]->{'ouv'};

# Image des couvre-joints
for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Couvre-joints') and ($j < $fab_nrows); $j = $j + 1) {};
if ($j < $fab_nrows) {
    $output{'couvre_joints'} = $fab->{rows}[$j]->{'image'};
} else {
    $output{'couvre_joints'} = '';
};

# Image de la coupe verticale
for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Coupe verticale') and ($j < $fab_nrows); $j = $j + 1) {};
if ($j < $fab_nrows) {
    $output{'coupe_verticale'} = $fab->{rows}[$j]->{'image'};
} else {
    $output{'coupe_verticale'} = '';
};

# Vide intérieur
if (defined($vtxg) and defined($vtxd)) {
    # Vantaux asymétriques
    $output{'vide_interieur'} = ($vtxg - 106) . '/' . ($vtxd - 106);
} else {
    # Vantaux symétriques
    for ($j = 0; ($fab->{rows}[$j]->{'article'} ne 'Largeur de vantail') and ($j < $fab_nrows); $j = $j + 1) {};
    if ($j < $fab_nrows) {
        $output{'vide_interieur'} = $fab->{rows}[$j]->{'larg'} - 106;
    } else {
        $output{'vide_interieur'} = '';
    };
}

# Type de volet
$output{'typ'} = $lignes->{rows}[$lignes_rn]->{'typ'};

# Sortie
ret(@_);
}

#
#-- Fin de la fonction
#
end_sub(@_);

$$ LANGUAGE 'plperl' VOLATILE;

When running:

select * from volets_fiche_fab_1(1)

the database replies:

ERROR: error from Perl function: invalid input syntax for integer: at line 54.
SQL state: XX000

Does anyone have a small idea where to search? Thanks, Philippe Lang
Re: [GENERAL] XEON familiy 5000, 5100 or 5300?
Shane wrote: No - a *core* is another cpu; basically you will have 2 or 4 cpu's in the one physical package. HT creates 2 virtual cpu's sharing the same cpu resources, but the cores are separate cpu's in themselves. The quad-core will only benefit you more if you have more users running queries at the same time. Each core can run a query at the same time without slowing the others down (allowing for disk access/FSB limits). Jose wrote: PostgreSQL handles each connection in a dedicated process, so you won't get better performance for a single connection by adding more CPUs (I mean, beyond the benefit of having the postmaster and the specific connection running in separate CPUs). This means that a query will not be resolved by more than one CPU. What you will get is better performance for multiple connections. Shane, Jose, thanks for your answers. In my very-low-concurrency scenario, I guess then that multiple cores won't really help, as I suspected. I think I had better take (for the same price) a ... Dual-Core Intel Xeon 5060, 3.2 GHz, 4MB ... instead of a ... Quad-Core Intel Xeon 5310, 1.6 GHz, 4MB. With my CPU-bound query, it will perform better. But what about hyperthreading then? Is it able to spread two threads over two different cores? I guess the answer is no... Philippe
[GENERAL] XEON familiy 5000, 5100 or 5300?
Hi, I'm about to buy a few new servers, and I'm trying to determine if I should buy XEON family 5000, 5100 or 5300 processors. For about the same price, I can have: 2 Dual-Core Intel Xeon 5060, 3.2 GHz, 4MB; 2 Dual-Core Intel Xeon 5130, 2.0 GHz, 4MB; 2 Quad-Core Intel Xeon 5310, 1.6 GHz, 4MB. I have a few queries that take about 4 minutes each to complete on a single Pentium 4, and all of them are CPU-bound, with the whole database in RAM. With the new system, I expect a performance boost, of course! If I'm not wrong, a single PostgreSQL query cannot be spread over two processors, but can it be spread over multiple cores? If that's true, does that mean the best CPU would be the last one, although its clock is lower than that of the other processors? Thanks for the info. Cheers, --- Philippe Lang Attik System
[GENERAL] Plperl functions with OUT parameters crashing each other when used in the same connection
'};
ret(@_);
} } }

#
#-- Helper functions
#
end_sub(@_);

sub start_sub { init(@_); }
sub end_sub { return undef; }
sub init {
    $c = 0;
    foreach $i (@i) {$input{$i} = @_[$c++]};
    foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
    foreach $o (@o) {$output{$o} = @_[$c++]};
}
sub ret {
    #while (($key, $value) = each %output) {if (!defined($value)) {elog(ERROR, 'Valeur indéfinie pour ' . $key)}};
    return_next \%output;
    init(@_);
}
$$ LANGUAGE 'plperl' VOLATILE;

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
Re: [GENERAL] Plperl functions with OUT parameters crashing each other when used in the same connection
[EMAIL PROTECTED] wrote: Hi, I've got a strange bug with two plperl functions using OUT parameters: with a fresh ODBC or pgAdmin connection, I can call the first function, but then all further calls to the second function fail; or I can call the second function, but then all further calls to the first function fail. Even more strange: when the second call fails, the message changes at each new try, mentioning one of the variables used as OUT parameters in the other function. Something is apparently not released from memory between calls. Here is a reduced example that shows the problem. The calls at the end work independently, but whichever function is called second fails each time, mentioning columns from the other function.

-- FUNCTION: foo1
CREATE OR REPLACE FUNCTION public.foo1 (
    IN a integer,
    IN b integer,
    OUT c integer,
    OUT d integer
) RETURNS SETOF record AS
$$
@i = ('a', 'b'); @io = (); @o = ('c', 'd');

start_sub(@_);

$output{'c'} = $input{'a'} + $input{'b'};
$output{'d'} = $input{'a'} * $input{'b'};
ret();

end_sub(@_);

sub start_sub { init(@_); }
sub end_sub { return undef; }
sub init {
    $c = 0;
    foreach $i (@i) {$input{$i} = @_[$c++]};
    foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
    foreach $o (@o) {$output{$o} = @_[$c++]};
}
sub ret {
    while (($key, $value) = each %output) {if (!defined($value)) {elog(ERROR, 'Valeur indéfinie pour ' . $key)}};
    return_next \%output;
    init(@_);
}
$$ LANGUAGE 'plperl' VOLATILE;

-- FUNCTION: foo2
CREATE OR REPLACE FUNCTION public.foo2 (
    IN n varchar(50),
    IN m varchar(50),
    OUT r integer,
    OUT s varchar(50)
) RETURNS SETOF record AS
$$
@i = ('n', 'm'); @io = (); @o = ('r', 's');

start_sub(@_);

$output{'r'} = $input{'n'} + $input{'m'};
$output{'s'} = $input{'n'} * $input{'m'};
ret();

end_sub(@_);

sub start_sub { init(@_); }
sub end_sub { return undef; }
sub init {
    $c = 0;
    foreach $i (@i) {$input{$i} = @_[$c++]};
    foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
    foreach $o (@o) {$output{$o} = @_[$c++]};
}
sub ret {
    while (($key, $value) = each %output) {if (!defined($value)) {elog(ERROR, 'Valeur indéfinie pour ' . $key)}};
    return_next \%output;
    init(@_);
}
$$ LANGUAGE 'plperl' VOLATILE;

-- FUNCTION TESTS
select * from foo1(45,10);
select * from foo2('45','10');

---
Philippe Lang
Attik System
Re: [GENERAL] Plperl functions with OUT parameters crashing each other when used in the same connection
Tom Lane wrote: Philippe Lang [EMAIL PROTECTED] writes: Here is a reduced example that shows the problem. Hm, I'm no Perl guru, but isn't the second script to be loaded going to redefine those subroutines that the first script defined? I'm pretty sure that there's not an implicit independent namespace for each plperl function. regards, tom lane

Hi Tom, I'm using PGSQL 8.1.4. I have deleted the subroutines now, but the problem remains. Does that mean the variables created inside a plperl function are alive for the duration of the database connection?

-- FUNCTION: foo1
CREATE OR REPLACE FUNCTION public.foo1 (
    IN a integer,
    IN b integer,
    OUT c integer,
    OUT d integer
) RETURNS SETOF record AS
$$
@i = ('a', 'b'); @io = (); @o = ('c', 'd');

$c = 0;
foreach $i (@i) {$input{$i} = @_[$c++]};
foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
foreach $o (@o) {$output{$o} = @_[$c++]};

$output{'c'} = $input{'a'} + $input{'b'};
$output{'d'} = $input{'a'} * $input{'b'};
return_next \%output;
return undef;
$$ LANGUAGE 'plperl' VOLATILE;

-- FUNCTION: foo2
CREATE OR REPLACE FUNCTION public.foo2 (
    IN n varchar(50),
    IN m varchar(50),
    OUT r integer,
    OUT s varchar(50)
) RETURNS SETOF record AS
$$
@i = ('n', 'm'); @io = (); @o = ('r', 's');

$c = 0;
foreach $i (@i) {$input{$i} = @_[$c++]};
foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
foreach $o (@o) {$output{$o} = @_[$c++]};

$output{'r'} = $input{'n'} + $input{'m'};
$output{'s'} = $input{'n'} * $input{'m'};
return_next \%output;
return undef;
$$ LANGUAGE 'plperl' VOLATILE;

-- FUNCTION TESTS
select * from foo1(45,10);
select * from foo2('45','10');

---
Philippe Lang
Attik System
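Since all plperl functions in one backend share a single Perl interpreter and package namespace, package variables like %output collide across functions. A sketch of the same foo1 with everything declared `my`, so nothing leaks into the shared namespace between calls (no named subs, which sidesteps the closure pitfalls discussed elsewhere in this archive):

```sql
CREATE OR REPLACE FUNCTION public.foo1 (
    IN a integer, IN b integer,
    OUT c integer, OUT d integer
) RETURNS SETOF record AS $$
    my ($a, $b) = @_;                         -- lexicals, local to this call
    my %output = (c => $a + $b, d => $a * $b);
    return_next(\%output);                    -- hash keys match the OUT names
    return undef;
$$ LANGUAGE plperl;
```

Renaming variables per function, as tried below, works for the same reason: it just avoids the shared-namespace collision by hand.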
Re: [GENERAL] Plperl functions with OUT parameters crashing each other when used in the same connection
[EMAIL PROTECTED] wrote: Tom Lane wrote: Philippe Lang [EMAIL PROTECTED] writes: Here is a reduced example that shows the problem. Hm, I'm no Perl guru, but isn't the second script to be loaded going to redefine those subroutines that the first script defined? I'm pretty sure that there's not an implicit independent namespace for each plperl function. regards, tom lane

Hi Tom, I'm using PGSQL 8.1.4. I have deleted the subroutines now, but the problem remains. Does that mean the variables created inside a plperl function are alive for the duration of the database connection? It seems to be the case: if I rename all the variables in the foo2 function, I do not have any more problems. Is there a way to flush all the variables explicitly?

-- FUNCTION: foo1
CREATE OR REPLACE FUNCTION public.foo1 (
    IN a integer,
    IN b integer,
    OUT c integer,
    OUT d integer
) RETURNS SETOF record AS
$$
@i = ('a', 'b'); @io = (); @o = ('c', 'd');

$c = 0;
foreach $i (@i) {$input{$i} = @_[$c++]};
foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
foreach $o (@o) {$output{$o} = @_[$c++]};

$output{'c'} = $input{'a'} + $input{'b'};
$output{'d'} = $input{'a'} * $input{'b'};
return_next \%output;
return undef;
$$ LANGUAGE 'plperl' VOLATILE;

-- FUNCTION: foo2
CREATE OR REPLACE FUNCTION public.foo2 (
    IN n varchar(50),
    IN m varchar(50),
    OUT r integer,
    OUT s varchar(50)
) RETURNS SETOF record AS
$$
@i2 = ('n', 'm'); @io2 = (); @o2 = ('r', 's');

$c2 = 0;
foreach $i2 (@i2) {$input2{$i2} = @_[$c2++]};
foreach $io2 (@io2) {$input2{$io2} = @_[$c2]; $output2{$io2} = @_[$c2++]};
foreach $o2 (@o2) {$output2{$o2} = @_[$c2++]};

$output2{'r'} = $input2{'n'} + $input2{'m'};
$output2{'s'} = $input2{'n'} * $input2{'m'};
return_next \%output2;
return undef;
$$ LANGUAGE 'plperl' VOLATILE;

---
Philippe Lang
Attik System
Re: [GENERAL] Migrating PostgreSQL database to MySQL/MS Access
[EMAIL PROTECTED] wrote: How can I migrate a PostgreSQL database to MySQL or MS Access? Are there any tools available? Hi, Can we maybe know why you want to do this? --- Philippe Lang Attik System
[GENERAL] Intel commercial compiler
Hi, http://www.enterprisedb.com mentions they compile PostgreSQL with the Intel commercial compiler, for optimal performance: http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284132.htm Has anyone done comparison tests with a normal gcc compilation? --- Philippe Lang Attik System
Re: [GENERAL] Plperl return_next and bytea binary data?
Martijn van Oosterhout wrote: On Mon, Jul 24, 2006 at 11:43:39AM +0200, Philippe Lang wrote: The problem seems to come from return_next, in conjunction with binary data: return_next ( { val => $_[0] * $_[1], image => $im->gif() } ); I don't know exactly how pl/perl works, but given that it has no idea what type the data is, chances are it's passing it through cstring-to-text conversion. You probably want to force it to return bytea or some such (without going through cstring-to-bytea conversion, hopefully). I don't see a way to do it in the documentation though... Casting $im->gif() to bytea with $im->gif()::bytea does not help. It even makes things slightly worse: the image returned (Ethereal sniff) is completely empty, whereas before it was filled with the first characters GIF89ad of the image. Still searching... Philippe
Re: [GENERAL] Plperl return_next and bytea binary data?
[EMAIL PROTECTED] wrote: Martijn van Oosterhout wrote: On Mon, Jul 24, 2006 at 11:43:39AM +0200, Philippe Lang wrote: The problem seems to come from return_next, in conjunction with binary data: return_next ( { val => $_[0] * $_[1], image => $im->gif() } ); I don't know exactly how pl/perl works, but given that it has no idea what type the data is, chances are it's passing it through cstring-to-text conversion. You probably want to force it to return bytea or some such (without going through cstring-to-bytea conversion, hopefully). I don't see a way to do it in the documentation though... Casting $im->gif() to bytea with $im->gif()::bytea does not help. It even makes things slightly worse: the image returned (Ethereal sniff) is completely empty, whereas before it was filled with the first characters GIF89ad of the image. Still searching...

I finally found a solution to my problem by using a base64-encoded gif image, in conjunction with a text column:

--
CREATE OR REPLACE FUNCTION test_gd (
    IN a int4,
    IN b int4,
    OUT val int4,
    OUT image text
) RETURNS SETOF record AS
$$
use GD;
use MIME::Base64::Perl;

$im = new GD::Image(100,100);
$white = $im->colorAllocate(255,255,255);
$black = $im->colorAllocate(0,0,0);
$red = $im->colorAllocate(255,0,0);
$im->transparent($white);
$im->rectangle(0,0,89,89,$black);
$im->arc(50,30,95,75,0,360,$black);
$im->fill(50,50,$red);
$image = $im->gif();

return_next ( { val => $_[0] * $_[1], image => encode_base64($image, '') } );
return undef;
$$ LANGUAGE 'plperlu';
--

For the small technical drawings this system is supposed to handle, this is just fine. Just one more question: what is the maximum size of a TEXT column in PG 8.1.4? --- Philippe Lang Attik System
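A small usage note on the workaround above: if a consumer later needs the real binary again, the base64 text can be turned back into bytea on the server side with the built-in decode() function, e.g.:

```sql
-- Decode the base64 text column back to binary when needed.
SELECT val, decode(image, 'base64') AS image_bytes
FROM test_gd(3, 4);
```

This keeps the plperl side simple (text in, text out) and leaves the binary conversion to SQL.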
Re: [GENERAL] Plperl return_next and bytea binary data?
Alvaro Herrera wrote: Philippe Lang wrote: Just one more question: what is the maximum size of a TEXT column in PG 8.1.4? A handful of bytes less than 1 GB. It should be ok then... :) --- Philippe Lang Attik System
[GENERAL] Plperl return_next and bytea binary data?
Hi, I'm trying to generate graphics dynamically from a plperl function, like this:

CREATE OR REPLACE FUNCTION test_gd (
    IN a int4,
    IN b int4,
    OUT val int4,
    OUT image bytea
) RETURNS SETOF record AS
$$
use GD;

$im = new GD::Image(100,100);
$white = $im->colorAllocate(255,255,255);
$black = $im->colorAllocate(0,0,0);
$im->transparent($white);
$im->interlaced('false');
$im->line(0,0,150,150,gdBrushed);
$im->rectangle(10,10,89,89,$black);
$im->arc(50,50,95,75,0,360,$black);
$im->fill(50,50,$black);

return_next ( { val => $_[0] * $_[1], image => $im->gif() } );
return undef;
$$ LANGUAGE 'plperlu';

The image, in hexadecimal, is:

0000:47 49 46 38 39 61 64 00 64 00 80 00 00 ff ff ff  GIF89ad.d.€..ÿÿÿ
0010:00 00 00 21 f9 04 01 00 00 00 00 2c 00 00 00 00  ...!ù..,
0020:64 00 64 00 00 02 fe 84 8f a9 cb ed 0f a3 9c b4  d.d...þ„©Ëí.£œ´
0030:da 8b b3 de bc fb 0f 86 e2 48 96 e6 89 a6 ea ca  Ú‹³Þ¼û.†âH–扦êÊ
0040:b6 ee 0b 63 c1 4c d7 f6 8d e7 fa ce f7 b3 1c 2b  ¶î.cÁL×öçúÎ÷³.+
0050:05 80 c1 d1 f0 72 14 f9 7e 4a 22 67 e9 eb 24 2b  .€ÁÑðr.ù~Jgéë$+
0060:53 24 f4 5a a5 3a a9 d8 2e 0d b9 8d 78 c7 37 4a  S$ôZ¥:©Ø..¹xÇ7J
0070:56 72 66 90 d7 65 71 58 cd 8e db 20 69 3a 5d 8e  Vrf×eqXÍŽÛ i:]Ž
0080:ff 3a ea 0f 7e fe 5f d3 c0 b7 07 07 68 c8 94 30  ÿ:ê.~þ_ÓÀ·..hÈ”0
0090:28 b8 70 d8 88 78 a0 a8 96 e8 48 39 15 c9 88 50  (¸p؈x ¨–èH9.ɈP
00a0:a9 09 e9 a4 e9 69 70 a9 70 e4 49 0a 10 9a 68 4a  ©.é¤éip©päI..šhJ
00b0:aa 0a a4 ba 6a d5 5a fa 0a fb c9 35 fb 79 9a 6a  ª.¤ºjÕZú.ûÉ5ûyšj
00c0:bb 79 3a a4 bb 69 8a 06 ea 4b c9 d9 37 3a ec 08  »y:¤»iŠ.êKÉÙ7:ì.
00d0:39 98 84 9c 5c bc 28 dc 6c 98 49 a8 2c 0d 88 2a  9˜„œ\¼(Ül˜I¨,.ˆ*
00e0:69 7d 9d 97 2d 6a c9 fd 37 89 99 19 de ed 4d 5d  i}—-jÉý7‰™.ÞíM]
00f0:6e 2e 27 4a be bd be d6 3e ae 0e 1f 3f ff fc 5e  n.'J¾½¾Ö®..?ÿü^
0100:ef e5 0e fa ad 4f c6 0f d8 a4 7f 63 02 66 21 08  ïå.úOÆ.ؤc.f!.
0110:50 1e be 68 08 b1 18 f4 d7 f0 ca c3 81 11 a1 4c  P.¾h.±.ô×ðÊÃ.¡L
0120:a4 57 31 8a c2 b1 7e 14 33 f6 30 58 c5 a3 c5 7b  ¤W1ŠÂ±~.3ö0XÅ£Å{
0130:0b 45 2e d9 58 d2 e4 47 74 29 55 ee 40 29 0c a3  .E.ÙXÒäGt)Uî@).£
0140:cb 36 ca b4 31 9c 49 93 23 3f 9c 3a 58 7a e3 99  Ë6Ê´1œI“#?œ:Xzã™
0150:23 1d b4 5c 40 03 2d fc 76 b3 28 33 45 c7 94 ea  [EMAIL PROTECTED](3EÇ”ê
0160:11 68 c7 a9 51 5e 52 bf e0 aa 6a d5 02 d6 47 13  .hÇ©Q^R¿àªjÕ.ÖG.
0170:b6 b2 92 fa d5 29 11 a5 50 b5 24 35 a9 d3 ec d9  ¶²’úÕ).¥Pµ$5©ÓìÙ
0180:8c 47 83 c9 ac 28 54 ed db 7f 30 dd 42 24 18 b0  ŒGƒÉ¬(TíÛ0ÝB$.°
0190:eb 1e ba 36 e5 16 0a d7 e7 0d 23 c0 81 c1 a0 69  ë.º6å..×ç.#ÀÁ i
01a0:66 d7 ef 1d 5b 66 04 8b 89 a5 38 b1 2c 6c 19 70  f×ï.[f.‹‰¥8±,l.p
01b0:b5 9d 6c 6f 83 e5 b4 20 46 86 d8 5c b6 c8 07 d0  µloƒå´ F†Ø\¶È.Ð
01c0:5e 45 9b 3e 8d 3a b5 ea d5 ac 5b bb 7e 0d 3b b6  ^E›:µêÕ¬[»~.;¶
01d0:ec d9 b4 6b bb 2e 00 00 3b                       ìÙ´k»...;

By sniffing the network, I could see that the image is not transferred completely to the client. Only the first part is being transferred:

0000:47 49 46 38 39 61 64  GIF89ad

The problem seems to come from return_next, in conjunction with binary data:

return_next ( { val => $_[0] * $_[1], image => $im->gif() } );

Does anybody know how to solve that problem? When saving data to the hard disk with GD and Perl, examples show that you have to force the stream into binary mode, like this:

open (MYOUTFILE, ">/tmp/outfile.gif") || die 'Cannot open output file';
binmode MYOUTFILE;
print MYOUTFILE $im->gif();
close MYOUTFILE;

I tried, and it works... Is there a trick to force binary mode with return_next too? Thanks for your help!

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
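The truncation at the first NUL byte is consistent with the value going through a C-string conversion on its way to the bytea column. An untested sketch of a possible workaround that keeps the bytea column: octal-escape every byte in Perl so the string plperl hands back parses as bytea escape-format input (the function name is made up; whether a second level of escaping is needed depends on the server version, so treat this as an assumption to verify):

```sql
CREATE OR REPLACE FUNCTION test_gd_bytea (
    IN a int4, IN b int4,
    OUT val int4, OUT image bytea
) RETURNS SETOF record AS $$
    use GD;
    my $im = new GD::Image(100,100);
    my $black = $im->colorAllocate(0,0,0);
    $im->rectangle(10,10,89,89,$black);
    my $raw = $im->gif();
    # Escape each byte as \ooo so the text-to-bytea input parser can
    # reconstruct the original binary, NUL bytes included.
    $raw =~ s/(.)/sprintf('\\%03o', ord($1))/ges;
    return_next({ val => $_[0] * $_[1], image => $raw });
    return undef;
$$ LANGUAGE plperlu;
```

The base64-over-text approach from later in this thread avoids the escaping question entirely, at the cost of a decode step on the consumer side.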
[GENERAL] Dynamic technical graphics generation inside plperl query?
Hi, Is there a way to use a graphics drawing library (like GD, for example) inside a plperl function, and stream the resulting image (bytea column?) without storing anything to disk or in the database? Has anyone done that before? Thanks, --- Philippe Lang Attik System
[GENERAL] Plperl and my() lexical variables bug?
Hi, I have something strange here, with PostgreSQL 8.1.4 under Linux ES 4, installed from the PG Group binaries. If I run this script:

CREATE OR REPLACE FUNCTION foo() RETURNS void AS
$$
my $val;
sub init {
    $val = @_[0];
    elog(NOTICE, "1: @_[0]\n");
}
init(12);
elog(NOTICE, "2: $val\n");
$$ LANGUAGE 'plperl';

select * from foo();

I get in return something correct:

NOTICE: 1: 12
NOTICE: 2: 12
Total query runtime: 63 ms. Data retrieval runtime: 62 ms. 1 rows retrieved.

But then, if I simply call the function again, with:

select * from foo();

I get:

NOTICE: 1: 12
NOTICE: 2:
Total query runtime: 63 ms. Data retrieval runtime: 62 ms. 1 rows retrieved.

The $val variable is missing. Even more strange: if I replace "my $val;" with "$val;", this does not happen at all:

CREATE OR REPLACE FUNCTION foo() RETURNS void AS
$$
$val;
sub init {
    $val = @_[0];
    elog(NOTICE, "1: @_[0]\n");
}
init(12);
elog(NOTICE, "2: $val\n");
$$ LANGUAGE 'plperl';

Now I can call the function with "select * from foo();" and it works as expected:

NOTICE: 1: 12
NOTICE: 2: 12
Total query runtime: 390 ms. Data retrieval runtime: 797 ms. 1 rows retrieved.

Am I missing something maybe? It sounds like a bug with lexical variables to me... Cheers,

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
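This matches Perl's classic "variable will not stay shared" behavior rather than a PostgreSQL bug: the named sub is compiled once and closes over the first execution's `my $val`; later executions get a fresh lexical that init() no longer writes to (with a package variable there is only one $val, so it keeps working). A sketch that avoids the closure by returning the value instead of assigning it from inside the sub:

```sql
CREATE OR REPLACE FUNCTION foo() RETURNS void AS $$
    sub init {
        my ($x) = @_;
        elog(NOTICE, "1: $x\n");
        return $x;              -- hand the value back, do not write a closed-over lexical
    }
    my $val = init(12);         -- works on every call, not just the first
    elog(NOTICE, "2: $val\n");
$$ LANGUAGE 'plperl';
```

Passing data in and out of helper subs explicitly keeps `my` lexicals safe to use in plperl bodies.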
[GENERAL] cpan perl module - plperlu danger?
Hi, I would like to access a CPAN Perl module (FSA::Rules) in a Perl procedure, inside PostgreSQL 8.1.4. FSA::Rules is a library that allows you to define a finite state machine in Perl. No disk access. In order to do that, I need to use plperlu instead of plperl, and it works just fine. What exactly is the danger of using a CPAN library under plperlu? How can I make sure it won't crash my system, even when run concurrently by a few people? Thanks!

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
[GENERAL] pgxml xpath_table
Hi, I'm playing with the contrib/pgxml library under PG 8.1.4, and I'm not sure if what I found with pgxml is a feature or a bug. I've got the following table:

CREATE TABLE test (
    id int4 NOT NULL,
    xml varchar(200),
    CONSTRAINT pk PRIMARY KEY (id)
) WITHOUT OIDS;

INSERT INTO test VALUES (1, '<doc num="C1"><line num="L1"><a>1</a><b>2</b><c>3</c></line><line num="L2"><a>11</a><b>22</b><c>33</c></line></doc>');
INSERT INTO test VALUES (2, '<doc num="C2"><line num="L1"><a>111</a><b>222</b><c>333</c></line><line num="L2"><a>111</a><b>222</b><c>333</c></line></doc>');

If I launch this query:

select * from xpath_table('id','xml','test',
    '/doc/@num|/doc/line/@num|/doc/line/a|/doc/line/b|/doc/line/c','1=1')
AS t(id int4, doc_num varchar(10), line_num varchar(10), val1 int4, val2 int4, val3 int4)
where id = 1
order by doc_num, line_num

I get:

 id | doc_num | line_num | val1 | val2 | val3
----+---------+----------+------+------+------
  1 | C1      | L1       |    1 |    2 |    3
  1 |         | L2       |   11 |   22 |   33

I was expecting doc_num to receive the C1 value twice, just like with a normal SQL join. Regards,

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
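One possible workaround, untested and assuming the xml2 contrib's scalar xpath_string() function is available: fetch the document-level attribute once per row from the base table instead of relying on xpath_table to repeat it down the rows.

```sql
-- Pull doc_num per document via xpath_string(); xpath_table only handles
-- the per-line fields, so no column is left NULL on continuation rows.
SELECT t.id,
       xpath_string(test.xml, '/doc/@num') AS doc_num,
       t.line_num, t.val1, t.val2, t.val3
FROM xpath_table('id','xml','test',
        '/doc/line/@num|/doc/line/a|/doc/line/b|/doc/line/c','1=1')
     AS t(id int4, line_num varchar(10), val1 int4, val2 int4, val3 int4)
JOIN test ON test.id = t.id
WHERE t.id = 1
ORDER BY doc_num, line_num;
```

This trades one extra XPath evaluation per row for join-like semantics on the document attribute.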
[GENERAL] Set-returning function returning a set-returning function?
Hi, (Yes, I re-read the title 5 times, it sounds correct to me! :)) I've got a SRF similar to:

---
CREATE FUNCTION srf_perl (
    IN i_a int4,
    INOUT io_b int4,
    OUT o_c int4
) RETURNS SETOF record AS
$$
@i = ('i_a'); @io = ('io_b'); @o = ('o_c');

$c = 0;
foreach $i (@i) {$input{$i} = @_[$c++]};
foreach $io (@io) {$input{$io} = @_[$c]; $output{$io} = @_[$c++]};
foreach $o (@o) {$output{$o} = @_[$c++]};

$output{'io_b'} = $input{'i_a'} * 2;
$output{'o_c'} = $input{'io_b'} * 3;
return_next \%output;

$output{'io_b'} = $input{'i_a'} * 22;
$output{'o_c'} = $input{'io_b'} * 33;
return_next \%output;

$output{'io_b'} = $input{'i_a'} * 222;
$output{'o_c'} = $input{'io_b'} * 333;
return_next \%output;

return undef;
$$ LANGUAGE 'plperl' VOLATILE;
--

I can call it with select * from srf_perl(1, 2), that's fine. Now I'd like to call srf_perl from another SRF function, which would return the same data. I've tried different things, with no success. The last thing I tried was:

--
CREATE OR REPLACE FUNCTION srf_plpgsql() RETURNS SETOF record AS
$$
DECLARE
    rec RECORD;
BEGIN
    for rec in select * from srf_perl(1,2) loop
        return next rec;
    end loop;
    return;
END;
$$ LANGUAGE 'plpgsql' VOLATILE;
--

I always get the following error:

ERROR: a column definition list is required for functions returning record

Any idea how I can achieve this? Note that what I'm showing here is just an illustration; I agree the second function is nonsense... Thanks!

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
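The error comes from srf_plpgsql() itself returning a bare SETOF record: the caller has to supply the row shape. Two sketches (the second function name is made up for the example):

```sql
-- Option 1: give the column definition list at call time.
SELECT * FROM srf_plpgsql() AS t(io_b int4, o_c int4);

-- Option 2: declare OUT parameters on the wrapper so the row type is
-- known and no definition list is needed at the call site.
CREATE OR REPLACE FUNCTION srf_plpgsql2(OUT io_b int4, OUT o_c int4)
RETURNS SETOF record AS $$
BEGIN
    FOR io_b, o_c IN SELECT * FROM srf_perl(1, 2) LOOP
        RETURN NEXT;   -- with OUT parameters, RETURN NEXT takes no expression
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql VOLATILE;
```

srf_perl(1, 2) itself is callable without a list only because its INOUT/OUT parameters already define its row type; the wrapper needs the same treatment.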
Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi Tom, hi Ludwig, Thanks for your support. Yes, this query has grown very big with time, and I was always asked to add exceptions to it, so the result is quite frightening!

TOM: If you try setting enable_bitmapscan off, you'll probably find 8.1 beating 7.4 handily for this query.

Correct. I had 239 seconds on the old 7.4 server, and I get 189 seconds with enable_bitmapscan = off.

LUDWIG: What about adding an index to the field etapes_lignes_commandes(code_etape)?

I have this index already.

LUDWIG: What about increasing the settings of work_mem?

default work_mem = 1024 => 511 seconds
work_mem = 2048 => 531 seconds

TOM: Am I right in guessing that your database is small enough to fit into RAM on the new server?

select pg_database_size('groupefpdb'); returns 360428168. That's 360 MB. It fits in RAM, correct!

TOM: If so, it would be reasonable to reduce random_page_cost, perhaps all the way to 1.0, and this would probably improve the quality of the planner's choices for you.

With that change I get results in 193 seconds.

TOM: What might work better is to get rid of the indexes w_code_type_workflow and lw_ordre --- do you have any queries where those are actually useful?

Yes, I think I do, but let's try: drop both indexes, VACUUM FREEZE ANALYZE, ANALYZE. I get the results in 199 seconds.

TOM: Another thing you should look at is increasing the cpu-cost parameters. The numbers in your EXPLAIN ANALYZE results suggest that on your new machine the cost of processing an index tuple is about 1/50th of the cost of touching an index page; that is, you ought to have cpu_index_tuple_cost plus cpu_operator_cost around 0.02. I'd try setting each of them to 0.01 and increasing cpu_tuple_cost a little bit, maybe to 0.02.

cpu_index_tuple_cost = 0.01
cpu_operator_cost = 0.01
cpu_tuple_cost = 0.02

With these changes, plus random_page_cost = 1, I get results in 195 seconds. (Yes, I did restart the server!)
The new EXPLAIN ANALYZE at this point is here:

Postgresql 8.1.4 with tuning: http://www.attiksystem.ch/explain_analyze_81_2.txt

The old EXPLAIN ANALYZE results are still here:

Postgresql 7.4.5: http://www.attiksystem.ch/explain_analyze_74.txt
Postgresql 8.1.4 without tuning: http://www.attiksystem.ch/explain_analyze_81.txt

Is there maybe something I could tune further on the kernel side? I get only a 20% improvement with the new server under Linux, compared to the workstation with FreeBSD... Maybe my query is so CPU-bound that the most important thing is the CPU clock speed, and in both cases I have a single 3 GHz CPU to run the query? What do you think? Philippe

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 31 May 2006 18:21
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem

Philippe Lang [EMAIL PROTECTED] writes: Here are both EXPLAIN ANALYZE results, plus the query itself: Postgresql 7.4.5: http://www.attiksystem.ch/explain_analyze_74.txt Postgresql 8.1.4: http://www.attiksystem.ch/explain_analyze_81.txt Query is here: http://www.attiksystem.ch/big_query.txt

My goodness, that is one big ugly query :-( Trawling through the EXPLAIN ANALYZE results, it seems that the lossage comes from the repeated occurrences of this pattern:

SELECT travaux_en_cours_mvu
FROM lignes_workflows AS lw
INNER JOIN workflows AS w ON lw.id_workflow = w.id
WHERE w.code_article = lignes_commandes.code_article
  AND w.code_type_workflow = commandes.code_type_workflow
  AND SUBSTRING(lw.code_etape FROM 1 FOR 3) = SUBSTRING(etapes_lignes_commandes.code_etape FROM 1 FOR 3)
  AND lw.ordre = etapes_lignes_commandes.ordre

7.4 is doing this as

-> Nested Loop (cost=0.00..37.28 rows=1 width=8) (actual time=0.056..0.087 rows=1 loops=13653)
   -> Index Scan using w_code_article on workflows w (cost=0.00..15.76 rows=1 width=4) (actual time=0.016..0.024 rows=1 loops=13653)
      Index Cond: (code_article = $1)
      Filter: (code_type_workflow = $2)
   -> Index Scan using lw_id_workflow on lignes_workflows lw (cost=0.00..21.51 rows=1 width=12) (actual time=0.023..0.036 rows=1 loops=13651)
      Index Cond: (lw.id_workflow = "outer".id)
      Filter: ((substring((code_etape)::text, 1, 3) = substring(($3)::text, 1, 3)) AND (ordre = $4))

8.1 is doing

-> Nested Loop (cost=18.93..26.84 rows=1 width=8) (actual time=0.431..0.434 rows=1 loops=13630)
   -> Bitmap Heap Scan on workflows w (cost=6.63..10.51 rows=1 width=4) (actual time=0.107..0.107 rows=1 loops=13630)
      Recheck Cond: ((code_article = $1) AND (code_type_workflow = $2))
      -> BitmapAnd (cost=6.63..6.63 rows=1 width=0) (actual time=0.104..0.104 rows=0 loops=13630)
         -> Bitmap Index Scan on w_code_article (cost=0.00..2.02 rows=5 width=0) (actual time=0.017..0.017 rows=5 loops=13630
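For reference, the planner settings experimented with in this thread, collected as a postgresql.conf fragment (these are the values tried above on an in-RAM 8.1 database, not general recommendations):

```sql
-- postgresql.conf fragment (8.1); session-level equivalent shown as SET
SET enable_bitmapscan = off;      -- worked around the BitmapAnd regression: 189 s
SET random_page_cost = 1.0;       -- database fits entirely in RAM: 193 s
SET cpu_index_tuple_cost = 0.01;  -- per Tom Lane's index-tuple cost estimate
SET cpu_operator_cost = 0.01;
SET cpu_tuple_cost = 0.02;        -- all cpu-cost tweaks together: 195 s
```

Trying them with SET first makes it easy to compare timings per session before committing anything to postgresql.conf.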
Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi, I made some tests, with and without hyperthreading: with hyperthreading, on a dual processor, top mentions a 25% load, and without, 50%, but computing time is exactly the same. Philippe

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf of Arnaud Lesauvage
Sent: Wednesday, 31 May 2006 16:20
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem

Philippe Lang wrote: Hardware is much more powerful: intel server motherboard, dual-xeon 3GHz, SCSI disks (raid 1), 4GB RAM.

Do you need the hyperthreading? Depending on your case, you might have better results with 2x3GHz than 4x1.5GHz.
--
Arnaud
[GENERAL] Problems posting messages in this mailing-list?
Hi, I've been desperately trying to post a message to this mailing list since 10 o'clock this morning, although everything worked fine yesterday. I get no error indicating my message was rejected, but nothing appears on the list. Can anyone read this message?

--
Philippe Lang, Ing. Dipl. EPFL
Attik System
rte de la Fonderie 2
1700 Fribourg
Switzerland
http://www.attiksystem.ch
Tel: +41 (26) 422 13 75
Fax: +41 (26) 422 13 76
Re: [GENERAL] Problems posting messages in this mailing-list?
This is really strange... Is there anti-spam software running on the mailing-list server? Or maybe my English is too bad for the server... Does anyone have access to the /var/maillog of the mailing-list server?

-----Original Message-----
From: Dave Page [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 31 May 2006 13:43
To: Philippe Lang; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Problems posting messages in this mailing-list?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Philippe Lang
Sent: 31 May 2006 12:33
To: pgsql-general@postgresql.org
Subject: [GENERAL] Problems posting messages in this mailing-list?

Hi, I've been desperately trying to post a message to this mailing list since 10 o'clock this morning, although everything worked fine yesterday. I get no error indicating my message was rejected, but nothing appears on the list. Can anyone read this message?

Yup. Regards, Dave.
[GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi, For an unknown reason, I cannot post this message to the mailing-list! Here it is: http://www.attiksystem.ch/postgresql-general.txt Cheers, Philippe Lang
Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi, I did not mention it in my first email, but I did run ANALYSE before running the query... so the statistics are fine for the planner. Philippe

-----Original Message-----
From: Ludwig Isaac Lim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 31 May 2006 15:52
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem

Hi: Maybe you forgot to run the ANALYZE command afterwards. ludwig. --- Philippe Lang [EMAIL PROTECTED] wrote: Hi, For an unknown reason, I cannot post this message to the mailing-list! Here it is: http://www.attiksystem.ch/postgresql-general.txt Cheers, Philippe Lang
Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi, No, the only change I made in postgresql.conf was to have the postmaster listen on the network, and to change the default datestyle, just like all the other installations. Philippe

-----Original Message-----
From: Ludwig Isaac Lim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 31 May 2006 16:13
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem

Hi: How about the postgresql.conf settings? Did you change them? ludwig. --- Philippe Lang [EMAIL PROTECTED] wrote: Hi, I did not mention it in my first email, but I did run ANALYSE before running the query... so the statistics are fine for the planner. Philippe -----Original Message----- From: Ludwig Isaac Lim [mailto:[EMAIL PROTECTED] Sent: Wednesday, 31 May 2006 15:52 To: Philippe Lang Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem Hi: Maybe you forgot to run the ANALYZE command afterwards. ludwig. --- Philippe Lang [EMAIL PROTECTED] wrote: Hi, For an unknown reason, I cannot post this message to the mailing-list! Here it is: http://www.attiksystem.ch/postgresql-general.txt Cheers, Philippe Lang
Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem
Hi, I have now disabled hyperthreading in /etc/grub.conf (added noht at the end of the kernel lines), rebooted the server, ran ANALYSE on the database again, and launched the query: things are even a little bit worse, it completes in 540 seconds now. So this is no ANALYSE or hyperthreading problem. Here are both EXPLAIN ANALYSE results, plus the query itself:

PostgreSQL 7.4.5: http://www.attiksystem.ch/explain_analyze_74.txt
PostgreSQL 8.1.4: http://www.attiksystem.ch/explain_analyze_81.txt
Query: http://www.attiksystem.ch/big_query.txt

The FreeBSD box, as far as I can remember, has not been specifically tuned. I did recompile a kernel with a few things in it, but really, I did not spend hours on that two years ago. Thanks, Philippe

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 31 May 2006 16:39
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL 7.4 - 8.1 migration performance problem

Philippe Lang [EMAIL PROTECTED] writes: http://www.attiksystem.ch/postgresql-general.txt Please provide EXPLAIN ANALYZE, not just EXPLAIN, output ... and try to post it in an un-line-wrapped form. Also, as Ludwig mentioned, a common gotcha is to forget to ANALYZE your data after moving it to a new database. regards, tom lane
[GENERAL] Which RPM for RH Linux ES 4? PGDB or RH?
Hi, I have to install PGSQL 7.4.13 under RH Linux Enterprise Server 4, and I can apparently use either the rpm postgresql--7.4.13-1PGDG.i686.rpm from the PostgreSQL Development Group, or the rpm postgresql--7.4.13-2.RHEL4.1.i386 from Red Hat itself. What is the best thing to do? Thanks, -- Philippe Lang, Ing. Dipl. EPFL Attik System rte de la Fonderie 2 1700 Fribourg Switzerland http://www.attiksystem.ch Tel: +41 (26) 422 13 75 Fax: +41 (26) 422 13 76
Re: [GENERAL] Which RPM for RH Linux ES 4? PGDB or RH?
Hi, I only run RHEL because of the Progress database running on this server. Technical support for Progress is only available when servers run RHEL, so that's why we are using this OS. But if I had the choice, I would have installed FreeBSD there... Alan, you are right, it's maybe time to try migrating from PostgreSQL 7 to 8. I'll try that... Thanks, Philippe

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 31 May 2006 05:42
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Which RPM for RH Linux ES 4? PGDB or RH?

Philippe Lang [EMAIL PROTECTED] writes: I have to install PGSQL 7.4.13 under RH Linux Enterprise Server 4, and I can apparently use either the rpm postgresql--7.4.13-1PGDG.i686.rpm from the PostgreSQL Development Group, or the rpm postgresql--7.4.13-2.RHEL4.1.i386 from Red Hat itself. What is the best to do? They are the same thing to within measurement error ;-). Or at least, if you find an important difference, feel free to tell off Devrim or me respectively. But I agree with Alan's point: if you are running RHEL at all, it's probably because you want Red Hat support, and Red Hat won't support RPMs not built by Red Hat. regards, tom lane
[GENERAL] PL/Ruby, INOUT parameters / SETOF record
Hi, I am testing PL/Perl under PostgreSQL 8.1.3, and I have the following test procedure:

CREATE FUNCTION perl_test(a inout integer, b inout integer, r1 out integer, r2 out integer)
RETURNS SETOF record AS '
    my ($a, $b) = @_;
    my $r1 = $a + $b;
    my $r2 = $a * $b;
    return_next({a => $a, b => $b, r1 => $r1, r2 => $r2});
    return_next({a => $b, b => $a, r1 => $r1, r2 => $r2});
    return undef;
' LANGUAGE plperl;

How would you translate that into PL/Ruby? I have a link to http://moulon.inra.fr/ruby/plruby.html, which is apparently the only documentation, and I find it quite hard to do the translation. Thanks! -- Philippe Lang, Ing. Dipl. EPFL Attik System rte de la Fonderie 2 1700 Fribourg Switzerland http://www.attiksystem.ch Tel: +41 (26) 422 13 75 Fax: +41 (26) 422 13 76
[GENERAL] PG 8.1 on Dual XEON with FreeBSD 6.0
Hi, The FreeBSD 6.0 SMP kernel recognizes my two Xeons as 4 CPUs. A single PostgreSQL query, as I can see in the top utility, uses a maximum of 25% of CPU time, since it runs on one single virtual CPU, which means for me half of a Xeon. Is that correct? If yes, is there a way to change that and accelerate long queries (which are all CPU-bound) by giving them more processing time, like a full Xeon? Both Xeons? Should I disable hyperthreading for that? Thanks, -- Philippe Lang Attik System rte de la Fonderie 2 1700 Fribourg Switzerland http://www.attiksystem.ch Tel: +41 (26) 422 13 75 Fax: +41 (26) 422 13 76 GSM: +41 (79) 351 49 94 Email: [EMAIL PROTECTED] Skype: philippe.lang
Re: [GENERAL] Trigger disactivation and SELECT WAITING
Thanks Tom, thanks Janning. I find triggers very convenient for doing different tasks in the database, and these tasks go far beyond what can be done with rules, Janning. When a line is inserted in an order, the insert trigger automatically inserts data in a subtable of the order line, for example. In this subtable there is information regarding the planning of the order. People can use the GUI to populate the order, but things won't break if the user opens the database table directly, which can happen sometimes. Without the trigger, an insert function would have to be triggered by a click each time an order line is added, which is not very effective from a user-experience point of view, I think. Or it would require a lot of client-side coding. Now the use of a trigger has a drawback: when you want to duplicate an order, for example. During the duplication function, I would like to disable the trigger, in order to make a copy of the order, the order lines, and the order line subtable data. This is much easier than keeping the trigger and having to delete the default data it inserts in the new order. I'm not sure how I can improve the trigger in this case and make it smarter, so that I don't have to disable it during duplication... I hope I was clear... Philippe

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 26 July 2005 19:57
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trigger disactivation and SELECT WAITING

Philippe Lang [EMAIL PROTECTED] writes: I meant: in 7.4.X databases, is there a way of disabling a trigger without deleting it? I guess the answer is no. Nothing officially supported, anyway. There's a pg_trigger.tgenabled column but I'm not sure which operations pay attention to it. That's what my plpgsql insert function does, and because of this, if a view is running at the same moment on the same tables (some views can take up to 2 hours to be calculated), the insert function gets stuck in a SELECT WAITING state.
So insertions are impossible in the database when views are being calculated. I guess I question a database design in which you routinely have to drop triggers in order to get your work done. Why have the trigger at all if you do so many changes to the table with it deleted? Why not improve the trigger to be smart enough to not interfere with what you need the insertion function to do? regards, tom lane

-----Original Message-----
From: Janning Vygen [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 26 July 2005 17:39
To: pgsql-general@postgresql.org
Cc: Philippe Lang
Subject: Re: [GENERAL] Trigger disactivation and SELECT WAITING

On Tuesday, 26 July 2005 16:07, Philippe Lang wrote: Hi, I meant: in 7.4.X databases, is there a way of disabling a trigger without deleting it? I guess the answer is no. That's what my plpgsql insert function does, and because of this, if a view is running at the same moment on the same tables (some views can take up to 2 hours to be calculated), the insert function gets stuck in a SELECT WAITING state. So insertions are impossible in the database when views are being calculated. I guess you should rethink your database design. Disabling triggers is convenient if you populate a database or do bulk inserts, but you shouldn't disable them in a production database. In my experience rules are much more powerful and faster than triggers, but on the other hand much more difficult. Triggers are procedural: they fire on every inserted row. A rule is relational instead. If you use a rule you have only one more statement on insert, even if you insert lots of data. On the other hand rules are not called by COPY statements. And some things can't be done with rules. The waiting state is ok, because other transactions just cannot know whether you will commit your changes to the trigger or not. And I don't know what you mean with a view running for 2 hours; I guess you have some functionality to build so-called materialized views, right?
If you give me some more information about what you are really doing, I can help you. As your mail is .ch, you might prefer German and can contact me via personal mail. kind regards, janning
[GENERAL] Trigger disactivation and SELECT WAITING
Hi, I have a database with views that can take up to 2 hours to be calculated. During that time, it's not possible to run a function that inserts data into the database, apparently because this function deactivates a trigger while it runs, by deleting the trigger and creating it again at the end. (At least in 7.4.X databases, this is the only solution, right?) Running ps -afxu shows that the process that tries to insert data gets stuck in the SELECT WAITING state. Is there a solution to this? Thanks! -- Philippe Lang Attik System
Re: [GENERAL] Trigger disactivation and SELECT WAITING
Hi, I meant: in 7.4.X databases, is there a way of disabling a trigger without deleting it? I guess the answer is no. That's what my plpgsql insert function does, and because of this, if a view is running at the same moment on the same tables (some views can take up to 2 hours to be calculated), the insert function gets stuck in a SELECT WAITING state. So insertions are impossible in the database when views are being calculated. Regards,

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 26 July 2005 15:14
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Trigger disactivation and SELECT WAITING

Philippe Lang [EMAIL PROTECTED] writes: I have a database with views that can take up to 2 hours to be calculated. During that time, it's not possible to run a function that inserts data into the database, apparently because this function deactivates a trigger while it runs, by deleting the trigger and creating it again at the end. (At least in 7.4.X databases, this is the only solution, right?) Only solution to what? Why in the world would a view fool around with removing triggers? regards, tom lane
MD5 computation (was: RE: [GENERAL] For Tom Lane)
Hi, What is the best way to calculate an MD5 sum for a set of rows in a table, on a PostgreSQL server?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
Sent: Tuesday, 31 May 2005 18:37
To: [EMAIL PROTECTED]
Cc: pgsql-general
Subject: Re: [GENERAL] For Tom Lane

On Fri, 2005-05-27 at 09:57, [EMAIL PROTECTED] wrote: Thanks for the answer, Tom. Consider what happens when the user leaves for lunch. Well, I've already thought about it. But I'm working with VS2003 and disconnected datasets... so when the user edits data he's modifying an old disconnected row, while the real updated row is in the database... So my strategy would be (as I already wrote): 1. refresh data by recalling the current row from the database into the form's fields 2. lock the row 3. update modified data in the database through a stored procedure (function) 4. commit and unlock the row. Have you another idea that could work better with disconnected objects? While this ensures that the update is atomic, it doesn't ensure that no one else is trying to edit it at the same time. What you might want to do is either optimistically lock it, or use application-level locking. To use optimistic locking, you'll need to do something like make an md5 of all the fields being edited, then, right before you write back the data, check whether the md5 you created at the beginning still matches by re-reading the data and md5ing it again. If it doesn't match, then you can throw a mid-air collision error, so to speak, and tell them that the record changed underneath them, or do some kind of merging, or whatnot. If you want to do application-level locking, then create a field and use that for locks. Just make it a timestamp field and put in the current time value when the lock is taken. When the predetermined timeout occurs, the user lock is removed by the next person to access it, or offer them the chance to, or email the original locker, etc... Handle it the way you want or need to.
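The md5 approach Scott describes (fingerprint the fields when the form loads, re-read and re-fingerprint just before writing back, refuse the write on a mismatch) can be sketched in Python. This is only an illustration of the idea with made-up row data, not actual PostgreSQL server-side code; the separators and the `row_fingerprint` helper are my own invention:

```python
import hashlib

def row_fingerprint(rows):
    """MD5-fingerprint an ordered set of rows.

    Each row is a tuple of column values; values are joined with a
    field separator so that ("ab", "c") and ("a", "bc") hash
    differently, and a record separator is added between rows.
    """
    h = hashlib.md5()
    for row in rows:
        h.update("\x1f".join(str(v) for v in row).encode("utf-8"))
        h.update(b"\x1e")  # record separator between rows
    return h.hexdigest()

# Made-up rows standing in for the record a user loaded into a form:
loaded = [(1, "Martelli", "Hélène"), (2, "Etat du Valais", "")]
current = [(1, "Martelli", "Helene"), (2, "Etat du Valais", "")]

# Any edited field changes the fingerprint -> mid-air collision detected.
assert row_fingerprint(loaded) != row_fingerprint(current)
```

On the server one would typically fingerprint a concatenation of the row's columns instead; the comparison logic stays the same.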
[GENERAL] Visual Studio 2005 .Net - Npgsql - Postgresql
Hi, I've been playing with Visual Studio 2005 beta 2 recently. I'm evaluating it as a RAD tool for developing GUI front-ends to PostgreSQL. RAD means for me something similar to Delphi: you configure a database connection, drop a few objects on a form, controls, you bind the controls to the datasource, and you're finished. No glue code or whatever. In Visual Studio 2005, that's something I could achieve with a Microsoft database, Access or SQL Server. When you select one of these databases, code is generated that works, really, no problem. Although Npgsql seems to be a very nice library for accessing PostgreSQL, I was not able to do that. Adding a database to the project is fine for MS Access / SQL Server / Oracle databases, but I'm still searching how to do it for PostgreSQL. So my question is: is it reasonable to expect as much integration of PostgreSQL in Visual Studio as with the other databases? Does anyone work under Visual Studio as easily with PostgreSQL as with MS Access / SQL Server? --- Philippe
[GENERAL] Postgresql .NET Data Provider for Visual Studio 2005 beta
Hi, Does anyone successfully use a .NET data provider for PostgreSQL? I was able to find two of them: the npgsql open-source driver, and the crlab provider (which does not install correctly for me). Any recommendation? Thanks -- Philippe Lang
Re: [GENERAL] Postgresql .NET Data Provider for Visual Studio 2005
Hi, Is there a way to install the npgsql driver under Visual Studio so that it appears as a set of visual controls we can drop on a form, or is it a code library only?

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Howard Cole
Sent: Monday, 23 May 2005 15:27
To: 'PgSql General'
Subject: Re: [GENERAL] Postgresql .NET Data Provider for Visual Studio 2005

I have been using the npgsql driver on both .NET and Mono for over 6 months now with no problems. Howard Cole www.selestial.com Philippe Lang wrote: Does anyone successfully use a .NET data provider for PostgreSQL? I was able to find two of them:
Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking
Hi, Yes, I'm sure this kind of feature is application-based. There is nothing in the driver itself. If I'm not wrong, MS Access uses a timestamp column to check whether the record was updated in the meantime, or the column values if no timestamp is available. This might be true only with an MS SQL database, though. I'll try doing something similar in a Delphi event handler. Thanks, bye Philippe

-----Original Message-----
From: Tony Caduto [mailto:[EMAIL PROTECTED]
Sent: Thursday, 12 May 2005 21:07
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking

Why not just add an OnEnter event handler to the form's field in question and check the data before they edit it? If it's different, update the field with the most current data. Access probably just does something similar under the hood for you. I don't think that's a feature of the ODBC driver, or is it? If it is, you could always use the ODBC driver from Delphi as well. With Delphi/BDE/ODBC, this is different: as soon as you try updating a field that has been modified by someone else in the meantime, the field is automatically updated for you before you start making your own changes, and of course before you try to commit them. That's fine too. I would have preferred an error personally. Is there a way to do the same with ZEOS? Or maybe is there another mechanism that could be used to do optimistic locking? Some kind of long transactions, in the database server? Philippe Lang
Re: [GENERAL] MS-Access and Stored procedures
Hi, You can use pass-through queries with parameters. You have to edit the pass-through querydef at run-time before opening it, and it works. That's fine if you want to use this query as a datasource for a form or a report.

Sub search_store(query As String, p As String)
    On Error GoTo search_storeError
    Dim MyDatabase As DAO.Database
    Dim MyQueryDef As DAO.QueryDef
    cmdSourisSablier
    Set MyDatabase = CurrentDb()
    If (QueryExists(query)) Then MyDatabase.QueryDefs.Delete query
    Set MyQueryDef = MyDatabase.CreateQueryDef(query)
    MyQueryDef.Connect = "ODBC;DSN=" & global_dsn_name() & ";"
    MyQueryDef.SQL = "SELECT * FROM public." & query & "('" & p & "');"
    MyQueryDef.ReturnsRecords = True
    MyQueryDef.Close
    Set MyQueryDef = Nothing
    MyDatabase.Close
    Set MyDatabase = Nothing
search_storeExit:
    cmdSourisNormal
    Exit Sub
search_storeError:
    MsgBox "Error in search_store."
    Resume search_storeExit
End Sub

Regarding DAO/ADO, I suggest you have a look at performance. The fastest way for me to call PG functions was to use DAO, which is a bit obsolete, I agree. But there was an initial overhead with ADO that made me use DAO instead. Since I put all the logic on the server, this is only glue code, so using DAO is not a problem, even if ADO is supposed to be the future... If you put logic on the client, that's another problem maybe. Philippe Lang

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic
Sent: Friday, 13 May 2005 00:07
To: Hervé Inisan; pgsql-general@postgresql.org
Subject: Re: [GENERAL] MS-Access and Stored procedures
Importance: High

I was using the ADO command object, and both the refresh method and the method of creating a parameter object, while working with an Access project... but I didn't try to use it with PostgreSQL... I would rather like to have all queries on the client side anyway. Therefore I use pass-through queries. But they don't allow parameters (except by concatenation).
Also, you can't base subforms on pass-through queries, so now I use a strange combination of local tables, append queries with parameters based on pass-through queries, etc. It works, but I'm aware that it is not very clever :)... I think that it would be great if pass-through queries could accept parameters. That would be a powerful way of executing queries on the client while keeping all the code on the front-end side... But I doubt that Microsoft will work on improving Access any further. It seems that Access is left behind while VS.NET is the top technology. Too bad... Is there any good book covering MS Access usage as a front-end for different database servers other than MSDE? Do you have form/subform/subform... based on stored procedures? If so, how do you synchronize a form with a subform? Greetings, Zlatko

----- Original Message -----
From: Hervé Inisan [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Thursday, May 12, 2005 11:06 PM
Subject: Re: [GENERAL] MS-Access and Stored procedures

Hello... This is very interesting. I have also asked myself how to prepare and execute stored procedures on PostgreSQL from MS Access. Could you please give some example of a PostgreSQL function with parameters that is executed as a stored procedure from MS Access? How would you pass parameters? Using the ADO Command object? AFAIK, there are 2 ways to send parameters from Access to a PG function using ADO:

1. Write the parameters into the CommandText string:

Set cmd = New ADODB.Command
cmd.ActiveConnection = cnn
cmd.CommandText = "mypgfunction('this is a parameter', 25)"
cmd.CommandType = adCmdStoredProc
cmd.Execute
Set cmd = Nothing

The CommandText string can be the result of a concatenation:

cmd.CommandText = "mypgfunction('" & strMyString & "', " & intMyValue & ")"

2.
Another way is to use true ADO parameters:

Set cmd = New ADODB.Command
cmd.ActiveConnection = cnn
cmd.CommandText = "mypgfunction"
cmd.CommandType = adCmdStoredProc
Dim prm1 As ADODB.Parameter
Set prm1 = New ADODB.Parameter
With prm1
    .Type = adVarChar
    .Direction = adParamInput
    .Value = "another string sent to PG"
    .Name = "param1"
    .Size = 30
End With
Dim prm2 As ADODB.Parameter
Set prm2 = New ADODB.Parameter
With prm2
    .Type = adInteger
    .Direction = adParamInput
    .Value = 25
    .Name = "param2"
    .Size = 0
End With
cmd.Parameters.Append prm1
cmd.Parameters.Append prm2
cmd.Execute
Set cmd = Nothing

Voilà! -- Hervé Inisan, www.self-access.com
[GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking
Hi, I've been testing Delphi 2005 with PostgreSQL 7.4.5, through ZEOS Lib 6.5.1, and I have a question: how do you implement an optimistic locking strategy with these tools? With an Access front-end and the ODBC driver, this is completely transparent. A test showed me that the Delphi client writes to the database without worrying about another user doing so in the meantime... I saw it's possible to manipulate the isolation level (read committed or serializable only) in the ZEOS controls, but it does not help at all here. An optimistic lock is a kind of long transaction for me. Thanks for your time! Philippe

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Arthur Hoogervorst
Sent: Monday, 9 May 2005 12:46
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adventures in Quest for GUI RAD

Hi, The company I work for actually uses the Zeos lib/Postgres extensively to track the shipping and sales side, for almost 3 years. We're still running on a 7.2/7.4 Postgres database, because I haven't been convinced yet to either update or upgrade to 8.x.x. I'm curious if others have successfully moved their (production) database to Postgres 8.0. Regards, Arthur

On 5/9/05, Philippe Lang [EMAIL PROTECTED] wrote: Hi, I'm testing Delphi 2005 at the moment, with ZEOS Lib (libpq), and I have to say it works fine, as Tony mentioned. I have a few questions: 1) I'm curious: are there a lot of big projects using ZEOS with PG, or is that technology still relatively new? I would like to use it as a replacement for ODBC, but I have no experience regarding its stability.
Re: [GENERAL] MS-Access and Stored procedures
Hi, 1) The simplest way to call a function from MS Access is to use a pass-through query, like: SELECT * FROM public.search_article(); 2) If the parameters are dynamic, that's more complicated. You have to edit the query at run-time, with this kind of code:

Sub search_store(query As String, p As String)
    On Error GoTo search_storeError
    Dim MyDatabase As DAO.Database
    Dim MyQueryDef As DAO.QueryDef
    cmdSourisSablier
    Set MyDatabase = CurrentDb()
    If (QueryExists(query)) Then MyDatabase.QueryDefs.Delete query
    Set MyQueryDef = MyDatabase.CreateQueryDef(query)
    MyQueryDef.Connect = "ODBC;DSN=" & global_dsn_name() & ";"
    MyQueryDef.SQL = "SELECT * FROM public." & query & "('" & p & "');"
    MyQueryDef.ReturnsRecords = True
    MyQueryDef.Close
    Set MyQueryDef = Nothing
    MyDatabase.Close
    Set MyDatabase = Nothing
search_storeExit:
    cmdSourisNormal
    Exit Sub
search_storeError:
    MsgBox "Error in search_store."
    Resume search_storeExit
End Sub

That's fine if your query is linked to a report, for example. 3) You can also call a function from code without using a pass-through query, just to retrieve a result:

Function charge_disponible_semaine(code_etape As String, semaine As Integer, année As Integer) As Double
    On Error GoTo charge_disponible_semaineError
    Dim MyWorkspace As DAO.Workspace
    Dim MyConnection As DAO.Connection
    Dim MyRecordset As DAO.Recordset
    Dim MySQLString As String
    Dim MyODBCConnectString As String
    Dim query As String
    query = "charge_disponible_semaine"
    Set MyWorkspace = CreateWorkspace("ODBCWorkspace", "", "", dbUseODBC)
    MyODBCConnectString = "ODBC;DSN=" & global_dsn_name() & ";"
    Set MyConnection = MyWorkspace.OpenConnection("Connection1", dbDriverNoPrompt, , MyODBCConnectString)
    MySQLString = "SELECT * FROM public." & query & "('" & code_etape & "', " & semaine & ", " & année & ");"
    Set MyRecordset = MyConnection.OpenRecordset(MySQLString, dbOpenDynamic)
    With MyRecordset
        If Not .EOF Then
            charge_disponible_semaine = MyRecordset("charge_disponible_semaine")
        Else
            charge_disponible_semaine = 0
        End If
    End With
    MyRecordset.Close
    Set MyRecordset = Nothing
    MyConnection.Close
    Set MyConnection = Nothing
    MyWorkspace.Close
    Set MyWorkspace = Nothing
charge_disponible_semaineExit:
    Exit Function
charge_disponible_semaineError:
    MsgBox "Error in charge_disponible_semaine."
    Resume charge_disponible_semaineExit
End Function

I hope this helps. One or two utility functions are needed:

Public Function global_dsn_name() As String
    global_dsn_name = "you_dsn_name"
End Function

Public Function QueryExists(QueryName As String) As Boolean
    On Error Resume Next
    QueryExists = IsObject(CurrentDb().QueryDefs(QueryName))
End Function

Philippe Lang

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Ets ROLLAND
Sent: Thursday, 12 May 2005 17:28
To: pgsql-general@postgresql.org
Subject: [GENERAL] MS-Access and Stored procedures

Hello! How can I use stored procedures (functions) with MS-Access 2002 connected to PostgreSQL 8.0? Best regards. Luc
Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking
Hi, Thanks for your answer, but I'm not sure we are talking about the same thing. What I was used to with MS Access/ODBC/PostgreSQL, in a multiuser network scenario, is the following:
- User A loads data in a form.
- User B loads the same data in a form.
- Before user A makes changes, B makes his changes and commits them.
- User A makes changes and tries to commit them, but... the software shouts, because meanwhile someone else made changes to the data loaded in the form.
That's what I call optimistic locking. Between each step, you can easily imagine a duration of several minutes, so from my point of view, this is not a feature that has to be implemented with things like isolation-level database locking. Those transactions have to be as short as possible, right? Until now, I was not able to reproduce optimistic locking with Delphi/ZEOS. With Delphi/BDE/ODBC, this is different: as soon as you try updating a field that has been modified by someone else in the meantime, the field is automatically updated for you before you start making your own changes, and of course before you try to commit them. That's fine too. I would have preferred an error personally. Is there a way to do the same with ZEOS? Or maybe is there another mechanism that could be used to do optimistic locking? Some kind of long transactions, in the database server? Philippe Lang

-----Original Message-----
From: Tony Caduto [mailto:[EMAIL PROTECTED]
Sent: Thursday, 12 May 2005 18:43
To: Philippe Lang
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking

Hi Philippe, It still works the same way as the ODBC driver, because of PostgreSQL's multi-version concurrency. Zeos uses libpq exactly like the ODBC driver does, except it talks directly to libpq without the overhead of ODBC, and all you have to deploy with your app is the super small libpq.dll. You really never have to worry about locks.
However, if you want to do a bunch of commands in the context of a long transaction, you need to pick one of the isolation levels, like read committed, and then in your code do something like this:

with myconnection do
begin
  Myconnection.connection.StartTransaction;
  try
    sql.add('insert into sometable (field1) VALUES (''bla'')');
    execsql;
    // do some more operations in the same transaction
    sql.clear;
    sql.add('select * from sometable');
    open;
    Myconnection.commit;
  except
    // if an error occurs, roll back everything we did in the transaction
    Myconnection.connection.Rollback;
  end;
end;

Whenever I use Zeos I always set the isolation level to tiNone and let the server handle the transactions. When you use tiNone you simply do all your statements in one operation: just do a bunch of adds, separate each statement with a semicolon, then do the execsql. All the statements will be executed in a single transaction by the server, and if an error occurs they all get rolled back. Hope this helps you out. -- Tony Caduto AM Software Design Home of PG Lightning Admin for Postgresql 8.x http://www.amsoftwaredesign.com

Philippe Lang wrote: Hi, I've been testing Delphi 2005 with PostgreSQL 7.4.5, through ZEOS Lib 6.5.1, and I have a question: how do you implement an optimistic locking strategy with these tools? With an Access front-end and the ODBC driver, this is completely transparent. A test showed me that the Delphi client writes to the database without worrying about another user doing so in the meantime... I saw it's possible to manipulate the isolation level (read committed or serializable only) in the ZEOS controls, but it does not help at all here. An optimistic lock is a kind of long transaction for me. Thanks for your time! Philippe
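For what it's worth, the mid-air-collision behaviour described in this thread can also be built by hand with a version (or timestamp) column, independently of the driver: the UPDATE carries the version the client read when it loaded the form, and an affected-row count of zero means somebody else committed first. A minimal sketch of that pattern using Python's built-in sqlite3 module (the table, column names, and `save` helper are invented for the example; the same WHERE trick works over libpq or ODBC):

```python
import sqlite3

# Hypothetical table standing in for the record behind the form.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)"
)
conn.execute("INSERT INTO orders VALUES (1, 'draft', 0)")

def save(conn, row_id, new_name, seen_version):
    """Optimistic write: succeeds only if the row still has the version
    the client saw when it loaded the data."""
    cur = conn.execute(
        "UPDATE orders SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, row_id, seen_version),
    )
    return cur.rowcount == 1  # 0 rows touched => someone committed first

# Users A and B both load the row at version 0.
assert save(conn, 1, "edited by B", 0)      # B commits first
assert not save(conn, 1, "edited by A", 0)  # A's stale write is refused
```

On failure the application can re-read the row and show the "record changed underneath you" error Philippe asks for, instead of silently refreshing the field.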
Re: [GENERAL] Adventures in Quest for GUI RAD
Hi,

I'm testing Delphi 2005 at the moment, with ZEOS Lib (libpq), and I have to say it works fine, as Tony mentioned. I have a few questions:

1) I'm curious: are there a lot of big projects using ZEOS with PG, or is that technology still relatively new? I would like to use it as a replacement for ODBC, but I have no experience regarding its stability.

2) Is it possible to link Crystal Reports, the integrated reporting tool of Delphi 2005, to PG through ZEOS? I couldn't find how to do it...

3) ZEOS has support for the PG 7.4 protocol. Correct me if I'm wrong, but the PG 8.0 protocol is exactly the same, right?

4) Are there known limitations we should be aware of before building a PG GUI with Delphi 2005 / ZEOS 6.5.1? Like incompatible tools or things like that...

Thanks for your time!
Philippe

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of Tony Caduto
Sent: Friday, 6 May 2005 00:48
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Adventures in Quest for GUI RAD

Programmers Paradise has Delphi in their catalogs... If you call Borland and tell them you want to do a competitive upgrade from the old VB you are using, they will let you buy the upgrade to Delphi 2005 PRO. You have to ask.

Like I said before, you get what you pay for, and in all this time you have been fooling around you could have just bought Delphi. All the time you spent researching is actually costing someone money.

Not sure what you mean by this:

"And several years ago, I had some experiences with people using Delphi, and their database server, that make me not so enthusiastic to get involved... but I did want to give credit to Tony that his arguments for using Delphi are convincing, if it were readily available. And I do thank him for all his time and efforts to advise me."

Delphi has been superior to VB, Access etc. since 1995, when version 1 came out.
Version 1 had features that MS has only recently included in .NET. I am just giving you good advice; I have been a developer for a long time and have used VB, C++, Assembly, Access etc., and when I say Delphi is the best tool for creating win32 database apps, I mean it. If Access or C# was better, I would for sure be using it.

Also, I have had good luck with Microolap; I use their PostgresDAC components, and guess what? That MySQL RAD thing they sell is created with... yep, you guessed it: Delphi.

--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql 8.x
http://www.amsoftwaredesign.com
[GENERAL] PL/PERL: raise notice, exception ?
Hi,

Is there, in PL/Perl under PG 8.0.1, an equivalent of the RAISE NOTICE and RAISE EXCEPTION commands of PL/pgSQL?

Philippe
[GENERAL] PL/PERL with composite return values PGSQL 7.4?
Hi,

The documentation mentions that PostgreSQL 8 supports a version of PL/Perl with composite return values. Is there a way to install this new version of PL/Perl on an old 7.4 database, or is it absolutely necessary to upgrade?

Thanks
Philippe Lang
Re: [GENERAL] ms access app ?
Hello,

Yes, you can do that, programmatically. Here is some DAO code for your Access project:

Dim strTblName As String
Dim strConn As String
Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim login As DAO.Recordset
Dim tbl As DAO.TableDef
Dim strDSN As String

Set db = CurrentDb
Set login = db.OpenRecordset("select * from tblLogin")
Set rs = db.OpenRecordset("select * from tblODBCDataSources")

While Not rs.EOF
    strTblName = rs("LocalTableName")
    strConn = "ODBC;"
    strConn = strConn & "DSN=your_global_dsn_name;"
    strConn = strConn & "APP=Microsoft Access;"
    strConn = strConn & "DATABASE=" & login("Database") & ";"
    strConn = strConn & "UID=" & login("UID") & ";"
    strConn = strConn & "PWD=" & login("PWD") & ";"
    strConn = strConn & "TABLE=" & rs("ODBCTableName")

    If (DoesTblExist(strTblName) = False) Then
        Set tbl = db.CreateTableDef(strTblName, _
            dbAttachSavePWD, rs("ODBCTableName"), _
            strConn)
        db.TableDefs.Append tbl
    Else
        Set tbl = db.TableDefs(strTblName)
        tbl.Connect = strConn
        tbl.RefreshLink
    End If
    rs.MoveNext
Wend

Where:

1) tblLogin is a local table with the definition:
   UID      Text
   PWD      Text
   Database Text
   Server   Text

2) tblODBCDataSources is a local table with the definition:
   ODBCTableName  Text
   LocalTableName Text

3) Function DoesTblExist(strTblName As String) As Boolean
       On Error Resume Next
       Dim db As DAO.Database
       Dim tbl As DAO.TableDef
       Set db = CurrentDb
       Set tbl = db.TableDefs(strTblName)
       If Err.Number = 3265 Then ' Item not found.
           DoesTblExist = False
           Exit Function
       End If
       DoesTblExist = True
   End Function

4) your_global_dsn_name refers to your project's DSN name.

With the table tblODBCDataSources, you can choose the local table name.

I hope this helps.

Philippe Lang

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On behalf of [EMAIL PROTECTED]
Sent: Tuesday, 26 October 2004 12:20
To: [EMAIL PROTECTED]
Subject: [GENERAL] ms access app ?

Hi,

we have a large MS Access application (as an .mde file) which is connected to an Oracle database. Now we want to migrate to PostgreSQL.
Database migration has been done successfully, but when starting the Access application we get the following error: "Cannot find table ... on database". I manually connected to Postgres via Access and found out that PostgreSQL provides every table with the full qualifier, which means that while Access is looking for a table (e.g. mytable), PostgreSQL provides only a table (myschema.mytable).

Is there a setting which prevents MS Access from asking for an unqualified name, or, the other way around, is there a setting which disables the fully-qualified name being shown to Access when the table owner is connected to PostgreSQL?

Thanks in advance
tom
Re: [GENERAL] FKs and deadlocks
Hello,

I got a deadlock in my database this morning. This time it happened in an UPDATE, but sometimes it's in an INSERT, or during a transaction too. Here is what I could gather before killing the offending processes:

ps -afx:

 7075 ??  I       0:00.72 postmaster: jldousse groupefpdb 172.17.10.37 idle (postgres)
 7448 ??  I       0:00.01 postmaster: ldupuis groupefpdb 172.18.10.248 idle (postgres)
 8756 ??  I       0:00.48 postmaster: lbeselga groupefpdb 172.17.10.30 idle (postgres)
 9034 ??  I       0:00.01 postmaster: ybastide groupefpdb 172.18.10.249 idle (postgres)
 9141 ??  I       0:00.24 postmaster: jdcurrat groupefpdb 172.18.10.253 idle (postgres)
10407 ??  I       0:00.90 postmaster: cdunand groupefpdb 172.18.10.245 idle (postgres)
11346 ??  R     236:43.23 postmaster: jlroubaty groupefpdb 172.17.10.14 UPDATE (postgres)
11439 ??  S       0:00.27 postmaster: jlroubaty groupefpdb 172.17.10.14 UPDATE waiting (postgres)
12345 ??  I       0:00.03 postmaster: jlroubaty groupefpdb 172.17.10.14 UPDATE waiting (postgres)
12397 ??  I       0:00.17 postmaster: jlroubaty groupefpdb 172.17.10.14 UPDATE waiting (postgres)
13167 ??  I       0:00.24 postmaster: lbielmann groupefpdb 172.17.10.29 idle (postgres)
13440 ??  I       0:00.07 postmaster: jmjordil groupefpdb 172.18.10.243 idle (postgres)
13668 ??  I       0:00.09 postmaster: candrey groupefpdb 172.17.10.43 idle (postgres)
13973 ??  I       0:00.63 postmaster: paruozzi groupefpdb 172.17.10.46 idle (postgres)
14059 ??  I       0:00.07 postmaster: pgsql groupefpdb 10.1.1.4 idle (postgres)
14073 ??  I       0:00.04 postmaster: hbourguet groupefpdb 172.17.10.23 idle (postgres)
14311 ??  S       0:00.27 postmaster: jmrisse groupefpdb 172.17.10.12 idle (postgres)
14339 ??  I       0:00.01 postmaster: nschroeter groupefpdb 172.17.10.3 idle (postgres)
14381 ??  I       0:00.14 postmaster: pgsql groupefpdb 10.1.1.4 idle (postgres)
14385 ??  I       0:00.01 postmaster: pgsql groupefpdb 10.1.1.4 idle (postgres)
97763 ??  I       0:00.13 postmaster: nbussard groupefpdb 172.18.10.252 idle (postgres)

SELECT * FROM pg_locks;

 relation | database | transaction |  pid  |       mode       | granted
----------+----------+-------------+-------+------------------+---------
          |          |     6489299 | 12345 | ShareLock        | f
    12610 |          |             | 11346 | AccessShareLock  | t
          |          |     6489299 | 11346 | ExclusiveLock    | t   -- jlroubaty
    12600 |          |             | 11346 | AccessShareLock  | t
    33308 |    32920 |             | 11346 | AccessShareLock  | t
    16759 |    32920 |             | 14385 | AccessShareLock  | t
    33044 |    32920 |             | 12397 | AccessShareLock  | t
    33044 |    32920 |             | 12397 | RowExclusiveLock | t   -- jlroubaty
    33211 |    32920 |             | 11346 | AccessShareLock  | t
    32939 |    32920 |             | 11346 | AccessShareLock  | t
    33044 |    32920 |             | 11346 | AccessShareLock  | t
    33044 |    32920 |             | 11346 | RowExclusiveLock | t   -- jlroubaty
    33308 |    32920 |             | 12397 | AccessShareLock  | t
    33308 |    32920 |             | 11439 | AccessShareLock  | t
          |          |     6489299 | 11439 | ShareLock        | f
    33044 |    32920 |             | 12345 | AccessShareLock  | t
    33044 |    32920 |             | 12345 | RowExclusiveLock | t   -- jlroubaty
          |          |     6489299 | 12397 | ShareLock        | f
    32937 |    32920 |             | 11346 | AccessShareLock  | t
    33044 |    32920 |             | 11439 | AccessShareLock  | t
    33044 |    32920 |             | 11439 | RowExclusiveLock | t   -- jlroubaty
          |          |     6514392 | 14385 | ExclusiveLock    | t   -- pgsql
          |          |     6495858 | 11439 | ExclusiveLock    | t   -- jlroubaty
    33018 |    32920 |             | 11346 | AccessShareLock  | t
          |          |     6496304 | 12345 | ExclusiveLock    | t   -- jlroubaty
    33308 |    32920 |             | 12345 | AccessShareLock  | t
          |          |     6500291 | 12397 | ExclusiveLock    | t   -- jlroubaty

Apparently, a user has locked himself in the database. All ExclusiveLock and RowExclusiveLock entries are linked to the user jlroubaty, except one, which is pgsql. The pgsql username is sometimes used for statistics from Excel: an Excel sheet is linked through ODBC to a view which has several joins.

I had a look at the pg_class table, and found a relfilenode of 33044, the OID mentioned in the locks. This refers to a table that has 4 FKs and 5 triggers. It's one of the central tables in the database.

Any idea how I could dig further?

Philippe Lang

-----Original Message-----
From: Stephan Szabo [mailto:[EMAIL PROTECTED]
Sent: Friday, 22 October 2004 15:30
To: Philippe Lang
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] FKs and deadlocks

On Fri, 22 Oct 2004, Philippe Lang wrote:

"I have tried to correct that by adding a SET CONSTRAINTS ALL DEFERRED in every trigger and function, hoping it would solve my problem. Maybe it helped, but it did not solve anything."

Note that SET CONSTRAINTS ALL DEFERRED does nothing unless you made the constraint deferrable, which is not the default. If your constraints aren't, then you won't see any effect, and you'll probably want
Re: [GENERAL] FKs and deadlocks
Thanks a lot Tom. One more question: I'm surprised there are so many ExclusiveLocks when displaying pg_locks:

    33044 | 32920 |         | 11439 | RowExclusiveLock | t
          |       | 6514392 | 14385 | ExclusiveLock    | t
          |       | 6495858 | 11439 | ExclusiveLock    | t
    ...etc...

I found this in the documentation: "EXCLUSIVE: This lock mode is not automatically acquired by any PostgreSQL command." I'm not using any LOCK TABLE or SET TRANSACTION ISOLATION call in the whole database, so where do they come from? I'm accessing the database through ODBC; is that maybe the reason?

Philippe

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Monday, 25 October 2004 16:16
To: Philippe Lang
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] FKs and deadlocks

Philippe Lang [EMAIL PROTECTED] writes:

"I got a deadlock in my database this morning."

There is no deadlock here. The ungranted rows in pg_locks all point to the transaction ID 6489299, which belongs to PID 11346, which is this one:

11346 ??  R     236:43.23 postmaster: jlroubaty groupefpdb 172.17.10.14 UPDATE (postgres)

An UPDATE that churns for hours and hours may well represent a bug in your application (unconstrained join maybe?), but it's not a deadlock.

regards, tom lane
[GENERAL] FKs and deadlocks
Hello,

As the number of simultaneous users of my database grows (25 users sometimes, PostgreSQL 7.4.5), deadlocks are unfortunately more and more frequent. I guess this is due to the FK problem with PostgreSQL. I have tried to correct that by adding a SET CONSTRAINTS ALL DEFERRED in every trigger and function, hoping it would solve my problem. Maybe it helped, but it did not solve anything.

I don't know if anyone has a better idea, but I would like to try taking away some FKs in my schema. My problem is that I really don't know which ones to delete; there are over 40 tables. Are there rules for doing that? Or maybe can I simply wait for the next deadlock, and try to understand who is blocked by whom? OK, but how can I do that?

Thanks for your help!
Philippe

Note: I have read about a patch for FKs; is that something that can really be used in production?
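For background: in the 7.x series, FK checks take row-level locks on the referenced (parent) rows, so the classic way these deadlocks arise is two transactions locking the same pair of parent rows in opposite orders. Touching parent tables and rows in one consistent, global order removes the cycle. The ordering idea can be sketched with Python threads standing in for transactions and locks standing in for row locks (all names here are invented for the illustration):

```python
import threading

# Two locks standing in for row locks on two parent rows.
lock_a, lock_b = threading.Lock(), threading.Lock()

def update_rows(first, second, log, label):
    # Consistent ordering: every "transaction" acquires `first` then
    # `second` in the same global order, so no lock cycle can form.
    # If one thread took (a, b) and the other (b, a), they could
    # each grab one lock and wait forever on the other.
    with first:
        with second:
            log.append(label)

log = []
t1 = threading.Thread(target=update_rows, args=(lock_a, lock_b, log, "tx1"))
t2 = threading.Thread(target=update_rows, args=(lock_a, lock_b, log, "tx2"))
t1.start(); t2.start()
t1.join(); t2.join()

# Both complete; with a consistent order there is no deadlock.
assert sorted(log) == ["tx1", "tx2"]
```

In SQL terms, this corresponds to making every transaction insert or update child rows referencing the same set of parents in the same parent-key order.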
[GENERAL] Postgresql -- webservices?
Hello,

Does anyone have experience in interfacing a PostgreSQL database (tables? plpgsql functions? perl functions?) with the outside world through webservices? (XML-RPC, SOAP, UDDI, WSDL...)

Philippe
[GENERAL] DROP TRIGGER permission
Hello,

Dropping a trigger is permitted if the user is the owner of the table for which the trigger is defined. In a plpgsql function, used by different users, I need to disable some triggers for a short period of time. With the pgsql user login, I can successfully drop and create the trigger again (after locking the tables for which I drop triggers), but unfortunately this is not possible with a different user login, since they are not the owner of the table. What could I do in this situation?

Thanks
Philippe Lang
Re: [GENERAL] RAD with postgreSQL
I would prefer an open-source solution if possible. I would also like to develop on my Linux box, but the result will be used on Windows 2000 and XP machines. If I have no choice, I will develop on WinXP. Any suggestions?

It has nothing to do with open source, but I suggest you have a look at MS Access. The ODBC driver for PostgreSQL gives me very good results. Development is really quick, and performance is great if you push your code to the server, in plpgsql functions. This is what we call client-server with a thin client, as opposed to client-server with a fat client.

Philippe Lang
[GENERAL] Idle connections - Too many clients connected already
Hello,

After using PostgreSQL for a while, through ODBC / MS Access, I'm not able to connect to the server anymore. The server reports "Too many clients connected already". With a ps -afx, I can see a lot of idle PostgreSQL connections, which correspond for sure to ODBC connections that haven't been closed properly, and that remain open. Stopping and restarting the server solves the problem.

Is there a way to avoid that? I cannot certify that clients will always close their connections in a clean way. So, is the server able to check if clients are still alive, and close their connections if that's not the case? I never had anything like that with MS SQL Server...

Thanks
Philippe
[GENERAL] Shell access & send email from PLPGSQL?
Hello,

How can I open a unix shell from a PLPGSQL function / trigger? I would like to send an email from a Postgres database, and also send some unix commands to the server, through the database...

Thanks

-
Philippe Lang
Attik System
http://www.attiksystem.ch
Re: [GENERAL] Shell access & send email from PLPGSQL?
Hello,

This is exactly what I needed. It works just fine for me, except when a function is created through pgAdmin III (version 1.02 under Windows). In that case, I think functions are stored in the database with a CR/LF at the end, which the plsh engine does not like at all! Apart from that, it's fine. Thanks.

-
Philippe Lang
Attik System
http://www.attiksystem.ch

-----Original Message-----
From: Pavel Stehule [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 9 December 2003 11:33
To: Philippe Lang
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] Shell access & send email from PLPGSQL?

Hello,

If you only need to send mail, try pgsendmail: http://sourceforge.net/project/showfiles.php?group_id=35804. You can use plsh too: http://developer.postgresql.org/~petere/pgplsh/

regards
Pavel Stehule

On Tue, 9 Dec 2003, Philippe Lang wrote:

Hello,

How can I open a unix shell from a PLPGSQL function / trigger? I would like to send an email from a Postgres database, and also send some unix commands to the server, through the database...

Thanks

-
Philippe Lang
Attik System
http://www.attiksystem.ch
[GENERAL] Restore-point?
Hello,

Is it possible to restore a database previously backed up with a dump, and then recover some of the changes made after the backup, up to a specific timestamp, by using some sort of journal? Is there such a feature in PostgreSQL?

Thanks

-
Philippe Lang
Attik System
http://www.attiksystem.ch
[GENERAL] Dynamic expression evaluation
Hello,

Imagine we have the following kind of table, with two values (a and b), and a varchar (f) representing an expression:

CREATE TABLE public.test
(
  id serial NOT NULL,
  a  int4,
  b  int4,
  f  varchar(50),
  CONSTRAINT id PRIMARY KEY (id)
) WITHOUT OIDS;

INSERT INTO public.test(a, b, f) VALUES (2, 3, 'a+b');
INSERT INTO public.test(a, b, f) VALUES (12, 3, 'a*b');
INSERT INTO public.test(a, b, f) VALUES (5, 6, 'a+2*b');

Is there a simple way of doing kind of a

SELECT *, EVAL(f) FROM public.test;

... and having f evaluated as an expression, so that we get back:

 id | a  | b | f     | eval
----+----+---+-------+------
  1 |  2 | 3 | a+b   |    5
  2 | 12 | 3 | a*b   |   36
  3 |  5 | 6 | a+2*b |   17

Has anyone done anything like that already? Thanks!

Philippe
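In PostgreSQL itself the usual approach is a plpgsql function that concatenates 'SELECT ' with the stored expression and runs it with EXECUTE. The mechanics of such an EVAL can be shown self-contained with Python's sqlite3 module, where a user-defined SQL function plays the role of EVAL. The function name is invented for the sketch, and eval() is only tolerable here because its namespace is restricted to the two columns; a production version would need a real expression parser:

```python
import sqlite3

# Table mirroring the example: two integer columns and a textual formula.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, a INT, b INT, f TEXT)")
db.executemany("INSERT INTO test(a, b, f) VALUES (?, ?, ?)",
               [(2, 3, "a+b"), (12, 3, "a*b"), (5, 6, "a+2*b")])

def eval_formula(a, b, f):
    # Evaluate the stored expression with a and b bound; stripping the
    # builtins keeps eval() to plain arithmetic on the two columns.
    return eval(f, {"__builtins__": {}}, {"a": a, "b": b})

# Register it so it can be called straight from SQL, like EVAL(f).
db.create_function("eval_formula", 3, eval_formula)

rows = db.execute("SELECT id, a, b, f, eval_formula(a, b, f) FROM test").fetchall()
# rows -> [(1, 2, 3, 'a+b', 5), (2, 12, 3, 'a*b', 36), (3, 5, 6, 'a+2*b', 17)]
```

The plpgsql equivalent would build the string 'SELECT ' || f with a and b substituted in, and run it with EXECUTE inside a loop over the table.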