Re: [GENERAL] scripts in Postgres
[EMAIL PROTECTED] (Craig Bryden) wrote in news:[EMAIL PROTECTED]:
> Hi Sean
> Thanks for that. Does psql work differently to pgAdmin III's Query program?
> I have tried exactly what you showed me below, and it did not work. It seems
> like the script stops on the first error, and the first error is that the
> table does not exist.

Could it be that pgAdmin creates a transaction? What happens if you put in
some begin and commit statements? (Just a wild suggestion, not tested in any
way.)

    begin;
    DROP TABLE tb_messages;
    commit;

    begin;
    CREATE TABLE tb_messages ( );
    commit;

-- Rolf

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
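Another hedged workaround, if pgAdmin really does wrap the whole script in one transaction: swallow the missing-table error inside a helper function so the DROP cannot abort the CREATE that follows. This is only a sketch; it assumes PostgreSQL 8.0's plpgsql exception handling, and the helper name is made up.

```sql
-- Hypothetical helper: drop a table only if it exists (pre-8.2 has no
-- DROP TABLE IF EXISTS), trapping the error instead of aborting.
CREATE OR REPLACE FUNCTION drop_if_exists(tbl text) RETURNS void AS '
BEGIN
    EXECUTE ''DROP TABLE '' || quote_ident(tbl);
    RETURN;
EXCEPTION WHEN undefined_table THEN
    -- table was not there; ignore so the surrounding script keeps going
    RETURN;
END;
' LANGUAGE plpgsql;

SELECT drop_if_exists('tb_messages');
```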
Re: [GENERAL] Tuning queries inside a function
Mike Nolan wrote:
>>> Maybe you could return a refcursor pointing to the EXPLAIN ANALYZE of the
>>> query inside the function.
>> The raw materials exist to do this: if you know which elements of a query
>> will be replaced by plpgsql variables, you can duplicate the results via
>>     PREPARE foo(...) AS ...
>>     EXPLAIN EXECUTE foo(...)
>> Certainly there is a lot more that we can and must do about making it
>> easier to debug and tune plpgsql functions. But you can fix 'em with a
>> little determination even now...
> If I know which elements of a query will be replaced by variables, I can
> enter the query in psql, which I've done. (I can always output the variables
> to the log from inside the function.)

Be aware that if you're pasting values in the place of the variables then PG
can come up with a different plan.

--
Richard Huxton
Archonet Ltd
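A minimal sketch of the PREPARE / EXPLAIN EXECUTE technique described above; the table, column, and parameter value here are made up for illustration, standing in for whatever the plpgsql variable replaces:

```sql
-- Parameterize the query the same way the function does:
PREPARE foo(integer) AS
    SELECT * FROM orders WHERE customer_id = $1;

-- Show the plan the function's parameterized query would get:
EXPLAIN EXECUTE foo(42);

DEALLOCATE foo;
```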
Re: [GENERAL] scripts in Postgres
On May 3, 2005, at 2:50 AM, Rolf Østvik wrote:
> Could it be that pgAdmin creates a transaction? What happens if you put in
> some begin and commit statements? (Just a wild suggestion, not tested in
> any way.)
>
>     begin;
>     DROP TABLE tb_messages;
>     commit;
>
>     begin;
>     CREATE TABLE tb_messages ( );
>     commit;

That was my suspicion, also. I haven't tried to confirm it, though. (I'm a
MacOS user.)

Sean
[GENERAL] query is wery slow with _t() function
Hi,

I made this function:

    CREATE FUNCTION _t(varchar, integer) RETURNS varchar AS '
    DECLARE
        str   ALIAS FOR $1;
        lang  ALIAS FOR $2;
        value varchar;
    BEGIN
        SELECT t.txt INTO value
        FROM sys_txt t
        INNER JOIN sys_txt_code c ON c.id = t.code_id
        WHERE ''#'' || c.code || ''#'' = str
          AND t.lang_id = lang;
        --RAISE NOTICE ''%'', value;
        IF value IS NULL THEN
            value := str;
        END IF;
        RETURN (value);
    END;
    ' LANGUAGE plpgsql IMMUTABLE;

Now I run the query without the _t() function and speed is normal:

test=# EXPLAIN ANALYZE SELECT taskid.id, clfID2clfName(taskid.task_type) AS task_type, accounts.nimi AS account
       FROM taskid INNER JOIN accounts ON taskid.account = accounts.id;
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=3.36..751.14 rows=2166 width=22) (actual time=1.065..203.845 rows=2105 loops=1)
   Hash Cond: ("outer".account = "inner".id)
   ->  Seq Scan on taskid  (cost=0.00..698.45 rows=2245 width=10) (actual time=0.041..29.704 rows=2246 loops=1)
   ->  Hash  (cost=3.09..3.09 rows=109 width=20) (actual time=0.522..0.522 rows=0 loops=1)
         ->  Seq Scan on accounts  (cost=0.00..3.09 rows=109 width=20) (actual time=0.090..0.371 rows=109 loops=1)
 Total runtime: 206.261 ms
(6 rows)

The taskid table has 2246 records. Now I run the query with the _t() function and speed is very slow :(

test=# EXPLAIN ANALYZE SELECT taskid.id, _t(clfID2clfName(taskid.task_type), 11) AS task_type, accounts.nimi AS account
       FROM taskid INNER JOIN accounts ON taskid.account = accounts.id;
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=3.36..756.55 rows=2166 width=22) (actual time=5.568..9093.637 rows=2105 loops=1)
   Hash Cond: ("outer".account = "inner".id)
   ->  Seq Scan on taskid  (cost=0.00..698.45 rows=2245 width=10) (actual time=0.041..35.313 rows=2246 loops=1)
   ->  Hash  (cost=3.09..3.09 rows=109 width=20) (actual time=0.529..0.529 rows=0 loops=1)
         ->  Seq Scan on accounts  (cost=0.00..3.09 rows=109 width=20) (actual time=0.092..0.376 rows=109 loops=1)
 Total runtime: 9098.051 ms
(6 rows)

206.261 ms versus 9098.051 ms!!! What's wrong with the _t() function?
The taskid table structure is:

                    Table "public.taskid"
     Column      |            Type             |                    Modifiers
-----------------+-----------------------------+-------------------------------------------------
 id              | integer                     | not null default nextval('taskid_id_seq'::text)
 task_name       | character varying(255)      | not null
 task_type       | smallint                    |
 account         | integer                     |
 mn_actual       | character(10)               |
 mh_planned      | character(10)               |
 finish_planed   | timestamp without time zone |
 finish_actual   | timestamp without time zone |
 prioriteet      | integer                     | default 1
 created_on      | timestamp without time zone | not null
 created_by      | character varying(50)       | not null
 show_to_client  | boolean                     | not null default false
 assigned_to     | integer[]                   | not null
 private         | boolean                     | default false
 t_status        | smallint                    | default (1)::smallint
 problem         | text                        |
 solution        | text                        |
 product         | integer                     |
 area1           | integer                     |
 area2           | integer                     |
 area3           | integer                     |
 project         | integer                     |
 start_planed    | timestamp with time zone    |
 start_actual    | timestamp with time zone    |
 team            | integer                     |
 opportunity_id  | integer                     |
 split           | boolean                     | not null default false
 copy_id         | integer                     | default 0
 task_extid1     | character varying(20)       |
 task_extid2     | character varying(20)       |
 task_seqno      | integer                     |
 task_parentid   | integer                     |
 task_color      | character varying(10)       |
 task_contact_id | integer                     |
 task_timeless   | boolean                     | default false
 task_milestone  | boolean                     | default false
 task_notify     | integer                     |
 task_security   | integer                     |
 task_location   | integer                     |
 task_type2      | integer                     |
 task_dur_min    |
Re: [GENERAL] query is wery slow with _t() function
Margusja wrote:
> Hi, I made function:
> CREATE FUNCTION _t(varchar, integer) RETURNS varchar AS '
> [snip]
> ' LANGUAGE plpgsql IMMUTABLE

I'm not sure you could describe this function as immutable, since it queries
the database. If you alter the contents of sys_txt or sys_txt_code then its
results will change. I'd recommend re-reading that section of the manuals.

> Now I make query without _t() function and speed is normal.
> Hash Join  (cost=3.36..751.14 rows=2166 width=22) (actual time=1.065..203.845 rows=2105 loops=1)
> [snip]
> Total runtime: 206.261 ms
>
> in table taskid is 2246 records. Now make query with _t() function and speed is very slow :(
> Hash Join  (cost=3.36..756.55 rows=2166 width=22) (actual time=5.568..9093.637 rows=2105 loops=1)
> [snip]
> Total runtime: 9098.051 ms

Well, it's not the same query, is it? In the second you are calling _t() for
each of your 2105 rows. If it takes about 4 ms for each call of _t() then
that would account for the difference.

Is there any reason why you are using functions for these simple lookups
rather than joining to the translation table?

--
Richard Huxton
Archonet Ltd
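To make the join suggestion concrete, here is a hedged sketch of folding the lookup into the query itself instead of calling _t() once per row. The join conditions are reconstructed from the body of _t() and are untested; the COALESCE mirrors the function's "fall back to the input string" branch.

```sql
SELECT taskid.id,
       COALESCE(t.txt, clfID2clfName(taskid.task_type)) AS task_type,
       accounts.nimi AS account
FROM taskid
INNER JOIN accounts ON taskid.account = accounts.id
-- LEFT JOINs so rows without a translation are kept:
LEFT JOIN sys_txt_code c
       ON '#' || c.code || '#' = clfID2clfName(taskid.task_type)
LEFT JOIN sys_txt t
       ON t.code_id = c.id AND t.lang_id = 11;
```

This runs the translation lookup as part of one plan rather than as 2105 separate hidden queries.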
[GENERAL] some questions : psql
hi,

i have some questions about psql. i'm trying to write a script for deploying
my database and i have some questions:

1- is it possible to set a default schema? i tried:
       set DEFAULT SCHEMA base1;
   but it didn't work.

2- is it possible to have one file with global values and to use them when
   creating functions? one example: i have a temporary directory, and several
   functions need to access this folder, so is it possible to pass the path
   as with Oracle:
       DEFINE TempDirectory = /home/
       @createMDNS.sql TempDirectory

3- my psql script creates 2 tablespaces, a user, the database, 1 schema and
   the tables and functions, but when i launch pgadmin, i see my new user,
   the tablespace and the database, but I don't see the schema and
   consequently the tables and functions. they have all been created, because
   i can query the tables, but they don't appear and i don't know why...

thanks a lot
regards

Will
Re: [GENERAL] some questions : psql
Hello

> 1- is it possible to set a default schema ? i tried : set DEFAULT SCHEMA
> base1; but it didn't work

SET search_path TO base1;

> 2- is it possible to have one file with global values and to use them when
> creating functions ? one example is : i have a temporary directory, and
> several functions need to access this folder, so is it possible to pass the
> path as with Oracle
>     DEFINE TempDirectory = /home/
>     @createMDNS.sql TempDirectory

Yes. You can use psql commands and variables:

    \set TempDirectory /home/...    -- in file global.sql

    \i global.sql
    SELECT somefce(:TempDirectory)

more on http://developer.postgresql.org/docs/postgres/app-psql.html

> 3- my psql script create 2 tablespaces, an user, the database, 1 schema and
> the tables and functions but when i launch pgadmin, i see my new user, the
> tablespace and the database, but I don't see the schema and in consequence
> the tables and functions.

I don't know. I don't use pgadmin.

Regards
Pavel Stehule
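Spelled out as two files, the pattern Pavel describes looks roughly like this. The file and variable names are made up for illustration; only \set, \i, and :variable substitution are psql features from the source.

```sql
-- global.sql (hypothetical): shared values for every deployment script
\set schemaName base1
\set TempDirectory '/tmp/deploy'

-- deploy.sql (hypothetical): pull the globals in and use them
\i global.sql
SET search_path TO :schemaName;
```

Running `psql -f deploy.sql` then executes everything with the shared values substituted in.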
Re: [Dbdpg-general] Re: [GENERAL] 'prepare' is not quite schema-safe
Greg, thanks for the suggestion. Looking into another thread on the list, it
looks like setting

    $dbh->{pg_server_prepare} = 0;

would solve my problem as well. With this setting, will DBD::Pg behave in the
old style (i.e. prepare_cached statements prepared and stored on the DBD::Pg
side), or will it not cache anything at all? Besides, why don't you recommend
turning pg_server_prepare off?

On 5/2/05, Greg Sabino Mullane [EMAIL PROTECTED] wrote:
>>> Which is why Vlad should use prepare() instead of prepare_cached().
>> in our web application similar SQL queries (like load an object) are
>> executed over and over again with high frequency. So it's very desirable
>> to use prepare_cached. I think we are going to adjust our ORM (object
>> relation mapper) to always use a full path to reference DB objects in
>> queries.
>
> This is actually a perfect case for prepare (and server-side prepare at
> that), and not prepare_cached(). The latter has some overhead as a hash
> table has to be searched and the right statement handle produced. One thing
> I sometimes do is pre-prepare a lot of my frequently used statements at the
> top of a long-running script (e.g. mod_perl). Then you simply refer to the
> statement handle rather than prepare() or prepare_cached(). It also has the
> advantage of consolidating most of your SQL calls into one place in your
> script. You can even create different handles for changed schemas. It goes
> something like this:
>
> #!pseudo-perl
> BEGIN { ## mod_perl only runs this once
>     use DBI;
>     my %sth;
>     $dbh = DBI->connect...
>
>     ## Grab a user's information
>     $SQL = "SELECT * FROM u WHERE status = 2 AND username=?";
>     $sth{grabuser} = $dbh->prepare($SQL);
>
>     ## Insert a widget
>     $SQL = "INSERT INTO widgets(partno, color) VALUES (?,?)";
>     $sth{addwidget} = $dbh->prepare($SQL);
>
>     ## Insert a widget into the jetson schema
>     $dbh->do("SET search_path TO jetson");
>     $sth{addwidget_jetson} = $dbh->prepare($SQL);
>
>     ## (reset search_path, keep going with all common SQL statements)
> }
>
> ## mod_perl runs all this each time:
> ...skip lots of code...
> my $username = $forminput{'username'};
> $sth = $sth{grabuser};
> $count = $sth->execute($username);
> ...and later on...
> for (@widgets) {
>     if ("jetson" eq $_->{owner}) {
>         $dbh->do("SET search_path TO jetson");
>         $sth{addwidget_jetson}->execute($_->{partnumber}, $_->{color});
>         $dbh->do("SET search_path TO public");
>         ## Silly example, better to use fully qualified names of course,
>         ## or perhaps a custom function that inserts for you
>     } else {
>         $sth{addwidget}->execute($_->{partnumber}, $_->{color});
>     }
> }
>
> A simplified example, but the take-home moral of all this is to be very
> careful when using prepare_cached (which is actually a DBI feature, not a
> DBD::Pg one).
>
> --
> Greg Sabino Mullane [EMAIL PROTECTED]
> PGP Key: 0x14964AC8 20050509
> http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8

--
Vlad
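The schema hazard that the search_path juggling above works around can be seen directly in SQL: a server-side prepared statement resolves table names when it is prepared, not when it is executed. A hedged sketch (statement name and sample values assumed):

```sql
-- Prepare while jetson is first on the search path:
SET search_path TO jetson;
PREPARE addwidget(integer, text) AS
    INSERT INTO widgets(partno, color) VALUES ($1, $2);

-- Changing the search path afterwards does not re-resolve the table;
-- this still inserts into jetson.widgets:
SET search_path TO public;
EXECUTE addwidget(1001, 'red');

DEALLOCATE addwidget;
```

This is why a cached handle prepared under one schema can silently keep writing to that schema.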
[GENERAL] Deep integration of PostgreSQL with Apache
Hi,

Apologies if this has been discussed before, but I was wondering if there
have been any efforts in the past to provide a deep integration of PostgreSQL
with Apache. What I mean by deep integration is that the PostgreSQL server
logic runs inside the Apache server processes, rather than in separate
processes. In particular, the postmaster server logic would run inside the
Apache master process and the postgres server logic would run inside Apache
child processes. The main advantage of this approach would be that it avoids
the Apache/PostgreSQL context switch when executing SQL requests from the web
server. It looks like the Apache server and PostgreSQL server architectures
are similar enough to make this feasible. Any thoughts?

Best regards,
Robin Boerdijk
Re: [GENERAL] some questions : psql
sorry, i badly explained my second problem (and it makes me think of another
question :) )

what i really wanted to say is: i would like to declare a global variable:

    DEFINE pool = 10

and use it in a function:

    @createMDNS.sql pool

and my file createMDNS.sql is:

    create or replace package body MDXML_EXPORT as
        pool varchar2(150) := '1';
        procedure toto(...
        .
        .
    /

that's what i wanted to explain. and my new question is :) : i wrote a first
file:

    \set databaseName 'base1'
    \set createMDXML '/home/toto/MDXML/execCreateMDXML.sql'
    \i :createMDXML

and in my second file, i need the name of the database, but if i write this:

    SET search_path TO :databaseName;

it doesn't work.

regards

[quoted reply from Pavel Stehule snipped]
[GENERAL] postgresql 8 abort with signal 10
Hi list,

I'm running postgresql 8.0.1 on FreeBSD 4.11-STABLE; the machine is an AMD
Sempron 2.2 with 1GB RAM. I use postgresql as the database for dspam, a spam
classification program. This database has moderate use, on average 10
simultaneous connections executing relatively big queries using IN clauses.
Watching the postgresql logs I see the following messages occur many times a
day:

May 3 06:58:44 e-filter postgres[250]: [21-1] LOG: server process (PID 59608) was terminated by signal 10
May 3 06:58:44 e-filter postgres[250]: [22-1] LOG: terminating any other active server processes
May 3 06:58:44 e-filter postgres[59605]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59605]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59605]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59605]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59607]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59607]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59607]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59607]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59606]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59606]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59606]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59606]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59626]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59626]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59626]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59626]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59628]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59629]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59629]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59629]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59629]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59628]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59628]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59628]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59609]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59609]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59609]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59609]: [21-4] HINT: In a moment you should be able to reconnect to the database and repeat your command.
May 3 06:58:44 e-filter postgres[59627]: [21-1] WARNING: terminating connection because of crash of another server process
May 3 06:58:44 e-filter postgres[59627]: [21-2] DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server
May 3 06:58:44 e-filter postgres[59627]: [21-3] process exited abnormally and possibly corrupted shared memory.
May 3 06:58:44 e-filter postgres[59627]: [21-4] HINT: In a moment you should
[GENERAL] Dynamically access to field on a RECORD variable
Suppose I have this function:

    CREATE OR REPLACE FUNCTION my_func(TEXT) RETURNS text AS '
    DECLARE
        var_name ALIAS FOR $1;
        rec RECORD;
    BEGIN
        SELECT * INTO rec FROM my_table WHERE my_key = 1;
        -- Here is my problem
        RETURN rec.var_name;
    END;
    ' LANGUAGE plpgsql;

    SELECT my_func('my_field');

I want the RETURN in the function to execute as:

    RETURN rec.my_field;

Is it possible? Thank you.

--
Ricardo Vaz Mannrich
[EMAIL PROTECTED]
Re: [GENERAL] some questions : psql
On Tue, 3 May 2005, FERREIRA, William (COFRAMI) wrote:
> sorry, i badly explain my second problem (and it make me think about an
> other question :) ) what i really want to say is : i would like to declare
> a global variable :

There is one big difference: PostgreSQL doesn't have global variables. There
are only local variables in stored procedures, or local variables of the sql
monitor ~ psql. You cannot directly read psql's variables.

> and my new question is :) : i wrote a first file :
>     \set databaseName 'base1'
>     \set createMDXML '/home/toto/MDXML/execCreateMDXML.sql'
>     \i :createMDXML
> and in my second file, i need the name of the database but if i wrote this :
>     SET search_path TO :databaseName;
> it doesn't work

I am sorry, I can't reproduce that:

    create schema fx;
    create table fx.foo(i integer);
    select * from foo;
    ERROR:  relation "foo" does not exist

    intra=# \set sp 'fx'
    intra=# set search_path to :sp;
    SET
    Time: 9,349 ms
    intra=# select * from foo;
     i
    ---
    (0 rows)

Look at the /home/toto/MDXML/execCreateMDXML.sql sqlscript. Does it really
work?

Pavel
Re: [GENERAL] Deep integration of PostgreSQL with Apache
On Tue, 2005-05-03 at 07:53, Robin Boerdijk wrote:
> Apologies if this has been discussed before, but I was wondering if there
> have been any efforts in the past to provide a deep integration of
> PostgreSQL with Apache. What I mean by deep integration is that the
> PostgreSQL server logic runs inside the Apache server processes, rather
> than separate processes. [snip] Any thoughts?

And a single crashed or corrupted apache thread or process can trash your
database. Connection pooling is generally a better answer to this type of
problem. PostgreSQL just isn't designed to be embedded into other people's
code. SQLite, on the other hand, might be a good option for such a project.
Re: [GENERAL] Deep integration of PostgreSQL with Apache
Robin Boerdijk wrote:
> Apologies if this has been discussed before, but I was wondering if there
> have been any efforts in the past to provide a deep integration of
> PostgreSQL with Apache. What I mean by deep integration is that the
> PostgreSQL server logic runs inside the Apache server processes, rather
> than separate processes. [snip] Any thoughts?

The PostgreSQL backend is inherently single-threaded, and a new process is
forked each time you establish a new connection (session), so the integration
you ask for is not in any way possible unless you are content with one single
database connection.

Regards,
Thomas Hallgren
Re: [GENERAL] some questions : psql
the search_path works well.
thanks a lot

[quoted reply from Pavel Stehule snipped]
[GENERAL] does database shut down cleanly when WAL device fails?
In planning a Postgresql deployment, our team discovered that we have
different understandings of how the WAL affects database reliability, and we
have not found the Postgresql manual quite explicit enough to distinguish
between the following two theories:

a) Putting the WAL on a separate device from the database tables not only
increases efficiency, but reliability as well, because as long as one keeps a
database backup and a WAL history that goes back to the last backup (as
described in 22.3 of the 8.0.1 manual), then one is insulated from losing
data through a single disk failure:

   - If the drive holding the tables fails, then take the most recent backup
     and bring it up to date using the WAL.
   - If the backup fails, make another backup.

   And the disputed point:

   - If the drive holding the WAL fails, then the database engine will shut
     down cleanly by writing everything in RAM out to the real database
     tables, and no data will be lost.

The counter theory:

b) Putting the WAL on a separate device from the database tables increases
efficiency but does not by itself improve reliability. Just as, when the WAL
and tables are on a single device, a failure of that device makes you lose
all data written since your last backup, so when the WAL is placed on a
separate drive, a failure of that drive may result in the loss of any data
committed but not yet written out to the main database tables.

Essentially, the documentation does not make clear what recovery can take
place, and under what guarantees, should the WAL be destroyed in
mid-operation. Please advise as to which understanding is the correct one.

--
Brandon Craig Rhodes [EMAIL PROTECTED]
http://rhodesmill.org/brandon
Re: [GENERAL] postgresql 8 abort with signal 10
Alexandre Biancalana [EMAIL PROTECTED] writes:
> Watching postgresql logs I see the following messages ocurs a lot of times
> in a day:
> May 3 06:58:44 e-filter postgres[250]: [21-1] LOG: server process (PID
> 59608) was terminated by signal 10

You need to find out what's triggering that. Turning on query logging would
be a good way of investigating.

regards, tom lane
Re: [GENERAL] Dynamically access to field on a RECORD variable
Ricardo Vaz Mannrich [EMAIL PROTECTED] writes:
> Is it possible?

Not in plpgsql. I believe you could do it in any of the other PLs though.

regards, tom lane
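One plpgsql-only workaround, sketched under the assumption that dynamic SQL is acceptable: instead of `rec.var_name`, build the column reference as a string and run it with FOR ... IN EXECUTE (available in 8.0-era plpgsql). Untested; the table and key match the original question.

```sql
CREATE OR REPLACE FUNCTION my_func(text) RETURNS text AS '
DECLARE
    var_name ALIAS FOR $1;
    rec RECORD;
BEGIN
    -- quote_ident guards the dynamically supplied column name:
    FOR rec IN EXECUTE ''SELECT '' || quote_ident(var_name)
                       || '' AS val FROM my_table WHERE my_key = 1''
    LOOP
        RETURN rec.val;
    END LOOP;
    RETURN NULL;   -- no matching row
END;
' LANGUAGE plpgsql;
```

`SELECT my_func('my_field');` then returns the value of whichever column was named at call time.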
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, 2005-05-03 at 08:39, Alexandre Biancalana wrote:
> Hi list, I'm running postgresql 8.0.1 on FreeBSD 4.11-STABLE, the machine
> is an AMD Sempron 2.2, 1GB Ram. I use postgresql as database for dspam, a
> spam classification program. [snip]
> May 3 06:58:44 e-filter postgres[250]: [21-1] LOG: server process (PID
> 59608) was terminated by signal 10
> May 3 06:58:44 e-filter postgres[250]: [22-1] LOG: terminating any other
> active server processes
[SNIP]
> This is my postgresql.conf
>     max_connections = 70
>     superuser_reserved_connections = 2
>     shared_buffers = 81920

Rather large shared buffers for a machine with only 1 gig of RAM. 640 meg of
shared buffers means the kernel is basically double-buffering everything.
Have you tested with smaller settings, and was this setting the best? You
might want to look in the signal man page on BSD and see what signal 10
means. On Solaris it's a bus error; not a clue what it is in FreeBSD myself,
though.

>     work_mem = 10240
>     maintenance_work_mem = 51200
>     fsync = true
>     checkpoint_segments = 8
>     effective_cache_size = 10
>     log_destination = 'syslog'
>     silent_mode = true
>     lc_messages = 'C'
>     lc_monetary = 'C'
>     lc_numeric = 'C'
>     lc_time = 'C'
>
> and the shared memory configuration:
>     kern.ipc.shmmax: 7
>     kern.ipc.shmmin: 1
>     kern.ipc.shmmni: 192
>     kern.ipc.shmseg: 256
>     kern.ipc.shmall: 7
>
> Do I have some configuration error that could result in this kind of
> problem? Any ideas? Any thoughts?
>
> Best Regards, Alexandre
Re: [GENERAL] does database shut down cleanly when WAL device fails?
Brandon Craig Rhodes [EMAIL PROTECTED] writes: And the disputed point: - If the drive holding the WAL fails, then the database engine will shut down cleanly by writing everything in RAM out to the real database tables, and no data will be lost. Whoever claimed that has no familiarity with the code at all, and no understanding of the basic WAL rule: write to the log BEFORE you write data. In point of fact, loss of the WAL drive will mean a database PANIC stop and probably a corrupt data area afterwards, since there'd be no guarantee that related page updates had all made it to disk. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Deep integration of PostgreSQL with Apache
Thomas Hallgren wrote: Robin Boerdijk wrote: Hi, Apologies if this has been discussed before, but I was wondering if there have been any efforts in the past to provide a deep integration of PostgreSQL with Apache. What I mean by deep integration is that the PostgreSQL server logic runs inside the Apache server processes, rather than separate processes. In particular, the postmaster server logic would run inside the Apache master process and the postgres server logic would run inside Apache child processes. I think the closest you are going to get is Apache::DBI; however, since PostgreSQL is process based, you are going to get a new connection for every connection to Apache. On any reasonably busy site that can spell doom. Sincerely, Joshua D. Drake ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Deep integration of PostgreSQL with Apache
--- Thomas Hallgren [EMAIL PROTECTED] wrote: Robin Boerdijk wrote: Hi, Apologies if this has been discussed before, but I was wondering if there have been any efforts in the past to provide a deep integration of PostgreSQL with Apache. What I mean by deep integration is that the PostgreSQL server logic runs inside the Apache server processes, rather than separate processes. In particular, the postmaster server logic would run inside the Apache master process and the postgres server logic would run inside Apache child processes. The main advantage of this approach would be that it avoids the Apache/PostgreSQL context switch when executing SQL requests from the web server. It looks like the Apache server and PostgreSQL server architectures are similar enough to make this feasible. Any thoughts? The PostgreSQL backend is inherently single-threaded and a new process is forked each time you establish a new connection (session), so the integration you ask for is not in any way possible unless you are content with one single database connection. I agree that it is not trivial, but is it feasible? Specifically, I'm thinking about the following approach: 1. Strip all networking logic and the logic that manages the postgres child servers from the postmaster server. The logic that remains is code that manages the auxiliary processes such as the bgwriter and statistics collector. Integrate this remaining logic in the Apache master server. 2. Strip all networking logic from the postgres server. The logic that remains is logic for executing queries against the database. Integrate this remaining logic in the Apache child server. The result of this is an integrated web/database server where all networking is handled by Apache instead of postmaster/postgres. Other than that, I see no difference with the way PostgreSQL works out of the box. The Apache master server functions as the postmaster (i.e. 
managing the child server processes) and the Apache child servers function as the postgres servers (i.e. access the database). Why would this web/database server be limited to using only one connection? __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Deep integration of PostgreSQL with Apache
Apologies if this has been discussed before, but I was wondering if there have been any efforts in the past to provide a deep integration of PostgreSQL with Apache. What I mean by deep integration is that the PostgreSQL server logic runs inside the Apache server processes, rather than separate processes. In particular, the postmaster server logic would run inside the Apache master process and the postgres server logic would run inside Apache child processes. I think the closest you are going to get is Apache::DBI; however, since PostgreSQL is process based, you are going to get a new connection for every connection to Apache. Yes, I didn't think about that one. Those Apache child processes serve only a single connection at a time. I guess a single postgres server process must be quite heavyweight then compared to an Apache child process. __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] postgresql 8 abort with signal 10
Alexandre, I saw reports (and observed the problem myself) of all sorts of different software suffering from signal 11 under FreeBSD (more often seen on 5-STABLE). So far the collection is: Apache 1.3 (myself), MySQL (recent discussion on the freebsd-stable list) and now postgresql... The hardware is not the point of failure here. Try to post this to freebsd-stable - perhaps an additional problem report will help them find the cause. p.s. here is the last one I see in my apache error log: [Wed Mar 9 17:50:45 2005] [notice] child pid 95642 exit signal Segmentation fault (11) On 5/3/05, Alexandre Biancalana [EMAIL PROTECTED] wrote: SNIP
Re: [GENERAL] postgresql 8 abort with signal 10
oops... you were writing about signal 10, not signal 11. my bad - sorry On 5/3/05, Vlad [EMAIL PROTECTED] wrote: SNIP
[GENERAL] pgEdit 1.1 beta
The first beta of pgEdit 1.1 is now available. New features and enhancements include: - Error information is collected with each execution. New commands to jump to each error location in the file even if lines are added or removed. - Improved integration with psql; most psql commands are now supported including \connect and \copy. - PHP integration. PHP can act as a preprocessor for SQL code or perform other scripting tasks. Mixed syntax coloring of PHP and SQL in the same file is supported, including syntax coloring for pl/PHP stored procedures. PHP 5 is included with the distribution, so no extra configuration is required to use this feature. See a PHP example at http://pgedit.com/node/view/28 - Faster and better syntax coloring. stdin data for COPY is ignored for coloring (unless followed by a PHP tag). - New options for execution and output. Output can be sent to a file and then opened automatically with pgEdit, your web browser, or other applications. - Improved user interface with the editor and output panes available in the same view. pgEdit 1.1 beta is free of all demonstration limitations through the end of May. http://pgedit.com/download John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Dynamically access to field on a RECORD variable
I am a newbie. Could you give an example? Thank you. Ricardo Vaz Mannrich [EMAIL PROTECTED] writes: Is it possible? Not in plpgsql. I believe you could do it in any of the other PLs though. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
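A sketch of what Tom means, using plperl on 8.0, where a composite argument arrives as a Perl hash reference; the table name my_table and the function name get_field are hypothetical:

```sql
-- Return any column of a my_table row by name.
-- plperl receives the row as a hash reference, so the column name
-- can be computed at runtime -- something plpgsql cannot do.
CREATE FUNCTION get_field(my_table, text) RETURNS text AS $$
    my ($row, $col) = @_;
    return $row->{$col};
$$ LANGUAGE plperl;

-- usage: SELECT get_field(t, 'some_column') FROM my_table t;
```

The $$ dollar quoting also requires 8.0; on older servers the body would need single quotes with doubled internal quotes.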
[GENERAL] MVCC and insert
How does MVCC handle two concurrent tasks trying to insert the same row? Example pseudo-code: select row from table where...; if not found, prepare row and insert row; else update row; ... continue processing. What happens if TASK1 inserts the row first and continues processing (no commit yet), then TASK2 selects the row (not found in its MVCC space), then tries to insert... would it get a duplicate key error? What if TASK1 later does a rollback? What's the best way to handle this? Using LOCK TABLE name IN EXCLUSIVE MODE holds the lock to end of transaction. This locks the whole table, limiting concurrency on other rows. I could not use a LOCK ROW as the row may not exist (not sure how to use this yet either!). Would a PL/pgSQL stored procedure encapsulate this better? Specifically, my problem is using a table to count and limit the number of allowed transactions for a given key. The table would be used in different applications to perform the limit counting and checking. A commit immediately after this code may not be the best solution. I am getting a Duplicate Key error on my insert occasionally. I use Perl/DBI, FreeBSD 4.6.2, Postgres 7.2.3 (yes, upgrading soon!) Thanks, Allen ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
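For reference, the race described above really does produce a duplicate-key error: the second INSERT blocks until the first transaction finishes, then fails with unique_violation if it committed (or succeeds if it rolled back). A common retry pattern, sketched with 8.0's plpgsql exception handling (not available on 7.2.3; the table and column names here are made up):

```sql
CREATE FUNCTION bump_count(k text) RETURNS void AS $$
BEGIN
    LOOP
        -- fast path: the row already exists
        UPDATE txn_counts SET n = n + 1 WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- no row yet: try to create it
        BEGIN
            INSERT INTO txn_counts (key, n) VALUES (k, 1);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- a concurrent transaction inserted first; loop back to UPDATE
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

On 7.2.3 the same shape can be driven from the Perl/DBI side: attempt the UPDATE, and on zero rows try the INSERT, retrying the UPDATE if the INSERT fails.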
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, May 03, 2005 at 09:54:03AM -0500, Scott Marlowe wrote: You might want to look in your signal man page on BSD and see what signal 10 means. On solaris it's a bus error. Not a clue what it is in FreeBSD myself though. Signal 10 is SIGBUS (bus error) on FreeBSD 4.11. Somewhere under $PGDATA there might be a core dump named postmaster.core (or, more specifically, with a file name based on the kern.corefile sysctl setting) -- if there is, then a debugger like gdb might be able to show where the problem happened, especially if the postmaster was built with debugging info. -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Postgresql and VBA vs Python
Like I said before get the personal/standard version of Delphi 6,7 or 2005, it's 99.99 and you can connect to postgres with it using third party component sets like Zeos. (2005 may not be available yet) Zeos was ported to http://www.lazarus.freepascal.org/ (a free Delphi). I did test the Lazarus, but not the Zeos (yet). Vlad ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] postgresql 8 abort with signal 10
You need to find out what's triggering that. Turning on query logging would be a good way of investigating. Which directives can I use to enable this? debug_print_parse? debug_print_rewritten? debug_print_plan? debug_pretty_print? Rather large shared buffers for a machine with only 1 gig of ram. 640 Meg of RAM means the kernel is basically double buffering everything. Have you tested with smaller settings, and this setting was the best? I had 256 MB of RAM, then I increased to 1 GB thinking this could be a problem of running out of memory or buggy memory. After this upgrade I increased the number of shared buffers, etc. It's important to say that the max memory usage reaches only 80%. What values do you suggest? You might want to look in your signal man page on BSD and see what signal 10 means. On Solaris it's a bus error. Not a clue what it is in FreeBSD myself though. The FreeBSD man page says: 10 SIGBUS. The system does not generate a core dump file for this error. Regards, ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
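The debug_print_* settings dump internal parse and plan trees; for plain query logging the relevant 8.0 directive is probably log_statement. A postgresql.conf sketch (values illustrative):

```
log_statement = 'all'               # 8.0 values: none | ddl | mod | all
log_min_error_statement = error     # also log any statement that raises an error
```

With log_destination = 'syslog' as in the posted config, the logged statements will show up alongside the crash messages, which should reveal what query the dying backend was running.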
Re: [GENERAL] Deep integration of PostgreSQL with Apache
Wouldn't it just be cheaper to buy a second machine than do all of that? Alex Turner netEconomist On 5/3/05, Robin Boerdijk [EMAIL PROTECTED] wrote: SNIP ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, May 03, 2005 at 01:36:13PM -0300, Alexandre Biancalana wrote: The system does not generate core dump file for this error. Are you sure? Where did you look and what file name did you look for? Unless you've changed the kern.corefile sysctl setting, the file should be named postgres.core, not just core, and it should be somewhere under $PGDATA. Whether a core file is produced is also affected by the kern.coredump sysctl setting and the coredumpsize resource limit. -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] Adding Records With SERIAL Primary Key
I may have missed this in the docs; if so, please tell me where to find the answer there. If not, I still need to learn how to resolve this situation. The database schema has been designed and the tables are ready to be created. Once they exist, I want to load data into the tables in batch mode (rather than one at a time, manually). The core table has a SERIAL data type field as the primary key. How is this field assigned values? Then, how do I load the related tables so they reference the proper records? Thanks, Rich -- Dr. Richard B. Shepard, President Applied Ecosystem Services, Inc. (TM) http://www.appl-ecosys.com Voice: 503-667-4517 Fax: 503-667-8863 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, May 03, 2005 at 10:37:03AM -0600, Michael Fuhr wrote: Signal 10 is SIGBUS (bus error) on FreeBSD 4.11. Somewhere under $PGDATA there might be a core dump named postmaster.core Correction: the core dump should be named postgres.core (at least it is on my FreeBSD 4.11-STABLE system if I send the backend a signal 10). -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, 2005-05-03 at 11:36, Alexandre Biancalana wrote: You need to find out what's triggering that. Turning on query logging would be a good way of investigating. Which directives can I use to enable this? debug_print_parse? debug_print_rewritten? debug_print_plan? debug_pretty_print? Rather large shared buffers for a machine with only 1 gig of ram. 640 Meg of RAM means the kernel is basically double buffering everything. Have you tested with smaller settings, and this setting was the best? I had 256 MB of RAM, then I increased to 1 GB thinking this could be a problem of running out of memory or buggy memory. After this upgrade I increased the number of shared buffers, etc. It's important to say that the max memory usage reaches only 80%. What values do you suggest? Generally 25% of the memory or 256 Megs, whichever is less. In your case, they're the same. The reasoning being that the kernel caches, while postgresql only really holds onto data as long as it needs it, then frees it, so having a really huge buffer space lets postgresql flush the kernel cache; then the next access, after postgresql has freed the memory that was holding the data, now has to go to disk. The kernel is generally a lot better at caching than most apps. So, 32768 is about as big as I'd normally go, and even that may be more than you really need. Note that there's overhead in managing such a large buffer as well. With pgsql 8.x and the new caching algorithms in place, such overhead may be lower, and larger buffer settings may be in order. But if testing hasn't shown them to be faster, I'd avoid them for now and see if your signal 10 errors start going away. If they do, then you've likely got a kernel bug in there somewhere. If they don't, I'd suspect bad hardware. You might want to look in your signal man page on BSD and see what signal 10 means. On Solaris it's a bus error. Not a clue what it is in FreeBSD myself though. The FreeBSD man page says: 10 SIGBUS. The system does not generate a core dump file for this error. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
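The 25%-or-256MB guideline above works out to roughly the following postgresql.conf fragment (with the default 8 kB buffer size; the effective_cache_size figure is just an illustration, not a value from the thread):

```
shared_buffers = 32768          # 32768 x 8 kB = 256 MB
effective_cache_size = 65536    # hypothetical: ~512 MB of expected kernel cache, in 8 kB pages
```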
Re: [GENERAL] does database shut down cleanly when WAL device
On Tue, 2005-05-03 at 10:30 -0400, Brandon Craig Rhodes wrote: theories: a) Putting the WAL on a separate device from the database tables not only increases efficiency, but reliability as well - because as long as one keeps a database backup and a WAL history that goes back to the last backup (as described in 22.3 of the 8.0.1 manual), then one is insulated from losing data from a single disk failure: - If the drive holding the tables fails, then take the most recent backup and bring it up to date using the WAL. I assume the WAL history you mention is the WAL segments archived for PITR. They are usually shipped to a safe place as soon as they are archived, but not kept on the same device as the WAL itself, if at all possible (often near the database backup). Then, if either of the devices holding the database or WAL fails, the database can be rebuilt using backup + archived WALs up to the point of the last WAL archive. gnari ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
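The shipping of archived segments described above is driven by 8.0's archive_command setting; a minimal postgresql.conf sketch, with an illustrative destination path:

```
archive_command = 'cp %p /mnt/backup/wal_archive/%f'   # %p = segment path, %f = file name
```

In practice the command should copy to a different physical device (or machine) than both the data area and pg_xlog, or the single-disk-failure protection is lost.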
Re: [GENERAL] Deep integration of PostgreSQL with Apache
I agree that it is not trivial, but is it feasible? Specifically, I'm thinking about the following approach: Everything is feasible, but that doesn't mean it is sane to do so :). If you were going to do that you would probably be better served looking at something like SQLite. Sincerely, Joshua D. Drake Command Prompt, Inc. -- Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240 PostgreSQL Replication, Consulting, Custom Programming, 24x7 support Managed Services, Shared and Dedication Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/ ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Dynamically access to field on a RECORD variable
If it's not possible, can I create a function that gets a RECORD and a table name and returns the correct value? For example:

CREATE FUNCTION my_value(TEXT, RECORD) RETURNS TEXT AS '
DECLARE
    table_name ALIAS FOR $1;
    rec ALIAS FOR $2;
BEGIN
    IF (table_name = ''my_table1'') THEN
        RETURN rec.my1_field;
    ELSIF (table_name = ''my_table2'') THEN
        RETURN rec.my2_field;
    END IF;
    ...
END;
' LANGUAGE plpgsql;

On Tue, 2005-05-03 at 11:52, Tom Lane wrote: Ricardo Vaz Mannrich [EMAIL PROTECTED] writes: Is it possible? Not in plpgsql. I believe you could do it in any of the other PLs though. regards, tom lane -- Ricardo Vaz Mannrich [EMAIL PROTECTED] ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Adding Records With SERIAL Primary Key
On Tue, 2005-05-03 at 11:58, Rich Shepard wrote: I may have missed this in the docs; if so, please tell me where to find the answer there. If not, I still need to learn how to resolve this situation. The database schema has been designed and the tables are ready to be created. Once they exist, I want to load data into the tables in batch mode (rather than one at a time, manually). The core table has a SERIAL data type field as the primary key. How is this field assigned values? Then, how do I load the related tables so they reference the proper records? The normal way it's added is with a DEFAULT clause. This means it can be overridden by inserting it by hand. Changing this to an after trigger can ensure that it is always inserted no matter what the user app tries to do. When loading in data using the copy command, one can set the columns to be copied, and the system will insert the serial value for you if you leave that column out of the list. If your data already has a value assigned, you can load those in, and then use setval() to set the value of the associated sequence. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
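A sketch of both loading styles described above (the table, columns, file paths, and sequence name are hypothetical; SERIAL on a column named id creates a sequence conventionally named core_table_id_seq):

```sql
-- let the SERIAL default assign the key: leave id out of the column list
COPY core_table (name, notes) FROM '/tmp/core.dat';

-- or load pre-assigned keys, then resynchronize the sequence
COPY core_table (id, name, notes) FROM '/tmp/core_with_ids.dat';
SELECT setval('core_table_id_seq', (SELECT max(id) FROM core_table));
```

Without the setval() step, the next default-assigned id would collide with the loaded rows. Related tables can then be loaded with the same ids in their foreign key columns.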
Re: [GENERAL] Adding Records With SERIAL Primary Key
On Tue, 3 May 2005, Scott Marlowe wrote: When loading in data using the copy command, one can set the columns to be copied, and the system will insert the serial value for you if you leave that column out of the list. Scott, Aha! I thought this would be the case, but I was not sure. Many thanks, Rich -- Dr. Richard B. Shepard, President Applied Ecosystem Services, Inc. (TM) http://www.appl-ecosys.com Voice: 503-667-4517 Fax: 503-667-8863 ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Sorting by constant values
I have a column that I want to sort by certain values. The values are Unit, Exterior and Common. I want all the records with Unit first, Common second and Exterior last in the sort order. These are the only 3 possible values, is there a way to sort manually like that with the alphanumeric values? -- Robert ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Date addition/subtraction
Hi How in postgres can I do date/time subtraction or addition. e.g. If I want to get today's date - 30 days? or current_timestamp - 1 hour? Thanks Craig ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Date addition/subtraction
It's easy. You have to know that the INTERVAL data type exists, so:

    SELECT current_date - '30 days'::interval;
    SELECT current_timestamp - '1 hour'::interval;

2005/5/3, Craig Bryden [EMAIL PROTECTED]: Hi. How in postgres can I do date/time subtraction or addition? e.g. If I want to get today's date - 30 days? or current_timestamp - 1 hour? Thanks, Craig ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] postgresql 8 abort with signal 10
On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote: On Tue, 2005-05-03 at 11:36, Alexandre Biancalana wrote: You need to find out what's triggering that. Turning on query logging would be a good way of investigating. Which directives can I use to enable this? debug_print_parse? debug_print_rewritten? debug_print_plan? debug_pretty_print? Rather large shared buffers for a machine with only 1 gig of RAM. 640 Meg of shared buffers means the kernel is basically double buffering everything. Have you tested with smaller settings, and was this setting the best? I had 256 MB of RAM, then I increased to 1 GB thinking this could be a problem of running out of memory or buggy memory. After this upgrade I increased the number of shared buffers, etc. It's important to say that max memory usage reaches only 80%. What values do you suggest? Generally 25% of the memory or 256 Megs, whichever is less. In your case, they're the same. The reasoning being that the kernel caches, while postgresql only really holds onto data as long as it needs it, then frees it; so having a really huge buffer space lets postgresql flush the kernel cache, and then the next access, after postgresql has freed the memory that was holding the data, has to go to disk. The kernel is generally a lot better at caching than most apps. So, 32768 is about as big as I'd normally go, and even that may be more than you really need. Note that there's overhead in managing such a large buffer as well. With pgsql 8.x and the new caching algorithms in place, such overhead may be lower, and larger buffer settings may be in order. But if testing hasn't shown them to be faster, I'd avoid them for now and see if your signal 10 errors start going away. If they do, then you've likely got a kernel bug in there somewhere. If they don't, I'd suspect bad hardware. You might want to look in your signal man page on BSD and see what signal 10 means. On Solaris it's a bus error. Not a clue what it is in FreeBSD myself though. 
The FreeBSD man page says: "10 SIGBUS". The system does not generate a core dump file for this error. Hi Michael, here is my /etc/sysctl.conf:

    kern.corefile=/var/coredumps/%N.%P.core
    kern.sugid_coredump=1

As I said before, there is no core file in /var/coredumps. I should say that this setup for storing core files is OK; I have used it a lot in the past. Thanks Scott, I will lower shared_buffers to 32768 and try again, but how about work_mem, maintenance_work_mem, effective_cache_size? ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, 2005-05-03 at 12:25, Alexandre Biancalana wrote: On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote: On Tue, 2005-05-03 at 11:36, Alexandre Biancalana wrote: You need to find out what's triggering that. Turning on query logging would be a good way of investigating. Which directives can I use to enable this ? debug_print_parse ? debug_print_rewritten ? debug_print_plan ? debug_pretty_print ? Rather large, shared buffers for a machine with only 1 gig of ram. 640 Meg of RAM means the kernel is basically double buffering everything. have you tested with smaller settings and this setting was the best? I had 256 of RAM then I increase to 1GB thinking this could be a problem of out of memory or a buggy memory.. After this upgrade I increase the numbers of shared buffers,etc It's important to say that the max memory usage reach to only 80%. What values do you suggest ? Generally 25% of the memory or 256 Megs, whichever is less. In your case, they're the same. The Reasoning being that the kernel caches, while postgresql only really holds onto data as long as it needs it, then frees it, so having a really huge buffer space lets postgresql flush the kernel cache, then the next access, after postgresql has freed the memory that was holding the data, now has to go to disk. The kernel is generally a lot better at caching than most apps. So, 32768 is about as big as i'd normally go, and even that may be more than you really need. Note that there's overhead in managing such a large buffer as well. With pgsql 8.x and the new caching algorithms in place, such overhead may be lower, and larger buffer settings may be in order. But if testing hasn't shown them to be faster, i'd avoid them for now and see if your signal 10 errors start going away. If they do, then you've likely got a kernel bug in there somewhere. If they don't, I'd suspect bad hardware. You might want to look in your signal man page on BSD and see what signal 10 means. On solaris it's a bus error. 
Not a clue what it is in FreeBSD myself though. FreeBSD man page say: 10SIGBUS The system does not generate core dump file for this error. Hi Michael, Here is my /etc/sysctl.conf: kern.corefile=/var/coredumps/%N.%P.core kern.sugid_coredump=1 and how I said before, there is no one core file in /var/coredumps I should say that this structure to store core files it's ok, in past I used this a lot Thanks Scott I will lower shared_buffers to 32768 and try again, but how about work_mem, maintenance_work_mem, effective_cache_size ?? work_mem is how much memory things like sorts can allocate. It really kind of depends on the kind of parallel load you're looking at possibly handling. If you'll never have more than a dozen or so open connections that could be doing sorts (select distinct, order by, union, etc...) then having it be 10 to 20 meg is fine. If you're going to handle hundreds or even thousands of connections, you have to be careful it's not big enough to run your machine out of memory, or you'll start getting swap storms. maintenance_work_mem is used by processes like vacuum, which tend to be run one at a time, so having it be fairly large, like 32 to 64 meg is no big issue. Note that you can set either of these settings higher for one shot things, like nightly maintenance, if you need to keep them lower during the day to ensure proper operation. effective_cache_size is a setting that simply tells the query planner about how much the kernel / OS is caching of your data set. Generally the cached value shown in top or some other system monitor on a dedicated machine is about right. work_mem and maintenance_work_mem are in 1k increments, while the other two, (buffers and effective_cache_size) are in 8k increments, btw. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
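Scott's unit rules can be summarized in a postgresql.conf sketch (the numbers are purely illustrative for a hypothetical 1 GB machine following the advice above, not recommendations for any particular workload):

```
shared_buffers = 32768         # 8 kB pages  -> 256 MB
work_mem = 16384               # 1 kB units  -> 16 MB per sort/hash operation
maintenance_work_mem = 65536   # 1 kB units  -> 64 MB for VACUUM and friends
effective_cache_size = 65536   # 8 kB pages  -> tells the planner ~512 MB is OS-cached
```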
Re: [GENERAL] Sorting by constant values
You can order by conditions, let's say column='Unit'. Evaluating a condition gives you 't' or 'f', and alphabetically 'f' < 't'... you should use DESC to get the matches first. So it would be more or less like this:

    ORDER BY column='Unit' DESC, column='Common' DESC, column='Exterior' DESC

I don't think this performs well, though. If you have many rows to evaluate, you could create a function like this:

    CREATE FUNCTION evaluate(TEXT) RETURNS TEXT LANGUAGE 'sql' AS '
        SELECT ($1 = ''Unit'')::text || ($1 = ''Exterior'')::text || ($1 = ''Common'')::text;
    ';

This function would return something like 'tff', 'ftf', 'fft', and you should be able to create an index on that function. Then you can use the index to order your rows. Hope it helps ;) 2005/5/3, Robert Fitzpatrick [EMAIL PROTECTED]: I have a column that I want to sort by certain values. The values are Unit, Exterior and Common. I want all the records with Unit first, Common second and Exterior last in the sort order. These are the only 3 possible values, is there a way to sort manually like that with the alphanumeric values? -- Robert ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Date addition/subtraction
On Tue, 2005-05-03 at 12:32, Craig Bryden wrote: Hi How in postgres can I do date/time subtraction or addition. e.g. If I want to get today's date - 30 days? or current_timestamp - 1 hour? select now() - interval '1 hour' and so on. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] unsubscribe
unsubscribe -farewell general [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Sorting by constant values
On Tue, 2005-05-03 at 12:29, Robert Fitzpatrick wrote: I have a column that I want to sort by certain values. The values are Unit, Exterior and Common. I want all the records with Unit first, Common second and Exterior last in the sort order. These are the only 3 possible values, is there a way to sort manually like that with the alphanumeric values? case statement should work. http://www.postgresql.org/docs/8.0/static/functions-conditional.html ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
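For the exact ordering Robert asked for (Unit first, Common second, Exterior last), the CASE approach looks roughly like this (table and column names are hypothetical):

```sql
SELECT *
FROM areas
ORDER BY CASE location_type
             WHEN 'Unit'     THEN 1
             WHEN 'Common'   THEN 2
             WHEN 'Exterior' THEN 3
         END;
```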
Re: [GENERAL] Date addition/subtraction
On Tue, 2005-05-03 at 19:32 +0200, Craig Bryden wrote: How in postgres can I do date/time subtraction or addition. e.g. If I want to get today's date - 30 days? or current_timestamp - 1 hour? easier than you think:

    select current_timestamp - interval '1 hour';
    select current_date - interval '30 days';        -- timestamp
    select current_date + interval '1 week';         -- timestamp
    select date(current_date + interval '1 week');   -- date

see: http://www.postgresql.org/docs/8.0/interactive/functions-datetime.html gnari ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Sorting by constant values
On Tue, 2005-05-03 at 13:29 -0400, Robert Fitzpatrick wrote: I have a column that I want to sort by certain values. The values are Unit, Exterior and Common. I want all the records with Unit first, Common second and Exterior last in the sort order. These are the only 3 possible values, is there a way to sort manually like that with the alphanumeric values? ... ORDER BY length(col); :-) gnari ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] MVCC and insert
On Tue, May 03, 2005 at 12:28:05PM -0400, Allen wrote: How does MVCC handle two concurrent tasks trying to insert the same row? Example pseudo-code:

    select row from table where...
    if not found
        prepare row
        insert row
    else
        update row
    ... continue processing

what happens if TASK1 inserts the row first, and continues processing (no commit yet), then TASK2 selects the row (not found in its MVCC space), then tries to insert... would it get a duplicate key error? What if TASK1 later does a rollback? What's the best way to handle this? Is there a unique index? If there is, the index code will block. You can't see the row in your MVCC space, so you don't have a way to know beforehand whether the insertion would block or not. I think the solution (only in 8.0) is:

    :label
    update
    if rows updated == 0
        set a savepoint
        insert
        if it fails due to duplicate key
            rollback to savepoint
            restart at :label

The real solution is MERGE, but we don't support that ATM. Specifically, my problem is using a table to count and limit the number of allowed transactions for a given key. The table would be used in different applications to perform the limit counting and checking. Sorry, I don't understand your scenario, can't help you further. -- Alvaro Herrera ([EMAIL PROTECTED]) Before you were born your parents weren't as boring as they are now. They got that way paying your bills, cleaning up your room and listening to you tell them how idealistic you are. -- Charles J. Sykes' advice to teenagers ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
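Alvaro's savepoint loop can be sketched as an 8.0 plpgsql function (table and column names are hypothetical); the EXCEPTION clause gives each iteration an implicit savepoint, so a concurrent duplicate-key failure just retries the UPDATE:

```sql
CREATE FUNCTION upsert_counter(k TEXT, delta INT) RETURNS VOID AS '
BEGIN
    LOOP
        -- try the update first
        UPDATE counters SET n = n + delta WHERE key = k;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row yet: try to insert it
        BEGIN
            INSERT INTO counters (key, n) VALUES (k, delta);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- another transaction inserted concurrently; loop and retry
        END;
    END LOOP;
END;
' LANGUAGE plpgsql;
```

Note that a rollback by the competing transaction (the TASK1 case in the question) simply makes the retried UPDATE find zero rows again, and the next INSERT attempt succeeds.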
Re: [GENERAL] 7.3.9 Install Question - init.d/postgresql error?
All, Thank you for the confirmation and reply. DC -- --- Devrim GUNDUZ [EMAIL PROTECTED] wrote: Anyway, I think it will be better to fix it completely in 7.3.10. Apologies everyone :( It will probably work, but I haven't compared the 7.3 and 7.4 initscripts in detail. It will work... Regards, - -- Devrim GUNDUZ devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr http://www.tdmsoft.com.tr http://www.gunduz.org __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] postgresql 8 abort with signal 10
Thank you for the detailed explanation Scott, they are very handy !! I reduced the shared_buffers to 32768, but the problem still occurs. Any other idea ?? ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] postgresql 8 abort with signal 10
On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote: Thank you for the detailed explanation Scott, they are very handy !! I reduced the shared_buffers to 32768, but the problem still occurs. Any other idea ?? Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the issue really. Sounds like either a hardware fault, or a BSD bug. I'd check the BSD mailing lists for mention of said bug, and see if you can grab a spare drive and install the last stable version of FreeBSD 4.x and if that fixes the problem. If you decide to try linux, avoid the 2.6 kernel, it's still got issues... 2.4 is pretty stable. I really doubt it's a problem in postgresql itself though. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] postgresql 8 abort with signal 10
Ohhh god :( This FreeBSD is the latest STABLE version. I can try changing some hardware; I already changed the memory. What can I try now? The processor? The motherboard? On 5/3/05, Scott Marlowe [EMAIL PROTECTED] wrote: On Tue, 2005-05-03 at 15:04, Alexandre Biancalana wrote: Thank you for the detailed explanation Scott, they are very handy !! I reduced the shared_buffers to 32768, but the problem still occurs. Any other idea ?? Yeah, I had a sneaking suspicion that shared_buffers wasn't causing the issue really. Sounds like either a hardware fault, or a BSD bug. I'd check the BSD mailing lists for mention of said bug, and see if you can grab a spare drive and install the last stable version of FreeBSD 4.x and if that fixes the problem. If you decide to try linux, avoid the 2.6 kernel, it's still got issues... 2.4 is pretty stable. I really doubt it's a problem in postgresql itself though. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Deep integration of PostgreSQL with Apache
Robin Boerdijk wrote: Why would this web/database server be limited to using only one connection? No it wouldn't. I misunderstood your question. It's one process/one connection. If you don't use Apache as a multi-threaded server, then your question makes more sense. Still, it's a lot of work to make it happen and I'm not sure you'd gain anything. My advice would be to consider a multi-threaded server that uses a connection pool and perhaps include some shared caching of data that is more static in nature in the middle tier. That could really save some context switches. Regards, Thomas Hallgren ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] Favorable Postgresql write up
http://www.suite101.com/article.cfm/oracle/115560 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
[GENERAL] plpythonu and dollar quoting
Can dollar quoting be used with plpythonu (like with plperl - http://www.postgresql.org/docs/8.0/interactive/plperl.html#PLPERL-FUNCS)? When trying to create a function I get this error: 'syntax error at or near $' If I use single quotes, then escape all quotes and (a lot of) backslashes in the python code then the function gets created fine. CSN __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Favorable Postgresql write up
It's nice, but...it couldn't possibly be more subjective, and his ranking of documentation as 5 out of 10 is funny. He apparently did not even find these many hosted listservs (only Usenet). shrug I hope he enjoys his ongoing reviews; I suppose it can't hurt? -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dann Corbit Sent: Tuesday, May 03, 2005 5:28 PM To: pgsql-general@postgresql.org Subject: [GENERAL] Favorable Postgresql write up http://www.suite101.com/article.cfm/oracle/115560 ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED]) ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Interesting article on transactional algorithms includes PostgreSQL study
http://www.cs.cmu.edu/~harchol/Papers/actual-icde-submission.pdf ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] calculated identity field in views, again...
I asked this question several weeks ago, but nobody proposed a solution, so I am repeating the same question again... I have an MS Access front-end for a database on PostgreSQL. I could use pass-through queries as record sources for reports and it works fine... Unfortunately, MS Access doesn't allow pass-through queries to be record sources for subforms. Therefore I tried to base subforms on regular JET queries on linked tables. It was too slow... Then I tried to base subforms on DAO recordset code generated from pass-through QueryDef objects. Although it worked, it was very unstable... Now it seems to me that PostgreSQL views are the best solution, but Access considers views as tables (!) and needs a column with unique values. All those views are complicated queries on several tables, so I can't use any table's column as primary key. I need a calculated column in the view that Access will consider as the primary key column. In regular tables, I use a bigserial field, but how can I create a calculated bigserial column in a view? Thanks. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] plpythonu and dollar quoting
Nevermind, I see dollar-quoting was added in 8.0. --- CSN [EMAIL PROTECTED] wrote: Can dollar quoting be used with plpythonu (like with plperl - http://www.postgresql.org/docs/8.0/interactive/plperl.html#PLPERL-FUNCS)? When trying to create a function I get this error: 'syntax error at or near $' If I use single quotes, then escape all quotes and (a lot of) backslashes in the python code then the function gets created fine. CSN __ Do you Yahoo!? Make Yahoo! your home page http://www.yahoo.com/r/hs ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] plpythonu and dollar quoting
On Tue, May 03, 2005 at 02:46:04PM -0700, CSN wrote: Can dollar quoting be used with plpythonu (like with plperl - Yes, if you're using PostgreSQL 8.0 or later. Dollar quotes have nothing to do with the function's language -- they're just another way to quote a string. http://www.postgresql.org/docs/8.0/interactive/sql-syntax.html#SQL-SYNTAX-DOLLAR-QUOTING When trying to create a function I get this error: 'syntax error at or near $' This is the error you'd get if you tried to use dollar quotes in PostgreSQL 7.4; you'd get a similar error in earlier versions. What version are you using? What does SELECT version(); show? -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
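Since dollar quoting is just string quoting, it works the same for a plpythonu body as for any other PL; a minimal sketch (the function itself is hypothetical):

```sql
-- 8.0+: no need to escape the quotes and backslashes in the Python body
CREATE FUNCTION py_quote_demo(name TEXT) RETURNS TEXT AS $$
    return "hello, '%s'" % name
$$ LANGUAGE plpythonu;
```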
[GENERAL] default column value using a function, trigger, or whatever
I am trying to come up with a method to have a default value for a column based on a function and other columns. I'm hoping ( well not too much ) that what I figure out here will apply to MS SQL Server as I am stuck using it unless I can prove there ain't no way it's going to happen. ( Now that the embarrassment of being stuck using SQL Server is out of the way. ) I'd like to populate a column with the md5 of ColA || ':' || ColB || ':' || ColC. A search of the archives ( with the terms I was using ) didn't turn up anything useful. Would this be a function or a trigger? Actually I can probably find examples in the two PostgreSQL books ( Bruce's and the one from Command Prompt ) I have when I get home tonight but I couldn't in the docs. There used to be a PostgreSQL Cookbook but I can't find it -- even doing the Google thing. Thoughts, ideas, pointers are welcome. Rod -- --- [This E-mail scanned for viruses by Declude Virus] ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] default column value using a function, trigger, or
On Tue, 2005-05-03 at 17:31, Roderick A. Anderson wrote: I am trying to come up with a method to have a default value for a column based on a function and other columns. I'm hoping ( well not too much ) that what I figure out here will apply to MS SQL Server as I am stuck using it unless I can prove there ain't no way it's going to happen. ( Now that the embarrassment of being stuck using SQL Server is out of the way. ) I'd like to populate a column with the md5 of ColA || ':' || ColB || ':' || ColC. A search of the archives ( with the terms I was using ) didn't turn up anything useful. Would this be a function or a trigger? This will be an after trigger function (everything's a function! woot! hehe. seriously though, all triggers are functions, but not all functions are triggers, in postgresql land). Basically, you'll need an after trigger that updates the md5 field with that cola/b/c bit you've got up there. Not much to it really. One of the examples on this page ought to help: http://www.postgresql.org/docs/8.0/static/plpgsql-trigger.html ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
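A sketch of the trigger approach (table and column names are hypothetical); a BEFORE trigger is used here, since it can set NEW.checksum before the row is stored, with no second write:

```sql
CREATE TABLE t (
    cola     TEXT,
    colb     TEXT,
    colc     TEXT,
    checksum TEXT
);

-- Compute md5(ColA || ':' || ColB || ':' || ColC) into the checksum column
CREATE FUNCTION set_checksum() RETURNS trigger AS '
BEGIN
    NEW.checksum := md5(NEW.cola || '':'' || NEW.colb || '':'' || NEW.colc);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER t_checksum
    BEFORE INSERT OR UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE set_checksum();
```

Note that the concatenation yields NULL if any of the three columns is NULL; wrap them in coalesce() if that matters for your data.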
[GENERAL] Very interesting article about the future of databases
For those who didn't see it on slashdot: http://www.acmqueue.org/modules.php?name=Contentpa=showpagepid=293 What's interesting is that PostgreSQL is already working on some of these things. Of note, there's a patch to allow sequential scans to 'piggyback' on top of other sequential scans. See the quote For petabyte-scale databases, the only solution may be to run continuous data scans, with queries piggybacked on top of the scans. on page 4. There's also been discussion about how to more intelligently cost UDF's, something also mentioned on page 4. -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: Where do you want to go today? Linux: Where do you want to go tomorrow? FreeBSD: Are you guys coming, or what? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Very interesting article about the future of databases
w/o reading the URL docs, it sounds suspiciously like tagged command queueing for sequential scans. pause for comedic effect I wonder what the best way to spend $7K for performance improvement might be? ;-) -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jim C. Nasby Sent: Tuesday, May 03, 2005 6:46 PM To: pgsql-general@postgresql.org Subject: [GENERAL] Very interesting article about the future of databases For those who didn't see it on slashdot: http://www.acmqueue.org/modules.php?name=Contentpa=showpagepid=293 What's interesting is that PostgreSQL is already working on some of these things. Of note, there's a patch to allow sequential scans to 'piggyback' on top of other sequential scans. See the quote For petabyte-scale databases, the only solution may be to run continuous data scans, with queries piggybacked on top of the scans. on page 4. There's also been discussion about how to more intelligently cost UDF's, something also mentioned on page 4. -- Jim C. Nasby, Database Consultant [EMAIL PROTECTED] Give your computer some brain candy! www.distributed.net Team #1828 Windows: Where do you want to go today? Linux: Where do you want to go tomorrow? FreeBSD: Are you guys coming, or what? ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] Deep integration of PostgreSQL with Apache
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 I think the closest you are going to get is Apache::DBI however sense PostgreSQL is processed based you are going to get a new connection for every connection to Apache. On any reasonably busy site that can spell doom. Actually, this is a classic argument for using mod_perl. Each Apache child can connect once to the database, and stay connected as it serves requests to different clients. There is very little overhead, and if you take advantage of server-side prepares, things can run even faster. - -- Greg Sabino Mullane [EMAIL PROTECTED] PGP Key: 0x14964AC8 200505020927 http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8 -BEGIN PGP SIGNATURE- iD8DBQFCdiqzvJuQZxSWSsgRAvmcAKC8HX5E/QwzhHalM0PNg/rMmufFhgCfZFlA tt64Bin9lq7TFSSk/hCsYFg= =Nmnq -END PGP SIGNATURE- ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] getting the ranks of items
I'm probably asking a FAQ, but a few google searches didn't seem to point me in the right place. Is there a simple way with PostgreSQL to assign relative ranks to the result of a query ORDER BY? That is, I want to either have a view that cheaply assigns the ranks, or be able to update a column with the current ranks (yes, I know this latter version is more prone to error). I'm certain there's probably something I can do to laminate an array value to a query result. Am I confused? (Yes!) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/ Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] getting the ranks of items
On May 3, 2005, at 8:30 PM, Randal L. Schwartz wrote: Is there a simple way with PostgreSQL to assign relative ranks to the result of a query ORDER BY? What do you mean by ranks? ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] getting the ranks of items
Matthew == Matthew Terenzio [EMAIL PROTECTED] writes: Matthew On May 3, 2005, at 8:30 PM, Randal L. Schwartz wrote: Is there a simple way with PostgreSQL to assign relative ranks to the result of a query ORDER BY? Matthew What do you mean by ranks? If I order a query by ascending age, the youngest person gets rank 1, the second youngest gets rank 2, the third youngest gets rank 3, and if the fourth and fifth tie, they both get 4, and the next one gets 6. You know, rank? :) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 merlyn@stonehenge.com URL:http://www.stonehenge.com/merlyn/ Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training! ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] getting the ranks of items
Randal L. Schwartz wrote: I'm probably asking a FAQ, but a few google searches didn't seem to point me in the right place. Is there a simple way with PostgreSQL to assign relative ranks to the result of a query ORDER BY? That is, I want to either have a view that cheaply assigns the ranks, or be able to update a column with the current ranks (yes, I know this latter version is more prone to error). I'm certain there's probably something I can do to laminate an array value to a query result. Am I confused? (Yes!) Randal, May be you can use something like this: create sequence seq_tmp; select nextval('seq_tmp') as rank, a.id, a.name from (select id, name from t order by name desc) a; drop sequence seq_tmp; I don't know how cheap will this be (because of the sequence), but couldn't find another way. I do not think that we have something like Oracle's ROWNUM... Regards, Lyubomir Petrov P.S. I'm sure you can wrap it in plperl stored procedure :) ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [GENERAL] getting the ranks of items
How about something like:

    CREATE TABLE testrank (
        id int,
        value varchar
    );

    insert into testrank values(17,'way');
    insert into testrank values(27,'foo');
    insert into testrank values(278,'bar');
    insert into testrank values(1,'abc');
    insert into testrank values(2,'def');

    CREATE OR REPLACE FUNCTION ranker(text) RETURNS SETOF RECORD AS $$
        my ($query) = @_;
        my $rv = spi_exec_query($query);
        my $rows = [];
        foreach my $rn (0 .. $rv->{processed} - 1) {
            my $row = $rv->{rows}[$rn];
            $row->{index} = $rn + 1;
            push @$rows, $row;
        }
        return $rows;
    $$ language plperl;

    select * from ranker('select * from testrank order by value')
        as t(index int, id int, value varchar);

    1,1,abc
    2,278,bar
    3,2,def
    4,27,foo
    5,17,way

Sorry, the results don't paste in very well, but you get the idea. This would probably need to be cleaned up a bit, but I think it would do something like what you need. Sean - Original Message - From: Lyubomir Petrov [EMAIL PROTECTED] To: Randal L. Schwartz merlyn@stonehenge.com Cc: pgsql-general@postgresql.org Sent: Tuesday, May 03, 2005 9:13 PM Subject: Re: [GENERAL] getting the ranks of items Randal L. Schwartz wrote: I'm probably asking a FAQ, but a few google searches didn't seem to point me in the right place. Is there a simple way with PostgreSQL to assign relative ranks to the result of a query ORDER BY? That is, I want to either have a view that cheaply assigns the ranks, or be able to update a column with the current ranks (yes, I know this latter version is more prone to error). I'm certain there's probably something I can do to laminate an array value to a query result. Am I confused? (Yes!) Randal, May be you can use something like this: create sequence seq_tmp; select nextval('seq_tmp') as rank, a.id, a.name from (select id, name from t order by name desc) a; drop sequence seq_tmp; I don't know how cheap will this be (because of the sequence), but couldn't find another way. I do not think that we have something like Oracle's ROWNUM... Regards, Lyubomir Petrov P.S. 
I'm sure you can wrap it in a plperl stored procedure :)

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
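For readers following along outside the database, here is a minimal sketch in Python (not plperl) of what the ranker() function in the message above does: take the rows a query would return, order them, and attach a 1-based row number. The hardcoded row list stands in for the result of spi_exec_query.

```python
# Rows standing in for the testrank table from the thread.
rows = [
    {"id": 17, "value": "way"},
    {"id": 27, "value": "foo"},
    {"id": 278, "value": "bar"},
    {"id": 1, "value": "abc"},
    {"id": 2, "value": "def"},
]

def ranker(rows, key):
    """Order rows by `key` and attach a 1-based `index`,
    mirroring the loop in the plperl function."""
    ordered = sorted(rows, key=lambda r: r[key])
    return [{**row, "index": n} for n, row in enumerate(ordered, start=1)]

for r in ranker(rows, "value"):
    print(r["index"], r["id"], r["value"])
```

Run against the sample rows, this prints the same five ranked rows the thread shows (abc, bar, def, foo, way with indexes 1 through 5).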
Re: [GENERAL] getting the ranks of items
If I order a query by ascending age, the youngest person gets rank 1, the second youngest gets rank 2, the third youngest gets rank 3, and if the fourth and fifth tie, they both get rank 4, and the next one gets rank 6. You know, rank? :)

You could use a plPerl function.

Sincerely,
Joshua D. Drake

---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] getting the ranks of items
If I order a query by ascending age, the youngest person gets rank 1, the second youngest gets rank 2, the third youngest gets rank 3, and if the fourth and fifth tie, they both get rank 4, and the next one gets rank 6. You know, rank? :)

You could use a plPerl function.

To handle ties, you'd need some way of passing the ranking criteria to the function with persistence between calls, which might have some startup issues. Wouldn't that also cause problems with multiple users calling the function simultaneously?
--
Mike Nolan

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
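The tie behaviour described in this thread (1, 2, 3, 4, 4, 6) is standard competition ranking. As a hypothetical illustration of the logic such a plperl function would need — not code from the thread — here it is sketched in Python over an already-sorted list of ages:

```python
# Standard competition ranking: tied values share a rank, and the rank
# after a tie skips ahead by the number of tied rows.
# Hypothetical ages chosen to reproduce the 4-4-6 case from the thread.
ages = [10, 12, 15, 20, 20, 25]

def competition_rank(values):
    """Return ranks for a sorted list: ties share a rank,
    and the next distinct value's rank skips (1, 2, 3, 4, 4, 6)."""
    ranks = []
    for i, v in enumerate(values):
        if i > 0 and v == values[i - 1]:
            ranks.append(ranks[-1])   # tie: reuse the previous rank
        else:
            ranks.append(i + 1)       # 1-based position becomes the rank
    return ranks

print(competition_rank(ages))  # [1, 2, 3, 4, 4, 6]
```

Because each non-tied rank comes from the row's position rather than a counter, the rank after a tie skips automatically, which is exactly the behaviour Randal asked for.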
Re: [GENERAL] Favorable Postgresql write up
Dann Corbit [EMAIL PROTECTED] writes:

http://www.suite101.com/article.cfm/oracle/115560

Well, he does say that this is only a comparison of the install experience ... but I read that as the win32 installer guys getting a gold star. Congrats, guys!

regards, tom lane

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] [INTERFACES] calculated identity field in views, again...
Zlatko Matic wrote:

I asked this question several weeks ago, but nobody proposed a solution, so I am repeating the same question again...

I have an MS Access front-end for a database on PostgreSQL. I could use pass-through queries as record sources for reports, and it works fine... Unfortunately, MS Access doesn't allow pass-through queries to be record sources for subforms. Therefore I tried to base subforms on regular JET queries on linked tables. It was too slow... Then I tried to base subforms on DAO recordset code generated from pass-through QueryDef objects. Although it worked, it was very unstable... Now it seems to me that PostgreSQL views are the best solution, but Access considers views to be tables (!) and needs a column with unique values. All those views are complicated queries on several tables, so I can't use any table's column as a primary key. I need a calculated column in the view that Access will treat as the primary key column. In regular tables, I use a bigserial field, but how can I create a calculated bigserial column in a view? Thanks.

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match

Another option is to toss MS Access altogether and program the front end entirely in VB. That is what we did.
--
Kind Regards,
Keith

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Date addition/subtraction
SELECT current_timestamp - '30 day'::interval;
SELECT current_timestamp - '1 hour'::interval;

Thanks
Dinesh Pandey

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Craig Bryden
Sent: Tuesday, May 03, 2005 11:02 PM
To: pgsql
Subject: [GENERAL] Date addition/subtraction

Hi

How can I do date/time subtraction or addition in Postgres? E.g., if I want to get today's date - 30 days, or current_timestamp - 1 hour?

Thanks
Craig

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
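The "timestamp minus interval" arithmetic shown in this thread has a direct analogue in most languages. As a point of comparison only (this is not PostgreSQL code, and it is not from the thread), the same two subtractions in Python with datetime and timedelta:

```python
from datetime import datetime, timedelta

now = datetime.now()

# Analogue of: current_timestamp - '30 day'::interval
thirty_days_ago = now - timedelta(days=30)

# Analogue of: current_timestamp - '1 hour'::interval
one_hour_ago = now - timedelta(hours=1)

print(thirty_days_ago < one_hour_ago < now)  # True
```

As in SQL, subtracting an interval from a timestamp yields another timestamp, and subtracting two timestamps yields an interval (a timedelta).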
[GENERAL] Postgre 8.0 for Linux i586
Hello everyone,

Can anyone help me find RPM packages for PostgreSQL 8.0? I need them for Linux "i586"; I can only find RPMs for i686 :( Has anybody had more luck than me? Thanks in advance.

Atenciosamente,
Lipy Reis
Analista de Suporte
===
[Translated from Portuguese:] when it's time to eat, we eat... when it's time to drink, we drink... when it's time to love, we love... when it's time to work... hmmm... then we all send "e-mails"!...
Re: [GENERAL] [INTERFACES] calculated identity field in views, again...
Zlatko Matic [EMAIL PROTECTED] writes:

In regular tables, I use a bigserial field, but how can I create a calculated bigserial column in a view?

You would have to create a sequence and reference it with nextval('sequencename') in your view. But I doubt very much that it will do anything useful. It sounds like Access wants to be able to update records by looking them up by primary key. In that case, assigning a new value in your view will make Postgres make up a brand-new number that is utterly useless for finding the record again later.
--
greg

---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster