[GENERAL] returning ref cursor
Please let me know what's wrong with the code below.

CREATE LANGUAGE plpgsql;

create or replace FUNCTION test_cu(p_cursor REFCURSOR) returns REFCURSOR AS $procedure$
BEGIN
  open p_cursor FOR select * from test;
  RETURN p_cursor;
END;
$procedure$ LANGUAGE plpgsql;

create or replace FUNCTION test_call() RETURNS VOID AS $procedure$
DECLARE
  c_cursor REFCURSOR;
  r_emp test%rowtype;
BEGIN
  PERFORM test_cu(c_cursor);
  loop
    fetch c_cursor into r_emp;
    exit when NOT FOUND;
    RAISE NOTICE '%', r_emp.aaa;
  end loop;
  close c_cursor;
  RETURN;
END;
$procedure$ LANGUAGE plpgsql;

SELECT test_call();

When I execute the above code I get this error:

ERROR: cursor variable "c_cursor" is null
CONTEXT: PL/pgSQL function "test_call" line 7 at FETCH
SQL state: 22004

Thanks,
Ravi Katkar
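(Editorial note on the error above: PL/pgSQL passes arguments by value, so PERFORM test_cu(c_cursor) opens a cursor inside the function but discards the returned cursor name; c_cursor in the caller stays NULL, which is exactly what the FETCH then complains about. A minimal, untested sketch of the usual fix is to capture the function's return value instead of using PERFORM:)

```sql
-- Sketch only: capture the returned cursor name instead of discarding it.
create or replace FUNCTION test_call() RETURNS VOID AS $procedure$
DECLARE
  c_cursor REFCURSOR;
  r_emp test%rowtype;
BEGIN
  c_cursor := test_cu(c_cursor);  -- assignment, not PERFORM
  loop
    fetch c_cursor into r_emp;
    exit when NOT FOUND;
    RAISE NOTICE '%', r_emp.aaa;
  end loop;
  close c_cursor;
END;
$procedure$ LANGUAGE plpgsql;
```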
Re: [GENERAL] UPDATE ... RETURNING atomicity
On 05/24/2010 01:29 AM, Grzegorz Jaśkiewicz wrote:
> don't lock tables explicitly. That's a killer for (concurrent) performance. Just write queries properly, and use appropriate transaction level. And you are sorted.

Read Committed is fine, as long as I restart the UPDATE query when it returns nothing. The specifics of our app allow retrying the said query a few times; if it still did not get the id (as during the improbable total ID exhaustion), then we pass through, and this is considered a tolerable soft error. I suspect retrying just a single query is less expensive than retrying a failed serializable transaction, which is more heavy-weight in nature (and in practice).

BTW, regarding your comment on avoiding explicit LOCKs: in one place which wasn't speed-sensitive I had to use the strictest LOCK mode, because otherwise deadlocks occurred from time to time.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] ROLLBACK in a function
On 24/05/10 02:51, Ken Winter wrote:
> How can I write a PL/PgSQL function that rolls back every database change it has done?

Have it raise an exception, causing the surrounding transaction to terminate with an error. Another function calling yours can still catch the exception and handle it, so it's possible for your function's changes to be applied, but only if someone explicitly and intentionally catches and ignores the error.

> BEGIN
>   RAISE EXCEPTION '';
> EXCEPTION
>   WHEN raise_exception THEN
>     SET search_path TO public, tests;
>     -- Test code:
>     SELECT count(*) INTO old_count FROM person_i;
>     INSERT INTO person (last_name) VALUES ('_test');
>     SELECT count(*) INTO new_count FROM person_i;
>     IF NOT new_count = old_count + 1 THEN
>       errs := errs || ': Failed to insert into ''person''';
>     END IF;
>
> The function runs OK, but it does not roll back the actions it did (in this case, the INSERT).

You're on the right track, but you've put the code that changes the database in the exception handler, _after_ the exception is thrown. You want it in the BEGIN block, _before_ the exception is thrown.

However, you're also doing this quite an expensive way. An INSERT/UPDATE/DELETE can only fail to have any effect without throwing an exception if it has a WHERE clause, is intercepted by a BEFORE trigger that returns NULL, or is rewritten by a rule. If none of those are possible, your INSERT/UPDATE/DELETE is guaranteed either to work or to throw an exception.

eg (untested code, please verify before relying on it):

BEGIN
  SELECT count(*) INTO old_count FROM person_i;
  INSERT INTO person (last_name) VALUES ('_test');
  SELECT count(*) INTO new_count FROM person_i;
  IF NOT old_count + 1 = new_count THEN
    RAISE EXCEPTION 'Update failed';
  END IF;
EXCEPTION
  WHEN raise_exception THEN
    errs := errs || ': Failed to insert into ''person''';
END;

But ... please consider whether you really need this check with two expensive count(*) queries.
Unless you're using BEFORE triggers that return NULL or rewrite rules, you can trust that an INSERT without a WHERE clause will always succeed or throw an exception. If you *are* using triggers or rules, you should be able to use GET DIAGNOSTICS to see the affected row count, saving yourself those count(*) queries. See:

http://www.postgresql.org/docs/current/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS

eg (untested code, should give you the idea):

DECLARE
  rowcount INTEGER;
BEGIN
  -- function body here
  BEGIN
    INSERT INTO person (last_name) VALUES ('_test');
    GET DIAGNOSTICS rowcount = ROW_COUNT;
    IF NOT rowcount = 1 THEN
      RAISE EXCEPTION 'Update failed';
    END IF;
  EXCEPTION
    WHEN raise_exception THEN
      errs := errs || ': Failed to insert into ''person''';
  END;
END;

You *can* trick Pg into not seeing that an update has happened even when it really has. For example, a BEFORE trigger can do its own INSERT and then return NULL to tell Pg to ignore the INSERT that invoked the trigger. Stored functions that do INSERTs don't set the row count either. If you're not doing anything like that, though, you're set.

-- Craig Ringer
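(Editorial note: the "trick" Craig mentions can be sketched as follows. This is an untested illustration with invented table and function names; the BEFORE trigger does its own INSERT and returns NULL, so the triggering INSERT is suppressed and GET DIAGNOSTICS would report ROW_COUNT = 0 even though a row was written elsewhere:)

```sql
-- Hypothetical redirecting trigger: logs the row itself, then returns NULL,
-- cancelling the original INSERT into person.
CREATE OR REPLACE FUNCTION person_redirect() RETURNS trigger AS $$
BEGIN
  INSERT INTO person_log (last_name) VALUES (NEW.last_name);
  RETURN NULL;  -- tell Pg to skip the INSERT that fired this trigger
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER person_before_ins
  BEFORE INSERT ON person
  FOR EACH ROW EXECUTE PROCEDURE person_redirect();
```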
Re: [GENERAL] UPDATE ... RETURNING atomicity
don't lock tables explicitly. That's a killer for (concurrent) performance. Just write queries properly, and use appropriate transaction level. And you are sorted.
[GENERAL] ROLLBACK in a function
How can I write a PL/PgSQL function that rolls back every database change it has done?

I'm about to write a set of database test functions. Each function needs to do some INSERT, UPDATE, or DELETE actions, test whether they had their intended effect, and then roll back the test changes to restore the database to its prior state. Here's a simple example:

CREATE OR REPLACE FUNCTION tests.person()
  RETURNS varchar AS
$BODY$
DECLARE
  errs VARCHAR = '';
  -- Test declarations:
  old_count BIGINT;
  new_count BIGINT;
BEGIN
  -- RAISE EXCEPTION '';
  -- EXCEPTION
  --   WHEN raise_exception THEN
  SAVEPOINT s;
  SET search_path TO public, tests;
  -- Test code:
  SELECT count(*) INTO old_count FROM person_i;
  INSERT INTO person (last_name) VALUES ('_test');
  SELECT count(*) INTO new_count FROM person_i;
  IF NOT new_count = old_count + 1 THEN
    errs := errs || ': Failed to insert into ''person''';
  END IF;
  ROLLBACK TO SAVEPOINT s;
  RETURN errs;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;

When I run this function, I get this error:

ERROR: SPI_execute_plan failed executing query "SAVEPOINT s": SPI_ERROR_TRANSACTION
SQL state: XX000
Context: PL/pgSQL function "person" line 11 at SQL statement

I tried another way to make the rollback happen: the last paragraph of http://www.postgresql.org/docs/8.3/static/plpgsql-structure.html says "a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction."
So I rewrote the test function like this:

CREATE OR REPLACE FUNCTION tests.person()
  RETURNS varchar AS
$BODY$
DECLARE
  errs VARCHAR = '';
  -- Test declarations:
  old_count BIGINT;
  new_count BIGINT;
BEGIN
  RAISE EXCEPTION '';
EXCEPTION
  WHEN raise_exception THEN
    SET search_path TO public, tests;
    -- Test code:
    SELECT count(*) INTO old_count FROM person_i;
    INSERT INTO person (last_name) VALUES ('_test');
    SELECT count(*) INTO new_count FROM person_i;
    IF NOT new_count = old_count + 1 THEN
      errs := errs || ': Failed to insert into ''person''';
    END IF;
    RETURN errs;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;

The function runs OK, but it does not roll back the actions it did (in this case, the INSERT).

~ TIA
~ Ken
Re: [GENERAL] UPDATE ... RETURNING atomicity
On 05/23/2010 08:19 PM, Tom Lane wrote:
> Grzegorz Jaśkiewicz writes:
> > find in docs part that talks about transaction isolation levels, and translate it to your problem.
>
> Yes, please read the fine manual: http://www.postgresql.org/docs/8.4/static/mvcc.html
>
> What I think will happen in your example is that all concurrent executions will locate the same row-to-be-updated. The first one to get to the row "wins" and updates the row. All the rest will fail, either updating no rows (if not serializable) or throwing an error (if serializable).

OK, thank you both. I had hoped that UPDATE would take a table-level lock before running the inner select, but then I read that the type of locking done by UPDATE never conflicts with other such locks, so the queries would still run concurrently. We're running the default Read Committed mode.

It's no problem for me to rewrite the Perl DBI query to check the return value and loop until it does get something. Which would have better performance: that, or an explicit LOCK on the table before the UPDATE ... SELECT? The transaction is committed shortly after, with no other queries in between.

Thank you.
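(Editorial note: the retry-until-RETURNING-something approach described above can be sketched in PL/pgSQL. This is an untested illustration; the id_pool table, taken column, and function name are all invented, not from the thread:)

```sql
-- Hypothetical sketch of the retry approach under Read Committed:
-- re-run the claiming UPDATE a bounded number of times, and treat
-- persistent failure as the tolerable soft error mentioned above.
CREATE OR REPLACE FUNCTION claim_id(max_tries int) RETURNS int AS $$
DECLARE
  got_id int;
BEGIN
  FOR i IN 1 .. max_tries LOOP
    UPDATE id_pool
       SET taken = true
     WHERE id = (SELECT id FROM id_pool WHERE NOT taken LIMIT 1)
       AND NOT taken
    RETURNING id INTO got_id;
    IF FOUND THEN
      RETURN got_id;  -- this execution "won" the row
    END IF;
  END LOOP;
  RETURN NULL;  -- soft error: pool exhausted or too much contention
END;
$$ LANGUAGE plpgsql;
```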
Re: [GENERAL] Full text search on a complex schema - a classic problem?
On 05/22/2010 09:40 PM, Ivan Voras wrote:
> Hello, I have a schema which tracks various pieces of information which would need to be globally searchable. One approach I came up with to make all of the data searchable is to create a view made of UNION ALL queries that would integrate different tables into a common structure which could be uniformly queried using tsearch2 functions. This would work, up to the point where it would be practically unavoidable (for performance reasons) to create indexes on this view, which cannot be done. I would like to avoid using a "hand-made" materialized view (via triggers, etc.) because of the administrative overhead and because it would duplicate data, of which there is potentially a lot. I think this looks like a fairly common problem with full text searches on large-ish schemas, so I'm wondering what the best practices are here, specifically with tsearch2?

I have something like this, but with PostGIS layers. When a person clicks, I search all the different layers (each a table) for information. I use a stored proc. Each table has its own index, so each table is fast. It also lets me abstract out differences between the layers (I can search each a little differently).

If each of your tables had its own full text fields and indexes, and you then wrote a stored proc to search them all individually, it should be pretty quick.

-Andy
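(Editorial note: the per-table approach suggested above might look roughly like the following untested sketch. All table and column names are invented; it assumes each table has its own tsvector column with its own full text index:)

```sql
-- Sketch of a stored proc that searches each table individually and
-- unifies the results, instead of indexing a UNION ALL view.
CREATE OR REPLACE FUNCTION search_all(q text)
RETURNS TABLE (source text, id int, title text) AS $$
BEGIN
  RETURN QUERY
    SELECT 'articles', a.id, a.title
      FROM articles a
     WHERE a.fts @@ plainto_tsquery(q);   -- uses articles' own index
  RETURN QUERY
    SELECT 'comments', c.id, left(c.body, 60)
      FROM comments c
     WHERE c.fts @@ plainto_tsquery(q);   -- uses comments' own index
END;
$$ LANGUAGE plpgsql;
```

Each branch can also be tuned independently, which is the abstraction benefit Andy describes for his PostGIS layers.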
Re: [GENERAL] UPDATE ... RETURNING atomicity
Grzegorz Jaśkiewicz writes:
> find in docs part that talks about transaction isolation levels, and
> translate it to your problem.

Yes, please read the fine manual: http://www.postgresql.org/docs/8.4/static/mvcc.html

What I think will happen in your example is that all concurrent executions will locate the same row-to-be-updated. The first one to get to the row "wins" and updates the row. All the rest will fail, either updating no rows (if not serializable) or throwing an error (if serializable).

regards, tom lane
Re: [GENERAL] UPDATE ... RETURNING atomicity
By default a query is wrapped in a transaction (if it is not already run inside one), and that runs at the default transaction isolation level. Some people think it works by magic, but that's not true. Find the part of the docs that talks about transaction isolation levels, and translate it to your problem.
Re: [GENERAL] UPDATE ... RETURNING atomicity
On 05/23/2010 03:15 PM, Grzegorz Jaśkiewicz wrote:
> every single query in postgresql runs as a transaction; on top of that, some are atomic, like when you use a RETURNING statement. This is because postgresql doesn't actually have to select those rows as a separate query.

Please note the cooperation of the UPDATE and the inner sub-SELECT query, which was my point.
Re: [GENERAL] UPDATE ... RETURNING atomicity
Every single query in PostgreSQL runs as a transaction; on top of that, some are atomic, like when you use a RETURNING statement. This is because PostgreSQL doesn't actually have to select those rows as a separate query.
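(Editorial note: the point about RETURNING folding the read into the write can be illustrated with a small example; the stats table here is invented:)

```sql
-- Two statements: the row can change between the SELECT and the UPDATE.
-- SELECT counter FROM stats WHERE name = 'hits';
-- UPDATE stats SET counter = counter + 1 WHERE name = 'hits';

-- One atomic statement: the post-update value comes back with the UPDATE
-- itself, so no separate SELECT is needed.
UPDATE stats
   SET counter = counter + 1
 WHERE name = 'hits'
RETURNING counter;
```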
Re: [GENERAL] Moving from Mysql
On Sat, May 22, 2010 at 11:06:02PM -0400, Stephen Frost wrote:
> * Luis Daniel Lucio Quiroz (luis.daniel.lu...@gmail.com) wrote:
> > 1. what are the equivalents for these commands:
> > in mysql: mysqldump mydata_base_name
>
> pg_dump (pg_restore to restore from the dump, if you use a non-SQL format for it, which can give you the ability to do a parallel restore)

The way to call pg_dump so it produces the most flexible format is pg_dump -Fc. You can find out more about this format in the man page, if you're using a unix-like system, or in the on-line docs:
http://www.postgresql.org/docs/current/static/app-pgdump.html

> > mysql mydata_base_name < script.sql
>
> psql

With the output of pg_dump -Fc, you'll be using pg_restore, which is more complex, but much more flexible.
http://www.postgresql.org/docs/current/static/app-pgrestore.html

> > 2. any link to read about how to admin pgsql with a mysql background?
>
> The PG documentation is really quite good:
> http://www.postgresql.org/docs/8.4/

Here are a few more specific ones, some of which may apply to your situation:

http://sql-info.de/mysql/gotchas.html
http://www.raditha.com/mysql/mysql2pgsql.php
http://www.in-nomine.org/~asmodai/mysql-to-pgsql.html
http://blog.gtuhl.com/2010/04/15/not-a-fan-of-mysql/
http://pgfoundry.org/projects/mysql2pgsql/
http://search.cpan.org/search?query=SQL%3A%3ATranslator&mode=all
http://www.metatrontech.com/wpapers/mysql2postgresql.pdf
http://www.data-conversions.net/products.php?prod_num=5&&dest=MENU&&ID=200
http://pgfoundry.org/projects/mysqlcompat

> > 3. how are users managed in pgsql? I need to create a specific username for a db, but how?
>
> PG Roles (users and groups) are managed on a per-cluster level. There isn't a really good way to do them at a per-database level today. A cluster in PG is a full PG instance, and a single cluster contains multiple databases. You can manage which databases users are allowed to connect to, though; check out the GRANT command.
Also check out host-based authentication, which you control with an external file called pg_hba.conf:
http://www.postgresql.org/docs/current/static/client-authentication.html

Cheers, David.
--
David Fetter http://fetter.org/ Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter Skype: davidfetter XMPP: david.fet...@gmail.com iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics Remember to vote! Consider donating to Postgres: http://www.postgresql.org/about/donate
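(Editorial note: the dump/restore workflow described in this thread, sketched as shell commands; the database names are invented and these require a running server, so treat this as an untested illustration:)

```shell
# Dump in the custom format: most flexible, supports selective and
# (in newer versions) parallel restore.
pg_dump -Fc -f mydb.dump mydb

# Restore into a pre-created database with pg_restore.
createdb mydb_copy
pg_restore -d mydb_copy mydb.dump

# Plain-SQL equivalent of "mysql mydata_base_name < script.sql".
psql -d mydb -f script.sql
```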