[GENERAL] syslog message I do not understand
I am trying to start postgresql via webmin, but not having success. I get this message in the syslog:

    su: (to postgres) root on none

This is SuSE 9.3. Could someone tell me what that message means? It was working until I added the -i to the postmaster.opts file so I could access it via pgAdmin.

Art

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
[GENERAL] Problem running or executing a function in Postgresql
Hi,

I have a table named mydata:

    CREATE TABLE public.mydata (
      id int4 DEFAULT nextval('public.mydata_id_seq'::text) NOT NULL,
      name varchar(50)
    ) WITH OIDS;

and I have a function as follows:

    CREATE OR REPLACE FUNCTION insertmydata(varchar) RETURNS int as '
    declare
      new_id integer;
    begin
      INSERT INTO mydata("name") values($1);
      new_id = EXECUTE("SELECT FROM currval("mydata_id_seq")");
      return new_id;
    end;
    ' LANGUAGE 'PLPGSQL';

When I run the function as

    select insertmydata('Venkatesh')

I get the following error message:

    ERROR: syntax error at or near "mydata_id_seq" at character 39

Can anyone help me in solving the above problem? Am I missing something here? How should I run the function to insert data? I am using PostgreSQL version 7.4.6 on a Linux box and running pgAdmin from a WinXP PC.

Kind Regards,
Venki
Re: [GENERAL] Removing all users from a group
Thanks Patrick and Bruno for your replies,

The auto-added FROM feature is pretty slick for enabling JOINs within a DELETE. Allowing this to be explicit in 8.1 is going to be even better.

Since DELETEing from pg_user is an unsupported way to remove users, I am going to use the procedure in the end. It's a little-modified version of Patrick's code:

    CREATE OR REPLACE FUNCTION removeUsersFromGroup( groupName name )
    RETURNS int4 AS $$
    DECLARE
      userRecord RECORD;
      numUsersDropped int4 := 0;
    BEGIN
      FOR userRecord IN
        SELECT usename FROM pg_user, pg_group
        WHERE usesysid = ANY (grolist) AND groname = groupName
      LOOP
        numUsersDropped := numUsersDropped + 1;
        EXECUTE('DROP USER ' || userRecord.usename);
      END LOOP;
      RETURN numUsersDropped;
    END
    $$ LANGUAGE 'plpgsql';

Thanks again for the help,

David J. Sankel

On 8/31/05, David Sankel [EMAIL PROTECTED] wrote:
> Hello List,
>
> I'm trying to delete all the users from a group and it seems as though
> there isn't sufficient array functionality to do it.
>
> The pg_group columns:
> http://www.postgresql.org/docs/8.0/interactive/catalog-pg-group.html
> The pg_user columns:
> http://www.postgresql.org/docs/8.0/interactive/view-pg-user.html
>
> After having a peek at the above, we know we can see all the users in a
> group with this:
>
>     SELECT * FROM pg_user, pg_group
>     WHERE usesysid = ANY (grolist) AND groname = 'somegroupname';
>
> ANY is a construct that can tell if a value is in an array:
> http://www.postgresql.org/docs/8.0/interactive/arrays.html#AEN5491
>
> Okay, that SELECT worked fine, but if we want to delete we cannot use a
> join (implicit by the ',') to help us out. So the following should work:
>
>     DELETE FROM pg_user
>     WHERE usesysid = ANY (
>       SELECT grolist FROM pg_group WHERE groname = 'somegroupname'
>     );
>
> But, alas, it doesn't. Neither does any combination of IN and ANY. It
> seems to me like this should work since the same syntax works if we
> weren't talking about arrays. So, how can we delete all users within a
> specified group? Is there a bug or is the above code incorrect?
>
> When testing the above DELETE, I found it useful to substitute SELECT *
> for DELETE to get non-destructive queries.
>
> Thanks for any help,
>
> David J. Sankel
Re: [GENERAL] Problem running or executing a function in Postgresql
Venki wrote:
> Hi, I have a table named mydata
>
>     CREATE TABLE public.mydata (
>       id int4 DEFAULT nextval('public.mydata_id_seq'::text) NOT NULL,
>       name varchar(50)
>     ) WITH OIDS;
>
> and I have a function as follows
>
>     CREATE OR REPLACE FUNCTION insertmydata(varchar) RETURNS int as '
>     declare new_id integer;
>     begin
>       INSERT INTO mydata("name") values($1);
>       new_id = EXECUTE("SELECT FROM currval("mydata_id_seq")");
>       return new_id;
>     end;
>     ' LANGUAGE 'PLPGSQL';
>
> when I run the function as select insertmydata('Venkatesh') I am getting
> the following error message
>
>     ERROR: syntax error at or near "mydata_id_seq" at character 39

OK well, let's look at the line it's suggesting has a problem:

    new_id = EXECUTE("SELECT FROM currval("mydata_id_seq")");

Well, there are two main things wrong with this. Firstly, the quoting is very suspect. You're using double-quotes (") to represent a string (rather than quoting a named object to preserve its case) and then you've nested them. Strings need to use escaped single-quotes (either doubled-up '' or with a backslash \').

Secondly, you can't use EXECUTE like that; it doesn't return a value. There's no dynamic element to the query, so it's unnecessary. Perhaps:

    SELECT INTO new_id currval(''mydata_id_seq'');

In your particular example, it's just a function call anyway, so you can use simple assignment:

    new_id := currval(''mydata_id_seq'');

See if that helps.

-- 
Richard Huxton
Archonet Ltd

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
Re: [GENERAL] Problem running or executing a function in Postgresql
On Thu, Sep 01, 2005 at 11:58:27AM +0530, Venki wrote:
>     new_id = EXECUTE("SELECT FROM currval("mydata_id_seq")");

The above line has quoting problems and an erroneous SELECT query, and it wouldn't work anyway because that's not the way to retrieve results from EXECUTE. Aside from that, EXECUTE isn't necessary in this case. Try this:

    new_id := currval(''mydata_id_seq'');

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
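Putting the two corrections together, the whole function from the original post might look like the following sketch. The table, sequence, and function names are the ones from Venki's post; the doubled single-quotes are needed because, before dollar quoting, the function body is itself a string literal:

```sql
-- Sketch of the corrected function, assuming the mydata table and
-- mydata_id_seq sequence from the original post.
CREATE OR REPLACE FUNCTION insertmydata(varchar) RETURNS int AS '
DECLARE
    new_id integer;
BEGIN
    INSERT INTO mydata("name") VALUES ($1);
    -- currval() is safe here: it returns the value nextval() just
    -- generated for this session via the column DEFAULT.
    new_id := currval(''mydata_id_seq'');
    RETURN new_id;
END;
' LANGUAGE 'plpgsql';

SELECT insertmydata('Venkatesh');
```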
Re: [GENERAL] syslog message I do not understand
Art Fore wrote:
> I am trying to start postgresql via webmin, but not having success. I
> get a message in the syslog: su: (to postgres) root on none. This is
> SuSE 9.3. Could someone tell what that message means?

It just means something is switching to user=postgres from user=root. Nothing too unexpected.

> It was working until I added the -i to the postmaster.opts file so I
> could access it via pgadmin.

Ah - you want to edit postgresql.conf and pg_hba.conf, not postmaster.opts - turn internet access on in the first, and set authorisation rules for the connecting machine in the second.

HTH

-- 
Richard Huxton
Archonet Ltd

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
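As a sketch, the two edits Richard describes look roughly like this; the workstation address is made up for illustration, and the exact setting name depends on the server version:

```
# postgresql.conf (8.0; on 7.4 use "tcpip_socket = true" instead)
listen_addresses = '*'      # equivalent of the postmaster's -i switch

# pg_hba.conf - allow the pgAdmin workstation, e.g. 192.168.1.50
# TYPE  DATABASE  USER  CIDR-ADDRESS       METHOD
host    all       all   192.168.1.50/32    md5
```

Both files live in the data directory, and the postmaster must be restarted for listen_addresses/tcpip_socket to take effect.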
[GENERAL] Out Of Memory Error on GiST Index
Hello. I'm getting an Out Of Memory error when I try to create a GiST index on a geometry column (PostGIS) with approximately 33,000,000 rows. I can truncate the data in the table and create the GiST index on the empty table, but when I try to reload the data via pg_restore I get the error. I also get the error if I try creating the GiST index on the table with the 33,000,000 rows. Are there limitations with the GiST index? Has anyone seen this?

Thanks.

Jim

-
James Cradock, [EMAIL PROTECTED]
me3 Technology Consultants, LLC
24 Preble Street, 2nd Floor
Portland, ME 04101
207-772-3217 (office)
207-838-8678 (mobile)
www.me3.com
Re: [GENERAL] bytea or large objects?
Howard Cole wrote:
> Joshua D. Drake wrote:
>> Peter Wilson wrote:
>>> I've just re-written our Whitebeam code to drop large-objects in
>>> favour of BYTEA fields. All the old problems of large objects in
>>> backups exist, but the killer for us was that none of the current
>>> replication systems, at least that I could find, would replicate
>>> large objects. This became a mandatory requirement for us.
>>
>> Mammoth Replicator has always replicated Large Objects. The only backup
>> issue with large objects is that you have to pass a separate flag and
>> use the custom or tar format to dump them. Bytea has its own issues,
>> mostly based around memory usage. I am not saying you should or
>> shouldn't switch, as it really depends on your needs, but the
>> information above just isn't quite accurate.
>>
>> Sincerely,
>> Joshua D. Drake
>
> Thanks Peter, Joshua. On this information I will probably opt for BYTEA.
> I do not use replication but aim to in the future, and would like to
> keep as many options open as possible. The memory problem of large
> BYTEA arrays does bother me. It would be nice to be able to open these
> types as file streams like the large object type and get the best of
> both worlds. Is this feasible?
>
> Best regards,
> Howard Cole
> www.selestial.com

I should have added that my search was limited to open source/free replication systems. I'd have to have a *very* good reason to use large objects over BYTEA now.

Pete
-- 
http://www.whitebeam.org
http://www.yellowhawk.co.uk
Re: [GENERAL] Out Of Memory Error on GiST Index
James Cradock wrote:
> Hello. I'm getting an Out Of Memory error when I try to create a GiST
> index on a geometry column (PostGIS) with approximately 33,000,000 rows.
> I can truncate the data in the table, create the GiST index on the empty
> table, but when I try to reload the data via pg_restore I get the error.
> I get the error also if I try creating the GiST index on the table with
> the 33,000,000 rows. Are there limitations with the GiST index? Has
> anyone seen this?

I seem to remember mention of memory-leak fixes being applied to the GiST code not too long back - might be worth checking the archives for the bugs/hackers list.

-- 
Richard Huxton
Archonet Ltd

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Out Of Memory Error on GiST Index
Thanks. I'm using PostgreSQL 8.0.2.

On Sep 1, 2005, at 6:35 AM, Richard Huxton wrote:
> I seem to remember mention of memory-leak fixes being applied to the
> GiST code not too long back - might be worth checking the archives for
> the bugs/hackers list.
>
> -- 
> Richard Huxton
> Archonet Ltd

-
James Cradock, [EMAIL PROTECTED]
me3 Technology Consultants, LLC
24 Preble Street, 2nd Floor
Portland, ME 04101
207-772-3217 (office)
207-838-8678 (mobile)
www.me3.com
Re: [GENERAL] Out Of Memory Error on GiST Index
On Thu, 1 Sep 2005, James Cradock wrote:
> Hello. I'm getting an Out Of Memory error when I try to create a GiST
> index on a geometry column (PostGIS) with approximately 33,000,000 rows.
> I can truncate the data in the table, create the GiST index on the empty
> table, but when I try to reload the data via pg_restore I get the error.
> I get the error also if I try creating the GiST index on the table with
> the 33,000,000 rows. Are there limitations with the GiST index? Has
> anyone seen this?

AFAIK, no. Perhaps it's a PostGIS problem? It'd be nice to build an rtree index using contrib/rtree_gist to see if the problem is in GiST.

Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83
Re: [GENERAL] Problem running or executing a function in Postgresql
Dear Mr. Richard Huxton and Michael Fuhr,

Thanks for your prompt replies; they have helped me a lot. The problem was solved by using this line:

    SELECT INTO new_id currval(''mydata_id_seq'');

Thanks again for your valuable suggestions.

Regards,
venki

---Original Message---
From: Richard Huxton
Date: 09/01/05 15:57:21
To: Venki
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Problem running or executing a function in Postgresql

> Perhaps:
>
>     SELECT INTO new_id currval(''mydata_id_seq'');
>
> In your particular example, it's just a function call anyway, so you can
> use simple assignment:
>
>     new_id := currval(''mydata_id_seq'');
>
> See if that helps.
>
> -- 
> Richard Huxton
> Archonet Ltd
Re: [GENERAL] optimum settings for dedicated box
On Wed, 2005-08-31 at 00:50 -0700, Matthew Peter wrote:
> Hmmm. I was thinking of a more comprehensive solution or document
> resource. I would like to know what does what. Why tweak that or why
> not, ya know?

Matt, I've found the annotated postgresql.conf reference on this page (as well as the rest of the site) to be helpful:

http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

-K
Re: [GENERAL] newbie - postgresql or mysql
[Please copy the mailing list on replies so others can contribute to and learn from the discussion. Also, please don't top-post, as it destroys the flow of the discussion; I've moved your questions to a more logical place.]

On Wed, Aug 31, 2005 at 03:49:57PM -0600, [EMAIL PROTECTED] wrote:
> On Wed, 31 Aug 2005, Michael Fuhr wrote:
>> On Wed, Aug 31, 2005 at 03:17:29PM -0400, Frank wrote:
>>> insert into category values
>>>   (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'),
>>>   (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'),
>>>   (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'),
>>
>> PostgreSQL doesn't support this form of INSERT; you'll have to use a
>> separate INSERT per record or use COPY. Another method, although
>> probably not useful in this case, is to insert records from a
>> subquery; see the INSERT documentation for more information.
>
> Wouldn't it be simpler to do a dblink, and just get the data from MySQL
> and drop it into PostgreSQL? Or is this too complicated?

dblink (at least the one distributed as contrib/dblink) is for making connections to other PostgreSQL databases. You could, however, use DBI-Link or something similar to make connections to MySQL or another data source; in that case you could use the subquery form of INSERT:

    INSERT INTO tablename (columnlist) SELECT columnlist FROM ... ;

However, if you're just doing a one-time import of data from MySQL, then it might be simplest to dump the data with separate INSERT statements (mysqldump --extended-insert=FALSE).

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Exception in Query when mixing explicit join and implicit join
Hallo,

In an existing application I got an exception and tracked it down to a query generated by the Java O/R mapping solution Hibernate. Actually, I was surprised that the query does not work, and created a simple use case to explain the problem (see the end of this email). The PostgreSQL exception is:

    ERROR: relation "a" does not exist

It happens when an implicit join and an inner join are mixed. Do I have to accept this as a fact, or is there a logical explanation for this behaviour?

-- the following query has an inner join and an implicit join and does not work
select a.name as act_name, u.name as trainer
from activity a, localgroup lg, sponsor spon
inner join tuser u on a.fk_trainer=u.id
where spon.name='Jimmy Rich'
and spon.fk_localgroup=lg.id
and lg.fk_activity=a.id

-- implicit join passing two tables works
select a.name as act_name
from activity a, localgroup lg, sponsor spon
where spon.name='Jimmy Rich'
and spon.fk_localgroup=lg.id
and lg.fk_activity=a.id

--
Best Regards / Viele Grüße
Sebastian Hennebrueder
http://www.laliluna.de
Tutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB
Get support, education and consulting for these technologies - uncomplicated and cheap.
[GENERAL] Select All Columns
If I want to retrieve all columns from a table, is there any disadvantage to using select * instead of listing all the columns? Will select * cause overhead or take more time to run?

Thanks!
Tan
Re: [GENERAL] newbie - postgresql or mysql
Michael Fuhr wrote:
> However, if you're just doing a one-time import of data from MySQL, then
> it might be simplest to dump the data with separate INSERT statements
> (mysqldump --extended-insert=FALSE).

If you have time to play around a bit, and have Ruby, Og could be utilized... create a test env and do a little hacking on the below (from the Og examples dir) - i.e. you'd want to remove the :destroy => true, or set it to false, in the config, etc...

$ cat mysql_to_psql.rb

    # = MySQL to PostgreSQL migration example.
    #
    # A simple example to demonstrate the flexibility of
    # Og. Two connections to different databases are
    # created and data is copied from a MySQL database
    # to a PostgreSQL database.
    #
    # Og makes it easier to switch to a REAL database :)

    require 'og'

    # Configure databases.
    psql_config = {
      :destroy  => true,
      :name     => 'test',
      :store    => 'psql',
      :user     => 'postgres',
      :password => 'navelrulez'
    }

    mysql_config = {
      :destroy  => true,
      :name     => 'test',
      :store    => 'mysql',
      :user     => 'root',
      :password => 'navelrulez'
    }

    # Initialize Og.
    psql = Og.connect(psql_config)
    mysql = Og.connect(mysql_config)

    # An example managed object.
    # Looks like an ordinary Ruby object.
    class Article
      property :name, :body, String

      def initialize(name = nil, body = nil)
        @name, @body = name, body
      end
    end

    # First populate the MySQL database.
    mysql.manage(Article)

    a1 = Article.create('name1', 'body1')
    a1 = Article.create('name1', 'body1')
    a1 = Article.create('name1', 'body1')

    # Read all articles from MySQL.
    articles = Article.all

    # Switch to PostgreSQL.
    psql.manage(Article)

    # Store all articles.
    for article in articles
      article.insert
    end

    # Fetch an article from PostgreSQL
    # as an example. Lookup by name.
    article = Article.find_by_name('name1')

reid
Re: [GENERAL] Select All Columns
Tan Chen Yee [EMAIL PROTECTED] writes:
> If I want to retrieve all columns from a table, is there any
> disadvantage by using select * instead of listing all the columns?
> Will select * cause overhead, more time to run?

No, it shouldn't make any difference there. The disadvantage of SELECT * is that adding, removing or changing columns can break your client code (depending on what client library you're using) because the records returned by a query will change format without warning.

-Doug

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Select All Columns
The first question is: do you really need all the columns? Most times you don't. There will be network overhead, for sure, in returning all columns instead of just the few that you want. Also, select * is not very clear about what is going on in the statement. Where I work we have standards to always qualify the columns by name and not use the *. Another issue I have seen is that if your code logic assumes that select * returns column1, column2, column3, and you add a new column to that table, your logic can break. I often see select * used when people get lazy. To me, select * should only be used in an ad-hoc fashion when digging around in tables and such.

Just my 2 cents

umn

On 9/1/05, Tan Chen Yee [EMAIL PROTECTED] wrote:
> If I want to retrieve all columns from a table, is there any
> disadvantage by using select * instead of listing all the columns?
> Will select * cause overhead, more time to run?
>
> Thanks!
> Tan
Re: [GENERAL] shared_buffers + Windows
max_connections = 50 on my server
[GENERAL] Order By for aggregate functions (Simulating Group_concat)
Hi All,

I'm trying to create an aggregate function similar (but not identical) to MySQL's group_concat. What I want to be able to do is pass an ORDER BY field to the aggregate so I can be certain I get the list of strings in the correct order. Does anyone have any ideas how this could be done? In a previous thread on aggregates a couple of people said that they thought it was doable, but nobody hinted at how.

Thanks in advance,
Charlotte Pollock
Re: [GENERAL] newbie - postgresql or mysql
Hi Frank;

I will use your questions to help in the next version of my migration guide, but you might want to take a look at it anyway. You can find it at:

http://www.metatrontech.com/wpapers/

It covers many areas of incompatibility between MySQL and PostgreSQL. My own suggestion is that you wait until 8.1 is released, as some of the issues involved in application porting will be resolved (LEAST and GREATEST, for example). Any feedback on the guide would be appreciated.

Best Wishes,
Chris Travers
Metatron Technology Consulting
[GENERAL] shared_buffers + Windows
Hello,

I have the following problem: I have Windows 2000 Server with PostgreSQL 8.0; the database is 300MB, and the hardware configuration is 2 x P3 1GHz, 1GB RAM and a SCSI HD. How should I set shared_buffers and other parameters for better performance?

For example, when I read a table (20,000 records, 20 columns) from a workstation:

    on Windows 2000 Server - 40 sec.
    on Debian (PostgreSQL 8.0), the same DB, the same table - 3 sec.

Kris
[GENERAL] Transaction error
Hi,

I recently shut down my machine (as far as I know, it went down cleanly). When I brought it back up, one of my databases was messed up. Here is the error I get when I try to select:

    ERROR: could not access status of transaction 4114057
    DETAIL: could not open file /var/lib/postgres/data/pg_clog/0003: No such file or directory

I tried restoring a backup of the data directory that did contain the 0003 file, but that didn't work either. I don't have a copy of the dump from before the crash - though I think that is what I'll back up next time. How can I get my data back?

Debian GNU/Linux
postgres 7.4.7

Casey
[GENERAL] Don't understand transaction error
I'm getting the following error when attempting to use my application:

    ERROR: current transaction is aborted, commands ignored until end of transaction block

I have no clue... the only idea I have is to somehow release any transaction locks, but I don't know how to list current locks, or even whether that's possible. If there is a way to do this, perhaps I can release it. But I don't know how to release it either *shrug*
Re: [GENERAL] Exception in Query when mixing explicit join and implicit join
Sebastian Hennebrueder [EMAIL PROTECTED] writes:
> -- the following query has an inner join and an implicit join and does not work
> select a.name as act_name, u.name as trainer
> from activity a, localgroup lg, sponsor spon
> inner join tuser u on a.fk_trainer=u.id
> where spon.name='Jimmy Rich'
> and spon.fk_localgroup=lg.id
> and lg.fk_activity=a.id

I think you were brought up on MySQL, which has only a vague relationship to actual SQL :-(. Per the SQL standard, what you wrote is equivalent to

    ... from ((activity a cross join localgroup lg) cross join
              (sponsor spon inner join tuser u on a.fk_trainer=u.id))
    where ...

Notice the parenthesization. Table a is not part of the spon/u join, and so the reference to it from the spon/u ON condition is invalid. MySQL, unfortunately, seems to render the above syntax as

    ... from (((activity a cross join localgroup lg) cross join
              sponsor spon) inner join tuser u on a.fk_trainer=u.id)
    where ...

in which case the reference from the ON condition is valid. Basically they think that comma and JOIN in the FROM list have the same precedence and associate left-to-right ... but any casual glance at the spec's syntax rules shows this is wrong. JOIN is supposed to bind more tightly than comma.

			regards, tom lane
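One portable fix, as a sketch, is to write all the joins explicitly so the ON clause only references tables that are already part of the join tree. The table and column names below are the ones from Sebastian's example:

```sql
-- Hypothetical rewrite of the failing query using explicit JOIN ... ON
-- throughout, so "a" is in scope by the time tuser is joined.
SELECT a.name AS act_name, u.name AS trainer
FROM sponsor spon
JOIN localgroup lg ON spon.fk_localgroup = lg.id
JOIN activity a    ON lg.fk_activity = a.id
JOIN tuser u       ON a.fk_trainer = u.id
WHERE spon.name = 'Jimmy Rich';
```

Equivalently, the ON condition could be moved into the WHERE clause and the whole FROM list written with commas; either way avoids mixing the two styles.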
Re: [GENERAL] Order By for aggregate functions (Simulating Group_concat)
On Thu, Sep 01, 2005 at 10:04:12AM +0100, Charlotte Pollock wrote:
> I'm trying to create an aggregate function similar (but not identical)
> to MySQL's group_concat. What I want to be able to do is pass an ORDER
> BY field to the aggregate so I can be certain I get the list of strings
> in the correct order.

Order them before the aggregate?

    SELECT aggregate(field) FROM (SELECT field FROM xxx ORDER BY wherever) x;

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
[GENERAL] Array of created types
Should something similar to the following be possible in PG 8.0.3?

    create type foo_t as (
      c1 int,
      c2 int
    );

    create table tab (
      name varchar not null,
      foos foo_t[]
    );

The response I get is:

    ERROR: type "foo_t[]" does not exist

The CREATE TYPE documentation says that postgres silently creates an array type for each base type, with an underscore prepended to the base name. That makes it sound like the following should work:

    create table tab (
      name varchar not null,
      foos _foo_t
    );

    ERROR: type "_foo_t" does not exist

How can I create a table containing an array of items of a created type?

-K
Re: [GENERAL] Transaction error
Casey T. Deccio [EMAIL PROTECTED] writes:
> I recently shut down my machine (as far as I know, it went down
> cleanly). When I brought it back up, one of my databases was messed up.
> Here is the error I get when I try to select:
>
>     ERROR: could not access status of transaction 4114057
>     DETAIL: could not open file /var/lib/postgres/data/pg_clog/0003: No such file or directory
>
> I tried restoring a backup of the data directory that did contain the
> 0003 file, but that didn't work either.

Define "didn't work" ... what happened, exactly, when you tried it? People have occasionally dealt with this sort of scenario by creating dummy clog segment files by hand, and I'd think that restoring the segment file from a backup would do as well or better.

			regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?
       http://archives.postgresql.org
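For reference, the hand-made dummy segment Tom mentions is usually just a zero-filled file of the same size as a normal clog segment (256kB with the default 8kB block size); zeroed clog makes the affected transactions read as aborted, so this is a last resort that can lose recently committed rows. A sketch, where the path defaults to a scratch directory so it can be tried safely (point PGDATA at the real data directory, run as the postgres user, and back everything up with the server stopped first):

```shell
# Last-resort sketch: fabricate a zero-filled clog segment so the
# server can read transaction status again. Zero bytes mean the
# transactions covered by this segment are treated as aborted.
# Stop the postmaster and take a full file-level backup first!

PGDATA="${PGDATA:-$(mktemp -d)}"   # e.g. /var/lib/postgres/data in Casey's case
mkdir -p "$PGDATA/pg_clog"

# One clog segment is 256kB with the default 8kB block size.
dd if=/dev/zero of="$PGDATA/pg_clog/0003" bs=256k count=1

# Restrict permissions the way the server expects.
chmod 600 "$PGDATA/pg_clog/0003"
```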
Re: [GENERAL] Array of created types
Kelly Burkhart [EMAIL PROTECTED] writes:
> The create type documentation says that postgres silently creates an
> array type for each base type with an underscore prepended to the base
> name.

"Base type" in this context specifically means "not composite type". We may support that someday, but it doesn't work now.

			regards, tom lane
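Until arrays of composite types are supported, the usual workaround is to normalize: store the would-be array elements in a child table keyed to the parent row. A sketch using the names from Kelly's example (the ord column and tab_foos table are made up for illustration):

```sql
-- Hypothetical workaround: a child table instead of a foo_t[] column.
CREATE TABLE tab (
    name varchar PRIMARY KEY
);

CREATE TABLE tab_foos (
    name varchar NOT NULL REFERENCES tab(name),
    ord  int     NOT NULL,   -- preserves the array ordering
    c1   int,
    c2   int,
    PRIMARY KEY (name, ord)
);

-- What would have been tab.foos[2] for row 'x' becomes:
--   SELECT c1, c2 FROM tab_foos WHERE name = 'x' AND ord = 2;
```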
[GENERAL] Schema problems RedHat / SuSE 9.3 for version 7.4.8
Hi list,

We are using schemas to separate different administrations, and are using Zope/Plone as a front-end portal to the database. In this installation almost the first statement is:

    cvix=# SELECT set_config('search_path', '' || t2.schema || '', true)
    cvix-# FROM Lindix.Gebruikers as t1, Lindix.Administratie as t2
    cvix-# WHERE uid = 'zon0023'
    cvix-# AND t1.administratie_id = t2.administratie_id;
     set_config
    --
     testschema
    (1 row)

As you can see, I have a fixed schema called Lindix where the actual destination schema is stored in a table. Depending on the user, the search_path is set, and it should then be able to find the table.

Now we have installed a new server with the same db version, the same content (a restore from the original db) and the same code. After setting the search path, the query

    Select * from vwexternetoegang

produces the required results on the first installation, but on the new installation it cannot find the view. However, if I do an explicit

    Set search_path to testschema;

it works as expected. I can reproduce the result not only from my own code, but also from psql.

The only real difference I see between the two installations is that the working one runs Red Hat Enterprise Linux ES Release 4 (Nahant update 1) versus SuSE 9.3 on the new (not working) one. For both installations I compiled PostgreSQL from the standard source as delivered from www.postgresql.org (or one of the mirrors). One difference is that the first installation dates back to June 27th and the new installation to August 20th. Another difference is that in the new situation the postgres database is on the same machine as the Zope application.

Hope someone has a solution for this, because the new machine should be launched Monday.
Re: [GENERAL] Don't understand transaction error
On Thu, 2005-09-01 at 10:36, Jonathan Villa wrote:
> I'm getting the following error when attempting to use my application: ERROR: current transaction is aborted, commands ignored until end of transaction block. I have no clue... the only idea I have is to somehow release any transaction locks, but I don't know how to list, or even if it's possible to list, current locks. If there is a way to do this, perhaps I can release it. But I don't know how to release it either.

What you're seeing is the standard way PostgreSQL handles transactions. Any error in a transaction, and all of it is rolled back. Since the database knows it's going to roll back everything, when you tell it to do something without ending the transaction, it emits this error, telling you that whatever you're asking it to do ain't gonna get done.

begin starts a transaction, rollback ends one and rolls back all changes, and commit commits the changes IF THERE WERE NO ERRORS (note that savepoints allow you to work around this limitation). There's a chapter on this behavior in the manual, under something to do with transactions... ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Order By for aggregate functions (Simulating Group_concat)
Charlotte Pollock [EMAIL PROTECTED] writes: I'm trying to create a aggregate function similar 9but not identical) to mysql's group_concat. What I want to be able to do is pass and order by field to the aggregate so I can be certain I get the list of strings in the correct order. The way this is usually done in PG is to order the data before it gets to the aggregate function. For the ungrouped case this is easy: SELECT my_concat(foo) FROM (SELECT foo FROM ... ORDER BY something) ss; If you're trying to aggregate within groups it's a bit trickier. The secret is that the ordering of the inner sub-select has to match the outer GROUP BY: SELECT my_concat(foo), bar FROM (SELECT foo,bar FROM ... ORDER BY bar, something) ss GROUP BY bar; In some cases it'll still work with just ORDER BY something, but that depends on which plan type the planner happens to choose, so it's not reliable to leave off the ORDER BY bar. This requires a fairly recent PG ... I think we fixed the planner to make this work properly in 7.4. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
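For readers who don't already have a my_concat, a minimal concatenation aggregate can be built from the built-in textcat function. This is a sketch: my_concat is an invented name, the table and column names are made up, and the aggregate simply appends text values with no separator:

```sql
-- my_concat: a bare string-concatenation aggregate built from the
-- built-in textcat(text, text) function. initcond starts it at ''.
-- (Old-style CREATE AGGREGATE syntax, as used before 8.2.)
CREATE AGGREGATE my_concat (
    basetype = text,
    sfunc    = textcat,
    stype    = text,
    initcond = ''
);

-- Tom's grouped pattern, with invented table/column names:
SELECT my_concat(foo), bar
FROM (SELECT foo, bar FROM mytable ORDER BY bar, something) ss
GROUP BY bar;
```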
Re: [GENERAL] syslog message I do not understand
Thanks for the info. I was trying the old .conf files from a database transferred from 7.54. The TCP/IP enable setting is different between it and 8.0.3. Got it working now with pgAdmin III. Art

Richard Huxton wrote:
> Art Fore wrote:
>> I am trying to start postgresql via webmin, but not having success. I get a message in the syslog: su: (to postgres) root on none. This is SuSE 9.3. Could someone tell me what that message means?
> It just means something is switching to user=postgres from user=root. Nothing too unexpected.
>> It was working until I added the -i to the postmaster.opts file so I could access it via pgadmin.
> Ah - you want to edit postgresql.conf and pg_hba.conf, not postmaster.opts - turn internet access on in the first, and set authorisation rules for the connecting machine in the second. HTH

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Don't understand transaction error
Jonathan Villa wrote: I'm getting the following error when attempting to use my application: ERROR: current transaction is aborted, commands ignored until end of transaction block I have no clue... An error has occurred on a previous command. As a result, this transaction is aborted and you should issue ROLLBACK to cancel it. There should be another error previously that causes this. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Order By for aggregate functions (Simulating Group_concat)
On Thu, Sep 01, 2005 at 05:14:41PM +0200, Martijn van Oosterhout wrote: On Thu, Sep 01, 2005 at 10:04:12AM +0100, Charlotte Pollock wrote: What I want to be able to do is pass and order by field to the aggregate so I can be certain I get the list of strings in the correct order. Order them before the aggregate? SELECT aggregate(field) FROM (SELECT field FROM xxx ORDER BY wherever) x; I've occasionally relied on this but I've never been completely comfortable with it. Is there any guarantee that the subquery's ordering will be maintained as rows are fed to the aggregate, or is that just an accident of the current implementation? -- Michael Fuhr ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Don't understand transaction error
On Thu, Sep 01, 2005 at 10:36:07AM -0500, Jonathan Villa wrote: I'm getting the following error when attempting to use my application: ERROR: current transaction is aborted, commands ignored until end of transaction block Some earlier command failed so the entire transaction (or subtransaction) has failed and no further commands will be allowed; you'll have to issue a ROLLBACK and then start a new transaction (or issue ROLLBACK TO SAVEPOINT if you're using savepoints, which are available in 8.0 and later). If you're doing adequate error checking then you should be able to find out which command failed and why. -- Michael Fuhr ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
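The savepoint variant Michael mentions looks like this on 8.0 and later; a sketch, assuming a table t with a single integer column:

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT sp1;
INSERT INTO t VALUES ('oops');   -- fails: not a valid integer
-- Only the work since sp1 is rolled back; the transaction lives on:
ROLLBACK TO SAVEPOINT sp1;
INSERT INTO t VALUES (2);        -- accepted again
COMMIT;                          -- rows 1 and 2 are committed
```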
[GENERAL] lock problem
Richard, sorry for the delay. In answer to your question, select * from pg_stat_activity shows:

 datid | datname | procpid | usesysid | usename  | current_query | query_start
-------+---------+---------+----------+----------+---------------+------------------------------
 52800 | sume    |   30124 |        1 | postgres | IDLE          | 2005-09-01 13:30:02.921844-03
 52800 | sume    |   30125 |        1 | postgres | IDLE          | 2005-09-01 13:37:21.631802-03
 52800 | sume    |   30186 |        1 | postgres | SELECT c.actuacion_car AS c_actuacion, c.comentario1 || ' ' || c.comentario2 || ' ' || c.comentario3 AS c_comentario FROM caratult AS c INNER JOIN extractt AS t1 ON (c.id_extracto_car = t1.id_extracto) INNER JOIN repartit AS r1 ON (c.id_reparticion_uc = r | 2005-09-01 13:35:45.152586-03

and select * from pg_locks shows:

 relation | database | transaction |  pid  | mode            | granted
----------+----------+-------------+-------+-----------------+---------
 53046    | 52800    |             | 30186 | AccessShareLock | t
          |          | 159274343   | 30125 | ExclusiveLock   | t
 73744    | 52800    |             | 30186 | AccessShareLock | t
 16759    | 52800    |             | 30125 | AccessShareLock | t
 53094    | 52800    |             | 30186 | AccessShareLock | t
 73770    | 52800    |             | 30186 | AccessShareLock | t
          |          | 159274288   | 30186 | ExclusiveLock   | t
 73824    | 52800    |             | 30186 | AccessShareLock | t
 53054    | 52800    |             | 30186 | AccessShareLock | t
 73726    | 52800    |             | 30186 | AccessShareLock | t
 53074    | 52800    |             | 30186 | AccessShareLock | t
 53049    | 52800    |             | 30186 | AccessShareLock | t
 53127    | 52800    |             | 30186 | AccessShareLock | t
 9567503  | 52800    |             | 30186 | AccessShareLock | t
 74274    | 52800    |             | 30186 | AccessShareLock | t

These queries show locks held by pid 30186, or am I wrong? best regards MDC

--- Richard Huxton dev@archonet.com wrote:
> marcelo Cortez wrote:
>> Hi Richard, folks. The query is: SELECT c.actuacion_car AS c_actuacion, c.comentario1 || ' ' || c.comentario2 || ' ' || c.comentario3 AS c_comentario FROM caratult AS c INNER JOIN extractt AS t1 ON ...etc
> I'm not seeing anything here that should take locks, although it's late where I am; without the table definitions etc., all I can see is that these are selects only.
>> During this query, select * from pg_catalog.pg_locks shows many rows with exclusive locks, and other activities or users are frozen.
> Are you *sure* that it's this query? Could you show some of the rows from pg_locks along with corresponding rows from pg_stat_activity?
> -- Richard Huxton Archonet Ltd

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] lock problem
--- Richard Huxton dev@archonet.com wrote:
> marcelo Cortez wrote:
>> Hi Richard, folks. The query is: SELECT c.actuacion_car AS c_actuacion, c.comentario1 || ' ' || c.comentario2 || ' ' || c.comentario3 AS c_comentario FROM caratult AS c INNER JOIN extractt AS t1 ON ...etc
> I'm not seeing anything here that should take locks, although it's late where I am; without the table definitions etc., all I can see is that these are selects only.
>> During this query, select * from pg_catalog.pg_locks shows many rows with exclusive locks, and other activities or users are frozen.
> Are you *sure* that it's this query? Could you show some of the rows from pg_locks along with corresponding rows from pg_stat_activity?
> -- Richard Huxton Archonet Ltd

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Schema problems RedHat / SuSE 9.3 for version 7.4.8
On Thu, Sep 01, 2005 at 06:13:13PM +0200, Dick Kniep wrote: cvix=# SELECT set_config('search_path', '' || t2.schema || '', true) cvix-# FROM Lindix.Gebruikers as t1, Lindix.Administratie as t2 cvix-# WHERE uid = 'zon0023' cvix-# AND t1.administratie_id = t2.administratie_id; set_config -- testschema (1 row) Using quote_ident(t2.schema) would be cleaner and more reliable than quoting t2.schema yourself, and the final true argument to set_config() means that the setting is local to the current transaction. Now we have installed a new server, with the same db version, the same content (a restore from the original db) and the same coding. After setting the search path the query Select * from vwexternetoegang produces the required results in the first installation, but in the new installation, it cannot find the view. Have you used SHOW search_path to see what the setting is after executing the above command? Is the new installation using autocommit mode? If so then the setting won't take effect because you've told set_config() that the setting is local to the current transaction, and as soon as the SELECT completes that transaction is done. If you want the setting to persist then pass false as the last argument to set_config(). I can reproduce the result not only from my own code, but also from psql. Are you saying that the query works as expected with psql on the old server, but not on the new server? If you run \set in psql, what are the two servers' AUTOCOMMIT settings? If this is an autocommit issue then it must be on the client side, because 7.4 removed support for server-side autocommit and your Subject header says you're running 7.4.8. -- Michael Fuhr ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
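Putting Michael's two suggestions together, the statement from the question might be written as follows; a sketch, reusing the table and column names from the thread:

```sql
-- quote_ident() escapes the schema name safely, and passing false
-- as the third argument makes the setting outlive the transaction.
SELECT set_config('search_path', quote_ident(t2.schema), false)
FROM Lindix.Gebruikers AS t1, Lindix.Administratie AS t2
WHERE uid = 'zon0023'
  AND t1.administratie_id = t2.administratie_id;
```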
Re: [GENERAL] Transaction error
Casey T. Deccio [EMAIL PROTECTED] writes: On Thu, 2005-09-01 at 11:59 -0400, Tom Lane wrote: Define didn't work ... what happened, exactly, when you tried it? Sorry, I was too tired last night to try it again and post the error. Here it is: ERROR: could not access status of transaction 4114057 DETAIL: could not read from file /var/lib/postgres/data/pg_clog/0003 at offset 237568: Success Apparently your backup wasn't very current, and contains a version of that clog file that hasn't yet reached full size. Oh well, I was hoping you'd get valid data from the backup. How would I create a dummy clog segment file, if needed? What you'll have to do is pad out the segment file to full size (256K) by appending zeroes. (dd from /dev/zero is useful for this.) Note that you will probably not get a completely consistent database out of this --- there are going to be some transactions that appear uncommitted or partially committed. I'd recommend a dump and reload to help you revalidate your data, plus you'll have to apply any manual consistency checks you can think of. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
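Tom's padding recipe can be sketched in shell. The 237568-byte starting size comes from the error message earlier in the thread; the scratch file below stands in for the real segment (for the real /var/lib/postgres/data/pg_clog/0003, stop the postmaster and keep a backup copy first):

```shell
# Demonstrate padding a short clog segment out to the full 256K
# (262144 bytes) with zeroes, on a scratch file.
CLOG=$(mktemp)                        # stand-in for pg_clog/0003
head -c 237568 /dev/zero > "$CLOG"    # pretend it ends at the failing offset
CUR=$(stat -c %s "$CLOG")             # current size in bytes
# seek= skips to the current end; conv=notrunc preserves existing data.
dd if=/dev/zero of="$CLOG" bs=1 seek="$CUR" \
   count=$((262144 - CUR)) conv=notrunc 2>/dev/null
stat -c %s "$CLOG"                    # prints 262144
rm -f "$CLOG"
```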
Re: [GENERAL] Exception in Query when mixing explicit join and implicit
Tom Lane wrote:
> Sebastian Hennebrueder [EMAIL PROTECTED] writes:
>> -- the following query has an inner join and an implicit join and does not work.
>> select a.name as act_name, u.name as trainer from activity a, localgroup lg, sponsor spon inner join tuser u on a.fk_trainer=u.id where spon.name='Jimmy Rich' and spon.fk_localgroup=lg.id and lg.fk_activity=a.id
> I think you were brought up on MySQL, which has only a vague relationship to actual SQL :-(. Per the SQL standard, what you wrote is equivalent to
> ... from ((activity a cross join localgroup lg) cross join (sponsor spon inner join tuser u on a.fk_trainer=u.id)) where ...
> Notice the parenthesization. Table a is not part of the spon/u join, and so the reference to it from the spon/u ON condition is invalid. MySQL, unfortunately, seems to render the above syntax as
> ... from (((activity a cross join localgroup lg) cross join sponsor spon) inner join tuser u on a.fk_trainer=u.id) where ...
> in which case the reference from the ON condition is valid. Basically they think that comma and JOIN in the FROM list have the same precedence and associate left-to-right ... but any casual glance at the spec's syntax rules shows this is wrong. JOIN is supposed to bind more tightly than comma.
> regards, tom lane

Thank you very much for the feedback. The program I am trying to make runnable on PostgreSQL has MySQL as its main supported database, so you are right here. Interesting, IMHO: Object Relational Mapping solutions like Hibernate have their own QL language, which should make the db code portable across different dbs. But if you do not code your Hibernate QL queries properly, this intention of ORM might fail.

-- Best Regards / Viele Grüße Sebastian Hennebrueder ---(end of broadcast)--- TIP 6: explain analyze is your friend
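Following Tom's explanation, one portable fix is to make every join explicit so that no ON clause refers to a table outside its own join nest; a sketch using the tables from the question:

```sql
SELECT a.name AS act_name, u.name AS trainer
FROM activity a
     INNER JOIN tuser u       ON a.fk_trainer = u.id
     INNER JOIN localgroup lg ON lg.fk_activity = a.id
     INNER JOIN sponsor spon  ON spon.fk_localgroup = lg.id
WHERE spon.name = 'Jimmy Rich';
```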
Re: [GENERAL] Transaction error
On Thu, 2005-09-01 at 11:59 -0400, Tom Lane wrote:
> Casey T. Deccio [EMAIL PROTECTED] writes:
>> I recently shut down my machine (as far as I know, it went down cleanly). When I brought it back up, one of my databases was messed up. Here is the error I get when I try to select: ERROR: could not access status of transaction 4114057 DETAIL: could not open file /var/lib/postgres/data/pg_clog/0003: No such file or directory. I tried restoring a backup of the data directory that did contain the 0003 file, but that didn't work either.
> Define didn't work ... what happened, exactly, when you tried it?

Sorry, I was too tired last night to try it again and post the error. Here it is:

ERROR: could not access status of transaction 4114057
DETAIL: could not read from file /var/lib/postgres/data/pg_clog/0003 at offset 237568: Success

> People have occasionally dealt with this sort of scenario by creating dummy clog segment files by hand, and I'd think that restoring the segment file from a backup would do as well or better.

How would I create a dummy clog segment file, if needed?

Regards, Casey ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Transaction error
On Thu, 2005-09-01 at 13:11 -0400, Tom Lane wrote: How would I create a dummy clog segment file, if needed? What you'll have to do is pad out the segment file to full size (256K) by appending zeroes. (dd from /dev/zero is useful for this.) It worked! Thanks for your help! Casey ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Order By for aggregate functions (Simulating Group_concat)
Michael Fuhr [EMAIL PROTECTED] writes: On Thu, Sep 01, 2005 at 05:14:41PM +0200, Martijn van Oosterhout wrote: SELECT aggregate(field) FROM (SELECT field FROM xxx ORDER BY wherever) x; I've occasionally relied on this but I've never been completely comfortable with it. Is there any guarantee that the subquery's ordering will be maintained as rows are fed to the aggregate, or is that just an accident of the current implementation? Well, it's not required by the SQL spec (in fact I believe ORDER BY inside a subselect isn't even legal per the SQL spec) ... but we do promise it in the current implementation and I doubt we'd break the promise in future, because it is a mighty handy behavior for user-defined aggregates. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] newbie - postgresql or mysql
mysqldump --extended-insert=FALSE Thanks for the help, this is allowing me to import the data now. Regards, Frank At 09:37 AM 9/1/2005, Michael Fuhr wrote: [Please copy the mailing list on replies so others can contribute to and learn from the discussion. Also, please don't top-post, as it destroys the flow of the discussion; I've moved your questions to a more logical place.] On Wed, Aug 31, 2005 at 03:49:57PM -0600, [EMAIL PROTECTED] wrote: On Wed, 31 Aug 2005, Michael Fuhr wrote: On Wed, Aug 31, 2005 at 03:17:29PM -0400, Frank wrote: insert into category values (4, 'Hardware - Monitor', 2, '2004-10-12 10:50:01'), (5, 'Hardware - Printer', 2, '2004-10-12 10:50:02'), (6, 'Hardware - Terminal', 2, '2004-10-12 10:50:02'), PostgreSQL doesn't support this form of INSERT; you'll have to use a separate INSERT per record or use COPY. Another method, although probably not useful in this case, is to insert records from a subquery; see the INSERT documentation for more information. Wouldn't it be simpler to do a dblink, and just get the data from MySQL and drop it into PostgreSQL ? Or is this too complicated ? dblink (at least the one distributed as contrib/dblink) is for making connections to other PostgreSQL databases. You could, however, use DBI-Link or something similar to make connections to MySQL or another data source; in that case you could use the subquery form of INSERT: INSERT INTO tablename (columnlist) SELECT columnlist FROM ... ; However, if you're just doing a one-time import of data from MySQL, then it might be simplest to dump the data with separate INSERT statements (mysqldump --extended-insert=FALSE). -- Michael Fuhr ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] query
On Thu, 2005-09-01 at 13:34, Federico Balbi wrote:
> Hi, I was just wondering if postgresql supports queries between servers in order to pull info from different machines. Let's say server A can run a query against server B... something like: server1 running a SELECT * FROM server2.table or joins SELECT A.name, B.name FROM server1.table A, server2.table B WHERE A.id=B.id

No, not directly. The two solutions are:

1. Use two schemas, not two db servers. Then the query would be the same, but instead of server1 and server2 it would be schema1 and schema2.

2. Use the dblink module. Note that dblink does NOT let you roll the remote changes back as part of your local transaction, because there are no cross-machine transaction semantics in it.

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] query
On Thu, Sep 01, 2005 at 01:34:41PM -0500, Federico Balbi wrote: I was just wondering if postgresql supports queries between servers in order to pull info from different machines. Not natively, but contrib/dblink provides a way to query other PostgreSQL databases, which can be running on other machines. Functions in some procedural languages (e.g., PL/Perl) can query arbitrary data sources (other kinds of databases, spreadsheets, text files, etc.) and return data as sets of rows. With such external data sources you don't get transactional semantics, however. In PostgreSQL, if you need to partition data but also need to perform cross-partition operations, then the usual recommendation is to use multiple schemas inside a single database. Let's say server A can run a query against server B... something like: server1 running a SELECT * FROM server2.table or joins SELECT A.name, B.name FROM server1.table A, server2.table B WHERE A.id=B.id You could abstract a dblink query with a view, but unfortunately the WHERE clause wouldn't propogate to the underlying query on the remote database -- a dblink query would have to fetch the entire remote result set before the local database could do any restrictions. In some cases, however, it can be worthwhile to call dblink directly with an appropriate WHERE clause in the query string. -- Michael Fuhr ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
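A sketch of a contrib/dblink call with the restriction pushed into the remote query string, as Michael suggests; the connection string and table/column names here are made up:

```sql
-- dblink(connstr, sql) opens a connection per call and returns
-- record, so the column definition list after AS is required.
-- Pushing the WHERE clause into the query string means only
-- matching rows cross the wire.
SELECT *
FROM dblink('host=server2 dbname=mydb',
            'SELECT id, name FROM mytable WHERE id < 100')
     AS remote(id integer, name text);
```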
[GENERAL] ODBC and inappropriate select *
We have an old legacy app that connects to our PostgreSQL (7.4.6) database. It is an old Visual Basic/Business Objects (VB 6.0, MDAC 2.5) program that selects a group of records and locks them by setting a field to an in-progress status. The user then works on those records and when done, returns the batch. We are having a terrible performance problem that we have traced to inappropriate queries being sent to the server. Whenever a user requests a batch, the app first runs the appropriate query with a where clause. This query returns virtually instantly. Unfortunately, it follows this by a select * from tablename which may return well over 100,000 records. Even this query run via psql on my linux desktop takes less than a second but apparently the VB app has trouble choking down all the unnecessary data it has requested leaving the user waiting 15 seconds or more for the update. Note: the end-users of the app may be remote and connecting via modem but the query is running between the server-side and PG on a 100MB connection. We have checked the VB app and tried a couple changes without success. Is anyone aware of any issue in VB or the ODBC driver that would cause the DB to be hit by a select * query when none exists in the app? Cheers, Steve ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Check if SELECT is granted
I'm wondering if it's possible for the ordinary user to check if SELECT is granted on a give table, without doing the select ? I want to use it in an application with a menu structure, where I only want to enable those options where SELECT on a the tables used in the given menu is granted. Instead of keeping track on it in a separate table. Thanks, Poul ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] ODBC and inappropriate select *
> Is that select * being used to COUNT the number of rows? If so, then do a select count(*) which will take the db engine about as long, but it won't need to transfer the data across.

Beats me. We are not requesting a select * at all in the VB code. We are selecting and updating the rows we want via the appropriate where information. Somewhere in the black box that is VB/BusinessObjects/ODBC, something is deciding that a select * is necessary for reasons unknown and then choking on (well, not actually choking but digesting slowly) the data returned. I'm just trying to find out if anyone has seen this problem: is it something inherent in VB, BusinessObjects or ODBC? In other words, is it something we can fix, or do we need to set aside time to rewrite the app in a language that doesn't have these problems? Cheers, Steve ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] ECPG examples...
Hello, I've been reading a little about ECPG (Embedded SQL in C) and the doc is (I guess) very clear, but I cannot find any examples in the documentation. Any idea where to get examples? Inside the pgsql source code? Thanks a lot!
Re: [GENERAL] ECPG examples...
see the contrib directory... I think there is also a simple example in the docs. reid

-----Original Message-----
From: Cristian Prieto [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 01, 2005 4:10 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] ECPG examples...

> Hello, I've been reading a little about ECPG (Embedded SQL in C) and the doc is (I guess) very clear, but I cannot find any examples in the documentation. Any idea where to get examples? Inside the pgsql source code? Thanks a lot!
Re: [GENERAL] ECPG examples...
http://tutorials.findtutorials.com/read/id/185/headline/Professional+Linux+Programming+Part+3+-+ECPG reid -----Original Message-----From: Cristian Prieto [mailto:[EMAIL PROTECTED] Sent: Thursday, September 01, 2005 4:10 PMTo: pgsql-general@postgresql.orgSubject: [GENERAL] ECPG examples... Hello, I've been reading a little the ECPG (Embedded SQL in C) and the doc is (I guess) very clear, but I cannot find any examples in the documentation, any idea where to get examples? inside the pgsql source code? Thanks a lot!
Re: [GENERAL] Check if SELECT is granted
=?UTF-8?B?UG91bCBNw7hsbGVyIEhhbnNlbg==?= [EMAIL PROTECTED] writes: I'm wondering if it's possible for the ordinary user to check if SELECT is granted on a give table, without doing the select ? See the has_table_privilege() function. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
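For example, to check SELECT privilege for the current user (the table name here is invented):

```sql
-- Returns true if the current user may SELECT from the table.
SELECT has_table_privilege(current_user, 'public.orders', 'SELECT');
```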
Re: [GENERAL] ECPG examples...
Thanks a lot! It was very helpful!

----- Original Message -----
From: Reid Thompson
To: Cristian Prieto; pgsql-general@postgresql.org
Sent: Thursday, September 01, 2005 2:43 PM
Subject: Re: [GENERAL] ECPG examples...

> http://tutorials.findtutorials.com/read/id/185/headline/Professional+Linux+Programming+Part+3+-+ECPG
Re: [GENERAL] Deferred triggers?
CSN wrote:
> Perhaps another possible feature request! I've looked through the docs and it doesn't appear that it's possible to create deferred triggers - i.e. they don't get called unless the current transaction commits.

The semantics of such a thing appear to be indeterminate. What happens if something in the trigger would have caused the original transaction to fail? Most people would expect all changes made by the original transaction, as well as those made by the trigger, to be rolled back. Using deferred triggers as you've defined them would then require chained transactions, which could get very messy.

> (My understanding is that they currently get called immediately whether or not there is a transaction in progress.)

There is always a transaction in progress.

-- Guy Rouillier ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] ODBC and inappropriate select *
Steve Crawford [EMAIL PROTECTED] writes: Somewhere in the black box that is VB/BusinessObjects/ODBC something is deciding that a select * is necessary for reasons unknown and then choking on (well, not actually choking but digesting slowly) the data returned. I'm just trying to find out if anyone has seen this problem and is it something inherent in VB, BusinessObjects or ODBC? If there is anyone around here who knows about it, you're more likely to find them hanging out in pgsql-odbc ... regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] ODBC and inappropriate select *
On Thu, 2005-09-01 at 15:21, Steve Crawford wrote:
> Beats me. We are not requesting a select * at all in the VB code. Somewhere in the black box that is VB/BusinessObjects/ODBC something is deciding that a select * is necessary for reasons unknown. Is it something inherent in VB, BusinessObjects or ODBC?

This sounds like the way Access behaves, or at least used to, when using a database other than MS SQL Server. It's quite likely that whatever VB/BusinessObjects is doing was written by the same poor sap who wrote Access's methods at the time, and it just carried over that same brain-dead logic. I'd check for updates to VB/BusinessObjects to see if there's a known problem and/or fix for it. I doubt it's ODBC in general - I've used that before without this problem, including on MS boxes. But you may have an old version that was written, again, by the same guy who wrote Access. Then who knows? ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] PL/pgSQL: EXCEPTION NOSAVEPOINT
[redirected from -patches] On Wed, 2005-08-03 at 16:25 -0400, Tom Lane wrote: Matt Miller [EMAIL PROTECTED] writes: allow a PL/pgSQL exception to not automatically rollback the work done by the current block. This fundamentally breaks the entire backend. You do not have the option to continue processing after elog(ERROR); Okay, I think I'm beginning to see the naivete of that patch's simplistic attempt to decouple backend error handling from transaction management. But I still haven't found a way to meet my original need: On Wed, 2005-08-03 at 19:58 +, Matt Miller wrote: The benefit is that [PL/pgSQL] exception handling can be used as a program flow control technique, without invoking transaction management mechanisms. This also adds additional means to enhanced Oracle PL/SQL compatibility. Basically I'd like my Pl/pgSQL code to be able to utilize the try/catch paradigm of error handling without the overhead of subtransactions and without the effect of a rollback. If I catch the exception then everything should be fine as far as the transaction is concerned. If don't catch the exception, or if I re-raise it, then the enclosing block can decide to rollback. This is more consistent with Oracle, and I have hundreds of Oracle procs to convert across multiple databases. I'm still thinking that some kind of hack to src/pl/plpgsql/src/pl_exec.c is probably where I'm headed, but I'm open to consider other approaches/advice. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Schema problems RedHat / SuSE 9.3 for version 7.4.8
[Please copy the mailing list on replies so others can contribute to and learn from the discussion. I've quoted more of your message than I ordinarily would because other people won't have seen it and they won't find it in the list archives.] On Thu, Sep 01, 2005 at 11:35:43PM +0200, Dick Kniep wrote: After starting psql, and executing the query, without a begin, after the query there is no search path SELECT set_config('search_path', '' || t2.schema || '', true) FROM Lindix.Gebruikers as t1, Lindix.Administratie as t2 WHERE uid = 'zon0023' AND t1.administratie_id = t2.administratie_id; set_config -- adeuxproductie (1 row) cvix=# SHOW search_path; search_path -- $user,public (1 row) Apparently you're in autocommit mode, which is the default for psql. Each statement is its own transaction, so you won't see the effects of set_config() when the third argument is true. Executed with third parameter false: cvix=# SELECT set_config('search_path', '' || t2.schema || '', false) FROM Lindix.Gebruikers as t1, Lindix.Administratie as t2 WHERE uid = 'zon0023' AND t1.administratie_id = t2.administratie_id; set_config -- adeuxproductie (1 row) cvix=# SHOW search_path; search_path -- adeuxproductie (1 row) Also the same result when I have a begin before the first statement. Which means that it seems to work correctly! Yep. If you're in a transaction block, or if you tell set_config() not to make the change local to the transaction, then you see the new setting take effect. Also, a thing I hadn't checked before, is that the psql results on the 2 servers are the same. Which leads to my conclusion that the autocommit settings are indeed different on the 2 servers. What do SELECT version() and SHOW autocommit show on both servers? If both servers are running 7.4 then they can't have different autocommit settings because 7.4 and later don't support server-side autocommit (it always shows on and you can't change it). 
Unless one of the servers is running 7.3, the autocommit settings must be on the client side. Are you using the same instance of the client to connect to both servers?

> OK, next question, how do I get rid of the autocommit in my
> application? I tried set autocommit to off; but that is deprecated.

Using SET autocommit attempts to change the server-side setting, which was only supported in 7.3 (the developers removed it after deciding it had been a bad idea). How to disable autocommit on the client side depends on your client interface. What language and API are you using?

-- 
Michael Fuhr
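The is_local distinction at the heart of this thread can be condensed into a short psql session (the schema name is hypothetical):

```sql
-- set_config(name, value, is_local): with is_local = true the setting
-- reverts at transaction end. Under psql's client-side autocommit each
-- statement is its own transaction, so the change vanishes immediately
-- unless you wrap it in an explicit transaction block.
BEGIN;
SELECT set_config('search_path', 'myschema', true);  -- local to this txn
SHOW search_path;    -- myschema
COMMIT;
SHOW search_path;    -- reverted to the previous value

SELECT set_config('search_path', 'myschema', false); -- session-wide
SHOW search_path;    -- myschema, even in autocommit mode
```

This matches the poster's observation: the third-parameter-false form and the BEGIN-first form both "work", because in each case the setting outlives the statement that made it.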
[GENERAL] Performance question (FOR loop)
hi, I have this performance question.

create view test_v as
    select 'text'::varchar as Field1, 'text'::varchar as Field2;

create or replace function test() returns setof test_v as $$
declare
    res test_v%ROWTYPE;
begin
    for res in select t1.field1, t1.field2 from table1 t1 loop
        return next res;
    end loop;
    return;
end;
$$ language plpgsql;

where table1 has fields other than field1 and field2. I can run this query at the prompt, but I do not want the application layer to know my database schema. The only way I know to hide the database architecture is to give them the stored procedure name to call (in this case: test()).

The query I am actually trying to optimize is long and has a few joins (for normalization), hence I didn't copy it here. The function structure is similar to the one above.

(a) Am I right in thinking that if I eliminate the for loop, some performance gain can be achieved?
(b) Is there a way to eliminate this for loop?
(c) Is there any other way anyone has implemented where they have an application layer API accessing the database with its schema hidden?

thanks, vish
Re: [GENERAL] ODBC and inappropriate select *
On Thu, 2005-09-01 at 13:58, Steve Crawford wrote:
> We have an old legacy app that connects to our PostgreSQL (7.4.6)
> database. It is an old Visual Basic/Business Objects (VB 6.0, MDAC
> 2.5) program that selects a group of records and locks them by
> setting a field to an in-progress status. The user then works on
> those records and when done, returns the batch.
>
> We are having a terrible performance problem that we have traced to
> inappropriate queries being sent to the server. Whenever a user
> requests a batch, the app first runs the appropriate query with a
> where clause. This query returns virtually instantly. Unfortunately,
> it follows this with a "select * from tablename" which may return
> well over 100,000 records. Even this query run via psql on my linux
> desktop takes less than a second, but apparently the VB app has
> trouble choking down all the unnecessary data it has requested,
> leaving the user waiting 15 seconds or more for the update.

Is that select * being used to COUNT the number of rows? If so, then do a select count(*), which will take the db engine about as long, but it won't need to transfer the data across. If a select * is really needed, then at least look at using a cursor.
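Both suggestions above can be sketched in plain SQL (table name taken from the post; batch size hypothetical). A cursor lets the client pull rows in chunks instead of materializing 100,000+ rows in one round trip:

```sql
-- If the app only needs the row count, skip the data transfer entirely:
SELECT count(*) FROM tablename;

-- If the rows really are needed, fetch them incrementally. A plain
-- cursor must live inside a transaction block:
BEGIN;
DECLARE batch_cur CURSOR FOR SELECT * FROM tablename;
FETCH 100 FROM batch_cur;   -- first 100 rows
FETCH 100 FROM batch_cur;   -- next 100 rows
CLOSE batch_cur;
COMMIT;
```

Whether the VB/MDAC layer can be made to issue FETCH itself is a separate question, but server-side cursors are the standard way to keep a huge result set from flooding a thin client.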
Re: [GENERAL] Performance question (FOR loop)
vishal saberwal [EMAIL PROTECTED] writes:
> The query I am actually trying to optimize is long and has a few
> joins (for normalization) and hence didn't copy it here. The function
> structure is similar to the one above.
>
> (a) Am i right in thinking that if I eliminate the for loop, some
> performance gain can be achieved?
> (b) Is there a way to eliminate this for loop?

Is the plpgsql layer actually doing anything useful, or just passing back the results of the SQL command? If the latter, use a SQL function instead. Or perhaps even better, replace the function by a view.

regards, tom lane
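Concretely, both of the alternatives suggested above could look like this for the earlier test() example (a sketch; it assumes table1.field1 and table1.field2 match test_v's column types):

```sql
-- A SQL function: no per-row plpgsql loop, the planner just runs the query.
CREATE OR REPLACE FUNCTION test() RETURNS SETOF test_v AS $$
    SELECT t1.field1, t1.field2 FROM table1 t1;
$$ LANGUAGE sql;

-- Or simply a view, which also hides the underlying schema from the
-- application layer:
CREATE VIEW test_api AS
    SELECT t1.field1, t1.field2 FROM table1 t1;
GRANT SELECT ON test_api TO app_role;   -- app_role is hypothetical
```

Either form preserves the poster's goal of exposing only a stable name to the application while dropping the RETURN NEXT loop entirely.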
[GENERAL] Trouble with bytea in SPI...
Hello, I've been working just a little with SPI in a few stored functions. This is a model of my SP:

PG_FUNCTION_INFO_V1(myspi);
Datum
myspi(PG_FUNCTION_ARGS)
{
    bool isnull;
    bytea *val;
    Oid *karg;
    void *plan;
    int res;

    ret = SPI_connect();
    karg = (Oid *) palloc(sizeof(Oid));
    ret = SPI_exec("SELECT bytevalue FROM table1", 1);
    if (ret == SPI_OK_SELECT && SPI_processed > 0)
    {
        TupleDesc tupdesc = SPI_tuptable->tupdesc;
        SPITupleTable *tuptable = SPI_tuptable;

        val = DatumGetByteaP(SPI_getbinval(tuptable->vals[0], tupdesc, 1, &isnull));
        karg[0] = SPI_gettypeid(tupdesc, 1);
    }

    // Here I use and modify the val bytea value, but it's still a bytea kind of data
    // Now I need to return the bytea value to the table
    plan = SPI_prepare("UPDATE table1 SET bytevalue=$1", 1, karg);
    if (!plan)
        elog(ERROR, "I don't know what happened!");
    plan = SPI_saveplan(plan);

    // This is where the SP and the connection dies!
    ret = SPI_execp(plan, val, NULL, 1);
    SPI_finnish();
    PG_RETURN_INT32(1);
}

Well, the function is compiled and it loads correctly using

create or replace function myspi() returns integer as '/usr/lib/pgsql/myspi.so' language 'c';

but when I do a select myspi(); I get the non-friendly message "The server closed the connection unexpectedly..." blah blah blah... and the ! indicator... I checked out all the contrib dir (contrib/fulltextindex/fti.c, contrib/tsearch2/dict.c, and similar) and I guess my code is similar to them in that area. What did I do wrong???

Thanks a lot for your help!!!
Re: [GENERAL] Trouble with bytea in SPI...
On Thu, Sep 01, 2005 at 08:23:31PM -0600, Cristian Prieto wrote:
> Hello, I've been working just a little with SPI in a few stored
> functions, this is a model of my SP:

Please post a real example instead of a model. The code you posted fails to compile, with errors and warnings like the following:

spitest.c: In function `myspi':
spitest.c:18: `ret' undeclared (first use in this function)
spitest.c:18: (Each undeclared identifier is reported only once
spitest.c:18: for each function it appears in.)
spitest.c:39: warning: passing arg 2 of `SPI_execp' from incompatible pointer type
spitest.c:41: warning: implicit declaration of function `SPI_finnish'
spitest.c:16: warning: unused variable `res'

Since what you posted doesn't compile, it can't be what you're really doing; that means we have to guess at what the real code looks like. It would be easier to help if we could see the real thing so we don't have to guess.

> // This is where the SP and the connection dies!
> ret = SPI_execp(plan, val, NULL, 1);

val is a bytea * but the second argument to SPI_execp() is a Datum * (the compiler warning hints that something's wrong here). Try something like this:

    Datum values[1];

    values[0] = PointerGetDatum(val);
    ret = SPI_execp(plan, values, NULL, 1);

That works for me in simple tests. If anybody sees a problem with it then please make corrections.

-- 
Michael Fuhr
[GENERAL] same size VARCHAR or INT IX faster?
Same-size VARCHAR or INT index: which is faster? I assume INT. The reason I ask is I was wondering what (if any) is the average delay of one over the other, and the benefit of one over the other? Thanks.
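As background for the question: int4 key comparisons are cheap fixed-size integer compares, while varchar comparisons go through locale-aware string collation, so an integer index is generally at least as fast. The honest answer for any specific workload is to measure it; a sketch (table and data are hypothetical):

```sql
-- Build the same key both ways and compare lookup plans/timings.
CREATE TABLE k (iv int, sv varchar(10));
INSERT INTO k SELECT i, i::varchar FROM generate_series(1, 100000) AS g(i);

CREATE INDEX k_iv_idx ON k (iv);
CREATE INDEX k_sv_idx ON k (sv);
ANALYZE k;

EXPLAIN ANALYZE SELECT * FROM k WHERE iv = 1234;
EXPLAIN ANALYZE SELECT * FROM k WHERE sv = '1234';
```

(generate_series() is assumed to be available; on pre-8.0 servers you would populate the table some other way.) For most applications the difference per lookup is small; it matters most for large indexes, where the narrower fixed-width key also keeps the index physically smaller.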
Re: [GENERAL] Trouble with bytea in SPI...
Michael Fuhr [EMAIL PROTECTED] writes:
> On Thu, Sep 01, 2005 at 08:23:31PM -0600, Cristian Prieto wrote:
> > Hello, I've been working just a little with SPI in a few stored
> > functions, this is a model of my SP:
> Please post a real example instead of a model.

Also, it's good to make at least some minimal effort with gdb to find out where your code is crashing. A backtrace from the core dump (or from catching the signal interactively) often tells a lot.

regards, tom lane