[GENERAL] New Student
I would like to subscribe to your mailing list. I'm not highly technical, but I am extremely interested in learning as much as I can. Thank you very much.

Phil Bolton
[GENERAL] dsn to postgres database.
Hi,

I'm trying to access a Postgres database on a Raq3 through a DSN in ChiliSoft ASP. I have created a database called test, in the ./home/pgsql/base/test directory. I can log in as 'myuserid' and run queries on the database inside the psql monitor. But when I try to make a DSN in ChiliSoft with these settings:

  DSN       mydsnname
  Driver    PostgreSQL
  Host      localhost
  Port      1025
  User ID   myuserid
  Password  mypassword
  Database  test

it says:

  Error: 1000 - The database does not exist on the server or user authentication failed.

ChiliSoft's sample databases (with sample DSNs that work OK) are in ./home/chilisoft/odbc/internaldb/data/base/xxwhateverxx

My questions are: how do I create a database using the psql program that a ChiliSoft DSN can find? And how do I run queries on the databases in the ChiliSoft directories through the psql program?

Cheers,
Phil Jobbins
[GENERAL] A select DISTINCT query?
People,

I want to select from a table ONLY unique records, i.e. if a column has the values:

  1
  2
  3
  3
  4
  5

I want ONLY these records returned:

  1
  2
  4
  5

Thanks,

Phil.

--
Philip Rhoades
Pricom Pty Limited (ACN 003 252 275 ABN 91 003 252 275)
GPO Box 3411, Sydney NSW 2001, Australia
Fax: +61:(0)2-8221-9599
E-mail: [EMAIL PROTECTED]
Re: [GENERAL] A select DISTINCT query?
Pavel,

You didn't read my note properly - your query gives:

  1
  2
  3
  4
  5

I want:

  1
  2
  4
  5

Phil.

On Sun, 2008-01-27 at 15:10 +0100, Pavel Stehule wrote:
> Hello
>
> try
>
> SELECT DISTINCT col FROM table
>
> Pavel
Re: [GENERAL] A select DISTINCT query?
Tino,

On Sun, 2008-01-27 at 15:16 +0100, Tino Wildenhain wrote:
> SELECT count(*) as cnt,a,b,c FROM yourtable
> GROUP BY a,b,c
> HAVING cnt=1
>
> should do.

I get:

  SELECT count(*) as cnt, name FROM tst GROUP BY name HAVING cnt = 1 ;
  ERROR:  column "cnt" does not exist
  LINE 1: ...ount(*) as cnt, name FROM tst GROUP BY name HAVING cnt = 1 ;

Thanks,

Phil.
Re: [GENERAL] A select DISTINCT query?
Guys,

On Sun, 2008-01-27 at 17:38 +0100, Pavel Stehule wrote:
> On 27/01/2008, Phil Rhoades <[EMAIL PROTECTED]> wrote:
> > I get:
> >
> > SELECT count(*) as cnt, name FROM tst GROUP BY name HAVING cnt = 1 ;
> > ERROR:  column "cnt" does not exist
>
> select count(*) as cnt, name from tst group by name having count(*) = 1

Muchas gracias!

Phil.
Re: [GENERAL] A select DISTINCT query? - followup Q
People,

> select count(*) as cnt, name from tst group by name having count(*) = 1

This worked for my basic example but not for my actual problem - I get "column comment must appear in the GROUP BY clause or be used in an aggregate function" errors, so I have a related question. With this table:

  name  comment
  1     first comment
  2     second comment
  3     third comment
  3     fourth comment
  4     fifth comment
  5     sixth comment

how can I use something like the previous select statement, but where the comment field does not appear in the "group by" clause, and get the following result:

  1     first comment
  2     second comment
  4     fifth comment
  5     sixth comment

Thanks,

Phil.
Re: [GENERAL] A select DISTINCT query? - followup Q
Mike,

I can't do that with my comments - I get all six of the records in the result with that example instead of just the four I want . . but someone else had a solution without using the "group by" clause . .

Phil.

On Sun, 2008-01-27 at 13:56 -0500, Mike Ginsburg wrote:
> Hi Phil,
> Each of the columns that you specify in your SELECT clause must also
> appear in the GROUP BY clause.
>
> SELECT COUNT(*) AS cnt, name, comment, ...
> FROM tst
> GROUP BY name, comment, ...
> HAVING COUNT(*) = 1;
>
> Mike Ginsburg
> Collaborative Fusion, Inc.
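One query that produces the requested result, sketched with the table and column names from the example above (not necessarily the exact query the poster was referring to): restrict the rows to names that occur exactly once, so the comment column never has to appear in the GROUP BY clause.

  -- names that appear exactly once, with their comments
  SELECT name, comment
  FROM tst
  WHERE name IN (SELECT name
                 FROM tst
                 GROUP BY name
                 HAVING count(*) = 1)
  ORDER BY name;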
Re: [GENERAL] ER Diagram design tools (Linux)
People,

On Wed, 2008-03-05 at 20:56 -0700, Kevin Kempter wrote:
> On Wednesday 05 March 2008 20:33:43 Conor McTernan wrote:
> > I was wondering if anyone knows of any good ER diagram tools for
> > Postgres that run on Linux.
> >
> > I have been using DBDesigner by FabForce for a couple of years, but
> > development has stopped while MySQL Workbench is being built (for
> > Windows only). Neither of these applications will talk to Postgres, and
> > I've found DBDesigner to be a bit buggy at the best of times (it's
> > still quite good and better than nothing, I suppose).
> >
> > I've been using pgAdmin III, which is great for updating/managing
> > tables/views etc., but I would really like something for modelling ER
> > diagrams which will talk directly to Postgres.
> >
> > Does anyone know of any commercial or open source software that will do this?
> >
> > Cheers,
> >
> > Conor
>
> Have a look at Data Architect (www.theKompany.com)
> or Open System Architect (www.codebydesign.com)

I have never had much success with DA and don't know about OSA, but although I usually avoid Java, I have found Power*Architect excellent - they even fixed a bug I found overnight!

http://www.sqlpower.ca/page/architect

Regards,

Phil.
[GENERAL] Libpq on windows
I want to develop an app which uses libpq, built with mingw. Is there a download package which contains just the include files/dlls? If not, what package do I download? I don't need the server, just the client libraries. Phil
Re: [GENERAL] Libpq on windows
On June 11, 2009 01:21:09 am Albe Laurenz wrote:
> Phil wrote:
> > I want to develop an app which uses libpq, built with mingw.
> > Is there a download package which contains just the include files/dlls?
> > If not, what package do I download? I don't need the server, just the
> > client libraries.
>
> You can use the regular binary installer for Windows;
> it gives you the option to deselect certain components,
> so you can just install the client and libraries without the server.
>
> Yours,
> Laurenz Albe

I should have been more specific. I was hoping for a compressed file to download and uncompress, because this is part of an automated build process. Downloading the larger installer and manually selecting components doesn't help me.

Thanks anyway.

Phil
Re: [GENERAL] Libpq on windows
On June 10, 2009 10:00:48 pm Andy Colson wrote:
> Phil Longstaff wrote:
> > I want to develop an app which uses libpq, built with mingw. Is there a
> > download package which contains just the include files/dlls? If not,
> > what package do I download? I don't need the server, just the client
> > libraries.
>
> If you don't mind all the ssl stuff (5 or 6 dlls' worth) you can borrow
> the client from pgAdmin III.
>
> I, however, just build my own without the ssl stuff. There is no client-only
> package, but it's really not hard: just download the full source, unpack,
> compile, and then copy the libpq.dll out.
>
> If you want mine I could copy it out so you can download it.

Thanks. How strict is pgsql about version matches (client to server)? If I get your client, will it work with all 8.x versions of the server?

Is there a place on the postgresql.org website where I could request that they package the library (built, or just source) by itself?

Phil
Re: [GENERAL] Libpq on windows
I need the include files as well so I can build against the library.

Phil

From: Andy Colson
To: Phil Longstaff
Cc: pgsql-general@postgresql.org
Sent: Thursday, June 11, 2009 9:47:52 AM
Subject: Re: [GENERAL] Libpq on windows

Phil Longstaff wrote:
> Thanks. How strict is pgsql about version matches (client to server)? If I
> get your client, will it work with all 8.x versions of the server?

Not very. I was using the 8.2 client to hit 8.2 and 8.3. I just recompiled the client for 8.4 (in beta 1, just to make sure it compiles ok on mingw) and I'm currently using it to hit an 8.3 server. I'm not using anything "advanced" though... just the basics, and I've never had a problem.

> Is there a place on the postgresql.org website where I could request that they
> package the library (built, or just source) by itself?

Dunno. I saw a while ago on the newsgroups someone asking about a "make client" for the source... but I think they just need someone to write it...

I put a copy of my dll here if you wanna try it out (this was built with 8.4 beta1, and hits 8.3 just fine):

http://squeakycode.net/dn/libpq.dll

-Andy
[GENERAL] Rewriting select statements
I want to have the server do this: if the query has no where clause, use a where clause of "where 1=0". Is this possible?

Why would I want to do this? Because a third party library (ArcGIS) has a "feature" such that when a relation name is registered with it, it does a "select * from " and then does nothing with the results. It looks like it's just checking the field names or the fact that the relation exists. In this case, it's selecting a view that joins 14 tables. The main table has 350,000 rows, and some of the subsidiary tables have up to 1.5 million rows. This view is not queried without a where clause during the normal execution of the program, so to save between 6 and 20 minutes of startup time, depending on server load, I'd like to be able to add the where clause.

Thanks for any help,
Phil Cairns.
Re: [GENERAL] Rewriting select statements
Tom Lane wrote:
> "Phil Cairns" writes:
> > I want to have the server do this:
> > If the query has no where clause, use a where clause of "where 1=0".
> > Is this possible?
>
> It's doubtless *possible*, but if you're asking for it to actually
> happen in any supported version of Postgres, the answer is no way.
> It's directly contrary to the SQL standard.

Yeah. Not asking for a feature to be added, just asking if something already existed - some sort of rule that could be created that would rewrite the query.

> > Why would I want to do this? Because a third party library (ArcGIS) has a
> > "feature" such that when a relation name is registered with it, it does a
> > "select * from " and then does nothing with the results.
>
> Tell the library authors to fix their broken code. This is blithering
> stupidity in *any* SQL database, not only Postgres.

Tried that, and got no response. I fully agree that it's broken, but my users just see the delay. I might have to try some sort of proxy arrangement.

Thanks for your response,
Phil.
[GENERAL] Extract (Recover) data from a cluster built on a different architecture (ARM).
Hi,

I'm looking for a way to extract the data from a PostgreSQL 8.3.14 database (cluster) that was built on an ARM/Linux server. The problem is that the hardware itself is a brick and the replacement hardware will be X86/AMD64. Sadly my backups are all copies of the DB files and don't include a recent text-based dump (fail!).

All attempts so far to start up a server using the original files have failed with a couple of different errors (Failed to parse...). I'm rapidly coming to the conclusion that this is to do with endianness: X86 is little endian whereas the ARM box may have been running big endian.

Are there any tools for recovering data from a database built on a different architecture, or is my data toast unless I can lay my hands on an ARM box?

Thanks so much for your time.
Re: [GENERAL] find the greatest, pick it up and group by
Hi,

The method you're using is functionally correct and quite efficient, if a little on the verbose side.

Other non-Postgres variants of SQL have a "DECODE" function which comes in very handy. I don't believe Postgres has any equivalent (the Postgres decode() does something entirely different). I often use nested queries in the FROM clause for this purpose:

  SELECT a, b, c, x, y, z,
         case when gr = x then 'x'
              when gr = y then 'y'
              when gr = z then 'z'
         end
  FROM (
         SELECT DISTINCT a, b, c, x, y, z, greatest(x, y, z) AS gr
         FROM foo
       ) AS sub

Regards

On 17 May 2011 01:26, Ivan Sergio Borgonovo wrote:
> On Mon, 16 May 2011 20:05:45 -0400, "David Johnston" wrote:
> > When asking for help on non-trivial SELECT queries it really helps
> > to tell us the version of PG you are using so that responders know
> > what functionality you can and cannot use. In this case
> > specifically, whether WINDOW (and maybe WITH) clauses are available.
>
> Unfortunately I'm on 8.3, so no WINDOW.
>
> I didn't even think of using them and I can't think of any way to
> use WINDOW/WITH, but if there is a more readable solution that uses
> them I'd like to see it, even if I won't be able to use it.
> Of course I'm more interested to know if there is any cleaner
> solution for 8.3.
>
> thanks
>
> --
> Ivan Sergio Borgonovo
> http://www.webthatworks.it
Re: [GENERAL] Convert data into horizontal from vertical form
Hi Adarsh,

You say you need this to be done dynamically. I assume that by this you're looking for a way to have one query produce an increasing number of columns as you increase the number of rows in your table. This really isn't possible and doesn't fit with the model SQL was designed for. The concept of tables is that each table represents a set of items of a single type with a set of known properties (the possible properties are known before the item itself). An item is represented by a row and a property is represented by a column. You are trying to create a query with an unknown set of properties.

If the data must be represented as you've shown, then you will need to get your front-end application to transform the data for you.

Regards

On 19 May 2011 11:15, Adarsh Sharma wrote:
> Dear all,
>
> I am not able to insert data into a table in horizontal form.
>
> Here is a small set of the data:
>
>   c_id  f_name  f_value
>   2     k1      v1
>   2     k2      v2
>   2     k3      v3
>   2     k4      v4
>   3     a1      b1
>   3     a2      b2
>   3     a3      b3
>   3     a4      b4
>   3     a5      b5
>   1     c1      d1
>   1     c2      d2
>   3     a1      e1
>   3     a2      e2
>   3     a3      e3
>   3     a4      e4
>   3     a5      e5
>
> Now I want to show the above data in horizontal form per c_id. For example, if
> a user enters c_id 3 then the output is:
>
>   c_id  a1  a2  a3  a4  a5
>   3     b1  b2  b3  b4  b5
>   3     e1  e2  e3  e4  e5
>
> i.e. the f_name entries become the columns of the table and the f_value entries
> become the row values.
>
> I researched the crosstab function but I don't think it is useful because we
> have to give the column names in the command. I want to show it dynamically.
> I tried to create a procedure (attached below).
>
> A user enters only a c_id and the output is shown. For example, if a user enters
> c_id 1 then the output is:
>
>   c_id  c1  c2
>   1     d1  d2
>
> I have shown the data in a simple way, but there are lots of rows and lots of c_id's.
>
> Please let me know if it is possible or if any further information is required.
>
> Thanks
>
> create function user_news_new(text) returns void as $$
> declare
>   name text;
>   cat_name alias for $1;
> begin
>   CREATE TEMPORARY TABLE temptest(category_id INTEGER, category_name text);
>   /* create a temp table to hold all the dynamic schemas */
>   for name in select label_name from category_new where category_id = (select
>       category_id from category where category_name = cat_name) loop
>     execute 'alter table temptest add column ' || name || ' text';
>   end loop;
> end;
> $$ language plpgsql;
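For reference, a sketch of what the fixed-column form looks like (this assumes the contrib module tablefunc is installed and invents a table name, feature_data, for the sample data; the point is that the output columns still have to be listed by hand, which is exactly the limitation described above):

  SELECT *
  FROM crosstab(
         'SELECT c_id, f_name, f_value
            FROM feature_data
           WHERE c_id = 1
           ORDER BY 1, 2'
       ) AS ct(c_id integer, c1 text, c2 text);

  -- expected result for the sample data:
  --  c_id | c1 | c2
  -- ------+----+----
  --     1 | d1 | d2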
Re: [GENERAL] Add quto increment to existing column
Hi,

Dropping the column is a bit drastic if you already have data in there. You could just set the default on the column:

  alter table my_table alter hist_id set default nextval('hist_id_seq');

Also consider setting the sequence owner:

  alter sequence hist_id_seq owned by my_table.hist_id;

This means that if the table or column gets dropped, so will the sequence, and if the table is moved between schemas, so too will the sequence be moved.

Regards

On 4 October 2011 14:38, marc_firth wrote:
> If you use the SERIAL datatype (this is the auto-incrementing pseudo-type that
> creates sequences in the background for you) you can accomplish it in one go.
>
> So:
>
>   DROP SEQUENCE hist_id_seq;                              -- Get rid of your old sequence
>   ALTER TABLE my_table DROP COLUMN hist_id;               -- Remove the id column
>   ALTER TABLE my_table ADD COLUMN hist_id SERIAL PRIMARY KEY;  -- Recreate it as an auto-incrementing primary key
>
> By the way: have you tried the pgAdmin GUI for Postgres (http://www.pgadmin.org/)?
> It will help you do tasks like this and show you the SQL to do it on the command line :)
>
> Cheers,
> Marc
>
> Robert Buckley wrote:
> >
> > Hi,
> >
> > I have a column in a table called hist_id with the datatype "integer".
> > When I created the table I assigned this column the primary key constraint
> > but didn't make it an auto-increment column.
> >
> > How could I do this to the already existing column?
> >
> > I have created the sequence with the following command but don't know how
> > to change the existing column to auto-increment.
> >
> >   $ create sequence hist_id_seq;
> >
> > thanks for any help,
> >
> > Rob
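Putting those pieces together, a sketch of the whole change (same names as in the thread; the setval step is an addition here, so that new values start above any ids already in the table, and it assumes the existing ids are non-negative):

  CREATE SEQUENCE hist_id_seq;

  -- start the sequence just past the largest existing id
  SELECT setval('hist_id_seq', coalesce(max(hist_id), 0) + 1, false) FROM my_table;

  -- use the sequence for new rows
  ALTER TABLE my_table ALTER COLUMN hist_id SET DEFAULT nextval('hist_id_seq');

  -- drop the sequence automatically if the table or column is dropped
  ALTER SEQUENCE hist_id_seq OWNED BY my_table.hist_id;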
Re: [GENERAL] how to select one column into another in same table?
I don't think this is possible in Postgres. There is something similar with:

  alter table table_name alter column column_foo type new_data_type using column_bar;

But I don't think there's any performance advantage over a simple update, and the USING clause doesn't appear to have an equivalent in an ADD COLUMN statement.

You could:

  alter table table_name rename column_foo to column_bar;
  alter table table_name add column_foo foo_data_type default nextval('new_foo_sequence');

This has your best chance of success, since renaming a column should not have to touch every row of the table.

Regards

On 4 October 2011 20:21, J.V. wrote:
> What I need to do is to save the id column for future use and then modify
> the id column, resetting all values from another sequence.
>
> So I need to select the id column or somehow get the data into another
> column in the same table. And then I can update the id column (after dropping the constraint).
>
> J.V.
>
> On 10/4/2011 1:09 PM, Scott Marlowe wrote:
> > On Tue, Oct 4, 2011 at 12:24 PM, J.V. wrote:
> > > Currently I can select one column into another with two statements:
> > >
> > >   alter table add column id_old int;
> > >   update set id_old = id;
> > >
> > > Is there a way to do this in one statement with a select into? I have tried
> > > various select statements, but I want the new column (with the same data) to
> > > be in the same table, and to have it execute much more quickly than the two
> > > statements currently do.
> >
> > Do you need another column, or do you just want to alter a column that
> > already exists? If so, you can alter a column from one type to another
> > and throw a USING clause at it to convert the data in some way. I
> > think we need to know a bit better what you're trying to do.
Re: [GENERAL] gaps/overlaps in a time table : current and previous row question
I think you need to get the full list of change dates first. Assuming you're searching over a time period between "period_from" and "period_to":

  SELECT change_time, sum(diff) AS total_diff
  FROM (
         SELECT starttime AS change_time, 1 AS diff
         FROM t
         WHERE starttime > period_from AND endtime < period_to
         UNION ALL
         SELECT endtime AS change_time, -1 AS diff
         FROM t
         WHERE endtime > period_from AND endtime < period_to
       ) a
  GROUP BY change_time
  HAVING sum(diff) <> 0
  ORDER BY change_time ASC

I used this in a PL/pgSQL function to produce a very similar result to what you were looking for. You need to start by finding how many time periods overlapped period_from, then cumulatively add on "total_diff" for each row you process.

Hope this helps.

2011/10/5 Filip Rembiałkowski:
> 2011/10/5 thomas veymont:
> > hello,
> >
> > let's say that each row in a table contains a start time and an end
> > time ("timeinterval" type),
>
> there is no such type (no result for select * from pg_type where typname ~ 'timeinterval').
> can you show the exact table structure (output of psql "\d" or better, the CREATE TABLE command)?
>
> > but the indexes are not ordered nor consecutive, e.g.:
> >
> >   $ select * from T order by starttime
> >
> >   index | starttime | endtime
> >   ------+-----------+---------
> >       3 | t1        | t2
> >       1 | t3        | t4
> >      18 | t5        | t6
> >      12 | t7        | t8
> >
> > I want a result that shows time gaps and overlaps in this table, that is:
> >
> >   delta
> >   --------
> >   t3 - t2
> >   t5 - t4
> >   t7 - t6
> >
> > how would I do that? I guess this could be done with a window function and the
> > lag() function, but I don't know exactly how. Any suggestions?
>
> -- assuming that you actually want the lag compared to the previous starttime, try this:
>
>   select index, starttime, endtime,
>          starttime - lag(endtime) over (order by starttime asc) as delta
>   from test;
>
> PS. this question should probably go to the pgsql-sql mailing list rather than
> pgsql-general. also please give more details next time. Thanks.
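A rough sketch of the accumulation step described above (assuming a server with window functions, 8.4 or later, and the same table t with starttime/endtime columns): a running sum of the +1/-1 differences gives the number of intervals open at each change point, so a value of 0 between two change points marks a gap and a value above 1 marks an overlap.

  SELECT change_time,
         sum(diff) OVER (ORDER BY change_time) AS open_intervals
  FROM (
         SELECT starttime AS change_time,  1 AS diff FROM t
         UNION ALL
         SELECT endtime   AS change_time, -1 AS diff FROM t
       ) a
  ORDER BY change_time;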
[GENERAL] Error: timestamp with timezone + interval is not immutable while creating index
Hi All,

I've got a table with (amongst others) two fields:

  last_updated   timestamp with time zone
  update_cycle   interval

I'd like to create an index on these, to index the "next update" time (last_updated + update_cycle). When I try this I get an error though:

  main=> create index foo_next_update on foo( (last_updated + update_cycle) ) ;
  ERROR:  functions in index expression must be marked IMMUTABLE

Does anyone know why adding two fields like this results in anything other than an immutable function? Under what circumstances could it return a different result?

Thanks very much for any help.
Re: [GENERAL] Error: timestamp with timezone + interval is not immutable while creating index
On 14 October 2011 00:49, Steve Crawford wrote:
> On 10/13/2011 04:32 PM, Tom Lane wrote:
> > Phil Couling writes:
> > > main=> create index foo_next_update on foo( (last_updated + update_cycle) ) ;
> > > ERROR:  functions in index expression must be marked IMMUTABLE...
> >
> > timestamptz + interval is not immutable because the results can vary
> > depending on timezone. For instance, in my zone (America/New_York):
>
> So it seems like a potential workaround, depending on the nature of your
> data and applications, would be to convert the timestamptz into a timestamp
> at a reference TZ:
>
>   steve=# create table testfoo (a_timestamptz timestamptz, an_interval interval);
>   CREATE TABLE
>   steve=# create index testfoo_index on testfoo ((a_timestamptz at time zone 'UTC' + an_interval));
>   CREATE INDEX
>
> You will have to be sure you are getting the results you want in the
> vicinity of DST changes and if you are handling multiple timezones.
>
> Cheers,
> Steve

Thanks all,

That makes a lot of sense. For some reason I'd thought that having the timezone would make it immutable (since it represents an absolute point in time), whereas without it would not be (since the point in time it *actually* represents is dependent on the time zone...). Guess I hadn't thought that through very well. Kudos to Postgres for pointing out a flaw in my design!

I'll be adding the timezone to the table (or at least a table it references). The new index looks more like this:

  create index foo_next_update on foo (
      ((first_update at time zone update_region) + (update_cycle * update_count))
  );

I'm not sure the timezone will ever be anything but 'GB' in this case, but there's nothing like future proofing.

Regards All
Re: [GENERAL] [BUGS] Prepared Statement Name Truncation
On Nov 17, 2012 11:06 PM, "Gavin Flower" wrote:
> On 18/11/12 16:49, Greg Sabino Mullane wrote:
> >
> > > NOTICE: identifier
> > > "this_is_a_really_long_identifier_for_a_prepared_statement_name_ok"
> > > will be truncated to
> > > "this_is_a_really_long_identifier_for_a_prepared_statement_name_"
> > > PREPARE
> > ...
> > > The ORM could use a shorter identifier, but it supports multiple backends
> > > and this is probably not something in their test suite. In addition it
> > > actually works!
> >
> > For now. If it really works, then by definition it does not /need/ to
> > be that long, as the truncated version is not blowing things up.
> >
> > > So I am sharing this with the list to see what people think. Is this a
> > > configuration bug? An ORM bug? A postgres bug? An unfortunate
> > > interaction?
> >
> > Part ORM fault, part Postgres. We really should be throwing something
> > stronger than a NOTICE on such a radical change to what the user
> > asked for. I'd lobby for WARNING instead of ERROR, but either way, one
> > could argue that applications would be more likely to notice and
> > fix themselves if it was stronger than a NOTICE.
> >
> > > If it's a postgres bug, what is the fix? Make the identifier max size longer?
> >
> > I'd also be in favor of this, in addition to upgrading from a NOTICE. We
> > no longer have any technical reason to keep it NAMEDATALEN, with
> > the listen/notify rewrite, correct? If so, I'd like to see the max bumped
> > to at least 128 to match the default SQL spec length for similar items.
> >
> > > Set a hard limit and ERROR instead of truncating and NOTICE?
> > > Both? Neither because that would break backward compatibility?
> >
> > My vote is WARNING and bump the limit to 128 in 9.3. That's the combo most
> > likely to make dumb applications work better while not breaking
> > existing smart ones.
>
> Would it be appropriate to make it a WARNING in 9.2.2, then increase the length in 9.3?
>
> Though I still feel I'd like it to be an ERROR; maybe a configuration variable in 9.3
> to promote it to an ERROR, with WARNING being the default?

In that case I'd make it ERROR by default and make people override to WARNING if it breaks things. Otherwise no one will change.

> Cheers,
> Gavin
Re: [GENERAL] pg_dirtyread doesnt work
Most likely it's because of TOAST'd records. The module is pretty naive and needs to be updated to handle such cases.

On Fri, Dec 28, 2012 at 10:53 AM, Alejandro Carrillo wrote:
> Does anybody know why this could be happening: ERROR: invalid memory alloc request size 1850015748
>
> Thanks
>
> From: Alejandro Carrillo
> To: "pgsql-general@postgresql.org"
> Sent: Thursday, December 27, 2012 16:17
> Subject: [GENERAL] pg_dirtyread doesnt work
>
> Hi,
>
> After many attempts to compile this PostgreSQL C function for Windows, I
> compiled it (with VS C++ 2008), but the function gets an error when it tries
> to read a deleted row. The example:
>
>   CREATE FUNCTION pg_dirtyread(oid)
>   RETURNS setof record
>   AS E'$libdir/pg_dirtyread', 'pg_finfo_pg_dirtyread' LANGUAGE C STRICT;
>
>   CREATE TABLE hola(
>     id bigserial,
>     dato1 varchar(199) not null,
>     fecha date
>   );
>
>   /* insert 3 rows and delete 1 row */
>
>   select * from pg_dirtyread('hola'::regclass) t (id bigint, dato1 varchar(199), fecha date);
>
>   ERROR: invalid memory alloc request size 1850015748
>
> Can anybody help me?
>
> Thanks
[GENERAL] Trying to install ODBC driver on Windows XP notebook
I have the latest PostgreSQL installed on a notebook and am now trying to install the driver, which I have downloaded and run so that it is now available in the ODBC Data Source Administrator.

This machine has the CA security suite (latest version) installed, and I have added a network rule that allows all applications using TCP and UDP on port 4532 to gain access in and out.

I have the correct database name and get the following message when I try to test the connection:

  Could not connect to the server
  No connection could be made because the target machine actively refused it. (192.168.1.7:5432)

I don't know what else I can now do to get the connection. Any suggestions would be appreciated.

Cheers
Phil Jackson
[GENERAL] Cannot open table in new database
We've set up an SQL database for the first time and get an error reported back to our application from the ODBC session object when we try to open one of the tables:

  [42p01][7]ERROR Relation "SqlAnal" does not exist; table not found!

Here is my database outline in the Admin tool:

  Servers(1)
    Concept Patterns (localhost:5432)
      Databases(1)
        Postgres
          casts
          languages
          schemas
            Public
              Tables(2)
                DocHeader
                SqlAnal
      Replication
      Tablespaces(2)
        pg_default
        pg_global
      Group Roles
      Login Roles(1)
        Concept

We can access the list of tables from the ODBC driver, which shows the above tables. We've obviously done something wrong, but we don't know where to look. The Help button is not helpful at all.

Any ideas on what we are doing incorrectly?

Cheers
Phil Jackson
Re: [GENERAL] Cannot open table in new database
Hi Adrian,

The link says that "Identifier and key word names are case insensitive." But I have renamed the source table in lowercase and this gets me one step further. I'll carry on and see what happens next.

Cheers
Phil Jackson

On 6/30/2010 3:18 PM, Adrian Klaver wrote:
> On 07/01/2010 09:46 AM, Phil Jackson wrote:
> > We've set up an SQL database for the first time and get an error reported
> > back to our application from the ODBC session object when we try to open
> > one of the tables:
> >
> >   [42p01][7]ERROR Relation "SqlAnal" does not exist; table not found!
> >
> > Any ideas on what we are doing incorrectly?
>
> Using mixed case :) See here for the gotchas of using mixed case:
>
> http://www.postgresql.org/docs/8.4/interactive/sql-syntax-lexical.html
> 4.1.1. Identifiers and Key Words
Re: [GENERAL] Cannot open table in new database
Hi Adrian,

I had missed that bit. That makes sense now.

Cheers
Phil Jackson

On 6/30/2010 5:04 PM, Adrian Klaver wrote:
> On Thursday 01 July 2010 11:11:29 am Phil Jackson wrote:
> > Hi Adrian,
> >
> > The link says that "Identifier and key word names are case insensitive."
> > But I have renamed the source table in lowercase and this gets me one step
> > further. I'll carry on and see what happens next.
>
> You need to go to the bottom of that section, where you would find :)
>
> "Quoting an identifier also makes it case-sensitive, whereas unquoted names are
> always folded to lower case. For example, the identifiers FOO, foo, and "foo"
> are considered the same by PostgreSQL, but "Foo" and "FOO" are different from
> these three and each other. (The folding of unquoted names to lower case in
> PostgreSQL is incompatible with the SQL standard, which says that unquoted
> names should be folded to upper case. Thus, foo should be equivalent to "FOO"
> not "foo" according to the standard. If you want to write portable applications
> you are advised to always quote a particular name or never quote it.)"
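A small illustration of the folding rule quoted above (a sketch reusing the table name from the earlier message, not output from the actual system):

  CREATE TABLE "SqlAnal" (id integer);

  SELECT * FROM SqlAnal;     -- fails: the unquoted name folds to sqlanal
  SELECT * FROM "SqlAnal";   -- works: the quoted name keeps its mixed case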
[GENERAL] problem with ON UPDATE rule
Hi all,

I'm new to Postgres and I'm not really a database expert, but I was wondering if someone could help me out with this issue. I am trying to use event notifications controlled by an ON UPDATE rule. In the table I'm interested in, I have created a rule that should be invoked when a particular column is updated to a value of 'complete':

        Table "public.table1"
     Column   |  Type   | Modifiers
  ------------+---------+-----------
   run_id     | text    |
   run_status | text    |
   flag       | boolean |
  Rules:
   r1 AS ON UPDATE TO table1
      WHERE new.run_status = 'complete'::text
      DO INSERT INTO table2 (r_id, r_status) VALUES (new.run_id, new.run_status)

The second table is supposed to have a row inserted into it when the conditional update takes place, and has an ON INSERT rule that fires the notification:

        Table "public.table2"
    Column   | Type | Modifiers
  -----------+------+-----------
   r_id      | text |
   r_status  | text |
  Rules:
   r2 AS ON INSERT TO table2 DO NOTIFY table2

To test this I am using the example program in the section of the Postgres documentation that discusses event notification and libpq. And this all works: when I update table1 and set a particular row's value for run_status to "complete", a row is inserted into table2, the rule on table2 fires, and the example program, which is listening for "table2", is notified as expected.

The problem is, this also happens with an update to table1 that sets the value of run_status to something other than "complete". In these cases, no row is inserted into table2, but the notification is issued just the same and is received by the example program. How is the notification being issued if no row is being inserted into table2?

By the way, I have also tried modifying the WHERE clause in r1 to:

  WHERE old.run_status <> new.run_status AND new.run_status = 'complete'

but this did not change the behavior at all.

I am using Postgres 8.3.11, and before you tell me to upgrade: I do not have any choice about that. I'm working with a vendor system that uses Postgres and I have no control (read: ZERO influence) over what version of Postgres is being used.

Thanks in advance for any help or insight into how I can debug this problem.
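For comparison, a sketch of the same behaviour written with a trigger instead of rules (not something suggested in the thread, and untested; it reuses the table and column names from the message above). A trigger fires once per affected row, so the NOTIFY would only be issued when the condition is actually met:

  CREATE FUNCTION notify_on_complete() RETURNS trigger AS $$
  BEGIN
      IF NEW.run_status = 'complete' THEN
          -- record the completion and signal any listeners
          INSERT INTO table2 (r_id, r_status) VALUES (NEW.run_id, NEW.run_status);
          NOTIFY table2;
      END IF;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER table1_complete_notify
      AFTER UPDATE ON table1
      FOR EACH ROW EXECUTE PROCEDURE notify_on_complete();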
[GENERAL] Problems with ODBC Driver
I have a notebook that I am using as a server for testing purposes, and it has the official ODBC driver installed. I can access this and use it to connect to PostgreSQL.

On a second machine on the same network I also have the same ODBC driver installed. The behaviour of this one is quite different. I have a test program that works on the server, but when I run it on this workstation, it can create a file with a couple of fields, which I can verify by looking inside PostgreSQL. But when I go to open up that file and refer to one of its fields, I get an error message saying the field doesn't exist.

I am running XP Professional on both machines. I have no idea what is happening - I have opened up the firewall on both machines to allow port 5432 to be used by all applications with TCP (also tried TCP and UDP).

Does anyone have any suggestions?

Cheers
Phil Jackson
[GENERAL] Problems with ODBC Driver
Hi Hiroshi,

The same program works on the server unchanged - that's why I think it is something to do with the driver. The error message says that the field name doesn't exist when it does, as can be verified by looking at the database in PostgreSQL.

Cheers
Phil Jackson

On 9/2/2010 3:19 AM, Hiroshi Saito wrote:
> Hi.
>
> There is very little information to explore the problem with.
> What is the actual error message? The following is my guess:
> for a specific field, it may be the lower-casing problem of identifiers.
>
> example)
>
>   CREATE TABLE xx("Field" TEXT);
>
>   postgres=# SELECT Field FROM xx;
>   ERROR:  column "field" does not exist
>   LINE 1: SELECT Field FROM xx;
>
>   postgres=# SELECT "Field" FROM xx;
>    Field
>   -------
>   (0 rows)
>
> Unless I misunderstand, the error message you show can be solved this way.
>
> Regards,
> Hiroshi Saito
[GENERAL] Why is cast array integer[] <--> text[] is not immutable.
Hi,

I'm struggling to understand why these casts are not immutable:

  integer[]::text[]
  text[]::integer[]

The following are all immutable:

  integer::text
  text::integer
  integer[]::float[]
  integer::float

I hit this while trying to make a GIN index which casts from one to the other. Why does the encapsulation in an array suddenly make the cast not immutable?

Thanks
Re: [GENERAL] Why is cast array integer[] <--> text[] is not immutable.
Thanks

I'm having trouble finding any reference to array_out and array_in in the documentation. Is there a way to set a different cast for an array?

Regards

On 9 December 2011 15:09, Tom Lane wrote:
> Phil Couling writes:
> > I'm struggling to understand why these casts are not immutable:
> >
> >   integer[]::text[]
> >   text[]::integer[]
>
> Because it's implemented via array_out/array_in rather than any more
> direct method, and those are marked stable because they potentially
> invoke non-immutable element I/O functions.
>
>   regards, tom lane
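One workaround that is sometimes used for indexing (a sketch only - the wrapper function, table and column names here are invented, not from the thread): wrap the cast in a SQL function that is declared IMMUTABLE and build the index on that. Marking it immutable is your own assertion, which is reasonable for integer elements because integer-to-text output never varies.

  CREATE FUNCTION int_array_to_text_array(integer[]) RETURNS text[]
      LANGUAGE sql IMMUTABLE STRICT AS
  $$ SELECT $1::text[] $$;

  CREATE INDEX foo_tags_text_idx ON foo USING gin (int_array_to_text_array(tags));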
[GENERAL] Do TEMP Tables have an OID? Can this be a problem if used too frequently?
Hi,

I've just written a search function which creates a temp table, performs some reasoning on it, returns results, then drops it again. I'm using temp tables in an attempt to gain efficiency (not repeating work between one section of the function and another).

However, I'm worried that there may be some pitfalls in doing this. I'm especially worried about OIDs. Does creating a temp table assign an OID to the table? If so, am I right to assume that, if the function is used too frequently, it could cause the database to crash by wrapping OIDs?

Thanks very much for your time
Phil
[GENERAL] Scalability with large numbers of tables
Dear PostgreSQL experts,

I have a single database with one schema per user. Each user has a handful of tables, but there are lots of users, so in total the database has thousands of tables. I'm a bit concerned about scalability as this continues to grow.

For example, I find that tab-completion in psql is now unusably slow; if there is anything more important where the algorithmic complexity is the same, then it will be causing a problem. There are 42,000 files in the database directory. This is enough that, with a "traditional" Unix filesystem like ext2/3, kernel operations on directories take a significant time. (In other applications I've generally used a guide of 100-1000 files per directory before adding extra layers, but I don't know how valid this is.)

I'm interested to know if anyone has any experiences to share with similarly large numbers of tables. Should I worry about it? I don't want to wait until something breaks badly to find out that I need architectural changes. Presumably tablespaces could be used to avoid the too-many-files-per-directory issue, though I've not moved to 8.0 yet.

Thanks
Phil.
Re: [GENERAL] Triggers, again.. ;-)
Greg wrote: > is "DROP TRIGGER" transaction safe?... I mean, could I do: > > BEGIN > DROP TRIGGER category_mv_ut; > ... insert rows > ... update materialized view table > CREATE TRIGGER category_mv_ut AFTER UPDATE ON items > FOR EACH ROW EXECUTE PROCEDURE update_ut(); > COMMIT; > > .. without other sessions being affected? This is exactly what I do to avoid matview bulk-update performance issues. --Phil. ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] Problems with select statement
Hi all,

I'm having trouble writing a select statement that gives me a resulting row like this: player_number, player_name, a, c, t, (total of a+c+t). Here's what I have so far, but it puts the a, c, t values in separate rows:

  hardwoodthunder=# select (select player_points where aspect='A') as A,
                           (select player_points where aspect='C') as C,
                           (select player_points where aspect='T') as T,
                           player_number, player_name
                    from (select * from player as a, teamshare as b
                          where a.player_number=b.player) as c;

    a   |  c   |  t  | player_number | player_name
  ------+------+-----+---------------+-------------
   0.27 |      |     | 42            | S. Randolph
        | 0.33 |     | 42            | S. Randolph
        |      | 0.8 | 42            | S. Randolph
   0.54 |      |     | 42            | S. Randolph
        | 0.66 |     | 42            | S. Randolph
        |      | 0.8 | 42            | S. Randolph
   0.27 |      |     | 40            | R. Perkins
        | 0.33 |     | 40            | R. Perkins
  (8 rows)

Any ideas on how to consolidate the rows?

Thanks in advance,
Phil Campagine
[GENERAL] Problems with select statement
I see that I need another qualifier, cum_score, in the select statement, but I still need to consolidate the rows:

  hardwoodthunder=# select player_number, player_name, cum_score,
                           (select player_points where aspect='A') as A,
                           (select player_points where aspect='C') as C,
                           (select player_points where aspect='T') as T
                    from (select * from player as a, teamshare as b
                          where a.player_number=b.player) as c;

   player_number | player_name | cum_score |  a   |  c   |  t
  ---------------+-------------+-----------+------+------+-----
   42            | S. Randolph | 4         | 0.27 |      |
   42            | S. Randolph | 4         |      | 0.33 |
   42            | S. Randolph | 4         |      |      | 0.8
   42            | S. Randolph | 2         | 0.54 |      |
   42            | S. Randolph | 2         |      | 0.66 |
   42            | S. Randolph | 2         |      |      | 0.8
   40            | R. Perkins  | 4         | 0.27 |      |
   40            | R. Perkins  | 4         |      | 0.33 |
  (8 rows)

How do I get all three (Randolph, 4) rows into one row?

Thanks,
Phil
[GENERAL] Problem with select statement
Yes Gnari, it works now! All I had to do in addition to your advice was alias the sub-select:

  hardwoodthunder=# select player_number, player_name, cum_score, sum(a), sum(c), sum(t)
                    from (
                      select player_number, player_name, cum_score,
                             (select player_points where aspect='A') as A,
                             (select player_points where aspect='C') as C,
                             (select player_points where aspect='T') as T
                      from (select * from player as a, teamshare as b
                            where a.player_number=b.player) as c
                    ) as B
                    group by player_number, player_name, cum_score;

   player_number | player_name | cum_score | sum  | sum  | sum
  ---------------+-------------+-----------+------+------+-----
   40            | R. Perkins  | 4         | 0.27 | 0.33 |
   42            | S. Randolph | 2         | 0.54 | 0.66 | 0.8
   42            | S. Randolph | 4         | 0.27 | 0.33 | 0.8
  (3 rows)

Now what about the total? How do I add another column that is the sum of a, c, t? I tried sum(sum(a),sum(c),sum(t)) but it doesn't work.

Thanks,
Phil
Re: [GENERAL] Problem with select statement
Ragnar Hafstað wrote:
> On Sun, 2005-02-20 at 18:18 -0500, phil campaigne wrote:
> > Now what about the total? How do I add another column that is the sum of a, c, t?
>
> maybe something like:
>
>   select player_number, player_name, cum_score,
>          suma, sumc, sumt,
>          coalesce(suma,0)+coalesce(sumc,0)+coalesce(sumt,0) as sumtotal
>   from (
>     select player_number, player_name, cum_score,
>            sum(a) as a, sum(c) as c, sum(t) as t
>     from (
>       select player_number, player_name, cum_score,
>              (select player_points where aspect='A') as A,
>              (select player_points where aspect='C') as C,
>              (select player_points where aspect='T') as T
>       from (select * from player as a, teamshare as b
>             where a.player_number=b.player) as c
>     ) as B
>     group by player_number, player_name, cum_score
>   ) as foo;
>
> gnari

Gnari,

Yup, it works! All I changed in your code was the parens around the a, c, t. Here is the final working code:

  select player_number, player_name, cum_score,
         sum(a) as Asum, sum(c) as Csum, sum(t) as Tsum,
         coalesce(sum(a),0)+coalesce(sum(c),0)+coalesce(sum(t),0) as sumtotal
  from (
    select player_number, player_name, cum_score,
           (select player_points where aspect='A') as A,
           (select player_points where aspect='C') as C,
           (select player_points where aspect='T') as T
    from (select * from player as a, teamshare as b
          where a.player_number=b.player) as c
  ) as B
  group by player_number, player_name, cum_score;

   player_number | player_name | cum_score | asum | csum | tsum | sumtotal
  ---------------+-------------+-----------+------+------+------+----------
   40            | R. Perkins  | 4         | 0.27 | 0.33 |      | 0.6
   42            | S. Randolph | 2         | 0.54 | 0.66 | 0.8  | 2
   42            | S. Randolph | 4         | 0.27 | 0.33 | 0.8  | 1.4
  (3 rows)

or, taking out the cum_score qualifier:

  select player_number, player_name,
         sum(a) as Asum, sum(c) as Csum, sum(t) as Tsum,
         coalesce(sum(a),0)+coalesce(sum(c),0)+coalesce(sum(t),0) as sumtotal
  from (
    select player_number, player_name,
           (select player_points where aspect='A') as A,
           (select player_points where aspect='C') as C,
           (select player_points where aspect='T') as T
    from (select * from player as a, teamshare as b
          where a.player_number=b.player) as c
  ) as B
  group by player_number, player_name;

   player_number | player_name | asum | csum | tsum | sumtotal
  ---------------+-------------+------+------+------+----------
   40            | R. Perkins  | 0.27 | 0.33 |      | 0.6
   42            | S. Randolph | 0.81 | 0.99 | 1.6  | 3.4
  (2 rows)

Thanks so much,
Phil
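An equivalent way of writing the pivot, sketched with CASE expressions instead of the per-column scalar sub-selects (same tables and columns as above; untested against the real data):

  SELECT player_number, player_name,
         sum(CASE WHEN aspect = 'A' THEN player_points END) AS asum,
         sum(CASE WHEN aspect = 'C' THEN player_points END) AS csum,
         sum(CASE WHEN aspect = 'T' THEN player_points END) AS tsum,
         coalesce(sum(CASE WHEN aspect = 'A' THEN player_points END), 0)
       + coalesce(sum(CASE WHEN aspect = 'C' THEN player_points END), 0)
       + coalesce(sum(CASE WHEN aspect = 'T' THEN player_points END), 0) AS sumtotal
  FROM player a, teamshare b
  WHERE a.player_number = b.player
  GROUP BY player_number, player_name;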
Re: [GENERAL] Problem with select statement
Ragnar Hafstað wrote:
> On Sun, 2005-02-20 at 20:01 -0500, phil campaigne wrote:
> > Hi gnari,
> > I forgot one thing. I also need to restrict the select with two more
> > qualifiers: where contest_id = 1 and team = 'Duke'. I'm confused where to
> > put them in the select statement. I tried this but it doesn't work:
>
> you don't say where these fit in. they can obviously not be in B, as that
> one did not include such columns. if they are columns of player or
> teamshare, then you could probably:
>
>   select player_number, player_name,
>          sum(a) as Asum, sum(c) as Csum, sum(t) as Tsum,
>          coalesce(sum(a),0)+coalesce(sum(c),0)+coalesce(sum(t),0) as sumtotal
>   from (
>     select player_number, player_name,
>            (select player_points where aspect='A') as A,
>            (select player_points where aspect='C') as C,
>            (select player_points where aspect='T') as T
>     from (
>       select * from player as a, teamshare as b
>       where a.player_number=b.player
>         and contest_id=1
>         and team='Duke'
>     ) as c
>   ) as B
>   group by player_number, player_name;
>
> gnari

Gnari,

Thanks, and hello to Iceland from Boston. The way you formatted the text is a lesson for me - it makes the code much more intuitive. But adding the phrase "and contest_id=2 and teamshare.team='Duke'" as below produces results 8 times as large as the correct results:

  select player_number, player_name,
         sum(a) as Asum, sum(c) as Csum, sum(t) as Tsum,
         coalesce(sum(a),0)+coalesce(sum(c),0)+coalesce(sum(t),0) as sumtotal
  from (
    select player_number, player_name,
           (select player_points where aspect='A') as A,
           (select player_points where aspect='C') as C,
           (select player_points where aspect='T') as T
    from (
      select * from player as a, teamshare as b
      where a.player_number=b.player
        and teamshare.contest_id=2
        and teamshare.team='Duke'
    ) as c
  ) as B
  group by player_number, player_name;

  NOTICE:  Adding missing FROM-clause entry in subquery for table "teamshare"

   player_number | player_name | asum | csum | tsum | sumtotal
  ---------------+-------------+------+------+------+----------
   40            | R. Perkins  | 2.16 | 2.64 |      | 4.8
   42            | S. Randolph | 6.48 | 7.92 | 12.8 | 27.2
  (2 rows)

I also got this NOTICE that I don't understand:

  NOTICE:  Adding missing FROM-clause entry in subquery for table "teamshare"

I tried changing the select statement to be "select * from player as a *** and from *** teamshare as b", but that didn't work. Here are my table definitions:

            Table "public.teamshare"
      Column     |         Type          | Modifiers
  ---------------+-----------------------+-----------
   teamshare_id  | integer               | not null
   possession_id | integer               | not null
   contest_id    | integer               | not null
   team          | character varying(8)  | not null
   cum_score     | integer               | not null
   player        | integer               | not null
   aspect        | character(1)          | not null
   player_points | double precision      |
   organization  | character varying(14) |
  Indexes: teamshare_pkey primary key btree (teamshare_id)

              Table "public.player"
        Column         |         Type          | Modifiers
  ---------------------+-----------------------+-----------
   player_id           | integer               | not null
   player_number       | character varying(3)  | not null
   player_name         | character varying(14) | not null
   team                | character varying(24) | not null
   organization        | character varying(12) | not null
   player_compensation | integer               |
  Indexes: player_pkey primary key btree (player_id)

I can't think of any rationale for placing the qualifying phrase anywhere else. Any ideas?

Thanks,
Phil
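A guess at the likely cause, based only on the NOTICE above: the qualifiers name the table teamshare directly while it is joined under the alias b, so the server silently adds teamshare to the FROM list a second time as an implicit cross join, which multiplies the rows. A sketch of the inner derived table with the alias used instead:

  -- corrected inner FROM clause: qualify with the alias b, not "teamshare"
  (select * from player as a, teamshare as b
    where a.player_number = b.player
      and b.contest_id = 2
      and b.team = 'Duke') as c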
Re: [GENERAL] error while loading shared libraries: libpq.so.3
Russell Smith wrote: On Mon, 21 Feb 2005 08:05 pm, Surabhi Ahuja wrote: it now says: i do the following make sample LDLIBRARY = /usr/local/pgsql/lib ./sample LDLIBRARY: Command not found. do you still experience problems when you run LDLIBRARY=/usr/local/pgsql/lib ./sample Note there is no SPACE in the example I have given. LDLIBRARY=/usr/local... NOT LDLIBRARY = /usr/... Regards Russell Smith. From: Russell Smith [mailto:[EMAIL PROTECTED] Sent: Mon 2/21/2005 2:19 PM To: Surabhi Ahuja Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] error while loading shared libraries: libpq.so.3 *** Your mail has been scanned by InterScan VirusWall. ***-*** On Mon, 21 Feb 2005 06:17 pm, Surabhi Ahuja wrote: i have written a program sample.c it is as folllows: #include #include int main() { PGconn *myconnection = PQconnectdb(""); printf("Hi"); return 0; } I then compile this program using makefile make sample cc -I/usr/local/pgsql/includesample.c -L/usr/local/pgsql/lib -lpq -o sample I then run itand it gives and error [indra:/homes/surabi] ./sample ./sample: error while loading shared libraries: libpq.so.3: cannot open shared object file: No such file or directory Please tell what the problem is..and how should I rectify it? Your library is in /usr/local/pgsql/lib this is obviously not in your ld.so.conf do you still experience problems when you run LDLIBRARY=/usr/local/pgsql/lib ./sample Regards Russell Smith. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org Russell, Does it help if you execute the following? LD_LIBRARY_PATH=/usr/local/pgsql/lib export LD_LIBRARY_PATH PATH=/usr/local/pgsql/bin:$PATH export PATH good luck, Phil ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Scalability with large numbers of tables
Thanks to all who've responded to my concern about scalability with large numbers of tables. I am reasonably reassured. I don't really care about the tab-completion speed - it's just that its slowness was my "heads-up" to the fact that there could be something related to number of tables to worry about. Interestingly, however, bash tab completion on the files in the database directory is still instantaneous. The quoted linux-kernel thread about filesystem performance is related to creating and deleting files (the typical "untar" and "mail server" benchmarks) which is not typical of what postgres does - it simply opens them. I was unaware that ext3 had a tree structure - I assumed that its backward compatibility with ext2 made that impossible. So that's another thing I don't need to worry about. Regards, --Phil. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Triggers, again.. ;-)
Greg asked: > > is "DROP TRIGGER" transaction safe?... I mean, could I do: > > > > BEGIN > > DROP TRIGGER category_mv_ut; > > ... insert rows > > ... update materialized view table > > CREATE TRIGGER category_mv_ut AFTER UPDATE ON items > > FOR EACH ROW EXECUTE PROCEDURE update_ut(); > > COMMIT; > > > > .. without other sessions being affected? I replied: > This is exactly what I do to avoid matview bulk-update > performance issues. Greg then asked: > Are there any issues I should know about with this method? Will > other transactions be able to take place during this?... Any blocking > issues?. I've never attempted anything like this and it seems > sort-of scary to me (at the very least, applying an awfully big hammer > to the problem). I am not an expert, but I use this technique. Maybe other users will have some observations. But as I perceive it, the triggers currently in force are recorded in a (system) table somewhere and that table has the same well-behaved transactional semantics as other tables. So, as far as other transactions are concerned, the triggers are unchanged and this is entirely safe. My experience suggests that it is not inefficient. As for locking, my guess is that another transaction that was also trying to create or drop triggers could block especially if it was trying to change the same triggers, but other operations will be fine. It seems less scary when you think of metadata as just being the content of more tables, rather than something special. Hopefully someone will correct me if it is worse than this! --Phil. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Triggers, again.. ;-)
Tom Lane wrote: Phil Endecott <[EMAIL PROTECTED]> writes: It seems less scary when you think of metadata as just being the content of more tables, rather than something special. PG does just fine with handling metadata changes transactionally. However, most operations that affect a table's schema at all will take an exclusive lock on the table, thereby blocking out other operations on the table until the schema-altering operation commits. This could be pretty annoying if you have lots of concurrent activity that needs to keep going --- in particular the proposed approach would lock out access to the underlying table for as long as it takes to update the materialized view, since the DROP TRIGGER would take that exclusive lock and it'd be held till end of transaction. If that's OK then there's nothing wrong with doing it that way. Hi Tom, I was hoping that my positive-sounding message would flush out any problems... I would understand this if I were doing an "ALTER TABLE", for example. But does adding or removing a trigger really count as "schema-altering"? --Phil. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] [OT] Duration between two timestamps
Hi All, In my Java application I need to subtract two java.sql.Timestamp values, and I want to store the result as SQL type "interval". But my insert statement is failing. Does anyone know what Java type I need to use in the insert statement? (It must accept null values.) thanks in advance, Phil ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
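One way to sidestep the Java-side type question is to do the arithmetic or the cast on the server; JDBC binds timestamps and strings (including NULL) without trouble. An untested sketch against a hypothetical events table, with ? as the JDBC placeholder:

  -- hypothetical table for illustration
  create table events (id integer, duration interval);

  -- option 1: bind both timestamps with setTimestamp and subtract on the server
  insert into events (id, duration)
  values (?, cast(? as timestamp) - cast(? as timestamp));

  -- option 2: bind the interval as a string such as '2 days 03:04:05'
  -- (or NULL) and cast it
  insert into events (id, duration)
  values (?, cast(? as interval));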
[GENERAL] Query performance problem
$sql = 'INSERT INTO chartdetails (accountcode, period) VALUES (' . $AccountRow['accountcode'] . ', ' . $PeriodNo . ')'; $InsChartDetails = DB_query($sql,$db); DB_free_result($InsChartDetails); } } /*Now run through each of the new chartdetail records created for each account and update them with the B/Fwd and B/Fwd budget no updates would be required where there were previously no chart details set up ie FirstPeriodPostedTo > 0 */ for ($PeriodNo=$CreateFrom;$PeriodNo<=$CreateTo; $PeriodNo++) { $sql = 'SELECT accountcode, period, actual + bfwd AS cfwd, budget + bfwdbudget AS cfwdbudget FROM chartdetails WHERE period =' . ($PeriodNo - 1); $ChartDetailsCFwd = DB_query($sql,$db); while ($myrow = DB_fetch_array($ChartDetailsCFwd)){ $sql = 'UPDATE chartdetails SET bfwd =' . $myrow['cfwd'] . ', bfwdbudget =' . $myrow['cfwdbudget'] . ' WHERE accountcode = ' . $myrow['accountcode'] . ' AND period >=' . $PeriodNo; $UpdChartDetails = DB_query($sql,$db, '', '', '', false); DB_free_result($UpdChartDetails); } DB_free_result($ChartDetailsCFwd); } } function DB_query ($SQL, &$Conn, $ErrorMessage='', $DebugMessage= '', $Transaction=false, $TrapErrors=true){ global $debug; $result = pg_query($Conn, $SQL); if ($DebugMessage == '') { $DebugMessage = _('The SQL that failed was:'); } //if (DB_error_no($Conn) != 0){ if ( !$result AND $TrapErrors){ prnMsg($ErrorMessage.'' . DB_error_msg($Conn),'error', _('DB ERROR:')); if ($debug==1){ echo '' . $DebugMessage. "$SQL"; } if ($Transaction){ $SQL = 'rollback'; $Result = DB_query($SQL,$Conn); if (DB_error_no($Conn) !=0){ prnMsg(''. _('Error Rolling Back Transaction!!'), '', _('DB DEBUG:') ); } } if ($TrapErrors){ include('includes/footer.inc'); exit; } } return $result; } I am hoping that someone will be able to see an alternative simpler method or suggest a method of indexing the pg tables to optmise the required queries. I would appreciate any help here men. Many thanks in advance -- Phil Daintree webERP Project Admin ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
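The inner count(*)-then-INSERT loop can usually be collapsed into one set-based statement per run, which also avoids a round trip per account and period. An untested sketch using the periods table that appears later in the thread ($CreateFrom/$CreateTo shown here as 1 and 12); the chartdetails primary key, or an index on (accountcode, period), keeps the probe cheap:

  insert into chartdetails (accountcode, period)
  select c.accountcode, p.periodno
  from chartmaster c, periods p
  where p.periodno between 1 and 12
    and not exists (select 1
                    from chartdetails d
                    where d.accountcode = c.accountcode
                      and d.period = p.periodno);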
Re: [GENERAL] Query performance problem
First time I ran it it took 5127.243 ms .. then I did a full vacuum. then ... SQL executed. Total runtime: 33.707 ms I am keen to just have the one lot of code all in the scripts ... so I was pleased when the identical sql also worked on mysql!!! Your SQL-query has been executed successfully (Query took 0.0350 sec) SQL-query : [Edit] [Explain SQL] [Create PHP Code] SELECT chartmaster.accountcode, periods.periodno FROM chartmaster, periods WHERE ( periods.periodno BETWEEN 1 AND 12 ) AND ( chartmaster.accountcode, periods.periodno ) NOT IN (SELECT accountcode, period FROM chartdetails WHERE period BETWEEN 1 AND 12 ) LIMIT 0 , 30 You'll notice the discrepancy on the timings though! Whilst pg is not performing the way mysql does with innodb - it is at least usable this way. I am guessing there is some gremlin with my install - I'll try an upgrade to v 8. Phil On Fri, 18 Mar 2005 14:07, you wrote: > See the syntax for INSERT ... SELECT shown here: > http://www.postgresql.org/docs/8.0/static/sql-insert.html > > Instead of doing a nested loop to INSERT new records, do it like this: > > For ($period = start; $period < end; $period++) > { > INSERT INTO chartdetails (accountcode, period) > SELECT accountcode, $period FROM chartdetails WHERE > (accountcode, $period) NOT IN ( > SELECT accountcode, period FROM chardetails WHERE period = > $period > ); > } > > Or if you have some table that has 1 row for each period (call it > "periods") then you could simply do: > > INSERT INTO chartdetails (accountcode, period) > SELECT accountcode, period FROM accountcode, period > WHERE (period BETWEEN $start AND $end) AND (accountcode, > period) NOT IN ( > SELECT accountcode, period FROM chartdetails WHERE period > BETWEEN $start AND $end > ); > > Note to others: see the legitimate use of an unconstrained CROSS JOIN? > > -- > > Postgres's SELECT count(*) is slow if many records meet the WHERE clause > being used. It looks like you're only using testing for 0 or >0 in your > query, so you could use: > > SELECT EXISTS (SELECT 1 FROM chartdetails WHERE ); > > This will be much faster since with EXISTS, postgres only runs the query > long enough to find out whether even one row would be returned--if so, > it stops. > > Regards, > Paul Tillotson > > Phil Daintree wrote: > >Dear psqlers, > > > >I need your help! > > > >I administer/develop an open source PHP accounting software project > > (webERP) [snip] > > > > > > $ChartAccounts = DB_query('SELECT accountcode FROM chartmaster',$db); > > > > While ($AccountRow = DB_fetch_array($ChartAccounts)){ > > > > for ($PeriodNo=$CreateFrom;$PeriodNo <= $CreateTo;$PeriodNo++) { > > > > echo '' . _('Period Number') . ' ' . $PeriodNo . > > ''; > > > > // Check if there is an chart details record set up > > $sql = 'SELECT count(*) FROM chartdetails > > WHERE > > accountcode='.$AccountRow['accountcode'].' > > AND period=' . $PeriodNo; > > $InsChartDetails = DB_query($sql,$db,'','','',false); > > $CountRows = DB_fetch_row($InsChartDetails); > > $AccountExistsAlready = $CountRows[0]; > > DB_free_result($InsChartDetails); > > if(! $AccountExistsAlready) { > > $sql = 'INSERT INTO chartdetails (accountcode, > > period) > > VALUES (' . $AccountRow['accountcode'] > > . ', > > ' . $PeriodNo . 
')'; > > $InsChartDetails = DB_query($sql,$db); > > DB_free_result($InsChartDetails); > > } > > > > } > > > > /*Now run through each of the new chartdetail records created for each > >account and update them with the B/Fwd and B/Fwd budget no updates would > > be required where there were previously no chart details set up ie > >FirstPeriodPostedTo > 0 */ > > > > for ($PeriodNo=$CreateFrom;$PeriodNo<=$Creat
Re: [GENERAL] Query performance problem
I can also do the same thing without sub-queries - I messed about some more since I was keen to ensure backward compatibility with prior versions of mysql that have left/right joins but no subqueries ... quite a bit quicker still! Query took 0.0037 sec - 1/10th of the sub-query time. SELECT chartmaster.accountcode, periods.periodno FROM chartmaster INNER JOIN periods ON True LEFT JOIN chartdetails ON chartmaster.accountcode = chartdetails.accountcode AND periods.periodno = chartdetails.period WHERE periods.periodno >=1 AND periods.periodno <=63 AND chartdetails.accountcode IS NULL LIMIT 0 , 30 In postgres: SQL executed. Total runtime: 12.241 ms Still this is a third of the time of the sub-query route but 4 times longer than mysql - this must be an install issue? Thanks again for this idea Paul phil On Fri, 18 Mar 2005 14:07, you wrote: > See the syntax for INSERT ... SELECT shown here: > http://www.postgresql.org/docs/8.0/static/sql-insert.html > > Instead of doing a nested loop to INSERT new records, do it like this: > > For ($period = start; $period < end; $period++) > { > INSERT INTO chartdetails (accountcode, period) > SELECT accountcode, $period FROM chartdetails WHERE > (accountcode, $period) NOT IN ( > SELECT accountcode, period FROM chardetails WHERE period = > $period > ); > } > > Or if you have some table that has 1 row for each period (call it > "periods") then you could simply do: > > INSERT INTO chartdetails (accountcode, period) > SELECT accountcode, period FROM accountcode, period > WHERE (period BETWEEN $start AND $end) AND (accountcode, > period) NOT IN ( > SELECT accountcode, period FROM chartdetails WHERE period > BETWEEN $start AND $end > ); > > Note to others: see the legitimate use of an unconstrained CROSS JOIN? > > -- > > Postgres's SELECT count(*) is slow if many records meet the WHERE clause > being used. It looks like you're only using testing for 0 or >0 in your > query, so you could use: > > SELECT EXISTS (SELECT 1 FROM chartdetails WHERE ); > > This will be much faster since with EXISTS, postgres only runs the query > long enough to find out whether even one row would be returned--if so, > it stops. > > Regards, > Paul Tillotson > > Phil Daintree wrote: > >Dear psqlers, > > > >I need your help! > > > >I administer/develop an open source PHP accounting software project > > (webERP) [snip] > > > > > > $ChartAccounts = DB_query('SELECT accountcode FROM chartmaster',$db); > > > > While ($AccountRow = DB_fetch_array($ChartAccounts)){ > > > > for ($PeriodNo=$CreateFrom;$PeriodNo <= $CreateTo;$PeriodNo++) { > > > > echo '' . _('Period Number') . ' ' . $PeriodNo . > > ''; > > > > // Check if there is an chart details record set up > > $sql = 'SELECT count(*) FROM chartdetails > > WHERE > > accountcode='.$AccountRow['accountcode'].' > > AND period=' . $PeriodNo; > > $InsChartDetails = DB_query($sql,$db,'','','',false); > > $CountRows = DB_fetch_row($InsChartDetails); > > $AccountExistsAlready = $CountRows[0]; > > DB_free_result($InsChartDetails); > > if(! $AccountExistsAlready) { > > $sql = 'INSERT INTO chartdetails (accountcode, > > period) > > VALUES (' . $AccountRow['accountcode'] > > . ', > > ' . $PeriodNo . 
')'; > > $InsChartDetails = DB_query($sql,$db); > > DB_free_result($InsChartDetails); > > } > > > > } > > > > /*Now run through each of the new chartdetail records created for each > >account and update them with the B/Fwd and B/Fwd budget no updates would > > be required where there were previously no chart details set up ie > >FirstPeriodPostedTo > 0 */ > > > > for ($PeriodNo=$CreateFrom;$PeriodNo<=$CreateTo; $PeriodNo++) { > > > > $sql = 'SELECT accountcode, > >
[GENERAL] Restore to a database with another name?
Hi All, I want to move my development database (7.2.3) to my production server but the production server database has a different name. What is the procedure for restoring a database pg_dump to another machine with a different database name? Thanks in advance, Phil ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Restore to a database with another name?
Scott Marlowe wrote: On Wed, 2005-03-30 at 14:28, phil campaigne wrote: Hi All, I want to move my development database (7.2.3) to my production server but the production server database has a different name. What is the procedure for restoring a database pg_dump to another machine with a different database name? pg_dump -h source_host source_dbname |psql -h dest_host dest_dbname Hi Scott, Unfortunately my production machine is remote and I must transfer it via FTP. How would that change your suggestion? Thanks, Phil ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Restore to a database with another name?
Lonni J Friedman wrote: On Wed, 30 Mar 2005 15:58:02 -0500, phil campaigne <[EMAIL PROTECTED]> wrote: Scott Marlowe wrote: On Wed, 2005-03-30 at 14:28, phil campaigne wrote: Hi All, I want to move my development database (7.2.3) to my production server but the production server database has a different name. What is the procedure for restoring a database pg_dump to another machine with a different database name? pg_dump -h source_host source_dbname |psql -h dest_host dest_dbname Hi Scott, Unfortunately my production machine is remote and I must transfer it via FTP. How would that change your suggestion? You break it into two separate commands at the pipe, and use pg_restore to import it back in on the target box. Lonni, Before I try that, I should tell you that I used pg_dump to create the backup. I read in the manual that psql client restores text files from pg_dump. Should I still try pg_restore? thanks, Phil ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Postgresql .NET Data Provider for Visual Studio 2005
You can manually add the npgsql dll to the toolbar so that you can use the npgsql ado.net data objects in the same way as ado.net oledb objects. To do this right click on the toolbar. Select Add/Remove Items Browse to and add the npgsql.dll Click ok and the npgsql objects will have been added to your toolbar. Now just drag and drop on the form designer! phil.. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Philippe Lang Sent: 23 May 2005 14:44 To: Howard Cole; PgSql General Subject: Re: [GENERAL] Postgresql .NET Data Provider for Visual Studio 2005 Hi, Is there a way to install the npgsql driver under Visual Studio so it appears as a set of visual controls we can drop on a form, or is it a code library only? -Message d'origine- De : [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] De la part de Howard Cole Envoyé : lundi, 23. mai 2005 15:27 À : 'PgSql General' Objet : Re: [GENERAL] Postgresql .NET Data Provider for Visual Studio 2005 I have been using the npgsql driver on both .NET and Mono for over 6 months now with no problems. Howard Cole www.selestial.com Philippe Lang wrote: >Does anyone successfully use a .Net Data provider for Postgresql? I was >able to find two of them: > > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] CPU-intensive autovacuuming
Dear Postgresql experts, I'm trying to work out why my system spends so much time autovacuuming. It looks like pg_autovacuum wakes up every few minutes and does a query on some system tables, and then spins doing no more queries but burning all available CPU cycles for a a couple of minutes, before sleeping again. I'm logging all queries to syslog, and the only queries that autovacuum seems to run are about 3 selecting from pg_class and some stats tables. They complete in a couple of seconds. I would see VACUUM commands in there as well (wouldn't I?) if it was actually doing anything, but I don't. Since not much is happening on the system I'm not suprised that it decides that nothing needs vacuuming. But instead I think it "spins"; and it's the pg_autovacuum process, not a postmaster, that is taking all the CPU. I wonder if this is because I have a lot of tables (about 50,000 I think - I have one schema per user and each schema a couple of dozen tables). Could it be that there is some code in autovacuum that is O(n^2) in the number of tables, or something like that? Has anyone seen anything like this before? Any debugging suggestions? This is with the Debian package of 7.4.2, and all the default autovacuum settings. Cheers, Phil. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] CPU-intensive autovacuuming
Following up on my own post from last night: > Could it be that there is some code in autovacuum that is O(n^2) in > the number of tables? Browsing the code using webcvs, I have found this: for (j = 0; j < PQntuples(res); j++) { tbl_elem = DLGetHead(dbs->table_list); while (tbl_elem != NULL) { I haven't really tried to understand what is going on in here, but it does look like it is getting the result of the "pg_class join stats" query and then matching it up against its internal list of tables using nested loops, which is undoubtedly O(n^2) in the number of tables. Have I correctly understood what is going on here? --Phil. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] CPU-intensive autovacuuming
Matthew T. O'Connor wrote: Phil Endecott wrote: > Could it be that there is some code in autovacuum that is O(n^2) in > the number of tables? Browsing the code using webcvs, I have found this: for (j = 0; j < PQntuples(res); j++) { tbl_elem = DLGetHead(dbs->table_list); while (tbl_elem != NULL) { Have I correctly understood what is going on here? Indeed you have. I have head a few similar reports but perhaps none as bad as yours. One person put a small sleep value so that it doesn't spin so tight. You could also just up the sleep delay so that it doesn't do this work quite so often. No other quick suggestions. I do wonder why autovacuum is keeping its table list in memory rather than in the database. But given that it is keeping it in memory, I think the real fix is to sort that list (or keep it ordered when building or updating it). It is trivial to also get the query results ordered, and they can then be compared in O(n) time. I notice various other places where there seem to be nested loops, e.g. in the update_table_list function. I'm not sure if they can be fixed by similar means. --Phil. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] CPU-intensive autovacuuming
Matthew T. O'Connor wrote: The integrated version of autovacuum that didn't make the cut before 8.0 avoids this problem since the autovacuum data is stored in the database. What is the status of this? Is it something that will be included in 8.1 or 8.0.n? I might be able to patch the current code but that doesn't seem like a useful thing to do if a better solution will arrive eventually. I am currently running vacuums from a cron job and I think I will be happy with that for the time being. (Incidentally, I have also found that the indexes on my pg_attributes table were taking up over half a gigabyte, which came down to less than 40 megs after reindexing them. Is there a case for having autovacuum also call reindex?) --Phil. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Propogating conditions into a query
Dear All, I have a number of complex views for which the typical use is to select exactly one row by id, e.g. "select * from V where id=nnn". Some of these selects run orders of magnitude faster than others. Looking at the output of "explain analyse" it seems that in the fast cases the "id=nnn" condition is passed down to the lower-level operations, while in the slower cases the entire view is created and then filtered using the condition as a final step. I am trying to narrow down what types of query I can use in the views to avoid the poor performance. Here are a couple of things that I have noticed: - One query had a "distinct on (id)" at the top level. This was only to cope with an obscure case where what is normally a one-to-one join could return multiple rows. Removing the "distinct" and discarding the duplicate rows in the calling code means that the "where id=nnn" is now applied as a condition for an index scan where it previously wasn't, reducing execution time by two orders of magnitude. But I can't see a reason why the "id=nnn" condition couldn't have been used inside the query, even in the presence of the "distinct" clause. - In another case I have a LEFT OUTER JOIN which can be made much faster by instead using a normal JOIN. Unfortunately a normal JOIN doesn't do what I want, but I can't see why the condition is propogated into the JOIN but not the LEFT OUTER JOIN. Here is an outline of the query: D left outer join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn That does index scans on M and G and a merge join to create the complete "M join G" table. On the other hand, if I do D join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn then it does conditional index scans on D.id=nnn and M.b=nnn and a nested loop join returning one row, followed by a conditional index scan on G. This is an order of magnitude faster. I don't think this is a problem with statistics; the row-count estimates are all reasonable. I imagine that the restriction is something missing in the query optimiser. Can I rewrite this query somehow? Is there anything else I can do about it? This is with 7.4.2. Cheers, Phil. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Propogating conditions into a query
Tom Lane wrote: Phil Endecott <[EMAIL PROTECTED]> writes: I have a number of complex views for which the typical use is to select exactly one row by id, e.g. "select * from V where id=nnn". Some of these selects run orders of magnitude faster than others. Looking at the output of "explain analyse" it seems that in the fast cases the "id=nnn" condition is passed down to the lower-level operations, while in the slower cases the entire view is created and then filtered using the condition as a final step. When in doubt, use the source ;-) ... most sorts of things like this are pretty well commented, if you can find the relevant code. Good plan. * 3. If the subquery uses DISTINCT ON, we must not push down any quals that * refer to non-DISTINCT output columns, because that could change the set * of rows returned. OK, so this is why my query with DISTINCT ON was failing. I can fix that. I don't see anything in there about LEFT OUTER JOIN though. Any ideas? --Phil. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Propogating conditions into a query
Tom Lane wrote: Phil Endecott <[EMAIL PROTECTED]> writes: D join (M join G on (M.g=G.id)) on (D.id=M.b) where D.id=nnn A possible workaround is to generate your query like D left join (M join G on (M.g=G.id)) on (D.id=M.b AND M.b=nnn) where D.id=nnn I don't suppose it would work if I did D left join (M join G on (M.g=G.id)) on (D.id=M.b) where (D.id=nnn AND (M.b=nnn or M.b IS NULL)) would it? Otherwise it breaks the view, and makes the calling code rather more messy. --Phil. ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Pushing limit into subqueries of a union
Dear Experts, Here is another "how can I rewrite this to go faster" idea. I have two tables T1 and T2 and a view V that is the UNION ALL of T1 and T2. The tables have an editdate field, and I want to get the n most recently changed rows: select * from V order by editdate desc limit 40; This seems to unconditionally read the whole of T1 and T2, so it is slow. T1 and T2 both have indexes on the editdate attribute, so if I write (select * from T1 order by editdate desc limit 40) union all (select * from T2 order by editdate desc limit 40) order by editdate desc limit 40; I get the same results, about 1000 times faster. I presume that PostgreSQL doesn't try to push the limit clause into the subqueries of a UNION ALL in this way. I believe it is safe, isn't it? Cheers, Phil. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
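Until the planner can push the limit down by itself, one way to keep the hand-optimised form in a single place is a set-returning SQL function, so callers still write one simple query. An untested sketch (8.0-style dollar quoting; it assumes t1 and t2 have identical column lists, as the UNION ALL already requires):

  create or replace function recent_messages(n integer) returns setof t1 as $$
    (select * from t1 order by editdate desc limit $1)
    union all
    (select * from t2 order by editdate desc limit $1)
    order by editdate desc
    limit $1;
  $$ language sql stable;

  select * from recent_messages(40);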
[GENERAL] Megabytes of stats saved after every connection
Dear Postgresql experts, For some time I had been trying to work out why every connection to my database resulted in several megabytes of data being written to the disk, however trivial the query. I think I've found the culprit: global/pgstat.stat. This is with 7.4.7. This is for a web application which uses a new connection for each CGI request. The server doesn't have a particularly high disk bandwidth and this mysterious activity had been the bottleneck for some time. The system is a little unusual as one of the databases has tens of thousands of tables (though I saw these writes whichever database I connected to). Looking at the output of vmstat I could see about 2.7Mbytes being written up to about 5 seconds after the query was processed. I was scratching my head about this for a long time, but today I noticed that this size was just a little larger than my global/pgstat.stat file. So I turned off stats_start_collector and stats_row_level and the writes vanished. Turning them back on, the pgstat.stat file is much smaller (10k) and the writes are invisible against the background noise. So can I expect this file to grow again? I think I need the stats, though I'm not entirely sure about that. Was the entire file re-written, even when the only query I've run is "select 1"? Is this necessary? Any comments or suggestions gratefully received. --Phil. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Megabytes of stats saved after every connection
Hello again, Just to give a bit of background, in case it is useful: this is my family tree website, treefic.com. I have a schema for each user, each with about a dozen tables. In most cases the tables are small, i.e. tens of entries, but the users I care about are the ones with tens of thousands of people in their trees. The schemas are independent of each other. Example web page: http://treefic.com/treefic/royal92 Jan Wieck <[EMAIL PROTECTED]> writes: PostgreSQL itself doesn't work too well with tens of thousands of tables. I've specifically asked about this here before. This is obviously important for my application so I invite all readers to share any thoughts they might have about possible problems with large numbers of tables. I also create and drop large numbers of temporary tables - can anyone think of any additional problems with that? Issues I have discussed here before include tab-completion in psql (unimportant) and autovacuum's O(n^2) performance (important). Okay, I should be more specific. The problem with tens of thousands of tables does not exist just because of them being there. It will emerge if all those tables are actually used because it will mean that you'd need all the pg_class and pg_attribute rows cached and also your vfd cache will constantly rotate. If many trees are being viewed simultaneously, another part of the system will be the bottleneck. Within any, say, 5 minute period, only hundreds of tables will be in use. Then again, the stats file is only written. There is nothing that actually forces the blocks out. On a busy system, one individual stats file will be created, written to, renamed, live for 500ms and be thrown away by the next stat files rename operation. I would assume that with a decent filesystem and appropriate OS buffers, none of the data blocks of most stat files even hit the disk. I must be missing something. This is possibly true --- Phil, do you see actual disk I/O happening from the stats writes, or is it just kernel calls? During my tests the system was idle; I would run "psql -c 'select 1;'" and see the blocks in vmstat's "bo" column a couple of seconds later. As I understand it that indicates actual I/O, and the delay suggests that it is being flushed by the kernel. When the system is busy it is harder to see what is going on and it is possible that at least some of this activity was not being written to the disk. Typically I would see a lot more write bandwidth than read bandwidth (by a factor of 5 or so) according to vmstat; any advice about how to identify what files or processes are involved would be appreciated. I had previously imagined that it could be temporary tables. This is Linux 2.4.26 and an ext3 filesystem. Having disabled stats earlier my stats file is still quite small. Presumably it will gradually grow back. In the meantime I cannot do any experiments. Thanks as ever for your prompt responses. Regards, --Phil. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Megabytes of stats saved after every connection
Scott Marlowe wrote: Yeah, I found these three facets of the OP's system a bit disconcerting: QUOTE --- This is for a web application which uses a new connection for each CGI request. The server doesn't have a particularly high disk bandwidth and this mysterious activity had been the bottleneck for some time. The system is a little unusual as one of the databases has tens of thousands of tables. ENDQUOTE --- Any two of those choices could cause some issues, but all three together are pretty much a death knell for performance, whether or not the global/pgstat file is being written or not. See my previous message for some background about the application and an example URL. When PostgreSQL is running smoothly, it is not the bottleneck in the system: all it has to do is read maybe 100k from the disk (or more likely the cache), do some in-memory sorts and joins, and pass it to the rest of the application. As far as I can see it is only because some parts of PostgreSQL have poor O(num tables) performance that things are slowing down. --Phil. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Megabytes of stats saved after every connection
>> This is Linux 2.4.26 and an ext3 filesystem. > With the dir_index feature or without? With, I believe. It is enabled in the superblock (tune2fs -O dir_index) but this was not done when the filesystem was created so only new directories are indexed I think. I don't think there's a way to index an existing directory on a mounted filesystem, or to tell if a particular directory is indexed. I created new directories for my postgres data and moved the files into them in the hope that they would then have indexes, but am not sure how to check. In any case, this does not seem to be a bottleneck. --Phil. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Megabytes of stats saved after every connection
Greg Stark wrote: Phil Endecott wrote: Just to give a bit of background, in case it is useful: this is my family tree website, treefic.com. I have a schema for each user, each with about a dozen tables. In most cases the tables are small, i.e. tens of entries, but the users I care about are the ones with tens of thousands of people in their trees. The schemas are independent of each other. Example web page: I would strongly suggest you reconsider this design altogether. A normal (and normalized) design would have a users table that assigns a sequential id to each user. Then every other table would combine everybody's data but have a user id column to indicate which user that row belonged to. If you don't believe there's anything wrong with your current system, consider what it would look like to query your existing schema to find out the answer to the question "how many users have > 1000 people in their tree". Or "how many users have updated their tree in the last 7 days". Those aren't questions that I need to answer often. The sort of question I do need to answer is this: starting from individual X, find all the ancestors and descendants for n generations. This involves n iterations of a loop, joining the relatives found so far with the next generation. If there are p people in the tree this has something like O(n log p) complexity. On the other hand, if I stored all users' data in the same tables and I had u users, this operation would have O(n log (u*p)) complexity. My guess is that it would be about an order of magnitude slower. The individual users' sites are entirely disjoint - there are no queries that overlap them. --Phil. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
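For reference, the generation-by-generation walk can be expressed as one recursive query on releases that have it (WITH RECURSIVE arrived in 8.4); a sketch against a hypothetical parent(child_id, parent_id) table, starting from person 123 and stopping after 10 generations:

  with recursive ancestors(person_id, depth) as (
      select p.parent_id, 1
      from parent p
      where p.child_id = 123
    union all
      select p.parent_id, a.depth + 1
      from parent p
      join ancestors a on p.child_id = a.person_id
      where a.depth < 10
  )
  select distinct person_id from ancestors;

The depth bound keeps the recursion finite even if the data contains a cycle; descendants are the same query with the join direction reversed.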
Re: [GENERAL] Megabytes of stats saved after every connection
Greg Stark wrote: The sort of question I do need to answer is this: starting from individual X, find all the ancestors and descendants for n generations. This involves n iterations of a loop, joining the relatives found so far with the next generation. If there are p people in the tree this has something like O(n log p) complexity. On the other hand, if I stored all users' data in the same tables and I had u users, this operation would have O(n log (u*p)) complexity. My guess is that it would be about an order of magnitude slower. You're omitting the time spent finding the actual table for the correct user in your current scheme. That's exactly the same as the log(u) factor above. I hope not - can anyone confirm? I have the impression that within a plpgsql function, the table lookup cost happens once, and subsequent accesses to the same table are cheap. In fact this characteristic has caused problems for me in the past, see http://archives.postgresql.org/pgsql-general/2004-09/msg00316.php I hope that the same is true of PQexecPrepared - can anyone confirm? You might be interested in the ltree contrib module and gist indexes. You might be able to do this recursive algorithm in a single indexed non-recursive query using them. I could use something like "CONNECT BY", though last time I investigated I believe there were some stability concerns with the patch. Unfortunately genealogies are not trees in anything other than the informal sense of the word, so I don't think ltree is applicable. The individual users' sites are entirely disjoint - there are no queries that overlap them. If you had a more flexible design you might find that you have a wealth of data that you're currently not able to see because your design hides it. I have a wealth of data that the majority of my users want me to keep private. There are other sites that try to match up peoples' genealogies, and I'm not competing with them. Thanks for your suggestions Greg, but I think I know what I'm doing. The Postgresql core copes well with this setup. It's just peripheral things, like autovacuum and this stats writing issue, where poor big-O complexity had gone un-noticed. --Phil. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] psql: server closed the connection unexpectedly
Hi, I can't connect to a new installation of PostgreSQL 8.1 on a XP SP1 laptop. Im trying to connect using psql in the laptop's command prompt. i keep getting the following error message: psql: server closed the connection unexplectedly this probably means the server terminated abnormally before or while processing the request I can start & stop the postgresql service with no problems. and if i try to connect while postgresql is not running i get the expected error: psql: could not connect to server: Connection refused (0x274D/10061) Is the server running on host "???" and accepting TCP/IP connections on port 5432? the laptop has a trendmicro firewall installed, but this is set to allow all connections from the localhost. plus i have other XP machines with the same setup and postgresql installation working perfectly. has anyone else ever come across the problem? or any suggestions as to what the problem may be? thanks in advance phil.. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] psql: server closed the connection unexpectedly
Sean Davis wrote: On 8/24/05 7:24 AM, "Phil Thornhill" <[EMAIL PROTECTED]> wrote: Hi, I can't connect to a new installation of PostgreSQL 8.1 on a XP SP1 laptop. Im trying to connect using psql in the laptop's command prompt. i keep getting the following error message: psql: server closed the connection unexplectedly this probably means the server terminated abnormally before or while processing the request I can start & stop the postgresql service with no problems. and if i try to connect while postgresql is not running i get the expected error: psql: could not connect to server: Connection refused (0x274D/10061) Is the server running on host "???" and accepting TCP/IP connections on port 5432? the laptop has a trendmicro firewall installed, but this is set to allow all connections from the localhost. plus i have other XP machines with the same setup and postgresql installation working perfectly. has anyone else ever come across the problem? or any suggestions as to what the problem may be? I'm not sure what the problem is, but have you looked at the log file? Also, you ARE running 8.1 (which isn't even in Beta testing yet)? Sean my apologies i'm using 8.0.1 I have had a look at the XP event Viewer , but that only has entries regarding the installation process. where else should i be looking and should i be changing anything in the postgresql.conf file to ensure connection attempts are logged? thanks phil..
[GENERAL] Installation Problem
Hi All, I'm trying ot install postgresql 8.0.4 on suse 9.0. I can run ./configure but when I try to run make, the program cannot find a usable c compiler. I tried ./configure CC=/usr/lib/gcc-lib/i586-suse-linux/3.3.3/cc1 but get the error: cannot run c compiled programs. I tried to downlaod and install GCC.3.3.4 but the error message says it needs a c compiler Any ideas on how I might proceed? thanks, Phil ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Optimising "full outer join where" for multi-row to multi-column view
> and photo_id=1201; I would like it to optimise away the unused joins, leaving:

photos=> explain analyse select e1.value as orientation
photos-> from exif e1
photos-> where e1.tag='Orientation'
photos-> and photo_id=1201;

Is there a good reason why it cannot do this? Remember that photo_id and tag are the primary key for exif, so the view cannot have more than one row per photo_id. This is not the first problem I have had with non-trivial views, where PostgreSQL has failed to simplify a query on the view as I had hoped it would, either because the semantics of SQL mean that it is unable to (with no way of describing the additional constraints that apply to that data and could make the simplification possible), or because the query optimiser doesn't detect a possible optimisation. Back in 7.4 days I resorted to materialised views updated using triggers, which involved enormously more work than a normal view. Has anything been done, or is anything planned, to make this easier? I imagine that the basic problem here, projecting data from multiple rows into a single row with multiple columns, is a fairly common one. Are there any tricks to doing this that anyone can share? I'm using PostgreSQL 8.1. Many thanks, Phil. (You are welcome to CC: me in any replies.) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
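One common way to attack the many-rows-to-many-columns problem is to build the view from grouped conditional aggregates instead of one join or scalar subselect per tag; photo_id is then a grouping column, so a photo_id=1201 qual has a better chance of being pushed into the aggregation (worth confirming with EXPLAIN). An untested sketch, assuming exif(photo_id, tag, value), with the second tag name purely illustrative:

  create or replace view photo_exif as
  select photo_id,
         max(case when tag = 'Orientation'      then value end) as orientation,
         max(case when tag = 'DateTimeOriginal' then value end) as date_time_original
  from exif
  group by photo_id;

  select orientation from photo_exif where photo_id = 1201;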
[GENERAL] "explain analyse" much slower than actual query
Dear All, I want to find all of the msg_ids from "messages" that are not in table "part_tsearch", where both tables are large but the result of the query is normally small, and often empty. To 'protect' the application against the unusual case where the result of the query is large I have added a limit clause: select msg_id from messages where msg_id not in (select msg_id from part_tsearch) limit 10; Currently there are about 30,000 rows in each table and about 500 rows in the result of the (un-limit-ed) query. So it is unusual in the sense that the size of the result is relatively large and the limit clause will take effect. Both tables are indexed by msg_id. This was taking longer than I expected, so I decided to explain-analyse it. However, when I run it under explain-analyse, it takes even longer than before: decimail=> select msg_id from messages where msg_id not in (select msg_id from part_tsearch) limit 10; msg_id 67894 67809 52548 67745 67538 67540 67329 67246 67235 67140 (10 rows) (that takes about 2 seconds) decimail=> explain analyse select msg_id from messages where msg_id not in (select msg_id from part_tsearch) limit 10; QUERY PLAN Limit (cost=4301.99..10534.34 rows=10 width=4) (actual time=6677.791..72417.068 rows=10 loops=1) -> Seq Scan on messages (cost=4301.99..11966058.86 rows=19193 width=4) (actual time=6677.725..72416.427 rows=10 loops=1) Filter: (NOT (subplan)) SubPlan -> Materialize (cost=4301.99..4830.07 rows=37908 width=4) (actual time=0.097..39.494 rows=862 loops=903) -> Seq Scan on part_tsearch (cost=0.00..4115.08 rows=37908 width=4) (actual time=0.104..1679.258 rows=37901 loops=1) Total runtime: 72424.066 ms (7 rows) As I increase the limit the runtime increases as follows: limit normal runtime explain-anlyse runtime 10 277 20 5309 40 12 807 80 88 160 149 320 1016 I was not patient enough to wait for the remaining explain-analyse results, but I feel that there is a linear slowdown of about 60x between the raw query and the explain-analyse version. In general, for other queries, explain-analyse reports runtimes that agree with the runtimes of the actual queries. But the peculiar behaviour of explain-analyse is really a distraction from the fact that the query is slow, especially when the limit value is large. The system seems to be largely CPU-bound during these long run-times. The query plan reported by explain-analyse is the same in each case. How many times is it actually doing the seq-scan on part_tsearch? I see that the rows value reported for "Materialize" is rather different in the planned and actual numbers. What is this telling me? I analysed the tables immediately before starting on this. I was hoping that it would be implemented using some sort of index scan on the two tables, maybe something involing bitmaps. Is there something that I can do to the query, or to my indexes, to make this happen? I tried using "except" rather than "not in "; this isn't exactly identical as "except" is required to return sorted results, which I don't need; when the limit clause is in effect I don't care which of the possible ids are returned and I never care about the order. In this case the runtimes are as follows: limit normal runtime explain-anlyse runtime 10 15 54 20 15 55 40 19 94 80 955 160 20 68 320 10 70 Note that again explain-analyse is slower than the normal runtime, but now by a factor of about 5 rather than the previous factor of about 60. 
Basically, the query runtimes are now essentially constant; I imagine that they would be flatter if the machine wasn't also running other processes. The query plan shows that it is doing a sequential pass over each of the tables, sorting and then doing a set-op - exactly as expected and OK for large limits, but not for small limits. I feel that it should be possible to do this in miliseconds, not seconds, using the existing indexes. This is with Postgresql 8.1 on Debian GNU/Linux. (BTW, this is for Decimail, my postgresql-based IMAP mail server. I have recently added tsearch2-based searching. It is still somewhat experimental but has never lost any mail. If you're interested, have a look at http://decimail.org/.) Many thanks for any advice, Phil. (You are welcome to CC: me in any replies) --
Re: [GENERAL] "explain analyse" much slower than actual query
Thanks for the quick reply Tom. Tom Lane wrote: >"Phil Endecott" >writes: >> I was not patient enough to wait for the remaining explain-analyse results, >> but I feel that there is a linear slowdown of about 60x between the raw >> query and the explain-analyse version. > > Slow gettimeofday() ... fairly common on desktop-grade PC hardware :-(. It's actually a virtual machine, and I seem to recall reading something about the virtualised gettimeofday() being slow. OK, that explains it. Thanks. >> But the peculiar behaviour of explain-analyse is really a distraction from >> the fact that the query is slow, especially when the limit value is large. > > You need a "hashed subplan" for NOT IN to work reasonably fast. The > fact you're not getting one suggests you either have to raise work_mem, > or you're using some weird datatype that doesn't support hashing. It's an int, and yes, increasing work_mem makes it use a hashed subplan: QUERY PLAN -- Limit (cost=4209.76..4213.61 rows=10 width=4) (actual time=5432.840..5461.518 rows=10 loops=1) -> Seq Scan on messages (cost=4209.76..11608.23 rows=19218 width=4) (actual time=5432.776..5460.859 rows=10 loops=1) Filter: (NOT (hashed subplan)) SubPlan -> Seq Scan on part_tsearch (cost=0.00..4115.01 rows=37901 width=4) (actual time=0.390..2984.783 rows=37907 loops=1) Total runtime: 5468.817 ms So presumably work_mem must be greater than some function of the size of the table in the subquery. Is there some way to work that out? This (virtual) machine doesn't have an enormous amount of RAM so I like to keep settings like this "as high as necessary but no higher". If I understand it correctly, it is still doing a sequential scan on part_tsearch that does not terminate early due to the limit clause. So I'm still seeing run times that are rather worse than I think should be possible. Can it not step through the indexes in the way that it does for a Merge Join until it has got enough results to satisfy the limit, and then terminate? Thanks, Phil. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
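For comparison, the same question can be phrased as an anti-join, which avoids the NOT IN subplan and its work_mem sensitivity entirely; assuming part_tsearch.msg_id is never NULL these return the same rows, and with the msg_id indexes either form may be able to stop early under the LIMIT (untested, worth checking with EXPLAIN ANALYZE):

  select m.msg_id
  from messages m
  where not exists (select 1 from part_tsearch p where p.msg_id = m.msg_id)
  limit 10;

  -- or the outer-join formulation
  select m.msg_id
  from messages m
  left join part_tsearch p on p.msg_id = m.msg_id
  where p.msg_id is null
  limit 10;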
[GENERAL] "no value found for parameter 1" error for query with no parameters
Dear All, I have recently been converting some old code to use $n parameter placeholders, rather than inlining the values in the query string. It has mostly gone well, but I have encountered some odd behaviour with a temporary view. I'm using libpq to talk to an 8.1.0 server. This is the old code which works: create or replace temporary view u_messages as select * from messsages where owner='phil'; select msg_id from u_messages where msgdate>'2007-02-11 21:36:43.886004+00'::timestamp with time zone - '1 day'::interval order by msg_id; I converted the first query like this: create or replace temporary view u_messages as select * from messsages where owner=$1; I run this with "phil" as the value for $1. This completes without error. Then I run the second, unchanged, query. It does not have any $n placeholders, but I get an error: ERROR: no value found for parameter 1 So I think it is complaining that I have not supplied enough parameters even though none are needed. I have used strace to look at the communication with the backend: send(4, "P\0\0\0\223\0select msg_id from u_messages where msgdate > \'2007-02-11 21:36:43.886004+00\'::timestamp with time zone - \'1 day\'::interval order by msg_id\0\0\0B\0\0\0\16\0\0\0\0\0\0\0\1\0\1D\0\0\0\6P\0E\0\0\0\t\0\0\0\0\0S\0\0\0\4", 185, 0) = 185 Which I have broken down as follows: P \0\0\0\223 \0 select msg_id from u_messages where msgdate > \'2007-02-11 21:36:43.886004+00\'::timestamp with time zone - \'1 day\'::interval order by msg_id\0 \0\0 B \0\0\0\16 \0 \0 \0\0 \0\0 \0\1 \0\1 D \0\0\0\6 P \0 E \0\0\0\t \0 \0\0\0\0 S \0\0\0\4 That looks OK. recv(4, "1\0\0\0\0042\0\0\0\4T\0\0\0\37\0\1msg_id\0\0\2\20\224\0\1\0\0\0\27\0\4 \377\377\377\377\0\1E\0\0\0OSERROR\0C42704\0Mno value found for parameter 1\0 Fparams.c\0L120\0RlookupParam\0\0", 16384, 0) = 122 1 \0\0\0\004 (Parse complete) 2 \0\0\0\4 (Bind complete) T \0\0\0\37 \0\1 msg_id\0 \0\2\20\224 \0\1 \0\0\0\27 \0\4 \377\377\377\377 \0\1 E \0\0\0O S ERROR\0 C 42704\0 M no value found for parameter 1\0 F params.c\0 L 120\0 R lookupParam\0 \0 Also OK I think, apart from the unexpected error. Is it normal to get the 'T' line before the error? For completeness here is the earlier communication to create the temporary view: P \0\0\0` \0 create or replace temporary view u_messages as select * from messages where owner=$1\0 \0\1 \0\0\0\31 B \0\0\0\30 \0 \0 \0\1 \0\0 \0\1 \0\0\0\4 phil \0\1 \0\1 D \0\0\0\6 P \0 E \0\0\0\t \0 \0\0\0\0 S \0\0\0\4 and the response: 1 \0\0\0\004 2 \0\0\0\4 n \0\0\0\4 C \0\0\0\20 CREATE VIEW\0 Z \0\0\0\5 I It's as if the server has been confused by the presence of the $1 in the view query and has tried to substitute a value for it from those supplied when the view is used. I know next to nothing about the internals but I could imagine something like this happening since it must expand the query to include the view at some point. Maybe I'm not allowed to use $n in "create view" queries at all? Unfortunately there must be a bit more to it than this as the simple test case that I have prepared does not fail. I will investigate further. But I thought I'd post now in case anyone is aware of any known issues in this area, or has any suggestions. Unrelated to this, it used to be true that recreating temporary tables (and presumably views) would confuse stored procedures (and presumably prepared statements) that had cached some sort of pointer to the old instance of the table (or view). 
See for example this thread: http://archives.postgresql.org/pgsql-general/2004-09/msg00316.php Is this still true? (This isn't the cause of the problems above as it doesn't get anywhere near running any queries for a second time, but I will need to re-visit this issue.) Cheers, Phil. (You are welcome to CC: me in any replies, so I'll see them more quickly.) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] getting postgres to emulate mysql/sqlserver bit datatype
Is there any way to force pg to accept 1 and 0 for boolean? There is something called "create cast ... without function" which /might/ do what you want. Phil. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
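Roughly what that would look like, as an untested sketch: integer and boolean are not binary-compatible, so WITHOUT FUNCTION is not safe here and a small helper function is used instead. It needs superuser rights (the cast involves built-in types) and only applies if your release doesn't already define such a cast:

  create function int_to_bool(integer) returns boolean as
    'select $1 <> 0'
    language sql immutable strict;

  create cast (integer as boolean)
    with function int_to_bool(integer)
    as assignment;

  -- AS ASSIGNMENT makes inserts/updates such as "insert into t (flag) values (1)"
  -- work; AS IMPLICIT would apply it in general expressions too, which is riskier.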
[GENERAL] Re: "no value found for parameter 1" error for query with no parameters
Peter Eisentraut wrote: Phil Endecott wrote: I converted the first query like this: create or replace temporary view u_messages as select * from messsages where owner=$1; What makes you think this should work? Hi Peter, thanks for the quick reply. Well this page: http://www.postgresql.org/docs/8.1/static/libpq-exec.html starts by saying "the functions described here are used to perform SQL queries and commands" and continues "PQexecParams is like PQexec but offers additional functionality" I don't see any suggestion that there are only some commands that can use PQexecParams while others are limited to using PQexec. But rather than discus the documentation, can we first establish for certain whether I may use $n in a create view command, or whether I have to go back to converting from C types to strings and escaping them? It's a shame if I do have to do that, as changing over to binary parameters had simplified a lot of my code. Regards, Phil. (You are welcome to CC: me in any replies.) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Still unclear about PQexecParams and "create view"
Dear All, I'm still unsure about whether I am allowed to use PQexecParams to execute a create view command with $n parameters; reading between the lines of Peter Eisentraut's message I get the impression that this isn't going to work. So: - Can anyone offer a list of which commands are compatible with PQexecParams and which aren't? - Is this likely to change any time soon? - Would you agree that not giving an error message immediately is a bug? - Since I have now changed all my code to use PQexecParams and binary parameters, I think it is easiest for me to put a wrapper around it that substitutes $ parameters before passing them to PQexec. I'll post the code when I have it working - but maybe someone has already done this? Cheers, Phil. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
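As far as I can tell, $n parameters from the extended protocol are only substituted in plannable statements (SELECT, INSERT, UPDATE, DELETE); a utility command such as CREATE VIEW stores the bare parameter reference, which is why the error only surfaces when the view is used. One server-side workaround is to build the utility command with quote_literal inside a plpgsql function, which itself can be called through PQexecParams; an untested sketch:

  create or replace function make_u_messages(p_owner text) returns void as $$
  begin
    execute 'create or replace temporary view u_messages as '
         || 'select * from messages where owner = '
         || quote_literal(p_owner);
  end;
  $$ language plpgsql;

  select make_u_messages($1);  -- bind 'phil' as the parameter here instead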
[GENERAL] Yet another PostgreSQL C++ binding
Dear All, I've written a C++ PostgreSQL interface library which I use in a couple of open-source applications, and I thought that I would mention it here in case it could be of use to anyone. Yes, I know there are already several such libraries, but I believe mine has a unique feature: queries are functors that use prepared statements. Here's an example:

Database db("dbname=foo, username=blah");

Query insert_thing(db,"insert into things(name,num) values ($1,$2)");
SingletonQuery count_things(db,"select sum(num) from things where name=$1");

Transaction t(db);
insert_thing("table",1);
insert_thing("chair",4);
int n = count_things("bed");
t.commit();

Note how the queries are declared using the $n placeholder syntax for parameters. Once the queries are declared, the fact that they are queries can be almost forgotten - they can be used as functions. I have written up some basic documentation here: http://svn.chezphil.org/libpbe/trunk/doc/Database Do let me know if you find this useful. Regards, Phil. (I encourage you to Cc: me in any replies.) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Approximate join on timestamps
Dear Experts, I have two tables containing chronological data, and I want to join them using the timestamps. The challenge is that the timestamps only match approximately. My first attempt was something like

t1 join t2 on (abs(t1.t-t2.t)<'1 min'::interval)

Of course there is no "abs" for intervals, and I couldn't think of anything better than this:

t1 join t2 on (t1.t-t2.t<'1 min'::interval and t2.t-t1.t<'1 min'::interval)

What indexes could I add to make this moderately efficient? But that query isn't really good enough. There is no single "epsilon" value that works for this data set. I really want to find the closest match. I feel that it ought to be possible to step through the two tables in timestamp order, matching up elements. Is there any way to express this in SQL? (One detail is that the left table has fewer rows than the right table, and I want one output row for each row in the left table.) Many thanks for any suggestions. Phil. (You are welcome to CC: me in any replies.) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
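[Editorial note: one way to get exactly one output row per left-table row, matched to the nearest right-table timestamp, is a correlated subquery ordered by the absolute difference. This is a sketch, not from the thread; the column names t1(id, t) and t2(id, t) are assumptions.]

-- For each t1 row, pick the t2 row whose timestamp is closest; the CASE
-- stands in for the missing abs() on intervals.
SELECT t1.id AS left_id,
       (SELECT t2.id
          FROM t2
         ORDER BY CASE WHEN t2.t >= t1.t THEN t2.t - t1.t
                       ELSE t1.t - t2.t END
         LIMIT 1) AS nearest_right_id
  FROM t1;

-- This rescans t2 for every t1 row. With an index on t2(t), a faster variant
-- takes the closest row above and the closest row below t1.t (two LIMIT 1
-- index scans) and keeps whichever of the two is nearer.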
[GENERAL] pg_restore -l and schema name
Dear Postgresql people, For the first time I'm trying to use pg_restore to do a partial restore. It looks as if it should be easy: pg_restore -l, filter out the required tables, then pg_restore -L. But unfortunately the listing generated by pg_restore doesn't include the schema name! Since my database consists of lots of almost-identical schemas, I'm a bit stuck. So this message is really a feature request for "please include the schema name in the pg_restore -l output". Also, I'd be interested to hear from anyone who has faced this problem before and can suggest how to work around it. Thanks in advance for any help. Regards, --Phil. p.s. I'm logging long-running statements to syslog, and I'd also love to see schema names there. But that's a less important problem. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] Analyse - max_locks_per_transaction - why?
Dear PostgreSQL experts, This is with version 7.4.2. My database has grown a bit recently, mostly in number of tables but also their size, and I started to see ANALYSE failing with this message:

WARNING: out of shared memory
ERROR: out of shared memory
HINT: You may need to increase max_locks_per_transaction.

So I increased max_locks_per_transaction from 64 to 200 and, after doing an /etc/init.d/postgresql restart rather than an /etc/init.d/postgresql reload, it seems to work again. Naively I imagined that ANALYSE looks at each table in turn, independently. So why does it need more locks when there are more tables? Isn't "ANALYSE" with no parameter equivalent to

for i in all_tables_in_database { ANALYSE i; }

I'm working in a memory-poor environment (a user-mode-linux virtual machine) and I'm a bit concerned about the memory overhead if I have to keep increasing max_locks_per_transaction just to keep ANALYSE happy. As an aside, what I really need in this particular case is to analyse all of the tables in a particular schema. Having "ANALYSE schemaname" or "ANALYSE schemaname.*" would be great. I presume that I can write a function to get the same effect - has anyone already done that? Regards, Phil Endecott. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
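[Editorial note: a per-schema analyse can be scripted with a small PL/pgSQL function, shown here as a sketch (the function name is made up, and PL/pgSQL is assumed to be installed). Note that all the ANALYZEs issued by one function call run in a single transaction, so their locks still accumulate - but only over the one schema's tables rather than the whole database.]

-- "ANALYSE one schema" helper, written with old-style quoting for 7.4.
CREATE OR REPLACE FUNCTION analyse_schema(text) RETURNS integer AS '
DECLARE
  r RECORD;
  n integer := 0;
BEGIN
  FOR r IN SELECT schemaname, tablename FROM pg_tables WHERE schemaname = $1 LOOP
    EXECUTE ''ANALYZE '' || quote_ident(r.schemaname) || ''.'' || quote_ident(r.tablename);
    n := n + 1;
  END LOOP;
  RETURN n;  -- number of tables analysed
END;
' LANGUAGE plpgsql;

-- Usage: SELECT analyse_schema('someuser');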
Re: [GENERAL] Analyse - max_locks_per_transaction - why?
I asked:
> Naively I imagined that ANALYSE looks at each table in turn, independently. So why does it need more locks when there are more tables?

Tom replied:
> 7.4 runs a database-wide ANALYZE as a single transaction, so the locks accumulate. This was recognized to be a bad idea :-(. 8.0 is a bit smarter.

Thanks Tom. I will upgrade to 8.0 one day but not soon. In the meantime, is there a way to judge a sufficient setting for max_locks_per_transaction so that a global ANALYZE will work? It doesn't seem to be one lock per table or anything as simple as that.

> The best bet in 7.4 is probably to use VACUUM ANALYZE rather than
> analyzing separately. That will force it to use a transaction per
> table.

That's another possibility. Thanks. --Phil. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] null value of type java.sql.Time
Occasionally I want to store a null value for my java.sql.Time -> time column in Postgresql.

update event set game_clock=null where event_id=1;

I can retrieve the record with the null value (type Time) if I select on the primary key,

select game_clock from event where event_id = 1;

but when I try to select on the null column value, I get zero records.

select * from event where game_clock=null;

How can I retrieve records with null values for a column? thanks, Phil ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
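[Editorial note: for reference, the comparison game_clock = NULL never yields true (it yields NULL), so it matches no rows; the IS NULL predicate is what finds them. A one-line example against the event table from the question:]

-- NULL never compares equal with =, so use IS NULL instead:
SELECT * FROM event WHERE game_clock IS NULL;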
[GENERAL] null value of type java.sql.Time
Thanks for the insight Thomas and Gregory. I think I can make a work around. Phil ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] Temporary tables and disk activity
Dear Postgresql experts, Looking at vmstat output on my database server I have been surprised to see lots of disk writes going on while it is doing what should be exclusively read-only transactions. I see almost no disk reads as the database concerned is small enough to fit into the OS disk cache. I suspect that it might be something to do with temporary tables. There are a couple of places where I create temporary tables to "optimise" queries by factoring out what would otherwise be duplicate work. The amount of data being written is of the right order of magnitude for this to be the cause. I fear that perhaps Postgresql is flushing these tables to disk, even though they will be dropped before the end of the transaction. Is this a possibility? What issues should I be aware of with temporary tables? Are there any other common causes of lots of disk writes within read-only transactions? Is there any debug output that I can look at to track this down? Thanks in advance for any help that you can offer. Regards, Phil Endecott. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Temporary tables and disk activity
Dear All, I sent a message last weekend asking about temporary tables being written to disk but didn't get any replies. I'm sure there is someone out there who knows something about this - please help! Here is the question again: Looking at vmstat output on my database server I have been surprised to see lots of disk writes going on while it is doing what should be exclusively read-only transactions. I see almost no disk reads as the database concerned is small enough to fit into the OS disk cache. I suspect that it might be something to do with temporary tables. There are a couple of places where I create temporary tables to "optimise" queries by factoring out what would otherwise be duplicate work. The amount of data being written is of the right order of magnitude for this to be the cause. I fear that perhaps Postgresql is flushing these tables to disk, even though they will be dropped before the end of the transaction. Is this a possibility? What issues should I be aware of with temporary tables? Are there any other common causes of lots of disk writes within read-only transactions? Is there any debug output that I can look at to track this down? Thanks in advance for any help that you can offer. Regards, Phil Endecott. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Temporary tables and disk activity
Tom Lane wrote:
> They [temporary tables] will be written out from PG's internal buffers, but IIRC they will never be fsync'd, and they definitely aren't WAL-logged. (These statements hold true in 8.0, but not sure how far back.) In principle, therefore, the kernel could hold temp table data in its own disk buffers and never write it out to disk until the file is deleted. In practice, of course, the kernel doesn't know the data is transient and will probably push it out whenever it has nothing else to do.

That makes sense. I suspect that I am seeing writes every 5 seconds, which looks like bdflush / update. But my connections normally only last for a second at most. In this case, surely the table would normally have been deleted before the kernel decided to write anything. This is with 7.4.2 on linux 2.4.26. Does anyone have any experience with this type of situation? Is there any kernel-tweaking I can play with? Regards, Phil. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Temporary tables and disk activity
Hi Tom, I thought of a quicker way to investigate this than strace and did an ls -lt in the data directory and looked up the tables that seem to change on every transaction in pg_class. They are the catalog tables:

# ls -lt /var/lib/postgres/data/base/17142/
total 530108
-rw-------    1 postgres postgres  6488064 Dec 13 18:44 1259
-rw-------    1 postgres postgres  3670016 Dec 13 18:44 1247
-rw-------    1 postgres postgres 38715392 Dec 13 18:44 1249
-rw-------    1 postgres postgres  3317760 Dec 13 18:44 16390
-rw-------    1 postgres postgres 13467648 Dec 13 18:44 16599
-rw-------    1 postgres postgres 16957440 Dec 13 18:44 16610
-rw-------    1 postgres postgres  4808704 Dec 13 18:44 16613
-rw-------    1 postgres postgres 17072128 Dec 13 18:44 16624
-rw-------    1 postgres postgres 14352384 Dec 13 18:44 16625
-rw-------    1 postgres postgres   483328 Dec 13 18:44 16630
-rw-------    1 postgres postgres  2228224 Dec 13 18:44 16652
-rw-------    1 postgres postgres  5742592 Dec 13 18:44 16653
-rw-------    1 postgres postgres 63578112 Dec 13 18:44 16609
-rw-------    1 postgres postgres 13787136 Dec 13 18:44 16614
-rw-------    1 postgres postgres   483328 Dec 13 18:44 16629

=> select pc.relfilenode, pc.relname, pn.nspname from pg_class pc join pg_namespace pn on (pc.relnamespace=pn.oid) where pc.relfilenode in ('1259','1247','1249','16390','16599','16610','16613','16624','16625','16630','16652','16653','16609','16614','16629');

 relfilenode |             relname              |  nspname
-------------+----------------------------------+------------
       16599 | pg_depend                        | pg_catalog
       16390 | pg_index                         | pg_catalog
        1259 | pg_class                         | pg_catalog
        1249 | pg_attribute                     | pg_catalog
        1247 | pg_type                          | pg_catalog
       16653 | pg_type_typname_nsp_index        | pg_catalog
       16652 | pg_type_oid_index                | pg_catalog
       16630 | pg_index_indexrelid_index        | pg_catalog
       16629 | pg_index_indrelid_index          | pg_catalog
       16625 | pg_depend_reference_index        | pg_catalog
       16624 | pg_depend_depender_index         | pg_catalog
       16614 | pg_class_relname_nsp_index       | pg_catalog
       16613 | pg_class_oid_index               | pg_catalog
       16610 | pg_attribute_relid_attnum_index  | pg_catalog
       16609 | pg_attribute_relid_attnam_index  | pg_catalog
(15 rows)

Does this make sense? I imagine that the temporary table is being added to these tables and then removed again. I do have quite a large number of tables in the database; I have one schema per user and of the order of 20 tables per user and 200 users. I can imagine that in a system with fewer tables this would be insignificant, yet in my case it seems to be writing of the order of a megabyte in each 5-second update. I should mention that I ANALYSE the temporary table after creating it and before using it for anything; I'm not sure if this does any good but I put it in as it "couldn't do any harm". Any thoughts? Regards, Phil.

Tom Lane wrote:
> Phil Endecott <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> In principle, therefore, the kernel could hold temp table data in its own disk buffers and never write it out to disk until the file is deleted. In practice, of course, the kernel doesn't know the data is transient and will probably push it out whenever it has nothing else to do.
>> That makes sense. I suspect that I am seeing writes every 5 seconds, which looks like bdflush / update. But my connections normally only last for a second at most. In this case, surely the table would normally have been deleted before the kernel decided to write anything.
> That does seem a bit odd, then. Can you strace a typical backend session and see if it's doing anything to force a disk write? (I'm too lazy to go check right now whether 7.4 handled temp tables exactly the same as CVS tip does. I think it's the same but I might be wrong.)
> regards, tom lane

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Temporary tables and disk activity
Tom Lane wrote:
> Phil Endecott <[EMAIL PROTECTED]> writes:
>> Does this make sense? I imagine that the temporary table is being added to these tables and then removed again.
> Yes, a temp table has the same catalog infrastructure as a regular table, so creation and deletion of a temp table will cause some activity in those catalogs. I thought you were concerned about the data within the temp table, though.

I'm concerned about unnecessary disk activity, whatever its cause. I guessed that it was the temp table contents.

>> I do have quite a large number of tables in the database; I have one schema per user and of the order of 20 tables per user and 200 users. I can imagine that in a system with fewer tables this would be insignificant, yet in my case it seems to be writing of the order of a megabyte in each 5-second update.
> That seems like a lot. How often do you create/delete temp tables?

Only once or twice per 5-sec update period. I agree that it sounds like a lot, which makes me think this could all be a red herring; I suspect that there is something else going on as well as this temp table stuff (possibly nothing to do with postgresql). But FYI this is treefic.com, a family tree website. Have a look at, for example, http://treefic.com/treefic/royal92?a=tree_page&root_id=10286&direction=up The first step in building that diagram is to find the ancestors of the root individual. I have a pl/pgsql function that iteratively finds all of the ancestors, progressively adding them to a temporary table. So it will create, populate, read and then drop one table for each page that it generates. This is reasonably fast; overall speed is not limited by postgres. What would happen if I were to rollback at the end of the transaction, rather than committing (having made no changes)? Would that eliminate some or all of the catalog writes? Many thanks for helping me understand this. Regards, Phil. ---(end of broadcast)--- TIP 8: explain analyze is your friend
[GENERAL] to_char(0,'9999.99') -> ".00"
Dear All, I was a bit surprised to find that

to_char(0,'9999.99')

returns '.00' rather than '0.00'. Is this a bug or a feature? How do I get what I want? (This is with 7.4.2. Apologies if it is a known problem. The search form in the list archives seems to split words on _, so looking for to_char finds lots of pages that just say 'char'.) Regards, Phil Endecott. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] to_char(0,'9999.99') -> ".00"
I asked:
>> to_char(0,'9999.99')
>> returns .00 rather than 0.00.

Dave Smith replied:
> to_char(0,'9990.00') ?

Thanks Dave, that's embarrassingly obvious... I note that the examples in the documentation don't have a 0 before the decimal point, e.g. this one:

to_char(148.5, 'FM999.990')   '148.500'

I imagine that in most cases people do want '0.000' rather than '.000', so perhaps these examples should be changed to, in this case, 990.000. Regards, --Phil. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
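[Editorial note: a short illustration of the difference, not from the thread; the exact amount of leading space depends on the format width, so treat the commented results as approximate.]

-- '9' positions print a blank when there is no digit, '0' positions always
-- print a digit, and the FM prefix suppresses the leading padding:
SELECT to_char(0, '9999.99');    -- padded, nothing before the point: '   .00'
SELECT to_char(0, '9990.00');    -- padded, with the leading zero:    '  0.00'
SELECT to_char(0, 'FM9990.00');  -- no padding: '0.00'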
[GENERAL] Partitioning Postgresql
Hello All, I know that with Oracle you can partition the tables into logical subsets and was wondering if this was also possible in postgres. Also is postgresql supported on linux clusters? Thanks in advance, Phil ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Partitioning Postgresql
> phil campaigne wrote:
>> Hello All, I know that with Oracle you can partition the tables into logical subsets and was wondering if this was also possible in postgres.
> No (except by hand, with a view layered over the top).
>> Also is postgresql supported on linux clusters?
> And no (at least not without specialised hardware).
> It might be that with more information, someone can give more lengthy advice though.

Does this also mean that you cannot physically split a database over multiple physical servers? Phil ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
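[Editorial note: the "by hand, with a view layered over the top" approach mentioned in the reply looks roughly like the sketch below. All table, column, and view names are made up for illustration.]

-- Per-year "partitions" combined behind one view.
CREATE TABLE measurements_2004 (taken_at timestamp, reading numeric);
CREATE TABLE measurements_2005 (taken_at timestamp, reading numeric);

CREATE VIEW measurements AS
  SELECT * FROM measurements_2004
  UNION ALL
  SELECT * FROM measurements_2005;

-- Reads go through the view; inserts must target the right per-year table
-- (or be routed there by a rule or by application logic).
SELECT count(*) FROM measurements WHERE taken_at >= '2005-01-01';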
[GENERAL] Numeric type
Hi All, I have a double type in Java that I am trying to store in postgresql as type numeric and it doesn't seem to like it. The error message is:

function double(numeric) does not exist
unable to identify a function that satisfies the given argument type
you may need to add explicit type casts

If I just want to store the number as 'xx.xx', is 'numeric' the right postgresql type? thanks, Phil ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
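[Editorial note: for 'xx.xx' values, numeric with an explicit precision and scale is the usual choice. The error above comes from writing double(...) as if it were a function; the SQL type is spelled "double precision" and conversions are written as casts. A sketch with made-up table and column names:]

-- numeric(10,2) stores values like 12.34 exactly, with two decimal places.
CREATE TABLE prices (amount numeric(10,2));
INSERT INTO prices (amount) VALUES (12.34);

-- There is no double(x) function; a conversion is written as a cast:
SELECT CAST(amount AS double precision) FROM prices;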
[GENERAL] Error Message
Hi All, I'm getting the following error message from an "insertStatement.executeUpdate();": postgresql.stat.result [EMAIL PROTECTED] Does anybody know why? thanks, Phil ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] Error Message
Hi All, false alarm... I found the problem. I tried to reuse an insertStatement. can't do it. Phil ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Import SVG file
Hi Miguel,

> how can I import a SVG file to a row?? The main problem is
> that it has many postgres escape symbols. Can I use the
> 'text' type for that?

I would probably choose to use 'bytea' rather than 'text' because of the character set issues. The SVG, being XML, describes its own character set in its header. If you have an 8859-1 database and then import a UTF-8 SVG file into a text field you will have unnecessary problems. bytea is just a sequence of bytes. The main disadvantage of bytea is that you can't do regular expression searches on it.

> Have I to do some byte-stuffing before the import?

Yes, you will have to do the necessary escaping. I think that the postgres C library has code to do this, and others have mentioned the possibilities with other languages. You cannot do it with anything quite as simple as the one line of psql that you posted. Regards, --Phil Endecott. ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Duration between two timestamps
Hi All, I have stored event records in Postgresql 7.3.4 and now need to calculate the duration between each event in succession. I have a "record_id" and a "timestamp without time zone" column for each event. What is a good way to calculate the difference in timestamps and store it in the record as a duration? I am doing this as part of a java application on RH linux 8.0. My timestamp is of the form "2005-01-30 07:51:29.149458". thanks in advance, Phil ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
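[Editorial note: subtracting two timestamps yields an interval, so the gap to the previous event can be computed with a correlated subquery. This is a sketch, not from the thread; the timestamp column is assumed to be called event_time, and since 7.3 has no window functions, max() over the earlier rows stands in for "the previous row".]

-- Gap between each event and the one before it (NULL for the first event).
SELECT e.record_id,
       e.event_time - (SELECT max(p.event_time)
                         FROM event p
                        WHERE p.event_time < e.event_time) AS duration
  FROM event e
 ORDER BY e.event_time;

-- The same expression can be used in an UPDATE to store the value in an
-- interval column added with: ALTER TABLE event ADD COLUMN duration interval;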