[SQL] plpgsql notify trigger
[postgres 7.0.2, x86 linux] I am trying to use a trigger to perform an SQL 'notify' command. I do something like:

    CREATE FUNCTION run_changed() RETURNS opaque AS '
    declare
        nm text;
    begin
        nm := NEW.run_name;
        notify nm;
        return null;
    end;
    ' LANGUAGE 'plpgsql';

    create trigger run_changed_tr after update on runs
        for each row execute procedure run_changed();

BUT, when I update the table, I get:

    ERROR:  parser: parse error at or near "$1"

It looks like the *name* (or its alias here: $1.run_name), not the *value* of the variable nm, is passed to the notify command. Since notify only takes a name, not a string, I don't see how to proceed. Is there some way in plpgsql to construct a string and have it executed as SQL?

disappointed in plpgsql,
George

--
George Young, Rm. L-204        [EMAIL PROTECTED]
MIT Lincoln Laboratory
244 Wood St.  Lexington, Massachusetts  02420-9108    (781) 981-2756
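The usual answer is plpgsql's EXECUTE statement, which runs a dynamically built string as SQL. A minimal sketch, assuming a release whose plpgsql has EXECUTE (it is not in every old version) and the quote_ident() function:

    CREATE FUNCTION run_changed() RETURNS opaque AS '
    begin
        -- build the NOTIFY command as a string, then execute it
        EXECUTE ''NOTIFY '' || quote_ident(NEW.run_name);
        return null;
    end;
    ' LANGUAGE 'plpgsql';

Note the doubled single quotes: inside the function body (itself a single-quoted string) every literal quote must be written twice.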
[SQL] need to join successive log entries into one
I have a table like:

    run  | seq | start               | done
    -----+-----+---------------------+---------------------
    1415 | 261 | 2001-01-29 12:36:55 | 2001-02-07 13:02:38
    1415 | 263 | 2001-02-14 07:40:04 | 2001-02-15 16:05:04
    1415 | 264 | 2001-02-16 16:05:05 | 2001-03-08 16:34:03
    1415 | 265 | 2001-03-08 16:34:04 |
    1747 | 257 | 2001-02-15 09:14:39 | 2001-03-01 08:58:03
    1747 | 258 | 2001-03-01 08:58:04 | 2001-03-01 09:01:27
    1747 | 260 | 2001-03-01 09:01:39 | 2001-03-01 09:02:39
    1747 | 262 | 2001-03-05 13:13:58 |
    1954 | 218 | 2001-02-02 20:55:39 | 2001-02-08 12:44:48
    1954 | 219 | 2001-02-08 12:44:49 | 2001-02-08 12:47:36
    1954 | 220 | 2001-02-08 12:47:36 | 2001-02-08 16:50:33
    1954 | 221 | 2001-02-08 16:50:33 | 2001-02-08 16:50:45
    1954 | 222 | 2001-02-08 16:50:46 | 2001-02-12 14:36:41
    1954 | 223 | 2001-02-12 14:36:41 | 2001-03-02 10:17:15

This is a log of some operations done on some runs. 'seq' is the step within the run. I need to produce a new table that coalesces immediately successive operations on a run into one, e.g.:

    run  | start               | done
    -----+---------------------+---------------------
    1415 | 2001-01-29 12:36:55 | 2001-02-07 13:02:38
    1415 | 2001-02-14 07:40:04 |
    1747 | 2001-02-15 09:14:39 | 2001-03-01 09:02:39
    1747 | 2001-03-05 13:13:58 |
    1954 | 2001-02-02 20:55:39 | 2001-03-02 10:17:15

That is, where a run has one or more steps with successive seq values, or equivalently with abutting start/done values, I want the new table to have only one entry representing the full span of time. A null 'done' just means it's not done yet. Unfortunately, the start time of a 'successive' op is sometimes 1 second later than the 'done' time of the previous one, so maybe using the seq field is simpler. Can anyone think of a way I can do this in postgres?

-- George Young
Re: [SQL] need to join successive log entries into one
On Wed, 14 Mar 2001, you wrote:
> On 3/14/01, 5:24:12 PM, George Young <[EMAIL PROTECTED]> wrote regarding
> [SQL] I need to join successive log entries into one:
> > I have a table like:
> >
> >     run  | seq | start               | done
> >     1415 | 261 | 2001-01-29 12:36:55 | 2001-02-07 13:02:38
> >     1415 | 263 | 2001-02-14 07:40:04 | 2001-02-15 16:05:04
> >     1415 | 264 | 2001-02-16 16:05:05 | 2001-03-08 16:34:03
> >     1415 | 265 | 2001-03-08 16:34:04 |
> >
> > This is a log of some operations done on some runs. 'seq' is the step
> > within the run.
> >
> > I need to produce a new table that coalesces immediately successive
> > operations on a run into one, e.g.:
> >
> >     run  | start               | done
> >     1415 | 2001-01-29 12:36:55 | 2001-02-07 13:02:38
> >     1415 | 2001-02-14 07:40:04 |
> >     1747 | 2001-02-15 09:14:39 | 2001-03-01 09:02:39
> >     1747 | 2001-03-05 13:13:58 |
> >     1954 | 2001-02-02 20:55:39 | 2001-03-02 10:17:15
>
> Try:
>
>     select run, min(start), max(done) from mytable group by run;

Alas, this combines *all* entries for a given run, not just those that are immediately adjacent (in time, or by 'seq' number)...

-- George Young
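A sketch of one approach to this "gaps and islands" problem, grouping strictly by consecutive seq values (the table name mytable is borrowed from the reply above; whether seq or abutting timestamps should define adjacency is the judgment call the original post raises):

    SELECT s.run, s.start,
           (SELECT e.done
              FROM mytable e
             WHERE e.run = s.run AND e.seq >= s.seq
               AND NOT EXISTS (SELECT 1 FROM mytable n
                                WHERE n.run = e.run AND n.seq = e.seq + 1)
             ORDER BY e.seq
             LIMIT 1) AS done
      FROM mytable s
     WHERE NOT EXISTS (SELECT 1 FROM mytable p
                        WHERE p.run = s.run AND p.seq = s.seq - 1);

Each outer row is an "island start" (no row exists with seq - 1); the subselect finds the first "island end" (no row with seq + 1) at or after it and takes that row's done value, which comes out NULL for a still-open span.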
[SQL] select question
[postgresql 7.2, linux] I have a table T with columns run, wafer, and test:

    T(run text, wafer int, test text)

Given a run and a set of wafers, I need the set of tests that match *all* the specified wafers:

    run  wafer  test
    a    1      foo
    a    2      foo
    a    3      foo
    a    3      bar

E.g. given run 'a' and wafers (1,3) I should get one row: foo, since only foo matches both 1 and 3. Given run 'a' and wafers (3) I should get two rows: foo and bar, since both foo and bar match 3. Is there some neat way to do this in a single query?

Puzzled,
George

--
I cannot think why the whole bed of the ocean is not one solid mass of
oysters, so prolific they seem. Ah, I am wandering! Strange how the brain
controls the brain!  -- Sherlock Holmes in "The Dying Detective"
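This is the classic "relational division" shape. A sketch, assuming the size of the wafer set is supplied along with the set itself (2 for wafers (1,3)):

    SELECT test
      FROM T
     WHERE run = 'a' AND wafer IN (1, 3)
     GROUP BY test
    HAVING count(DISTINCT wafer) = 2;

The DISTINCT guards against duplicate (run, wafer, test) rows; if such duplicates are impossible, a plain count(*) works too.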
[SQL] int id's helpful for indexing, or just use text names?
[linux, postgresql 7.2, 500MHz * 4 xeon cpus, 1GB ram, hardware raid]
My current db has several instances of something like:

    table foos(fooid int2, fooname text, foouser text,
               foobar int2 references bars(barid))
    table bars(barid int2, barname text, barcolor text,
               primary key(barid))

etc., where foonames and barnames are known to be, say, <20 characters long, and the fooids and barids are arbitrary ints known only inside the db. The original reason for these numeric ids (in another db system, long long ago) was to conserve space (now irrelevant with 120G disks) and to make searching and index usage more efficient.

Recently, there is increasing call for new apps and even ad-hoc queries. (Thank goodness people are finally interested in this data!) The artificial numeric ids make it a lot harder for naive users to understand the data structure, and sometimes actually require an extra order of joins.

The question is: would I be better off losing all those integer ids and just using the text names as primary keys? Is there much performance lost comparing text strings for every index operation? My db is not huge: the longest table has 100k tuples, the biggest table has 1k pages, and total pg_dump output is 51 Mbytes. Typical activity: ~6000 updates and inserts/day, 30,000 selects/day.

-- George Young
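What the natural-key version would look like, as a sketch (column names carried over from the post; whether int2-versus-text index comparisons matter at this scale is best settled with EXPLAIN on real data):

    create table bars(
        barname  text primary key,
        barcolor text
    );
    create table foos(
        fooname text primary key,
        foouser text,
        foobar  text references bars(barname)
    );

A query like "select * from foos, bars where foobar = barname" then needs no extra hop through an id lookup, which is the readability win described above.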
[SQL] design question: status table+log table, indexes, triggers
[postgresql-7.2, pgsql, linux] Here's a schema-design problem I've hit a few times -- it seems like there should be a better way. I have a machine table (140 rows), currently very static:

    machine(machine_name text NOT NULL,
            machine_id smallint NOT NULL,
            area text NOT NULL,
            text text NOT NULL);

and a machine_log table (8400 rows), appended to ~4 times/hour:

    machine_log(machine_name text, date timestamp,
                status text, usr text, comment text);

This schema seemed logical at the outset, but the most common query is:

    select m.machine_name, m.text, ml.status, ml.date
      from machine m, machine_log ml
     where m.machine_name = ml.machine_name
       and ml.date = (select max(date) from machine_log
                       where machine_name = ml.machine_name);

This takes ~25 seconds, which is way too long for an interactive status check. The max(date) subselect kills me -- I've tried various indexes, to no avail. It looks like I need to put status and date_changed columns into the machine table, even though that info is implicit in the machine_log table.

Is there some other schema that I'm just not thinking of which neatly stores some static info about each of a list of things, as well as log info of status changes of those things? Or is there some index I could create on machine_log that would make the above query fast? Finally, I've never used triggers or rules -- what's the best way to maintain the status and date_changed columns in "machine" automatically when "machine_log" is appended to?

Thanks,
George
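Two sketches toward the questions above; the added column names are invented here. First, an index plus a per-machine rewrite that 7.2 can use (plain max() does not use indexes in that era, but ORDER BY ... DESC LIMIT 1 against a matching index does). Second, a trigger to maintain the denormalized columns:

    CREATE INDEX machine_log_name_date ON machine_log (machine_name, date);

    -- latest status for one machine, index-friendly:
    SELECT status, date FROM machine_log
     WHERE machine_name = 'cmp1'
     ORDER BY date DESC LIMIT 1;

    ALTER TABLE machine ADD COLUMN status text;
    ALTER TABLE machine ADD COLUMN date_changed timestamp;

    CREATE FUNCTION machine_log_append() RETURNS opaque AS '
    begin
        -- copy the newest log entry into the static row
        UPDATE machine
           SET status = NEW.status, date_changed = NEW.date
         WHERE machine_name = NEW.machine_name;
        RETURN NEW;
    end;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER machine_log_append_tr AFTER INSERT ON machine_log
        FOR EACH ROW EXECUTE PROCEDURE machine_log_append();

With the trigger in place, the interactive status check becomes a plain select on machine, no subselect at all.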
[SQL] Design Q.:logic in app or db?
I have a general design question about Postgres usage: how does one decide how much, and which parts, of the logic should go in DB rules, triggers, functions, constraints, etc., versus in the application? I see postings here from people who obviously have a lot of domain logic on the DB side. I currently have almost none. I plan to set up a bunch of RI constraints to keep things clean and consistent, but what about logic that implements frequent domain operations?

Brief sketch of my project: 2 developers, 4k lines of python (gtk, pygres), 2 main GUI user apps and a few read-only scripts for web display, 50 concurrent users (all local), DB performance important but not currently a problem. The main thing not done yet is to facilitate ad-hoc queries (via ODBC, Excel, etc.) from db-naive users: maybe restructuring the db to make it simpler, maybe views and functions... The data is somewhat complex in structure.

-- George
[SQL] find open transactions/locks in 7.2?
[select version() --> PostgreSQL 7.2 on i686-pc-linux-gnu, compiled by GCC 3.0.4]
I'm getting hangups every day or so, I presume due to some open transaction that insert/update/delete'd on a table used by my main app without a commit. Is there some way (in 7.2!) to find who's locking what, or who has a transaction open?

-- George
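What little 7.2 offers here, as a sketch: the pg_locks system view only arrived in 7.3, but if the statistics collector is enabled (stats_start_collector and stats_command_string in postgresql.conf), pg_stat_activity shows each backend's pid and current query:

    SELECT * FROM pg_stat_activity;

Idle-in-transaction sessions are easier to spot from the operating system: the postmaster rewrites each backend's command line, so a process listing shows "idle in transaction" next to the offending backend's pid.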
[SQL] backend cpu usage? [7.2]
[postgresql-7.2, x86 linux] How can I get cpu usage info in 7.2 about the backend process of my db connection? I tried looking at the pg_stat_get_backend_pid function, but it requires a backend_id that I don't know how to get. If I can get the backend pid I can fork a 'ps' command from my app, though it would be nicer to get it directly through sql.

-- George Young
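A sketch using the 7.2-era statistics functions (this assumes the stats collector is running): pg_stat_get_backend_idset() yields the set of active backend ids, which pg_stat_get_backend_pid() maps to process ids:

    SELECT pg_stat_get_backend_pid(b.backendid) AS pid
      FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS b;

This lists all backends rather than just the current connection's, so the app still has to pick out its own pid; later releases grew a direct pg_backend_pid() function.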
[SQL] increment int value in subset of rows?
[postgresql 7.4, SuSE x86 linux]
I have a table "rtest" with primary key (run,seq) and other data. For a given value of "run", seq is a sequential run of integers: 1,2,3,4... Now I want to insert a row into this "sequence", say run='foo', seq=2, adjusting seq up for all subsequent foo rows. My first thought was just:

    update rtest set seq=seq+1 where run='foo' and seq>1;

which gets:

    ERROR:  Cannot insert a duplicate key into unique index rtest_pkey

no surprise :-(. This doesn't work, since the *order* of execution of these updates is not guaranteed; I would actually need to start with the highest value of seq and work down. There may be a thousand or so rows for the 'foo' run, so an external loop of queries would be very expensive. How can I increment all the seq values for foo rows where seq > something?

    create table rtest(run text, seq int, data int, primary key (run,seq));
    insert into rtest values('foo',1,11);
    insert into rtest values('foo',2,22);
    insert into rtest values('foo',3,33);
    insert into rtest values('foo',4,44);
    insert into rtest values('bar',1,99);

I want to shift all foo rows and insert a new one so that:

    select * from rtest where run='foo' order by seq;

would get:

     run | seq | data
    -----+-----+------
     foo |   1 |   11
     foo |   2 |  999
     foo |   3 |   22
     foo |   4 |   33
     foo |   5 |   44

-- George Young
Re: [SQL] increment int value in subset of rows?
On Mon, 24 Nov 2003 09:21:39 -0000 "Matthew Lunnon" <[EMAIL PROTECTED]> threw this fish to the penguins:

> You could write a function to do it.
>
> Matthew

That would save me the external interaction, but still amount to ~1000 sql queries -- I'm hoping for something O(1), i.e. a few queries regardless of the number of rows...

> ----- Original Message -----
> From: george young
> To: [EMAIL PROTECTED]
> Sent: Monday, November 24, 2003 1:59 AM
> Subject: [SQL] increment int value in subset of rows?
> [original post quoted in full above]

-- George Young
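One constant-number-of-statements workaround, sketched here: route the shifted keys through negative values so the unique index never sees a collision (this assumes seq is never negative to begin with):

    BEGIN;
    -- step 1: move the tail out of the way, negated
    UPDATE rtest SET seq = -(seq + 1) WHERE run = 'foo' AND seq > 1;
    -- step 2: flip the moved rows back to their final positive values
    UPDATE rtest SET seq = -seq WHERE run = 'foo' AND seq < 0;
    -- step 3: the hole at seq = 2 is now free
    INSERT INTO rtest VALUES ('foo', 2, 999);
    COMMIT;

Three statements regardless of how many rows shift, and each intermediate state is unique-safe because negated keys can't collide with the remaining positive ones.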
[SQL] debugging query to put message in pg logfile?
[postgresql-7.4RC2, python-2.3.3, PyGreSQL-3.4, SuSE x86 Linux 8.2]
I've started putting debugging queries like:

    select 'opwin.py: committing step signoff';

in my app, just to have an entry in the postgres logfile. These are especially helpful in tracking down which piece of code did a 'commit', since there's nothing to distinguish one from another in the log. Is there some cheaper (or more appropriate) sql statement that will show up in the postgres log? I thought I remembered a "message" sql statement or something like that.

-- George Young
--
"Are the gods not just?"
"Oh no, child. What would become of us if they were?" (CSL)
Re: [SQL] debugging query to put message in pg logfile?
On Thu, 04 Mar 2004 16:35:01 -0500 Tom Lane <[EMAIL PROTECTED]> threw this fish to the penguins:

> george young <[EMAIL PROTECTED]> writes:
> > I've started putting debugging queries like:
> >     select 'opwin.py: committing step signoff';
> > in my app, just to have an entry in the postgres logfile.
> >
> > Is there some cheaper (or more appropriate) sql statement that will show
> > up in the postgres log?
>
> You could just send SQL comments:
>
>     -- opwin.py: committing step signoff
>
> One advantage of this is that you can merge the comments with actual
> commands, thus not incurring even a network round-trip time for them.
> If you do send it separately, it will act like an empty query string.
>
> People tend not to think of this because psql strips -- comments before
> sending commands. But I believe all the lower-level libraries will pass
> them through. (If you need to pass loggable comments through psql, I
> think the /* ... */ form will work.)

Alas no:

    Python 2.3.3 (#1, Jan  3 2004, 07:17:11)
    [GCC 3.3.2] on linux2
    >>> import pgdb
    >>> db = pgdb.connect(host='ivy:5433', database='pigtest')
    >>> cur = db.cursor()
    >>> cur.execute('-- the rain')
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "/usr/local/lib/python2.3/site-packages/pgdb.py", line 189, in execute
        self.executemany(operation, (params,))
      File "/usr/local/lib/python2.3/site-packages/pgdb.py", line 210, in executemany
        raise OperationalError, "internal error in '%s'" % sql
    pgdb.OperationalError: internal error in '-- the rain'

Likewise for /* comments */. :-( I'll continue this on the pygresql mailing list, and I guess stick to "select 'comment text'" for now...

Thanks,
-- George Young
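For messages that must reach the server log without relying on query logging, a plpgsql wrapper around RAISE is another route. A sketch (the function name is invented; whether a NOTICE lands in the logfile depends on the server's log_min_messages setting):

    CREATE OR REPLACE FUNCTION log_msg(text) RETURNS text AS '
    begin
        RAISE NOTICE ''%'', $1;
        RETURN $1;
    end;
    ' LANGUAGE plpgsql;

    SELECT log_msg('opwin.py: committing step signoff');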
Re: [SQL] Rank
On Sun, 2 May 2004 02:22:37 +0800 "Muhyiddin A.M Hayat" <[EMAIL PROTECTED]> threw this fish to the penguins:

> I have the table below:
>
>  id | site_name | point
> ----+-----------+-------
>   1 | Site A    |    40
>   2 | Site B    |    90
>   3 | Site D    |    22
>   4 | Site X    |    98
>
> I would like to calc the rank for each site, to look like:
>
>  id | site_name | point | rank
> ----+-----------+-------+------
>   1 | Site A    |    40 |    3
>   2 | Site B    |    90 |    2
>   3 | Site D    |    22 |    4
>   4 | Site X    |    98 |    1

Well, a simple-minded solution would be:

    select id, site_name, point,
           (select count(*) from mytable t2
             where t2.point >= t1.point) as rank
      from mytable t1;

     id | site_name | point | rank
    ----+-----------+-------+------
      4 | Site X    |    98 |    1
      2 | Site B    |    90 |    2
      1 | Site A    |    40 |    3
      3 | Site D    |    22 |    4
    (4 rows)

If mytable is huge this may be prohibitively slow, but it's worth a try. There's probably a self join that would be faster. Hmm... in fact:

    select t1.id, t1.site_name, t1.point, count(t2.point) as rank
      from mytable t1, mytable t2
     where t2.point >= t1.point
     group by t1.id, t1.site_name, t1.point;

     id | site_name | point | rank
    ----+-----------+-------+------
      3 | Site D    |    22 |    4
      2 | Site B    |    90 |    2
      4 | Site X    |    98 |    1
      1 | Site A    |    40 |    3
    (4 rows)

-- George Young
[SQL] owner of data type "areas" appears to be invalid ?
[PostgreSQL 7.4RC2, x86 linux]
Several tables are giving errors like:

    pg_dump: WARNING: owner of data type "areas" appears to be invalid

from pg_dump. This is my production database (50 users, 18 hours/day, 21MB compressed dump). The output of "pg_dump -t areas" starts with:

    REVOKE ALL ON TABLE areas FROM PUBLIC;
    REVOKE ALL ON TABLE areas FROM geoyou;
    SET SESSION AUTHORIZATION "101";
    GRANT ALL ON TABLE areas TO "101" WITH GRANT OPTION;
    RESET SESSION AUTHORIZATION;
    SET SESSION AUTHORIZATION "101";
    GRANT ALL ON TABLE areas TO PUBLIC;
    RESET SESSION AUTHORIZATION;
    GRANT ALL ON TABLE areas TO PUBLIC;

But "101" is the numerical group id of the "operator" group! (See the dump from pg_group below.) The table owner is "geoyou", as seen from the pg_class select below. User "geoyou" is in pg_user with usesysid=501. There is no row in pg_user with usesysid=101, and none with usename "101". How can I fix this? I must be able to get clean dumps that can be reloaded in case of a crash.

-- George Young

    pig5=> select * from pg_class where relname='areas';
    -[ RECORD 1 ]--+--------------------------------------
    relname        | areas
    relnamespace   | 2200
    reltype        | 14745246
    relowner       | 501
    relam          | 0
    relfilenode    | 14745245
    relpages       | 1
    reltuples      | 41
    reltoastrelid  | 14745247
    reltoastidxid  | 0
    relhasindex    | f
    relisshared    | f
    relkind        | r
    relnatts       | 2
    relchecks      | 0
    reltriggers    | 0
    relukeys       | 0
    relfkeys       | 0
    relrefs        | 0
    relhasoids     | t
    relhaspkey     | f
    relhasrules    | f
    relhassubclass | f
    relacl         | {101=a*r*w*d*R*x*t*/101,=arwdRxt/101}

    pig5=> select * from pg_user where usesysid=501;
     usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
    ---------+----------+-------------+----------+-----------+--------+----------+-----------
     geoyou  |      501 | f           | f        | f         |        |          |

    pig5=> select * from pg_group where grosysid=101;
     groname  | grosysid | grolist
    ----------+----------+---------
     operator |      101 | {602,616,509,525,614,514,617,539,517,558,581,567,506,609,94,511,573,17115,327,17422,512,537,626,503,519,583,547,570,584,78,10980,518,557,564,528,546,592,599,613,510,513,536,554,500,530,594,608,524,17114,533,17116,17289,17290,17292,17294,17345,17347,17421,17423,17425,214,17430,17427,17428,574,11,391,17431,17667,17703,8309,17769,17842,17773,17874,17877,13283,12758,17966,1,17902,18099,18117,18129,18170,18173,18163,32766,18195,18202,18208,17786,17704,18375,18322,18399,18410,17904,18438,18424,28424,18437,102137,9877,502,32768,18553,13065,10681,8245,17049,15885,15886,8977,18706,18717}

    pig5=> select * from pg_tables where tablename='areas';
     schemaname | tablename | tableowner | hasindexes | hasrules | hastriggers
    ------------+-----------+------------+------------+----------+-------------
     public     | areas     | geoyou     | f          | f        | f
Re: [SQL] owner of data type "areas" appears to be invalid ?
Yes, that worked. Thank you very much!

-- George

On Fri, 14 Oct 2005 12:04:13 -0400 Tom Lane <[EMAIL PROTECTED]> threw this fish to the penguins:

> george young writes:
> > How can I fix this?
>
> Re-create the owning user (which you evidently dropped), assigning it
> sysid 101.
>
> PG 8.1 will make it impossible to drop users who still own objects or
> have permissions ... although that will bring its own set of gotchas ...
>
>             regards, tom lane
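The recreate-the-user fix, spelled out as a sketch (the user name here is arbitrary; what matters is the sysid matching the orphaned ACL entries):

    CREATE USER recovered_owner WITH SYSID 101;
    -- optionally hand the table back to its intended owner afterward:
    ALTER TABLE areas OWNER TO geoyou;

Pre-8.1 releases keep user and group sysids in separate namespaces, so a user with sysid 101 can coexist with the "operator" group's grosysid 101.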
[SQL] 8.0.x windows installer fails?
[Win XP Professional 2002, SP 1, Pentium 4, 1.4 GHz, 512MB RAM]
I'm trying to install postgres 8.0.4 on my windows machine. I downloaded

    ftp.us.postgresql.org/pub/mirrors/postgresql/binary/v8.0.4/win32/postgresql-8.0.4.zip

I then double-clicked the file "postgresql-8.0". It says "Welcome to the PostgreSQL Installation Wizard...". The default English language is ok, so I click the "Start" button, and immediately get a window labeled "Windows Installer" saying:

    This installation package could not be opened. Verify that the package
    exists and that you can access it, or contact the application vendor to
    verify that this is a valid Windows Installer package.

So I tried the analogous file from 8.0.3, with the same results. What am I doing wrong?

-- George Young
Re: [SQL] Design question: Scalability and tens of thousands of
On Thu, 3 Nov 2005 09:58:29 -0800 "zackchandler" <[EMAIL PROTECTED]> threw this fish to the penguins:

> I'm designing an application that will allow users to create custom objects
> at the application level. A custom object can have zero or more attributes.
> Attributes can be one of 5-10 types (e.g. String, Number, List, Date, Time,
> Currency, etc...). This will allow users to track anything exactly as they
> want. My first thought on how to structure this is to make a custom table
> in the db for each custom object. The attributes would map to fields and
> everything would work just dandy.
>
> The problem is I am very concerned about scalability with having a
> different table created for each custom object. I want to design the site
> to handle tens of thousands of users. If each user has 3-5 custom objects
> the database would have to handle tens of thousands of tables.
>
> So it would appear that this is a broken solution and will not scale. Has
> anyone designed a similar system or have ideas to share?

This is a sticky problem. My app is a bit similar. Trying something like your solution, I found zillions of tables, constantly created and destroyed by users, terrible to manage. Now I use one fixed set of tables that defines objects and their attributes, e.g. something like:

    create table fields(owner text, obj text, name text, type text, seq int);
    create table objs(owner text, name text);
    create table obj_values(owner text, obj text, name text, val text);

That is, the values are stored as text, not in their native types. Yes, this takes a performance hit converting values, but the simplicity of the schema really wins for me. I suggest you seriously consider it unless you need blinding performance in all 20,000 applications...

-- George Young
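Reads against such a layout look like the sketch below (the join keys are a guess at the intent of the tables above, and the owner/object names are invented): each attribute comes back as a (name, type, val) row, with the cast to a native type deferred to the consumer.

    -- all attribute values for one user's object, in declared order
    SELECT f.name, f.type, v.val
      FROM fields f
      LEFT JOIN obj_values v
             ON v.owner = f.owner AND v.obj = f.obj AND v.name = f.name
     WHERE f.owner = 'zack' AND f.obj = 'invoice'
     ORDER BY f.seq;

The LEFT JOIN keeps declared-but-unset attributes visible as NULL vals, which tends to match users' expectations of "empty fields".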
[SQL] how to update table to make dup values distinct
[PostgreSQL 7.4RC2 on i686-pc-linux-gnu] (I know, I know... must upgrade soon)
I have a table mytable like:

     i | txt
    ---+--------
     1 | the
     2 | the
     3 | rain
     4 | in
     5 | mainly
     6 | spain
     7 | stays
     8 | mainly
     9 | in

I want to update it, adding a ':' to txt so that each txt value is unique. I don't care which entry gets changed. I tried:

    update mytable set txt = mytable.txt || ':' from mytable t2
     where mytable.txt = t2.txt and mytable.i != t2.i;

but this updated both duplicated entries. Also, there may sometimes be 3 or 4 duplicates, not just two. For those, I can add multiple colons, or one each from an assortment of characters, say ':+*&^#'. Performance does not matter here; the real table has 30K rows, ~200 dups. To clarify, I want to end up with something like:

     1 | the
     2 | the:
     3 | rain
     4 | in
     5 | mainly:
     6 | spain
     7 | stays
     8 | mainly
     9 | in:

-- George
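One statement that handles any number of duplicates, sketched under the assumption that mytable still has OIDs (the default in 7.4): use each row's oid only to rank it among its duplicates, then append that many colons.

    UPDATE mytable
       SET txt = txt || repeat(':', (SELECT count(*)
                                       FROM mytable t2
                                      WHERE t2.txt = mytable.txt
                                        AND t2.oid < mytable.oid)::integer);

The duplicate with the lowest oid gets zero colons and keeps its original txt; the second gets one, the third two, and so on. If padded values could collide with names already containing colons, a second pass with a different pad character is needed.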
[SQL] tid_le comparison for tuple id (ctid) values?
[PostgreSQL 8.1.0 on i686-pc-linux-gnu]
I would like to suggest that there be a less-than (or greater-than) operator for the 'tid' type. I used to use oids for finding and distinguishing duplicate data. Now that oids are not included by default (and I do not quarrel with that change), I thought I could use ctids instead. Suppose I have a table steps:

    create table steps(x text, y text, z text)

but I want there to be a primary key (x,y). If I try to do:

    create table temp_steps(x text, y text, z text, primary key(x,y));
    insert into temp_steps select * from steps;
    drop table steps;
    alter table temp_steps rename to steps;

I get an error that "duplicate key violates unique constraint". Some of the rows in steps differ only in the value of z. OK, I'll just fix the data... I thought I could force values of x to be distinct with (I've done this several times in the past with oids):

    update steps set x = x||'X' from steps s
     where steps.x = s.x and steps.y = s.y and steps.ctid < s.ctid;

But this fails because there is no less-than operator (or function) on type "tid".

-- George Young
Re: [SQL] tid_le comparison for tuple id (ctid) values?
On Mon, 21 Nov 2005 16:19:28 -0500 Tom Lane <[EMAIL PROTECTED]> threw this fish to the penguins:

> george young writes:
> >     update steps set x = x||'X' from steps s
> >      where steps.x = s.x and steps.y = s.y and steps.ctid < s.ctid;
> > But this fails because there is no less-than operator (or function) on
> > type "tid".
>
> Probably a good thing, too, since if there was it wouldn't have anything
> reliable to do with the age of the tuple.

Well, I don't have any need for it to correlate with the age of the tuple. My use of steps.ctid < s.ctid was only to pick one arbitrary row out of each set of duplicates; any consistent ordering would have done.

-- George Young
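A workaround that avoids tid ordering altogether, sketched for 8.1: impose a temporary explicit ordering with a serial column, rank duplicates against it, then drop it.

    BEGIN;
    ALTER TABLE steps ADD COLUMN tmp_id serial;
    UPDATE steps
       SET x = x || repeat('X', (SELECT count(*)
                                   FROM steps s
                                  WHERE s.x = steps.x AND s.y = steps.y
                                    AND s.tmp_id < steps.tmp_id)::integer);
    ALTER TABLE steps DROP COLUMN tmp_id;
    COMMIT;

The serial plays the role the oid used to: an arbitrary but stable tiebreaker. As with any suffix scheme, this assumes the padded values ('fooX', 'fooXX', ...) don't collide with names already present.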
Re: [SQL] problems with array
On Sat, 15 Oct 2005 08:49:15 +0000 "paperinik 100" <[EMAIL PROTECTED]> threw this fish to the penguins:

> PostgreSQL is 7.4.7.
>
> My first table:
>
>     CREATE TABLE tb_cat (
>         id     INTEGER,
>         "desc" text
>     );
>     INSERT INTO tb_cat VALUES (10, 'cat10');
>     INSERT INTO tb_cat VALUES (20, 'cat20');
>     INSERT INTO tb_cat VALUES (30, 'cat30');
>
> My second table:
>
>     CREATE TABLE tb_array(
>         id  INTEGER,
>         cat INTEGER[]
>     );
>     INSERT INTO tb_array VALUES(1, ARRAY[10, 20]);
>
> When I write my select:
>
>     SELECT * from tb_cat WHERE id IN (SELECT cat FROM tb_array WHERE id=1);
>
> the output is:
>
>     ERROR:  operator does not exist: integer = integer[]
>     HINT:  No operator matches the given name and argument type(s).
>            You may need to add explicit type casts.

Use "any" (pseudo function? builtin? whatever); no subquery is needed:

    select c.* from tb_cat c, tb_array a
     where a.id = 1 and c.id = any(a.cat);

Look at section 8.10.5 "Searching in Arrays" in
http://www.postgresql.org/docs/7.4/interactive/arrays.html
and section 9.17.3 in
http://www.postgresql.org/docs/7.4/interactive/functions-comparisons.html

-- George Young
[SQL] rename idx's with table; avoid confusing idx names?
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
After tearing out some hair over the following sequence of events:

    [a few weeks ago]
    alter table foo rename to old_foo;
    create table foo();
    insert into foo select blahblahblah from old_foo;

    [today]
    cluster foo_pkey on foo;
    ERROR:  "foo_pkey" is not an index for table "foo"

What? Why does \d say the primary key idx is foo_pkey1? [light dawns] Aha! "alter table rename to" did not rename the table's indexes! I put together a plpgsql function to rename a table and its indexes correspondingly [see below]. I would like to know: Is there a more robust/portable/clear way to do this? Is this a bad idea for some subtle reason? Is there any way to get a less cumbersome interface than "select rename_table_and_indexes('foo','old_foo')"? Does this look useful enough for me to package more formally?

-- George Young

    CREATE OR REPLACE FUNCTION rename_table_and_indexes(old_name text, new_name text)
    RETURNS void AS $$
    declare
        prefix_len integer;
        r record;
    begin
        prefix_len := length(old_name);
        for r in select indexrelname from pg_stat_user_indexes
                  where relname = old_name loop
            execute 'alter index ' || quote_ident(r.indexrelname)
                 || ' rename to '
                 || quote_ident(new_name || substr(r.indexrelname, prefix_len + 1));
            raise NOTICE 'renamed index % to %', r.indexrelname,
                new_name || substr(r.indexrelname, prefix_len + 1);
        end loop;
        execute 'alter table ' || quote_ident(old_name)
             || ' rename to ' || quote_ident(new_name);
        raise NOTICE 'alter table % rename to %', old_name, new_name;
    end;
    $$ LANGUAGE plpgsql;
[SQL] how to convert relational column to array?
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
I'm trying to convert a column from a traditional relational form to an array:

    create table old_tab(name text, id int, permits text);

    newschm3=# select * from old_tab order by name;
      name   | id  |  permits
    ---------+-----+------------
     baker   | 581 | operator
     lawless | 509 | operator
     lawless | 509 | originator
     lcalvet | 622 | originator
     loomis  | 514 | operator
     loomis  | 514 | originator
     pig     | 614 | operator
     pig     | 614 | originator
     pig     | 614 | supervisor

    create table new_tab(name text, id int, permits text[]);

    -- I insert one row per name:
    insert into new_tab
        select distinct name, id, cast('{}' as text[]) from old_tab;

Now I want to fold all the 'permits' values into the new permits arrays. I can do:

    update new_tab set permits = new_tab.permits || ot.permits
      from old_tab ot
     where ot.name = new_tab.name and ot.permits != all(new_tab.permits);

but this only gets one permits value per name. Repeating this many times would eventually get all of them, but it seems there must be a more reliable way? [I don't care about the *order* of permits values in the array, since order did not exist in old_tab.] Just to be clear, I want to end up with:

    newschm3=# select * from new_tab order by name;
      name   | id  |             permits
    ---------+-----+----------------------------------
     baker   | 581 | {operator}
     lawless | 509 | {operator,originator}
     lcalvet | 622 | {originator}
     loomis  | 514 | {operator,originator}
     pig     | 614 | {operator,originator,supervisor}

-- George Young
Re: [SQL] how to convert relational column to array?
On Mon, 19 Dec 2005 09:54:49 -0700 Michael Fuhr <[EMAIL PROTECTED]> threw this fish to the penguins:

> On Mon, Dec 19, 2005 at 11:06:12AM -0500, george young wrote:
> >     create table new_tab(name text, id int, permits text[]);
> >
> >     -- I insert one row per name:
> >     insert into new_tab
> >         select distinct name, id, cast('{}' as text[]) from old_tab;
> >
> > Now I want to fold all the 'permits' values into the new permits arrays.
>
> In PostgreSQL 7.4 and later you can build an array from a select,
> so I think the following update should work (it did for me when I
> tested it):
>
>     UPDATE new_tab SET permits = array(
>         SELECT permits
>           FROM old_tab
>          WHERE old_tab.name = new_tab.name AND old_tab.id = new_tab.id
>     );

That's exactly what I needed. Works great.

Thanks,
George
Re: [SQL] PostgreSQL and uuid/guid
On Mon, 02 Jan 2006 14:26:14 +0100 Mario Splivalo <[EMAIL PROTECTED]> threw this fish to the penguins:

> While browsing the web I've seen that many people 'need' the ability to
> create uuid/guid values from within PostgreSQL. Most of them are
> switching from MSSQL, and they're missing the newid() function, which in
> MSSQL creates the uuid/guid value.
>
> Now I'm in need of such a function too. I'm finally putting together
> MSSQL-like-merge-replication for PostgreSQL, and guid/uuid values
> would come in more than handy here.
...
> Now, it should be possible to create a postgresql function (in python,
> for instance) which would call the uuidgen program, but that approach
> seems rather slow.

Since it looks like you are familiar with python, you could try a uuid generator in pure python -- here's one I played with a while ago (I don't know if it's still maintained, but there are others if you search comp.lang.python):

    http://www.alcyone.com/pyos/uid/

> I'd like to have a postgresql function written in C that would call
> uuid_generate
> (http://www.die.net/doc/linux/man/man3/uuid_generate.3.html).
> Considering what is said in the uuidgen manpage (concerning random and
> pseudorandom generated numbers) I feel that uuids generated this way are
> what I need.
>
> What is the general feeling concerning the uuid issue I explained above?
> I've never written a C postgreSQL function, and any help (or
> documentation pointers) would be greatly appreciated.
> If I posted this to the wrong mailing list, please point me to the
> correct one.

Your question is quite welcome here!

-- George Young
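If the pure-python route appeals, it can live inside the database too. A sketch, assuming a server (8.0+, for the $$ quoting) with plpythonu installed; the hex-shuffling here is only uuid-shaped randomness, not an RFC-compliant generator, so substitute a real uuid library for production use:

    CREATE OR REPLACE FUNCTION newid() RETURNS text AS $$
        # assemble 32 random hex digits in the familiar 8-4-4-4-12 layout
        import random
        h = ''.join([random.choice('0123456789abcdef') for i in range(32)])
        return '%s-%s-%s-%s-%s' % (h[0:8], h[8:12], h[12:16], h[16:20], h[20:32])
    $$ LANGUAGE plpythonu;

    SELECT newid();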
[SQL] treacherous subselect needs warning?
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
This query returns zero rows:

    newschm3=# select run_id from s_bake
               where opset_id not in (select opset_id from opset_steps);
     run_id
    --------
    (0 rows)

But, by my mistake, table opset_steps has no column "opset_id"! Surely it should return an error, or at least a warning, not just an empty rowset. "s_bake" *does* have an "opset_id" column, so that's what it uses; the "from opset_steps" is useless. I can understand it might be inappropriate to make such a query illegal, but wouldn't a warning be appropriate? It seems like postgres should know immediately that there is a useless "from" clause. Even trickier would be:

    select run_id from s_bake
     where opset_id in (select opset_id from opset_steps);

which would return all rows from s_bake IFF opset_steps has any rows! Eeek! I suppose the moral of the story is to ALWAYS, absolutely ALWAYS qualify a correlation name (table alias). Of course, what I meant in the original query was:

    select s.run_id from s_bake s
     where s.opset_id not in (select os.opset_id from old_opset_steps os);

Sigh. Am I missing something here?

-- George Young
Re: [SQL] treacherous subselect needs warning?
On Tue, 07 Feb 2006 12:45:53 -0500 Tom Lane <[EMAIL PROTECTED]> threw this fish to the penguins:

> george young writes:
> > This query returns zero rows:
> >
> >     newschm3=# select run_id from s_bake
> >                where opset_id not in (select opset_id from opset_steps);
> >      run_id
> >     --------
> >     (0 rows)
> >
> > But, by my mistake, table opset_steps has no column "opset_id"!
> > Surely it should return an error, or at least a warning, not just an
> > empty rowset.
>
> Access to upper-level variables from subqueries is (a) useful and (b)
> required by the SQL spec, so we are not going to start throwing warnings
> about it.

My original posting suggested a warning for the *useless* "from opset_steps" clause, since its presence is misleading. But I don't suppose the SQL spec allows warnings that are not explicitly in the spec, alas. Of course upper-level variables must in general be accessible from subqueries.

I was just trying to find a way to prevent other innocent users from wasting many hours of torment tracking down this subtle twist of SQL... How about a *documentation* suggestion that sub-queries can be very dangerous if one doesn't qualify all column references? Maybe with an example like the one that bit me?

-- George
[SQL] unique constraint instead of primary key? what disadvantage (ODBC usage)?
[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
I'm designing a completely new schema for my database. A major criterion is that it facilitate ad-hoc queries via MS Access, Excel and OpenOffice, presumably with ODBC.

My question regards the use of UNIQUE constraints instead of PRIMARY KEYs on some tables. Both result in an index on the specified tuple of fields, so I presume query performance shouldn't be much different.

Using UNIQUE constraints seems to let me better match the natural structure of my data. A 'run' contains a sequence of 'opsets'. Each opset contains a sequence of (a few) 'steps':

    run-foo
        opset-1
            step-1
            step-2
        opset-2
            step-1

So the 'steps' table is logically indexed by (run, opset_num, step_num). But some opsets are not in runs, and some steps are not in opsets, so I would have step.run be null in some cases, likewise step.opset_num. Null values mean I can't use these fields in a primary key, so I propose to use UNIQUE constraints instead.

What am I losing by not using PRIMARY KEYs? Will ODBC clients have difficulty dealing nicely with the database? Will the planner produce lousy query plans? Will Edgar Codd haunt my dreams?

-- George Young
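A sketch of the proposed table, with one caveat worth weighing: PostgreSQL, following the SQL standard, treats NULLs as distinct in unique indexes, so rows that coincide except for NULL key columns are all accepted.

    CREATE TABLE steps (
        run       text,              -- NULL: step not part of a run
        opset_num integer,           -- NULL: step not part of an opset
        step_num  integer NOT NULL,
        UNIQUE (run, opset_num, step_num)
    );

    -- both of these succeed; the UNIQUE constraint never compares them,
    -- because NULL is never equal to NULL:
    INSERT INTO steps (run, opset_num, step_num) VALUES (NULL, NULL, 1);
    INSERT INTO steps (run, opset_num, step_num) VALUES (NULL, NULL, 1);

So the constraint is weaker than a primary key precisely on the rows that motivated it.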
Re: [SQL] unique constraint instead of primary key? what
On Wed, 08 Feb 2006 18:34:22 -0800 Ken Hill <[EMAIL PROTECTED]> threw this fish to the penguins:

> On Wed, 2006-02-08 at 21:04 -0500, george young wrote:
> > My question regards the use of UNIQUE constraints instead of PRIMARY
> > KEYs on some tables. [...]
>
> I think I can give you some insights about MS Access to help you. In MS
> Access, you can specify a column as a "primary key", which basically
> means the column is indexed and must contain unique values (also, nulls
> are not allowed). I have run into problems depending on columns being
> "primary key" in MS Access in db apps that receive data in batch file
> uploads from other sources (e.g., uploading 1,000+ records into a
> table).
>
> It sounds like your requirement to use MS Access for ad-hoc queries
> means that you will have some users that want to access the database
> with MS Access as a "front-end" client tool. If that is the situation,
> then you don't need to worry about the structure of the table, as MS
> Access relies on ODBC for this. You may also want to communicate to the
> end users that MS Access is not a client-server tool; in other words,
> all of the records are transferred from the server to the client's box
> and then the query is executed.
>
> I hope that helps.

Ouch! A good portion of queries will access my 4M-row parameter table in joins with other tables. It sounds like MS Access is not workable. Thanks for the info.

-- George
Re: [SQL] unique constraint instead of primary key? what
On 9 Feb 2006 08:22:59 -0800 "BigSmoke" <[EMAIL PROTECTED]> threw this fish to the penguins:

> If my tables have one or more UNIQUE constraints/indices, I still add an
> "id SERIAL PRIMARY KEY" field to most of my tables. This makes
> referencing easier and faster. It also improves consistency, which is
> never a bad thing in my opinion.

In this schema overhaul I'm trying to *eliminate* arbitrary "id" columns like this. They may sometimes improve performance, but they certainly obscure the meaning of the data -- naive [read-only] users accessing through Excel are confused by this sort of thing.

> As far as I know, though, PRIMARY KEY does the same thing as UNIQUE NOT
> NULL in PostgreSQL.
>
> The reason that PRIMARY KEY can't be NULL and _has to be_ UNIQUE is
> that it is the primary means of identifying a given record in a table.
> If you don't have a PRIMARY KEY that is UNIQUE and NOT NULL, how are you
> going to identify (or reference) individual records? PostgreSQL won't
> allow you to reference more than one row for what I came to believe are
> very good reasons.

Actually, I don't really see a problem here. E.g. when I want data from a step that has no parent run and no parent opset, I could say:

    select * from steps
     where run is null and opset is null and step='mystep';

I don't understand what you mean by "PostgreSQL won't allow you to reference more than one row". If the above query returns 10 rows, what's wrong with that? Nothing *requires* a PRIMARY KEY at all, anyway.

-- puzzled... George
Re: [SQL] Change date format through an environmental variable?
Try the PGDATESTYLE environment variable, e.g. PGDATESTYLE='SQL, MDY' in the client's environment. It works in 7.4 and 8.1, though it is claimed to be deprecated.

-- George Young

On Wed, 01 Mar 2006 12:32:26 -0500 Mark Fenbers <[EMAIL PROTECTED]> threw this fish to the penguins:

> I want to get Pg (v7.4.7) to output a date field in a different format
> than yyyy-mm-dd through the use of an environment variable (because I
> have no access to the SQL). Is this possible? I know about the DATESTYLE
> variable, but that seems to work only within a query transaction, and
> has no effect if I try to set it as an envvar.
>
> Mark
Re: [SQL] Copying a row within table
Assuming the sequence behind foo's primary key is named foo_seq, you could do:

    -- you could also select multiple rows here, e.g. foo_id > 10, if desired:
    create temp table foo_tmp as select * from foo where foo_id = 2;
    -- give the copied row a fresh key from the *shared* sequence:
    update foo_tmp set foo_id = nextval('foo_seq');
    insert into foo select * from foo_tmp;
    drop table foo_tmp;

If there's any chance of concurrent update/insert/deletes to foo, you should wrap this in a transaction (begin; ...; commit;).

-- George Young

On Tue, 14 Mar 2006 09:19:49 +0200 Aarni Ruuhimäki <[EMAIL PROTECTED]> threw this fish to the penguins:

> testing=# INSERT INTO foo (SELECT * FROM foo WHERE foo_id = 2);
> ERROR:  duplicate key violates unique constraint "foo_pkey"
>
> testing=# INSERT INTO foo (foo_1, foo_2, foo_3 ...)
>     (SELECT foo_1, foo_2, foo_3 ... FROM message_table WHERE foo_id = 10);
> INSERT 717286 1
>
> Is there a fast way to copy all but not the PK column to a new row within
> the same table, so that the new foo_id gets its value from the sequence?
>
> TIA and BR,
>
> Aarni
> --
> This is a bugfree broadcast to you
> from **Kmail** on a **Fedora Core** linux system
Re: [SQL] Custom type
On Wed, 22 Mar 2006 13:07:33 -0800 Bryce Nesbitt <[EMAIL PROTECTED]> threw this fish to the penguins:

> Terry Lee Tucker wrote:
> > rnd=# \h comment
> > Command:     COMMENT
> > Description: define or change the comment of an object
> > ...I believe this is what you need.
>
> Cool! That's a great feature. Though it would be even nicer if the
> comment showed when you "\d" a table:
>
>     stage=# comment on table db_version is 'Managed in nautilus.xml';
>     stage=# \d db_version
>              Table "public.db_version"
>      Column  |         Type          | Modifiers
>     ---------+-----------------------+-----------
>      version | character varying(64) |
>
>     stage=# \dd db_version
>          Object descriptions
>      Schema |    Name    | Object |       Description
>     --------+------------+--------+-------------------------
>      public | db_version | table  | Managed in nautilus.xml

Hmm, it does (in 8.1.0) if you just say "\d+" without an argument:

    newschm3=# comment on table fffg is 'the rain';
    newschm3=# \d+
                          List of relations
     Schema |      Name       | Type  |  Owner   | Description
    --------+-----------------+-------+----------+-------------
     public | fffg            | table | postgres | the rain
     public | group_run_limit | table | postgres |
     public | hold_log        | table | postgres |
     ...

But not if you specify the object:

    newschm3=# \d+ fffg
               Table "public.fffg"
     Column |  Type   | Modifiers | Description
    --------+---------+-----------+-------------
     t      | text    |           |
     i      | integer |           |

This seems a bit like a bug to me...

-- George Young
Re: [SQL] Expressing a result set as an array (and vice versa)?
On Thu, 23 Mar 2006 11:44:32 -0800 Don Maier <[EMAIL PROTECTED]> threw this fish to the penguins:

> Is it possible to construct an array from an appropriate select
> expression that generates a result set of unknown cardinality?
> To focus on the simple case: Is it possible to construct a one-
> dimensional array from a select of a single column in a table with an
> unknown number of rows?

    select array(select some_int_field from my_table where something);

produces an array of integers. No user-defined function is required.

> Conversely, is it possible to construct a (single column) result set
> from a select expression on a one-dimensional array with an unknown
> number of elements?

Not so easy without a custom function.

> Thanks for any hints!
>
> Regards,
> Don Maier

-- George Young
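For the reverse direction there is an 8.x-era idiom that needs no user function: a set-returning expression in the select list, indexing the array with generate_series (later releases, 8.4 onward, wrap this up as unnest()). A sketch with throwaway names:

    CREATE TEMP TABLE arrtest (id int, vals int[]);
    INSERT INTO arrtest VALUES (1, '{10,20,30}');

    SELECT id,
           vals[generate_series(1, array_upper(vals, 1))] AS val
      FROM arrtest;
    -- yields (1,10), (1,20), (1,30)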
[SQL] trigger to enforce FK with nulls?
[PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]
I'm starting to use lots of foreign key constraints to keep my data clean. In one case, however, I need to allow null values for the key. E.g.:

    create table opset_steps(opset text, step text, step_num int);
    create table steps(run text, step text, opset text,
                       usr text, step_num int);

The constraint on steps should be: steps.(opset,step) must be found in opset_steps.(opset,step) UNLESS steps.opset is null. I started to put dummy rows in opset_steps with opset='NO-OPSET' for each step with no real opset, but this looks really kludgy.

Since foreign keys per se can't do this, I presume the way is to use triggers, though I have not used triggers before. Any suggestions, caveats? opset_steps has only 4400 rows, and inserts to steps are not very frequent, so I don't *think* performance should be a problem. No updates are ever made to steps.opset and steps.step, or to opset_steps.(opset,step) [though updates are often made to *other* fields of steps].

-- George Young
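Before reaching for triggers, it may be worth checking whether a plain two-column foreign key already does this: under the default MATCH SIMPLE semantics, a multicolumn FK is simply not enforced when any of its columns is NULL. A sketch (opset_steps needs a unique constraint on the referenced columns; the constraint names are invented):

    ALTER TABLE opset_steps ADD UNIQUE (opset, step);

    ALTER TABLE steps
      ADD CONSTRAINT steps_opset_step_fk
      FOREIGN KEY (opset, step) REFERENCES opset_steps (opset, step);

    -- enforced: rows where opset and step are both non-null must match
    -- skipped:  any row where opset IS NULL (or step IS NULL)

The one gap versus the stated rule: a row with a non-null opset but a NULL step would also escape checking; if that must be forbidden, a CHECK constraint like CHECK (opset IS NULL OR step IS NOT NULL) closes it.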
Re: [SQL] find all tables with a specific column name?
I've found it useful to explore the information_schema schema by doing:

    set search_path = information_schema;  -- let's look at just the system views
    \d                                     -- show me all the "tables"
    ...
    (40 rows)

then

    select * from some_likely_looking_table limit 20;

In this case, I quickly found a table called "columns", so you can do:

    select table_name from information_schema.columns
     where column_name = 'last_modified';

Of course you could be a wuss and actually read the documentation ;-)
http://www.postgresql.org/docs/8.1/interactive/infoschema-columns.html

-- George Young

On Fri, 21 Apr 2006 09:29:33 -0700 (PDT) Jeff Frost <[EMAIL PROTECTED]> wrote:

> Is there a reasonable way to extract a list of all tables which contain a
> specific column name from the system views on 8.1?
>
> For instance, I might want to enumerate all tables with a column named
> last_modified.
>
> --
> Jeff Frost, Owner   <[EMAIL PROTECTED]>
> Frost Consulting, LLC    http://www.frostconsultingllc.com/
> Phone: 650-780-7908    FAX: 650-649-1954
Re: [SQL] viewing the description of tables from python DB-API
On Tue, 1 Aug 2006 15:30:48 -0700 "Daniel Joo" <[EMAIL PROTECTED]> wrote:

> Is there a way to view the list of all tables from python (or any other
> language, for that matter) via DB-API? What I'm looking for is a command
> similar to the meta-command '\d' that works with the psql client.

The cursor object has a "description" data member. Look at the DB-API spec
http://www.python.org/dev/peps/pep-0249/ under "Cursor Objects".

    Python 2.3.3 (#1, Jan  3 2004, 07:17:11)
    [GCC 3.3.2] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import pgdb
    >>> db = pgdb.connect(database='pigrestore', host='ivy')
    >>> cur = db.cursor()
    >>> cur.execute('select * from machine')
    >>> print cur.description
    [('machine_name', 'text', -1, -1, None, None, None),
     ('machine_id', 'int2', -1, 2, None, None, None),
     ('area', 'text', -1, -1, None, None, None),
     ('text', 'text', -1, -1, None, None, None)]
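cursor.description answers "what columns did this query return"; for the list of tables themselves (what \d shows), the usual route is to query the catalogs through the same DB-API cursor, via cursor.execute() and fetchall(), with something like this sketch:

    SELECT schemaname, tablename
      FROM pg_tables
     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY schemaname, tablename;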