Re: [SQL] [PHP] [ADMIN] Data insert
On Sun, Aug 21, 2005 at 06:35:22AM +0100, Aldor wrote:
> if you want to insert biiig data volumes try either using COPY instead
> of INSERT - it will run much much faster

And if for some reason you have to stick with inserts, group them into
transactions; it will perform much better than individual transactions.
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    512-569-9461

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
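A minimal sketch of both suggestions (COPY for bulk loads, and batching INSERTs into one transaction when COPY is not an option); the table and file names here are invented for illustration only:

-- Hypothetical table used only for this example.
CREATE TABLE measurements (id integer, reading numeric, taken_at timestamp);

-- Bulk load with COPY (or \copy from psql for a client-side file):
COPY measurements FROM '/tmp/measurements.dat';

-- If you are stuck with INSERTs, wrap many of them in one transaction so
-- the commit overhead is paid once instead of once per row:
BEGIN;
INSERT INTO measurements VALUES (1, 42.0, now());
INSERT INTO measurements VALUES (2, 43.1, now());
-- ...thousands more...
COMMIT;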
Re: [SQL] insert only if conditions are met?
SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND work_date = '8-15-2005'::date will give you the hours. So... INSERT INTO table SELECT blah WHERE (SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND work_date = '8-15-2005'::date) != 8 Should do what you want. On Wed, Aug 31, 2005 at 12:49:14PM -0400, Henry Ortega wrote: > Ok. Here's TABLE A > > emp date hours type > JSMITH 08-15-2005 5 WORK > JSMITH 08-15-2005 3 WORK > JSMITH 08-25-2005 6 WORK > > I want to insert the ff: > 1.) JSMITH 08-15-2005 8 VAC > 2.) DOE 08-16-2005 8 VAC > > #1 should fail because there is already 8 hours entered as being > Worked on 08-15-2005 (same date). > > Any suggestions? > > > > On 8/31/05, Ragnar Hafsta? <[EMAIL PROTECTED]> wrote: > > > > On Wed, 2005-08-31 at 11:49 -0400, Henry Ortega wrote: > > > What I am trying to do is > > > * Insert a record for EMPLOYEE A to TABLE A > > > IF > > > the sum of the hours worked by EMPLOYEE A on TABLE A > > > is not equal to N > > > > > > Is this possible? > > > > Sure, given a suitable schema > > > > It is not clear to me, if the hours worked are > > to be found in the same table you want to insert > > into, or not. > > > > gnari > > > > > > > > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Softwarehttp://pervasive.com512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
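Spelling that suggestion out against a hypothetical timesheet table (the table and column names are guesses based on the thread, not the poster's real schema; the coalesce() is added so that an employee with no rows at all also passes the check):

-- assumed schema: timesheet(emp text, work_date date, hours integer, type text)
INSERT INTO timesheet (emp, work_date, hours, type)
SELECT 'JSMITH', DATE '2005-08-15', 8, 'VAC'
WHERE (SELECT coalesce(sum(hours), 0)
         FROM timesheet
        WHERE emp = 'JSMITH'
          AND work_date = DATE '2005-08-15') <> 8;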
Re: [SQL] Help with multistage query
On Wed, Sep 07, 2005 at 05:37:47PM -0400, Matt Emmerton wrote: > > - Original Message - > From: Russell Simpkins > To: pgsql-sql@postgresql.org > Sent: Wednesday, September 07, 2005 4:05 PM > Subject: Re: [SQL] Help with multistage query > > I have a perl script that issues a series of SQL statements to perform > some queries. The script works, but I believe there must be a more elegant > way to do this. > > > > The simplified queries look like this: > > > > SELECT id FROM t1 WHERE condition1; ;returns about 2k records which are > stored in @idarray > > > > foreach $id (@idarray) { > >SELECT x FROM t2 WHERE id=$id; ; each select returns about 100 > records which are saved in a perl variable > > } > > how about > select t1.id from t1, t2 where t1.id = t2.id and t2.id = x > > or more correctly, based on the OP's example: > > select t2.x from t1, t2 where t1.id = t2.id and t1.id = Actually, I think you want AND t2.x , not t1.id. BTW, I recommend not using id as a bareword field name. Very easy to get confused when you start joining a bunch of stuff together. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Why doesn't the SERIAL data type automatically have a UNIQUE constraint?
On Tue, Sep 27, 2005 at 10:33:14AM -0500, Scott Marlowe wrote: > On Mon, 2005-09-26 at 20:03, Tom Lane wrote: > > Ferindo Middleton Jr <[EMAIL PROTECTED]> writes: > > > Is there some reason why the SERIAL data type doesn't automatically have > > > a UNIQUE CONSTRAINT. > > > > It used to, and then we decoupled it. I don't think "I have no use for > > one without the other" translates to an argument that no one has a use > > for it ... > > I have to admit, right after the change was made, I was of the opinion > that no one would ever need that. Then, a few months later, it was > exactly what I needed for some project... :) Arguably it would have been better to make the default case add either UNIQUE or PRIMARY KEY with a way to over-ride. If newbies are getting burned maybe it would be useful to toss a NOTICE or maybe even WARNING when a serial is created without a unique constraint of some kind? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
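For anyone caught out by this, the fix is simply to declare the constraint yourself; a small sketch (the table and constraint names are examples, not from the thread):

-- serial gives you a sequence-backed default, but no index by itself:
CREATE TABLE example (
    id   serial PRIMARY KEY,   -- or "serial UNIQUE" if it is not the key
    note text
);

-- For a table that already exists without one:
ALTER TABLE example ADD CONSTRAINT example_id_key UNIQUE (id);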
Re: [SQL] Why doesn't the SERIAL data type automatically have a UNIQUE constraint?
On Tue, Oct 04, 2005 at 07:50:28PM -0400, Ferindo Middleton Jr wrote: > Based on the feedback I received after I made that original post, it > seemed most people don't use SERIAL with a unique constraint or primary > key and I was blasted for making such a suggestion. I'm sorry... It I don't think either assertion is true. I'd bet most of the developers actually do normally use an index on a serial, since it's normally used as a PK. And while people can be a bit terse with their replies, I wouldn't say you were blasted. :) -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [DOCS] [SQL] Update timestamp on update
On Wed, Oct 12, 2005 at 10:52:04PM -0400, Tom Lane wrote: > Jeff Williams <[EMAIL PROTECTED]> writes: > > Thanks. Triggers was my first thought, but chapter 35 on Triggers didn't > > really indicate a way I could do this easily and scared me with a lot of > > c code. > > Yeah. This is a documentation issue that's bothered me for awhile. > The problem is that we treat the PL languages as add-ons and therefore > the documentation of the "core" system shouldn't rely on them ... but > that leaves us presenting C-code triggers as the only examples in > chapter 35. There is a paragraph in there suggesting you go look at > the PL languages first, but obviously it's not getting the job done. Chapter 35 is plpgsql.. do you mean chapter 32.4? > Anybody have a better idea? What about a See Also section ala man pages that links to trigger info for other languages? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
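For reference, the kind of trigger being asked about needs only a few lines of plpgsql rather than C; this is a generic sketch, with placeholder table and column names rather than anything from the thread:

CREATE FUNCTION touch_modified() RETURNS trigger AS $$
BEGIN
    NEW.modified := now();   -- stamp the row being written
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_touch_modified
    BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE touch_modified();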
Re: [SQL] How to speed up the database query?
Have you taken a look at http://www.postgresql.org/docs/8.0/interactive/performance-tips.html ? On Thu, Oct 27, 2005 at 03:03:36PM +0800, Abdul Wahab Dahalan wrote: > Hi everyone! > > I'm looking for solution to speed up the database query, means that to get > resultset as quicker as we can. > > For example if I've 700 records in the table it will take longer time > compared if I've only 20 records. How do we speed up the query?. Any query > technique that can be applied?. > > Thus wild card query like : select * from tableA will cause query time > increased compare to say select a,b from tableA. > > any help, prettymuch appreciated. > > > ---(end of broadcast)--- > TIP 5: don't forget to increase your free space map settings > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
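The first tool those performance tips point at is EXPLAIN ANALYZE; a typical first pass looks something like the following (table, column and index names are only examples):

-- See what the planner actually does with the slow query:
EXPLAIN ANALYZE SELECT a, b FROM tableA WHERE a = 42;

-- A sequential scan on a large table filtered by one column often just
-- needs an index, followed by fresh statistics:
CREATE INDEX tablea_a_idx ON tableA (a);
ANALYZE tableA;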
Re: [SQL] why vacuum
On Thu, Oct 27, 2005 at 02:21:15PM +0100, Richard Huxton wrote:
> So - if your statement contains something non-deterministic that isn't
> catered for in Mysql's code then it will break.
>
> At it's simplest - if I write a function my_random() and then do:
> UPDATE foo SET a=1 WHERE b < my_random();
> IF my_random() returns different results on different machines, then the
> replication will be broken. See the manual entry below:
> http://dev.mysql.com/doc/refman/5.0/en/replication-features.html
>
> That's not to say the system is worthless - it works fine for many
> people. But it does have limitations.

And you can easily have multi-master synchronous replication in
PostgreSQL using the same idea; just see pgCluster.
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf    cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
Re: [SQL] [GENERAL] A Not Join
On Tue, Nov 01, 2005 at 04:27:01PM +0200, L van der Walt wrote: > I have three table: > Users - Contains username, ID etc... > Permissions - A permission name and ID > Link up table - The user.id and permission.id > > If a user.id and a permission.id row exists in the linkuptable the user > have that permission granted. > > With the statement below I can see the permissions a user have. > > SELECT users.username, permissions.name > FROM users INNER JOIN linkuptable > ON (users.id = linkuptable.userid) > INNER JOIN permissions > ON (permissions.id = linkuptable.permissionid) > WHERE users.username = 'DummyUser' > > How do I see the permissions that user DON'T have with a fast SQL statement. > > Thus, a NOT the statement for the above SQL statement LEFT JOIN permissions ON (...) WHERE permissions.id IS NULL You might have to do the NULL check in a HAVING clause instead... try it. BTW, this is probably better asked on pgsql-sql. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
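Putting the whole statement together for the schema described in the question (a sketch of the LEFT JOIN / IS NULL idea using the tables named above):

-- Permissions that 'DummyUser' does NOT have:
SELECT p.name
  FROM permissions p
  LEFT JOIN (SELECT l.permissionid
               FROM linkuptable l
               JOIN users u ON u.id = l.userid
              WHERE u.username = 'DummyUser') granted
         ON granted.permissionid = p.id
 WHERE granted.permissionid IS NULL;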
Re: [SQL] Nested Table in PostgreSQL or some alternative Variants
What do you mean by 'nested table'? Maybe arrays will do what you want?

Typically (and this applies to other databases as well), this is done
using two tables and referential integrity, i.e.:

CREATE TABLE purchase_order (
    po_id        serial  CONSTRAINT purchase_order__po_id PRIMARY KEY
  , customer_id  int     CONSTRAINT purchase_order__customer_RI REFERENCES customer(id)
  , more fields...
);

CREATE TABLE po_lines (
    po_id        int       CONSTRAINT po_lines__po_id_RI REFERENCES purchase_order(po_id)
  , line_number  smallint  NOT NULL
  , ...
  , CONSTRAINT po_lines__po_id_line_number PRIMARY KEY (po_id, line_number)
);

On Mon, Oct 31, 2005 at 02:22:05PM +0100, Thomas Zuberbuehler wrote:
> Hello there
>
> I've a problem. I can't find some information about nested tables in
> PostgreSQL. Is this Features possible in pgsql or not?
>
> * When yes, how i can use and create nested tables with pgsql?
> * When no, which alternative are there (for same problem definition)?
>
> Thank you for help.
> Greetings from Zurich, Switzerland.
> Thomas Zuberbuehler
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf    cell: 512-569-9461

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
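A quick usage sketch of that two-table design, showing how an order and its "nested" lines are read back together:

-- Read one order together with its line items:
SELECT po.po_id, po.customer_id, l.line_number
  FROM purchase_order po
  JOIN po_lines l USING (po_id)
 WHERE po.po_id = 1
 ORDER BY l.line_number;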
Re: [SQL] Does VACUUM reorder tables on clustered indices
On Sun, Dec 18, 2005 at 07:01:39PM -0300, Alvaro Herrera wrote:
> Martin Marques escribió:
> > On Sun, 18 Dec 2005, frank church wrote:
> >
> > >Does VACUUMing reorder tables on clustered indices or is it only the
> > >CLUSTER
> > >command that can do that?
> >
> > Cluster does that. Vacuum only cleans dead tuples from the tables.
>
> Note that while reordering, CLUSTER also gets rid of dead tuples, so if
> you cluster you don't need to vacuum.

It also does a REINDEX...
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf    cell: 512-569-9461

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [SQL] Does VACUUM reorder tables on clustered indices
On Wed, Dec 21, 2005 at 12:34:12AM +0100, [EMAIL PROTECTED] wrote:
> Hi,
>
> Utilize CLUSTER; (after vacuum) to reorder the data.

Why would you vacuum when cluster is just going to wipe out the dead
tuples anyway?

> >>Note that while reordering, CLUSTER also gets rid of dead tuples, so if
> >>you cluster you don't need to vacuum.
> >
> >It also does a REINDEX...
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software    http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf    cell: 512-569-9461

---(end of broadcast)---
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
Re: [SQL] regarding grant option
Though, it is pretty easy to do something like: select 'GRANT ALL ON ' || table_name || ' TO public;' from information_schema.tables where table_schema='blah'; You can feed the output of that to psql, ei: psql -qc "select 'GRANT ALL ON ' || table_name || ' TO public;' from information_schema.tables where table_schema='blah'" | psql On Wed, Mar 01, 2006 at 12:00:16PM -0300, Alvaro Herrera wrote: > AKHILESH GUPTA wrote: > > thank you very much sir for your valuable suggestion, > > but i am talking about direct database query...! > > There is none that can help you here, short of making a function in > PL/pgSQL or other language ... > > > On 3/1/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote: > > > > > > AKHILESH GUPTA wrote: > > > > > > > here i have to grant permissions to that user individually for each and > > > > every table by using: > > > > :->> grant ALL ON to ; > > > > GRANT > > > > and all the permissions are granted to that user for that particular > > > table. > > > > > > Yes. If you are annoyed by having to type too many commands, you can > > > write a little shell script to do it for you. > > -- > Alvaro Herrerahttp://www.CommandPrompt.com/ > The PostgreSQL Company - Command Prompt, Inc. > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Replication - state of the art?
You could also use WAL shipping and some PITR trickery to keep a 'warm standby' database up to date. How far behind it falls is up to you, since you'll be periodically syncing the current WAL file to the backup machine. Do the sync once a minute, and at most you lose 60 seconds of data. On Wed, Mar 01, 2006 at 02:49:18PM -0500, Andrew Sullivan wrote: > On Wed, Mar 01, 2006 at 11:28:06AM -0800, Bryce Nesbitt wrote: > > Actually let me loosen that a bit: we don't need two phase commit. We > > can loose the most recent transaction, or even the last few seconds of > > transactions. What we can't survive is -- on the day of the emergency > > -- a long and complicated DB rebuild with mistakes and hard-to-debug > > data issues. > > Then I suggest you use Slony-I. While it is not plug and play, the > thing it _is_ designed to handle reasonably well is failover and > (better) switchover. Most systems plan to solve that piece of > functionality later, with a script or something, at which point it is > apparent that setting up failover or swichover to be anything > approaching safe is actually very tricky. (Log shipping is probably > not in this category, but AFAIK the promote-to-live support for a > standby database copy is still not all built by anyone. If you like > rolling your own, however, it might be your answer.) > > > There's no fire creating demand for replication, so there is little time > > budget. > > So is there a sort of padded, no-sharp-corners, playroom that gets us > > 90% of the way there? > > The "no budget" remark here is what makes me strike CMD's Mammoth > Replicator off the list. But I'm sure their administration tools are > far sweeter than the admittedly hackish ones that Slony currently > delivers out of the box. > > > nightly) into something more reasonable (like 500 milliseconds). But > > risk -- of data corruption -- > > and time --too much-- will can the project. > > Another big reason to use a live-standby system like Slony is that > once you have the extra database online, you suddenly think of all > sorts of nifty queries you can move there without destroying your > production performance. Be careful not to get addicted, is all. > > A > > -- > Andrew Sullivan | [EMAIL PROTECTED] > Information security isn't a technological problem. It's an economics > problem. > --Bruce Schneier > > -----------(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Problem with query on history table
Probably the easiest way is to switch to using table partitioning and switch to using start_timestamp and end_timestamp, so that when you modify a row you update the old one setting end_timestamp to now() and insert the new row (all within one transaction). There are other ways to do it, but they'll probably be much slower. I don't think they require a lot of CASE statements though. Show us what you were planning on doing and maybe I'll have more ideas. On Mon, Feb 27, 2006 at 08:19:30AM +0100, Andreas Joseph Krogh wrote: > Hi all! > > I don't know if there's a standard solution to the kind of problem I'm trying > to solve, but I will appreciate your thougts(and maybe solution:) on this > problem of mine: > > I have 2 tables: hist and curr which hold numbers for "history-data" and > "current-data" respectivly. Here is a simplified version of the schema: > > CREATE TABLE curr ( > id integer NOT NULL, > etc integer NOT NULL, > created timestamp without time zone NOT NULL, > modified timestamp without time zone > ); > > CREATE TABLE hist ( > id serial NOT NULL, > curr_id integer NOT NULL REFERENCES curr(id), > etc integer NOT NULL, > modified timestamp without time zone NOT NULL > ); > > andreak=# SELECT * from curr; > id | etc | created | modified > +-+-+- > 1 | 5 | 2006-02-01 00:00:00 | > 2 | 10 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 > 3 | 10 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 > (3 rows) > > andreak=# SELECT * from hist; > id | curr_id | etc | modified > ++-+- > 1 | 3 | 30 | 2006-01-16 00:00:00 > 2 | 3 | 20 | 2006-01-25 00:00:00 > 3 | 2 | 20 | 2006-01-26 00:00:00 > (3 rows) > > Now - I would like to get a report on what the "ETC" is on a given entry in > "curr" in a given "point in time". Let me explain. If I want status for 17. > jan.(17.01.2006) I would like to get these numbers out from the query: > > id | created |curr_modified|hist_modified| etc > +-+-+-+- > 3 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 | 2006-01-16 00:00:00 | 30 > 2 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 | 2006-01-26 00:00:00 | 20 > 1 | 2006-02-01 00:00:00 | | | 5 > > > That is; If the entry is modified after it's created, a snapshot of the "old > version" is copied to table "hist" with the hist.modified field set to the > "modified-timestamp". So there will exist several entries in "hist" for each > time an entry in "curr" is modified. > > If I want status for the 27. jan. I would like the query to return the > following rows: > > id | created |curr_modified|hist_modified| etc > +-+-+-+- > 3 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 | 2006-01-25 00:00:00 | 10 > 2 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 | 2006-01-26 00:00:00 | 10 > 1 | 2006-02-01 00:00:00 | | | 5 > > select curr.id, curr.created, curr.modified as curr_modified, hist.modified > as > hist_modified, coalesce(hist.etc, curr.etc) as etc FROM curr LEFT OUTER JOIN > hist ON(curr.id = hist.curr_id) WHERE ... > > I'm really stuck here. It seems to me that I need a lot of > CASE...WHEN...ELSE.. statements in the query, but is there an easier way? > > -- > Andreas Joseph Krogh <[EMAIL PROTECTED]> > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
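To make the start/end-timestamp suggestion concrete, here is a small sketch (table and column names are illustrative, not the poster's schema); partitioning aside, the "as of a point in time" report then becomes a single range predicate instead of a pile of CASE expressions:

CREATE TABLE curr_hist (
    curr_id   integer   NOT NULL,
    etc       integer   NOT NULL,
    start_ts  timestamp NOT NULL,
    end_ts    timestamp            -- NULL means "still the current version"
);

-- When a row is modified (all in one transaction):
--   UPDATE curr_hist SET end_ts = now() WHERE curr_id = :id AND end_ts IS NULL;
--   INSERT INTO curr_hist (curr_id, etc, start_ts) VALUES (:id, :new_etc, now());

-- "What was the ETC on 17 Jan 2006?"
SELECT curr_id, etc
  FROM curr_hist
 WHERE start_ts <= '2006-01-17'
   AND (end_ts > '2006-01-17' OR end_ts IS NULL);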
Re: [SQL] Expressing a result set as an array (and vice versa)?
On Sat, Mar 25, 2006 at 12:17:08AM +0100, PFC wrote: > > > CREATE OR REPLACE FUNCTION foreach( liste INTEGER[] ) RETURNS SETOF > INTEGER AS $$ > DECLARE > i INTEGER; > BEGIN > FOR i IN 1..icount(liste) LOOP > RETURN NEXT liste[i]; > END LOOP; > END; > $$ LANGUAGE plpgsql; Seems like this should really exist in the backend... > CREATE AGGREGATE array_accum ( > sfunc = array_append, > basetype = anyelement, > stype = anyarray, > initcond = '{}' > ); > > SELECT array_accum( DISTINCT list_id ) FROM bookmarks; > array_accum > --- > {1,2,3,4,5,7} Couldn't you just use array()? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
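For the record, the ARRAY() constructor mentioned at the end does the same job as the custom aggregate for this case:

-- Roughly equivalent to array_accum(DISTINCT list_id) above:
SELECT ARRAY(SELECT DISTINCT list_id FROM bookmarks ORDER BY list_id);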
Re: [SQL] SQL Query Newbie Help
On Fri, Mar 24, 2006 at 02:29:10PM -0800, Stephan Szabo wrote: > > On Fri, 24 Mar 2006, Julie Robinson wrote: > > > This works, but is there a better solution? > > > > select * > > from quality_control_reset T > > where date = ( > > select max(date) > > from quality_control_reset > > where qualitycontrolrange = T.qualitycontrolrange); > > If you can use PostgreSQL extensions (and don't care that you might not > get two rows if two ids had the same date equaling the max date for a > given range), maybe something like: > > select distinct on (qualitycontrolrange) id, date, qualitycontrolrange > from quality_control_reset order by qualitycontrolrange,date desc; > > > Otherwise, you might see how the above compares in plan to something like > (not really tested): > > select T.* from quality_control_reset T inner join > (select qualitycontrolrange, max(date) as date from quality_control_reset > group by qualitycontrolrange) T2 > on (T.qualitycontrolrange = T2.qualitycontrolrange and T.date=T2.date); BTW, I believe the new row operator fixes in 8.2 make it possible to use them to do this kind of thing as well... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Question about One to Many relationships
On Fri, Mar 24, 2006 at 06:29:25PM +0100, PFC wrote: > > > >>And I want to link the band to the album, but, if the album is a > >>compilation it'll be linked to multiple band.ids, so i can't just add > >>a column like: > > For a compilation, you should link a band to a track, not an album. > This opens another can of worms... > > I would use the following tables : BTW, if you're going to be writing code to manage stuff like this, you should absolutely check out the source for http://musicbrainz.org/, which uses PostgreSQL as it's backend. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Find min and max values across two columns?
On Fri, Mar 24, 2006 at 04:00:35PM -0500, Tom Lane wrote: > Amos Hayes <[EMAIL PROTECTED]> writes: > > I'm trying to build a query that among other things, returns the > > minimum and maximum values contained in either of two columns. > > I think you might be looking for > > select greatest(max(columnA), max(columnB)) from tab; > select least(min(columnA), min(columnB)) from tab; > > greatest/least are relatively new but you can roll your own in > older PG releases. And if you care about performance you might also try: SELECT max(greatest(column_a, column_b) ... SELECT min(least(column_a, column_b) ... There may be a difference in performance between the two. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] unique names in variables and columns in plsql functions
On Mon, Mar 27, 2006 at 04:33:55PM +0200, Wiebe Cazemier wrote: > Hi, > > In a plpgsl function, consider the following excerpt: > > DECLARE > provider_id INTEGER; > BEGIN > provider_id := (SELECT provider_id FROM investment_products WHERE id = > my_new.investment_product_id); > END; > > After a lot of trouble, I found out this line doesn't work correctly > with the variable name as it is. It doesn't give an error or anything, > it just retrieves some wrong value (probably NULL). When I change the > variable name to anything other than "provider_id", it works OK. > > I was somewhat surprised to discover this. Can't Postgres determine that > the provider_id in the SELECT statement is not the same one as the variable? Sadly, overloading variable names between plpgsql and SQL is *highly* problematic. Because of this I *always* prefix plpgsql variables with something, such as p_ for parameters and v_ for general variables. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
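The convention described above looks like this in practice, using the names from the question (a sketch, assuming investment_products(id, provider_id) as implied by the thread):

CREATE FUNCTION get_provider(p_investment_product_id integer) RETURNS integer AS $$
DECLARE
    v_provider_id integer;   -- v_ prefix: no clash with the column provider_id
BEGIN
    SELECT provider_id INTO v_provider_id
      FROM investment_products
     WHERE id = p_investment_product_id;
    RETURN v_provider_id;
END;
$$ LANGUAGE plpgsql;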
Re: [SQL] unique names in variables and columns in plsql functions
On Mon, Mar 27, 2006 at 10:02:24AM -0500, Tom Lane wrote: > Wiebe Cazemier <[EMAIL PROTECTED]> writes: > > DECLARE > > provider_id INTEGER; > > BEGIN > > provider_id := (SELECT provider_id FROM investment_products WHERE id = > > my_new.investment_product_id); > > END; > > > After a lot of trouble, I found out this line doesn't work correctly > > with the variable name as it is. It doesn't give an error or anything, > > it just retrieves some wrong value (probably NULL). > > It'll retrieve whatever the current value of the plpgsql variable > provider_id is. plpgsql always assumes that ambiguous names refer > to its variables (indeed, it isn't even directly aware that there's > any possible ambiguity here). > > > I was somewhat surprised to discover this. Can't Postgres determine that > > the provider_id in the SELECT statement is not the same one as the variable? > > How and why would it determine that? In general it's perfectly normal > to use plpgsql variable values in SQL commands. I don't think it'd make > the system more usable if the parser tried to apply a heuristic rule > about some occurrences being meant as variable references and other ones > not. If the rule ever got it wrong, it'd be even more confusing. BTW, I believe SELECT investment_products.provider_id would work here, but I'm too lazy to test that theory out. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Migrating a Database to a new tablespace
That means that the tablespace directory isn't empty. On Mon, Apr 24, 2006 at 01:34:33PM +0200, Markus Schaber wrote: > Hello, > > I now pulled the plug, migrated all databases via "create database ... > tempate olddatabase tablespace newts" to new tablespaces, one for each > database, and dropped all old databases that contained references to the > tablespace. Pgadmin3 also shows that the tablespace is not referenced by > anything. > > But I cannot drop it, I get the following message: > > postgres=# drop TABLESPACE foo; > ERROR: tablespace "foo" is not empty > > > It seems that the whole tablespace thing is not yet 100% waterproof, > good that this did happen on a developer machine, and not on a > production machine. > > Thanks for your patience, > Markus > > -- > Markus Schaber | Logical Tracking&Tracing International AG > Dipl. Inf. | Software Development GIS > > Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org > > ---(end of broadcast)------- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Migrating a Database to a new tablespace
On Wed, Apr 26, 2006 at 12:35:39PM -0400, Bruce Momjian wrote: > Tom Lane wrote: > > Markus Schaber <[EMAIL PROTECTED]> writes: > > > As I said the leftovers are likely to be caused by hard kills and > > > backend crashes, so I would not go into deeper analysis, but maybe the > > > finding and possibly removing of such leftovers should be half-automated > > > to assist server admins. > > > > It's been discussed. Personally I'm afraid of the idea of automatically > > deleting files that seem unreferenced, but having a tool to find them > > for manual deletion isn't a bad idea. I think Bruce had a prototype > > patch at one point --- not sure what the status is. > > > > I have work someone did in the past. I just need to be updated to deal > with tablespaces. > > ftp://candle.pha.pa.us/pub/postgresql/mypatches/checkfile.* > > Let me know if you want details. Is it able to also delete the cruft? Seems to be a useful extension, especially on windows, which AFAIK doesn't have an equivalent to ``. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] LinkedList
decibel=# select * from t; a | b ---+--- 1 | 0 3 | 1 5 | 3 7 | 5 2 | 0 4 | 2 6 | 4 8 | 6 (8 rows) decibel=# select * from t x join t y on(x.a=y.b) where y.a=7; a | b | a | b ---+---+---+--- 5 | 3 | 7 | 5 (1 row) decibel=# select * from t x join t y on(x.a=y.b) where y.a=8; a | b | a | b ---+---+---+--- 6 | 4 | 8 | 6 (1 row) decibel=# As you can see, it selects the right data, but you'll need to step through it somehow. You might be able to do it with a generate_series(), or you can use a function. If we get WITH support/recursion in 8.2 you'd use that. I think that "SQL For Smarties" by Joe Celko might have an example of how to do this without using a function. Even if it doesn't it's a book any serious database developer should own. On Wed, Apr 26, 2006 at 10:35:15AM -0700, Ray Madigan wrote: > Scott, > > Thanks for your reply, I tried what you said, worked around a few things > but I am still stuck. The main reason is I didn't do an adequate job of > explaining the situation. The table implements many linked lists and I want > to traverse one of them given the end of the list. > > Say the table contains > > h | v | j > 1 0 100 > 3 1 300 > 5 3 500 > 7 5 700 > > 2 0 200 > 4 2 400 > 6 4 600 > 8 6 800 > > If I specify t.h = 8 I want to traverse the even part of the table > If I specify t.h = 7 I want to traverse the odd part of the table > > If you can send me to a book to read I am willing > > Thanks > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Scott Marlowe > Sent: Wednesday, April 26, 2006 8:59 AM > To: Ray Madigan > Cc: pgsql-sql@postgresql.org > Subject: Re: [SQL] LinkedList > > > On Wed, 2006-04-26 at 11:09, Ray Madigan wrote: > > I have a table that I created that implements a linked list. I am not an > > expert SQL developer and was wondering if there are known ways to traverse > > the linked lists. Any information that can point me in the direction to > > figure this out would be appreciated. The table contains many linked > lists > > based upon the head of the list and I need to extract all of the nodes > that > > make up a list. The lists are simple with a item and a link to the > history > > item so it goes kind of like: > > > > 1, 0 > > 3, 1 > > 7, 3 > > 9, 7 > > ... > > > > Any suggestions would be helpful, or I will have to implement the table > > differently. > > You should be able to do this with a fairly simple self-join... > > select a.id, b.aid, a.field1, b.field1 > from mytable a > join mytable b > on (a.id=b.aid) > > Or something like that. > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster > > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
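With no recursive WITH available in releases of that era, the "use a function" route can be sketched like this against the sample table t(a, b) above; the function name is made up, and it assumes b = 0 marks the head of a list, as in the sample data:

CREATE FUNCTION walk_list(p_start integer) RETURNS SETOF t AS $$
DECLARE
    r   t%ROWTYPE;
    cur integer := p_start;
BEGIN
    LOOP
        SELECT * INTO r FROM t WHERE a = cur;
        EXIT WHEN NOT FOUND;    -- fell off the list
        RETURN NEXT r;
        EXIT WHEN r.b = 0;      -- reached the head
        cur := r.b;             -- follow the link
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM walk_list(8);  returns the rows for 8, 6, 4, 2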
Re: [SQL] (Ab)Using schemas and inheritance
Moving to -general, where it's more likely that others will have input. On Tue, May 23, 2006 at 05:16:54PM -0300, Jorge Godoy wrote: > I'm modelling an application that will have data -- financial data, human > resources, etc. -- for several hundred (even thousands) of companies. This > is for an accounting office. > > I could put some kind of "company_id" column in all of my tables to separate > data in a more standard way, I could create a separate schema for each client > and then create all needed tables in there (could I? I'd be abusing schemas > here and this is part of my doubt) and, finally, I could create a "base" > schema, define all my standard tables and create an individual schema for > each client where I'd inherit from those base.tables. > > This would allow me to separate all information with a "SET search_path TO > company" without having to make the restriction "by hand" (on "company_id", > for example). It would also allow me to view some complete statistics > grouping all clients by SELECTing data from the base schema. We're testing > views and functions to see how they behave with inherited tables and changes > on "search_path", and it looks like we can do that for, at least, a small > number of schemas. > > Of course, this has implications on permissions as well, so there will be a > large number of groups -- probably at least one per schema + some common > groups -- and roles as well... > > > Is this a good idea? Would this be too bad, performance-wise, if I had > thousands of schemas to use like that? Any advice on better approaches? Any > expected problems? One issue is that you'll probably be breaking new ground here a bit; I suspect there's very few people that are using more than a handful of schemas. Shouldn't pose any issues, but you never know; although any issues you do run into should only be performance problems. Another consideration is that the free space map doesn't care too much for tracking space info on tons of small tables. Perhaps the biggest issue is: what happens when you need to do DDL? If you have 1000 schemas that should be identical, you'll need to perform any DDL 1000 times. But as you point out, there's some interesting advantages to using schemas like this. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
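A tiny sketch of the base-schema-plus-inheritance layout being discussed (all names invented), showing why the search_path trick works:

CREATE SCHEMA base;
CREATE TABLE base.invoice (company_id int NOT NULL, amount numeric, issued date);

-- One schema per client, inheriting the base definition:
CREATE SCHEMA acme;
CREATE TABLE acme.invoice () INHERITS (base.invoice);

SET search_path TO acme, public;
SELECT * FROM invoice;        -- only acme's rows
SELECT * FROM base.invoice;   -- rows from every client, via inheritance

-- Note: adding a column to base.invoice propagates to the children, but
-- indexes and most constraints still have to be created once per client schema.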
Re: [SQL] pgxml & xpath_table
There's a good chance the author isn't on this list. You'd be better off on pgsql-general, or just emailing the author of pgxml directly. On Thu, Jun 08, 2006 at 06:00:42PM +0200, Philippe Lang wrote: > Hi, > > I'm playing with the contrib/pgxml library under PG 8.1.4, and I'm not sure > if what I found with pgxml is a feature of a bug: > > I've got the following table: > > > CREATE TABLE test > ( > id int4 NOT NULL, > xml varchar(200), > CONSTRAINT pk PRIMARY KEY (id) > ) > WITHOUT OIDS; > > INSERT INTO test VALUES (1, ' num="L1">123 num="L2">112233'); > > INSERT INTO test VALUES (2, ' num="L1">111222333 num="L2">111222333'); > > > > If I launch this query: > > > select * from > > xpath_table('id','xml','test', > '/doc/@num|/doc/line/@num|/doc/line/a|/doc/line/b|/doc/line/c','1=1') AS t(id > int4, doc_num varchar(10), line_num varchar(10), val1 int4, val2 int4, val3 > int4) > > where id = 1 > > order by doc_num, line_num > > > I get: > > > iddoc_num line_numval1val2val3 > 1 C1 L1 1 2 3 > 1 L2 11 22 33 > > > I was expecting doc_num would receive twice the C1 value, just like with a > normal sql join. > > Regards, > > -- > Philippe Lang, Ing. Dipl. EPFL > Attik System > rte de la Fonderie 2 > 1700 Fribourg > Switzerland > http://www.attiksystem.ch > > Tel: +41 (26) 422 13 75 > Fax: +41 (26) 422 13 76 -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] files or DataBase
On Fri, Sep 29, 2006 at 12:37:56AM +0200, tomcask o_o wrote: > Hi > > in advance, sorry for my english. > > in a Web server as is the best option? > > to accede to db to show the contents or to accede to static files modified > by scripts when the content of db has been modified. > > That the habitual thing is to work directly with db and to give back to the > results mounting the page then. > > But I believe that serious much more fast if they were directly the files > and on the other hand in the server of local way executed scripts whom the > changes of the files in a aux table verified, and published the files that > have undergone changes. > > that to both consider the yield of the server in the diferents scenes. > > Greetings and thanks. The more code that sits between you and the data, the longer it takes to get at the data. Filesystems try and minimize that code path so that you can get at your data as fast as possible. Databases on the other hand provide you with a rich query environment that makes it easy to filter, format and manipulate data. So... if you're just talking about spewing data out a pipe without having to process it, nothing will beat a filesystem. So if your web content almost never changes, you'll probably be better off with static files that are re-generated from the database as needed. If most of your page content is static, with a small amount that's dynamic, you might want to mix and match the two somehow. Here's what you don't want to do: have each page issueing 100 queries to the database. That's a great way to kill your performance and/or scaleability. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals with to_char)
On Mon, Oct 09, 2006 at 02:57:28PM -0500, Aaron Bono wrote: > On 10/5/06, Jim Nasby <[EMAIL PROTECTED]> wrote: > > > >On Oct 5, 2006, at 11:50 AM, Tom Lane wrote: > >> regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 > >> 09:30:41'::timestamp); > >> ?column? > >> -- > >> 14 days 14:28:19 > >> (1 row) > >> > >> should be reporting '350:28:19' instead. > >> > >> This is a hack that was done to minimize the changes in the regression > >> test expected outputs when we changed type interval from months/ > >> seconds > >> to months/days/seconds. But I wonder whether it wasn't a dumb idea. > >> It is certainly inconsistent, as noted in the code comments. > >> > >> I'm tempted to propose that we remove the justify_hours call, and tell > >> anyone who really wants the old results to apply justify_hours() to > >> the > >> subtraction result for themselves. Not sure what the fallout would > >> be, > >> though. > > > >I suspect there's applications out there that are relying on that > >being nicely formated for display purposes. > > > >I agree it should be removed, but we might need a form of backwards > >compatibility for a version or two... > > I am personally of the opinion that display logic should never be put into > the database. Applications that rely on the database formatting - that is > tightly coupling your application to the database which does not follow good > programming principles. > > None-the-less, the feature would be nice and may be very valuable for > reporting. I agree in general, except most languages have terrible support for time/date data, so I can see a much bigger case for the database being able to do it (and it's not like we'll be removing justify_*). Be that as it may, there are probably apps out there that will break if this is just changed. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Temp tables and functions
On Tue, Oct 10, 2006 at 03:21:36PM -0400, [EMAIL PROTECTED] wrote:
> Everyone,
>
> I've written a function (language = sql) which uses a temporary table to
> simplify the process; however, when I go to load the function I get:
>
> /var/lib/pgsql$ cat thm.sql | psql test
> ERROR: relation "lost_bus" does not exist
>
> where "lost_bus" is the name of my temp table. The function is just for a
> report that is run monthly, so I can create a permanent table if necessary,
> but I'd rather not.

Create the temp table in your script that creates the function. You
don't need to populate it or anything, you just need it to exist
(preferably with the correct definition).
--
Jim Nasby                                    [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
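In other words, something along these lines in the same script, before the CREATE FUNCTION; the table name mirrors the error message, but its definition and the function are only a guess for illustration:

-- Exists only so the function body validates when it is loaded:
CREATE TEMP TABLE lost_bus (bus_id integer, lost_on date);

CREATE FUNCTION monthly_lost_bus_count() RETURNS bigint AS $$
    SELECT count(*) FROM lost_bus;
$$ LANGUAGE sql;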
Re: [SQL] [HACKERS] Bug?
Moving to -sql. On Wed, Oct 18, 2006 at 06:53:46PM +0530, Indira Muthuswamy wrote: > Hai, > > I have encountered a problem with PostgreSQL.I have created a table > 'tab1' with a column 'a' with serial type.I entered 20 records into the > table.So the query > select max(a) from tab1; > returned 20.When I tried the same query after the command > truncate table tab1; > I found that the output of the first query as > > max > - > > (1 row) > When I tried to insert a new row into the table tab1 I found that the > value at column a incremented to the value 21. > But I heard from my friends that the value of the serial column gets > decremented whenever we issue the 'truncate table' command (in MS SQL > SERVER).Is the concept same with PostgreSQL or not?Can anyone clarify me on > this? Your friends are mistaken. TRUNCATE does nothing to affect sequences. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] hi i want help on levels
Search the archives for hierarchical query.
--
Jim Nasby                                    [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] [HACKERS] Bug?
See section 9.12 of the docs. On Thu, Oct 19, 2006 at 12:28:58PM +0530, Indira Muthuswamy wrote: > Then how do we clear the values of a serial column(is it done only by > dropping the column?)? > > Regards, > M.Indira > > > > On 10/19/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote: > > > >Moving to -sql. > > > >On Wed, Oct 18, 2006 at 06:53:46PM +0530, Indira Muthuswamy wrote: > >> Hai, > >> > >> I have encountered a problem with PostgreSQL.I have created a table > >> 'tab1' with a column 'a' with serial type.I entered 20 records into the > >> table.So the query > >> select max(a) from tab1; > >> returned 20.When I tried the same query after the command > >> truncate table tab1; > >> I found that the output of the first query as > >> > >> max > >> - > >> > >> (1 row) > >> When I tried to insert a new row into the table tab1 I found that the > >> value at column a incremented to the value 21. > >> But I heard from my friends that the value of the serial column gets > >> decremented whenever we issue the 'truncate table' command (in MS SQL > >> SERVER).Is the concept same with PostgreSQL or not?Can anyone clarify me > >on > >> this? > > > >Your friends are mistaken. TRUNCATE does nothing to affect sequences. > >-- > >Jim Nasby[EMAIL PROTECTED] > >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) > > -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
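That section covers the sequence manipulation functions; resetting the counter behind the serial column from the thread looks roughly like this (a sketch, assuming the default sequence name):

-- Reset the sequence that feeds tab1.a back to 1:
SELECT setval(pg_get_serial_sequence('tab1', 'a'), 1, false);

-- Or, if you know the sequence's name directly:
ALTER SEQUENCE tab1_a_seq RESTART WITH 1;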
Re: [SQL] Grouping by day, limiting amounts
On Thu, Oct 19, 2006 at 01:51:55PM +0200, Mezei Zoltán wrote:
> Hi,
>
> I didn't really know what subject I should give.
> I have a table like this one:
>
> 2006.10.01.  Bela    10
> 2006.10.01.  Aladar   9
> 2006.10.01.  Cecil    8
> 2006.10.01.  Dezso    7
> 2006.10.01.  Elemer   6
> 2006.10.02.  Bela    11
> 2006.10.02.  Aladar  10
> 2006.10.02.  Cecil    9
> 2006.10.02.  Dezso    8
> 2006.10.02.  Ferenc   7
> 2006.10.03.  Bela     6
> 2006.10.03.  Aladar   5
> 2006.10.03.  Cecil    4
> 2006.10.03.  Dezso    3
> 2006.10.03.  Jozef    2
>
> The first column is a date, the second is a name, the third is the
> number of votes that the name received on that day.
>
> I would like to select the 3 (or 10) names with the most votes for
> each day.
>
> Any suggestions on how can it be done easily?

It'd be easy with windowing functions, but unfortunately we don't have
those...

SELECT *
    FROM (SELECT DISTINCT date FROM table) AS dates
        , (SELECT date, name, votes
             FROM table
             WHERE table.date = dates.date
             ORDER BY votes DESC
             LIMIT 3
          )
;

Note that this has to scan the table twice (well, the second subquery
will likely use an index on date). If you have another table that has
the dates in it already, you can use that instead of the first
subquery. If you know that every day has a row, you could also replace
the first subquery with a generate_series().
--
Jim Nasby                                    [EMAIL PROTECTED]
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

---(end of broadcast)---
TIP 6: explain analyze is your friend
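For what it's worth, here is one formulation that runs on releases without window functions: it keeps a row when fewer than three rows for the same day beat it on votes (ties can yield more than three rows per day). The table name daily_votes is made up, since the original table wasn't named:

SELECT d.date, d.name, d.votes
  FROM daily_votes d
 WHERE (SELECT count(*)
          FROM daily_votes d2
         WHERE d2.date  = d.date
           AND d2.votes > d.votes) < 3
 ORDER BY d.date, d.votes DESC;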
Re: [SQL] Can we convert from Postgres to Oracle !!???
On Sun, Oct 22, 2006 at 12:03:38AM +0300, Devrim GUNDUZ wrote: > On Tue, 2006-10-17 at 14:21 +0530, Sandeep Kumar Jakkaraju wrote: > > Can we convert from Postgres to Oracle !!??? You can also run our software and get Oracle syntax for 1/25th the cost. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] [ADMIN] Is there anyway to...
Moving to -general (and please start a new thread instead of hijacking an existing one). On Thu, Nov 02, 2006 at 01:14:22PM -0500, louis gonzales wrote: > Hello all, > Is there an existing mechanism is postgresql that can automatically > increment/decrement on a daily basis w/out user interaction? The use > case I'm considering is where a student is in some type of contract with > an instructor of some sort, and that contract puts a time limit on the > student requiring her to pay a fee by a certain day. IF that day comes > to pass - or a certain number of days elapse - and that payment > requirement hasn't been met, I want to trigger a function. > > The one requirement I want to impose is, that no end user of the DB > application, needs to do anything to set the trigger, other than the > initialization of making the student of this type. > > An example would be: > Day1 - Application user(typically the instructor) creates a profile for > a new student - John Doe, which sets a 30 day time limit for John Doe to > pay $100.00 > Day2 -> Day31 - John Doe didn't make the payment > Day 31 - Trigger of event occurs when the instructor logs in. > > Basically on Day 1 when John Doe's profile was created, I want a > decrement counter to occur daily on his profile(some attribute/timer) > and nothing should happen until day 31 when he doesn't pay. While you could setup a cron job to decrement some counter every day, I think that's not the best approach. Instead, I'd run a query once a day that finds all students that are past-due and takes some kind of action. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
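The "run a query once a day" approach could be as simple as a cron-driven statement like the following; the schema and column names are entirely made up, just to show the shape of it:

-- e.g. run daily from cron via: psql mydb -c "..."
UPDATE student_contract
   SET status = 'overdue'
 WHERE status = 'open'
   AND paid_at IS NULL
   AND created_at + (grace_days * interval '1 day') < now();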
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals with to_char)
Yes, but if it was '2004-01-02 01:00:00'-'2004-01-01 00:00:00' it should return 25:00:00, not 1 day 1:00. I agree with Tom that this should be changed; I'm just arguing that we might well need a backwards-compatibility solution for a while. At the very least we'd need to make this change very clear to users. On Tue, Feb 20, 2007 at 08:07:11PM -0500, Bruce Momjian wrote: > > One problem with removing justify_hours() is that this is going to > return '24:00:00', rather than '1 day: > > test=> select '2004-01-02 00:00:00'::timestamptz - '2004-01-01 > 00:00:00'::timestamptz; >?column? > -- >24:00:00 > (1 row) > > --- > > Jim Nasby wrote: > > On Oct 5, 2006, at 11:50 AM, Tom Lane wrote: > > > regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 > > > 09:30:41'::timestamp); > > > ?column? > > > -- > > > 14 days 14:28:19 > > > (1 row) > > > > > > should be reporting '350:28:19' instead. > > > > > > This is a hack that was done to minimize the changes in the regression > > > test expected outputs when we changed type interval from months/ > > > seconds > > > to months/days/seconds. But I wonder whether it wasn't a dumb idea. > > > It is certainly inconsistent, as noted in the code comments. > > > > > > I'm tempted to propose that we remove the justify_hours call, and tell > > > anyone who really wants the old results to apply justify_hours() to > > > the > > > subtraction result for themselves. Not sure what the fallout would > > > be, > > > though. > > > > I suspect there's applications out there that are relying on that > > being nicely formated for display purposes. > > > > I agree it should be removed, but we might need a form of backwards > > compatibility for a version or two... > > -- > > Jim Nasby[EMAIL PROTECTED] > > EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) > > > > > > > > ---(end of broadcast)--- > > TIP 3: Have you checked our extensive FAQ? > > > >http://www.postgresql.org/docs/faq > > -- > Bruce Momjian <[EMAIL PROTECTED]> http://momjian.us > EnterpriseDB http://www.enterprisedb.com > > + If your life is a hard drive, Christ can be your backup. + > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Adding "on delete cascade" to all foreign key constraints
On Tue, May 15, 2007 at 12:19:54PM -0500, Peter Hanson wrote: > Hello, > > I was wondering if there's a fast way I can add "on delete cascade" to all > foreign key constraints in my database? Maybe a quick update I can make > against the catalog possibly? Or is there a way I can query for all foreign > key constrains in the database and then I could write up a quick script to do > the updates for me. You shouldn't go mucking about with the system tables unless absolutely necessary. Instead, write a SELECT that outputs the appropriate syntax. You could do that by querying the catalogs directly, but I think you'll find the pg_user_foreign_keys view defined by http://pgfoundry.org/projects/newsysviews to be most helpful. -- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
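For completeness, the "SELECT that outputs the appropriate syntax" can also be written straight against pg_catalog; this is only a sketch, and it assumes none of the existing foreign keys already carry an ON DELETE/ON UPDATE clause (pg_get_constraintdef() would then emit a conflicting definition):

SELECT 'ALTER TABLE ' || conrelid::regclass
       || ' DROP CONSTRAINT ' || quote_ident(conname)
       || ', ADD CONSTRAINT ' || quote_ident(conname) || ' '
       || pg_get_constraintdef(oid) || ' ON DELETE CASCADE;'
  FROM pg_constraint
 WHERE contype = 'f';

The output can be reviewed and then fed back through psql, the same way as the GRANT-generation example earlier in this collection.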