[SQL] Difficult SQL Statement
I have a table that has the following columns: AUTHOR_NO, ASMT_CODE, and STATUS. The table looks something like this:

AUTHOR_NO  ASMT_CODE  STATUS
12345      1          PASSED
12345      2          FAILED
12345      3          FAILED
12345      4          PASSED
12346      1          PASSED
12346      2          PASSED
12346      3          PASSED
654321     1          FAILED
654321     2          PASSED
654321     3          FAILED
654321     4          FAILED
000123     1          PASSED

I am trying to write a SQL statement that returns the ASMT_CODE, the total number of 'PASSED' rows for that ASMT_CODE, the total number of participants for that ASMT_CODE, and finally the percentage of PASSED for that ASMT_CODE over the number of participants for that ASMT_CODE. So, if I had the table above, I would get something like this:

ASMT_CODE  # PASSED  TOTAL #  % of Total
1          3         4        75
2          2         3        66.67
3          1         3        33.34
4          1         2        50

As you can see, I am looking for the per-ASMT_CODE percentage rather than the overall percentage. What would be the SQL to do this? I have tried to write it, but cannot figure out how to calculate the last two columns. Here is what I have so far:

select d1.asmt_code, count(d1.asmt_code)
from test_run d1
where d1.status = 'PASSED'
group by d1.asmt_code
order by d1.asmt_code

But this only yields the first two columns. Can anyone help?
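One common way to get all four columns in a single pass — a sketch, assuming the table is named test_run as in the query above and that each row represents one participant:

select asmt_code,
       sum(case when status = 'PASSED' then 1 else 0 end) as passed,
       count(*) as total,
       round(100.0 * sum(case when status = 'PASSED' then 1 else 0 end) / count(*), 2) as pct_passed
from test_run
group by asmt_code
order by asmt_code;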
Re: [SQL] SQL Subqueries on each result row
On Sep 23, 5:43 am, AnthonyV wrote:
> Hello,
>
> I have a table like:
>
> date       | value
> -----------+------
> 2009-09-19 | 1
> 2009-09-20 | 2
> 2009-09-21 | 6
> 2009-09-22 | 9
> 2009-09-23 | 1
>
> I'd like a request which gives me the sum of each last n days.
> For example, if I want the sum of each 3 days, I want this result:
>
> date       | sum_value
> -----------+----------
> 2009-09-19 | 1  (sum from 2009-09-17 to 2009-09-19)
> 2009-09-20 | 3  (sum from 2009-09-18 to 2009-09-20)
> 2009-09-21 | 9  (sum from 2009-09-19 to 2009-09-21)
> 2009-09-22 | 17 (sum from 2009-09-20 to 2009-09-22)
> 2009-09-23 | 16 (sum from 2009-09-21 to 2009-09-23)
>
> I tried to make a subquery which is applied to each row of a query, but it
> does not work.
>
> Has anybody an idea?
>
> Thanks in advance!
>
> Anthony

How about the following?

BEGIN;

CREATE TABLE z (
    the_date date not null,
    value integer not null
);

INSERT INTO z VALUES ('2009-09-19', 1);
INSERT INTO z VALUES ('2009-09-20', 2);
INSERT INTO z VALUES ('2009-09-21', 6);
INSERT INTO z VALUES ('2009-09-22', 9);
INSERT INTO z VALUES ('2009-09-23', 1);

SELECT z.the_date, SUM(z2.value)
FROM z
LEFT JOIN z z2 ON z2.the_date IN (
    z.the_date,
    z.the_date - '1 day'::interval,
    z.the_date - '2 day'::interval
)
GROUP BY 1
ORDER BY 1;

ROLLBACK;

output:

  the_date  | sum
------------+-----
 2009-09-19 |   1
 2009-09-20 |   3
 2009-09-21 |   9
 2009-09-22 |  17
 2009-09-23 |  16
(5 rows)
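The IN list hard-codes n = 3; joining on a date range generalizes to any window size. A sketch against the same table z (subtracting an integer from a date yields a date):

SELECT z.the_date, SUM(z2.value) AS sum_value
FROM z
JOIN z z2 ON z2.the_date BETWEEN z.the_date - 2 AND z.the_date
GROUP BY z.the_date
ORDER BY z.the_date;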
Re: [SQL] RE: counting distinct rows on more than one column
If the fields are fixed length character type, then the simpler concatenation should work. Actually, the requirement is only that all but the final field be fixed length. And if they aren't fixed length, you can cast them to be such, as long as you know the maximum length of the string values, as in the following where that maximum is 20 and we are looking at 3 fields: select count(distinct (cast(field1 as char(20)) || cast(field2 as char(20)) || field3)) from ... Jim Ballard - Original Message - From: "Jeff Eckermann" <[EMAIL PROTECTED]> To: "'Dirk Lutzebaeck'" <[EMAIL PROTECTED]>; "Michael Fork" <[EMAIL PROTECTED]> Cc: <[EMAIL PROTECTED]> Sent: Wednesday, March 28, 2001 3:43 PM Subject: [SQL] RE: counting distinct rows on more than one column > I don't think this will necessarily work: > > field1 | field2 > aa | ab > a | aab > > These are two distinct rows, so should be counted as two. > The proposed method would count them as one. > You can get around this problem by doing: > count (distinct (a || x || b)) > where x is some character not found in your data. > > > -Original Message- > > From: Dirk Lutzebaeck [SMTP:[EMAIL PROTECTED]] > > Sent: Wednesday, March 28, 2001 1:32 PM > > To: Michael Fork > > Cc: [EMAIL PROTECTED] > > Subject: Re: counting distinct rows on more than one column > > > > Michael Fork writes: > > > In 7.0.3, I believe the following would work: > > > > > > SELECT count(distinct(a || b)) FROM t; > > > > Great, this works! I don't quite get it why... > > > > Dirk > > > > ---(end of broadcast)--- > > TIP 3: if posting/reading through Usenet, please send an appropriate > > subscribe-nomail command to [EMAIL PROTECTED] so that your > > message can get through to the mailing list cleanly > > ---(end of broadcast)--- > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] > ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [SQL] Case Insensitive Queries
This is a good point - and it means that Postgres is not following the SQL Standard in this regard. According to the standard, a scalar string function of a single string argument should return the same "type" of string as its input. So upper() should return a fixed-char-field. But it doesn't - it always returns a varchar that includes the trailing spaces from the space-padded fixed char argument. And those trailing spaces are significant for the varchar comparison with the string literal. It seems to me there are two ways to correct this behavior. One is to have overloaded versions of the relevant string function that return the right types. But, probably better, Postgres could support the notion of PAD SPACE or PAD OFF to control the behavior of string comparisons regardless of the particular types of the character fields involved. Are ther plans to change this Postgres behavior? Thanks, Jim Ballard Netezza Corp. -- Original Message -- From: Mark <[EMAIL PROTECTED]> Date: 29 May 2001 10:21:15 -0600 >We tried these but it didn't work. However, that's because username is >a bpchar and not a varchar, so its padded with blanks. so we tried >where lower(trim(username)) = 'test' and it works. We'll change that >column to varchar. The real problem was in the datatype for username. > >Thanks, > >On 29 May 2001 12:35:53 -0400, ANDREW PERRIN wrote: >> Try: >> >> - The ILIKE operator, for example, >> >> SELECT * FROM account WHERE username ILIKE "test"; >> >> - upper() or lower(), for example, >> >> SELECT * FROM accont WHERE lower(username) = "test"; >> >> - >>Andrew J. Perrin - Assistant Professor of Sociology >> University of North Carolina, Chapel Hill >> 269 Hamilton Hall CB#3210, Chapel Hill, NC 27599-3210 USA >>[EMAIL PROTECTED] - http://www.unc.edu/~aperrin >> >> On 29 May 2001, Mark wrote: >> >> > Is it possible to execute a query using a where clause that allows case >> > insensitive comparison between a field and text. >> > >> > For example: >> > >> > select * from account where username = 'test' >> > >> > where username could be 'Test', which would be a match. As is, this >> > compare is case sensitive. >> > >> > grep'd the source, but stricmp is only used for keywords and not actual >> > column data. >> > >> > Any help would be greatly appreciated. >> > >> > Thanks, >> > >> > Mark >> > >> > >> > ---(end of broadcast)--- >> > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] >> > >> > > >---(end of broadcast)--- >TIP 4: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[SQL] Substrings by Regular Expression
Hello,

The 7.2 version distributed with RedHat 8.0 seems not to work for substrings and POSIX regular expressions. I think this should work:

pds=> select substring( str1 from ',' ) from address;
ERROR:  pg_atoi: error in ",": can't parse ","

Am I missing something?

TIA
J. Drummey
[EMAIL PROTECTED]
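The error suggests the parser is treating the untyped ',' literal as the integer start position of substring(string from start) rather than as a POSIX pattern. A sketch of the usual workaround — give the pattern an explicit text type — assuming the pattern form of substring is available in that 7.2 build (it is in later releases):

select substring(str1 from ','::text) from address;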
Re: [SQL] Is it possible in PostgreSQL?
This is what I use to flatten a table, the syntax may not be postgresql correct but you will get idea. SELECT a.name ,SUM (CASE WHEN EXTRACT(month from a.date) = 1 THEN a.quantity ELSE 0 END) AS '01' ,SUM(CASE WHEN EXTRACT(month from a.date) = 2 THEN a.quantity ELSE 0 END) AS '02' etc,etc. FROM (SELECT DISTINCT name FROM "whatever") as a JOIN "whatever" as b on a.name = b.name WHERE -- put in year range GROUP BY a.name The "a" table could b a temp table with the know values to speed up execution. The main thing is to only scan the table once. Let me know how it works out for you. On Sunday 18 January 2004 07:02, you wrote: > Moving thread over to SQL list as it belongs there. > > Bronx: This certainly is possible, but IMO, not in one query. Actually > doing it will be relatively complex. For purposes of maintenance, I am > thinking that doing this would be better handled by wrapping at least one > view. > > CREATE VIEW sales_pre_proc AS > SELECT name, quantity, to_char("date", '') AS year, to_char("date", > 'MM') FROM sales; > > This is needed for the group by statement below to function properly: > CREATE VIEW sales_month_summary AS > SELECT name, sum(quantity) AS quantity, year, month from sales_pre_proc > GROUP BY name, year, month; > > This will give you a view that will have the sum information. Now we just > have to create the statement which will create the pivot effect. I > understand that there is something under contrib/tablefunc for this, but I > do not have it on my system (cygwin), at the moment. Perhaps someone else > can help. > > Failing that, you can write your own function to return each row. I was > working on a quick proof of concept but it was not working properly. > > Best Wishes, > Chris Travers > > - Original Message - > From: Bronx > To: [EMAIL PROTECTED] > Sent: Tuesday, January 13, 2004 6:58 AM > Subject: [ADMIN] Is it possible in PostgreSQL? > > > Hi, > I've got problem with one specific query. I've got the table > with many of rekords like these: > > name | quantity| date > --- > aaa22003-04-01 > bbb42003-04-12 > ccc52003-05-12 > aaa32003-01-14 > aaa12003-12-09 > bbb92003-08-08 > > and so on ... > > Does anybody know how make query which return grouped > records by month of year and name (also sum of quantity). > It is possible to make a query whitch return something like that: > > name | 01 | 02 | 03 | 04 | ... | 12 (months) > > aaa x x xx... x > bbb x x xx... x > ccc x x xx... x > > where x means sum of quantity in month. > It is possible to make it in one query? > I know that in Access is construction : PIVOT. > > Thanks > Adam ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
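In PostgreSQL the column aliases need double quotes (single quotes make string literals), and if every name you care about already appears in the table the extra self-join can be dropped. A sketch, assuming the data is in a table or view named sales with columns name, quantity and date as in the question:

SELECT name,
       SUM(CASE WHEN EXTRACT(month FROM "date") = 1  THEN quantity ELSE 0 END) AS "01",
       SUM(CASE WHEN EXTRACT(month FROM "date") = 2  THEN quantity ELSE 0 END) AS "02",
       -- ... one line per month ...
       SUM(CASE WHEN EXTRACT(month FROM "date") = 12 THEN quantity ELSE 0 END) AS "12"
FROM sales
WHERE EXTRACT(year FROM "date") = 2003   -- or whatever year range is needed
GROUP BY name
ORDER BY name;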
Re: [SQL] vacuum analyze slows sql query
have you reindexes your tables. When I was running 7.1.4, I ran a vacuum and reindex nightly. Otherwise your index files will keep getting bigger and bigger (this has been fixed in 7.4). Jim -- Original Message --- From: patrick ~ <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Sent: Tue, 2 Nov 2004 18:50:31 -0800 (PST) Subject: [SQL] vacuum analyze slows sql query > Greetings pgsql-sql, > > I have a very strange problem. Our production database is a fair > sized db, structure wise, and quite huge data wise. We have a web/php > based UI for our customer to manage the data in our application db. > The customer complains that the UI is sluggish accessing certain > pages and completely times-out on certain other pages. > > We have a nightly "garbage collection" process that runs and purges > any old data. After this process a 'vacuum analyze' is kicked off > (regardless of whether or not any data was actually purged). > > At this point I should mention that our customer sites are running > PostgreSQL 7.1.3; however, I am able to reproduce the issue on 7.4.2. > If it at all matters, customer is running the db on a Dell PowerEdge > 2550 equiped with 1gig of ram. My personal test box is a 700Mhz Intel > with 512mb ram. > > I have in the past made modifications to our SQL statements to make > queries more efficient. At this point I have given up and set out > to strip down our database and data enough to be able to post to > the list and ask for help from more qualified SQL experts. > > In the process of "stripping down" our database I noticed some very > strange behavior which I could not explain. I started to reformulate > my original to-be post to to the list to ask assistence in explaining > this strange behavior I was observing. Next I noticed yet another > strange issue with PostgreSQL. > > I noticed that a freshly created db with freshly inserted data (from > a previous pg_dump) would result in quite fast results. However, > after running 'vacuum analyze' the very same query slowed down about > 1250x (Time: 1080688.921 ms vs Time: 864.522 ms). > > Following is a paste from a psql shell after a dropdb, createdb > and populate db. The query is fast. I next run 'explain' and 'explain > verbose' on the query. Then you see a 'vacuum analyze' followed by > the the 'explain', 'explain verbose' and lastly the query again which > is now extremely slow! 
> > -- begin > orig=# select offer_id, pkk_offer_has_pending_purch( offer_id ) from pkk_offer > ; > > (618 rows) > > Time: 864.522 ms > orig=# explain select offer_id, pkk_offer_has_pending_purch( offer_id ) from > pkk_offer ; > QUERY PLAN > - > Seq Scan on pkk_offer (cost=0.00..22.50 rows=1000 width=4) > (1 row) > > Time: 24.251 ms > orig=# explain verbose select offer_id, pkk_offer_has_pending_purch( offer_id ) > from pkk_offer ; > > QUERY PLAN > - > {SEQSCAN > :startup_cost 0.00 > :total_cost 22.50 > :plan_rows 1000 > :plan_width 4 > :targetlist ( >{TARGETENTRY >:resdom > {RESDOM > :resno 1 > :restype 23 > :restypmod -1 > :resname offer_id > :ressortgroupref 0 > :resorigtbl 34965071 > :resorigcol 1 > :resjunk false > } > >:expr > {VAR > :varno 1 > :varattno 1 > :vartype 23 > :vartypmod -1 > :varlevelsup 0 > :varnoold 1 > :varoattno 1 > } >} > >{TARGETENTRY >:resdom > {RESDOM > :resno 2 > :restype 16 > :restypmod -1 > :resname pkk_offer_has_pending_purch > :ressortgroupref 0 > :resorigtbl 0 > :resorigcol 0 > :resjunk false > } > >:expr > {FUNCEXPR > :funcid 34965096 > :funcresulttype 16 > :funcretset false > :funcformat 0 > :args ( > {VAR > :varno 1 > :varattno 1 > :vartype 23 > :vartypmod -1 > :varlevelsup 0 > :varnoold 1 > :varoattno 1 > } > ) > } >} &
Re: [ADMIN] [SQL] Postgres schema comparison.
Stef, I use dblink to attach to both databases and query pg_namespace, pg_class, pg_attribute ... to get the diffs. See attached as an example. look for the dblink_connect lines to specify your database. You will need to install contrib/dblink. I used this with 7.4.X series and have NOT tested yet with 8.0.X. You can adjust the output to fit your needs. Jim -- Original Message --- From: Stef <[EMAIL PROTECTED]> To: Tom Lane <[EMAIL PROTECTED]> Cc: pgsql-ADMIN@postgresql.org, pgsql-sql@postgresql.org Sent: Mon, 7 Mar 2005 17:31:55 +0200 Subject: Re: [ADMIN] [SQL] Postgres schema comparison. > Tom Lane mentioned : > => > The problem I have with this, is that I have to run the command per > table, > => > => Why? > => > => If the problem is varying order of table declarations, try 8.0's > => pg_dump. > > Yes, this will solve the global schema check, but I will still need to split > it into "per table" dumps , to do "per table" comparisons. > > Kind Regards > Stefan > > ---(end of broadcast)--- > TIP 3: if posting/reading through Usenet, please send an appropriate > subscribe-nomail command to [EMAIL PROTECTED] so that your > message can get through to the mailing list cleanly --- End of Original Message --- db_compare.sql Description: Binary data ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Funtions + plpgsql + contrib/pgcrypto = ??
give this a try CREATE OR REPLACE FUNCTION selectFromCrypto(INTEGER) RETURNS VARCHAR AS ' DECLARE _pid ALIAS FOR $1; c text; BEGIN SELECT decrypt(crypted_content, decode(''password''::text, ''escape''::text), ''aes''::text) into c FROM crypto WHERE pid = _pid; RETURN c; END; ' LANGUAGE 'plpgsql'; -- Original Message --- From: "Moran.Michael" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Thu, 24 Mar 2005 08:41:34 -0800 Subject: [SQL] Funtions + plpgsql + contrib/pgcrypto = ?? > Hello there, > > What's the preferred and most efficient way to obtain PGCrypto encrypted > data from a plpgsql function? > > 1. Imagine the following simple table: > CREATE TABLE crypto ( > pid SERIAL PRIMARY KEY, > title VARCHAR(50), > crypted_content BYTEA > ); > > 2. Now insert the following 3 rows of data: > > INSERT INTO crypto VALUES (1, 'test1', encrypt( 'ABCD', 'password', > 'aes')); > INSERT INTO crypto VALUES (2, 'test2', encrypt( 'BCDE', 'password', > 'aes')); > INSERT INTO crypto VALUES (3, 'test3', encrypt( 'CDEF', 'password', > 'aes')); > > 3. Using the psql tool, selecting * from the crypto table yields the > following: > > # select * from crypto; > id | title |crypted_content > +---+ > 1 | test1 | \307\266xI\235"\210a\363=\201\222\247\2660\215 > 2 | test2 | D\211\256E\372^Y\0259\341\215\271\030]\234\017 > 3 | test3 | 4\263\251y\006_N\343\0270I\207;\370\270\266 > > Pretty straight forward, right? > > Now how about doing this in a simple plpgsql Function. That's where we > encounter problems. I want to get DECRYPTED data based on an input ID. So... > > 4. Imagine the following simple plpgsql function (note I'm trying to decrypt > the table's encrypted BYTEA column into a decrypted VARCHAR for return): > > CREATE OR REPLACE FUNCTION selectFromCrypto(INTEGER) > RETURNS VARCHAR > AS ' > DECLARE > crypto_cursor CURSOR (input INTEGER) FOR SELECT > encode(decrypt(crypted_content, decode(''password''::text, > ''escape''::text), ''aes''::text), ''escape''::text) FROM crypto WHERE id = > input; > crypto crypto.crypted_content%TYPE; > tid ALIAS FOR $1; > > BEGIN > OPEN crypto_cursor( tid ); > LOOP > FETCH crypto_cursor INTO crypto; > EXIT WHEN NOT FOUND; > END LOOP; > CLOSE crypto_cursor; > RETURN ( encode(crypto, ''escape''::text)::VARCHAR ); > END; > ' LANGUAGE 'plpgsql'; > > 5. When I use the above function (in the tool, psql) to get the decrypted > contents for ID = 2, it says I get 1 row returned, but the contents are > blank: > > # select * from selectFromCrypto(1); > selectfromcrypto1 > --- > > (1 row) > > Notice the blank row returned... So what am I doing wrong? > > I suspect it has something to do with > converting/encoding/decoding/decrypting the BYTEA column for return... but > what is the problem with the above Function? > > I'm using PostgreSQL 7.4.6, on a Fedora Core 1 system. > > Best regards and thank you very much in advance, > Michael Moran > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster --- End of Original Message --- ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
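A quick way to exercise the function against the sample rows from the question (same password and data as above):

select pid, title, selectFromCrypto(pid) from crypto order by pid;
-- expected: 'ABCD' for pid 1, 'BCDE' for pid 2, 'CDEF' for pid 3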
Re: [SQL] Funtions + plpgsql + contrib/pgcrypto = ??
I would change the return type to TEXT, I believe your original example had it as a varchar and I didn't change it. Also, I believe that "under the hood" text does equal varchar. Glad I could help Jim -- Original Message --- From: "Moran.Michael" <[EMAIL PROTECTED]> To: [EMAIL PROTECTED], pgsql-sql@postgresql.org Sent: Thu, 24 Mar 2005 09:43:18 -0800 Subject: Re: [SQL] Funtions + plpgsql + contrib/pgcrypto = ?? > Thank you, Jim. You rock! > > This worked although I'm a bit confused: > > Your function below says it returns VARCHAR, yet the variable that holds the > contents of my SELECT which we ultimately return is of type TEXT. > > When I return the TEXT variable, it works. Does this mean TEXT = VARCHAR in > plpgsql Functions? > > Yet if you cast the TEXT to VARCHAR in your RETURN line below (because, it > seems logical to try to match the declared return type)... it fails > > So, this works:return c; > This doesn't: return c::VARCHAR; > > I always thought matching my return type to my funtion's RETURN declaration > is logical... but now I know that if want a VARCHAR, I gotta manipulate it > as a TEXT within my function when using PGCrypto. Any idea why? > > Thank again, Jim! > > _ > > From: Jim Buttafuoco [mailto:[EMAIL PROTECTED] > Sent: Thu 3/24/2005 9:14 AM > To: Moran.Michael; pgsql-sql@postgresql.org > Subject: Re: [SQL] Funtions + plpgsql + contrib/pgcrypto = ?? > > give this a try > > CREATE OR REPLACE FUNCTION selectFromCrypto(INTEGER) > RETURNS VARCHAR > AS ' > DECLARE >_pid ALIAS FOR $1; >c text; > > BEGIN > > SELECT decrypt(crypted_content, decode(''password''::text, > ''escape''::text), ''aes''::text) into c > FROM crypto > WHERE pid = _pid; > >RETURN c; > END; > ' LANGUAGE 'plpgsql'; > > -- Original Message --- > From: "Moran.Michael" <[EMAIL PROTECTED]> > To: pgsql-sql@postgresql.org > Sent: Thu, 24 Mar 2005 08:41:34 -0800 > Subject: [SQL] Funtions + plpgsql + contrib/pgcrypto = ?? > > > Hello there, > > > > What's the preferred and most efficient way to obtain PGCrypto encrypted > > data from a plpgsql function? > > > > 1. Imagine the following simple table: > > CREATE TABLE crypto ( > > pid SERIAL PRIMARY KEY, > > title VARCHAR(50), > > crypted_content BYTEA > > ); > > > > 2. Now insert the following 3 rows of data: > > > > INSERT INTO crypto VALUES (1, 'test1', encrypt( 'ABCD', > 'password', > > 'aes')); > > INSERT INTO crypto VALUES (2, 'test2', encrypt( 'BCDE', > 'password', > > 'aes')); > > INSERT INTO crypto VALUES (3, 'test3', encrypt( 'CDEF', > 'password', > > 'aes')); > > > > 3. Using the psql tool, selecting * from the crypto table yields the > > following: > > > > # select * from crypto; > > id | title |crypted_content > > +---+ > > 1 | test1 | \307\266xI\235"\210a\363=\201\222\247\2660\215 > > 2 | test2 | D\211\256E\372^Y\0259\341\215\271\030]\234\017 > > 3 | test3 | 4\263\251y\006_N\343\0270I\207;\370\270\266 > > > > Pretty straight forward, right? > > > > Now how about doing this in a simple plpgsql Function. That's where we > > encounter problems. I want to get DECRYPTED data based on an input ID. > So... > > > > 4. 
Imagine the following simple plpgsql function (note I'm trying to > decrypt > > the table's encrypted BYTEA column into a decrypted VARCHAR for return): > > > > CREATE OR REPLACE FUNCTION selectFromCrypto(INTEGER) > > RETURNS VARCHAR > > AS ' > > DECLARE > > crypto_cursor CURSOR (input INTEGER) FOR SELECT > > encode(decrypt(crypted_content, decode(''password''::text, > > ''escape''::text), ''aes''::text), ''escape''::text) FROM crypto WHERE id > = > > input; > > crypto crypto.crypted_content%TYPE; > > tid ALIAS FOR $1; > > > > BEGIN > > OPEN crypto_cursor( tid ); > > LOOP > > FETCH crypto_cursor INTO crypto; > > EXIT WHEN NOT FOUN
Re: [SQL] Calling functions from Delete View (+ contrib/pgcrypto) = madness ??
Mike, I posted this RULE also on hackers CREATE or replace RULE crypto_view_delete_rule AS ON DELETE TO crypto_view DO INSTEAD ( select func_delFromCrypto( OLD.id,OLD.crypted_content); ); Jim -- Original Message --- From: "Moran.Michael" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Thu, 24 Mar 2005 17:30:33 -0800 Subject: [SQL] Calling functions from Delete View (+ contrib/pgcrypto) = madness ?? > Hello there, > > I have a View with a Delete rule and I would like the Delete rule to call a > function (and pass-in a few of the the underlying View's/Table's column > values). How do you do this? > > When I do it, I keep getting the following error: > ERROR: function expression in FROM may not refer to other relations of same > query level > > This is my new Delete View that attempts to call a Function but yields the > above-mentioned error: > > CREATE RULE crypto_view_delete_rule > AS ON DELETE > TO crypto_view > DO INSTEAD > --Original (working) code: > -- DELETE FROM crypto > -- WHERE id = OLD.id; > > --NEW (non-working) code: > select * from func_delFromCrypto( crypto.id, encode( decrypt( > crypto.crypted_content, decode ('password'::text, 'escape'::text), > 'aes'::text), 'escape'::text) ); > > The above may seem complicated, but the crypto portion DOES work (as you'll > see below), and I've broken down the steps below so that it's easy to see > what's going on: > > 1. Imagine the following simple table: > > CREATE TABLE crypto ( > id SERIAL PRIMARY KEY, > title VARCHAR(50), > crypted_content BYTEA > ); > > 2. Imagine the following simple working View: > > CREATE VIEW crypto_view AS > SELECT > id, > title, > --Decrypt the BYTEA column and convert result to TEXT type: > encode(decrypt(crypted_content, decode('password','escape'::text), > 'aes'::text), 'escape'::text) as crypted_content > FROM > crypto; > > 3. Imagine my original, simple Delete Rule (that works -- it was my original > version prior to changing it to call the Function): > > CREATE RULE crypto_view_delete_rule > AS ON DELETE > TO crypto_view > DO INSTEAD > DELETE FROM crypto > WHERE id = OLD.id; > > 4. Let's load (and encrypt) some test data into the above-mentioned table: > > insert into crypto VALUES (1, 'test1', encrypt('', 'password', > 'aes') ); > insert into crypto VALUES (2, 'test2', encrypt('', 'password', > 'aes') ); > insert into crypto VALUES (3, 'test3', encrypt('', 'password', > 'aes') ); > insert into crypto VALUES (4, 'test4', encrypt('', 'password', > 'aes') ); > insert into crypto VALUES (5, 'test5', encrypt('', 'password', > 'aes') ); > > 5. Let's SELECT from the table to see its contents (note the encrypted > values): > > select * from crypto; > id | title | crypted_content > +---+ > 1 | test1 | \026\206I93\327\315\376t\243\006~J\177{\301 > 2 | test2 | \257\212\260\321\217\274c\210d \361\225\212\327\317\206 > 3 | test3 | 6\345:\224dp\002\206<\007k\344\302\347V\214 > 4 | test4 | VH)\023\303\0239\363\323\362\22734\204R\357 > 5 | test5 | \216Np\235\026\362\277\246\026\027\221\266\021\361\224\256 > (5 rows) > > 6. Let's run the View (which nicely decrypts the encrypted columns): > > # select * from crypto_view; > id | title | crypted_content > +---+- > 1 | test1 | > 2 | test2 | > 3 | test3 | > 4 | test4 | > 5 | test5 | > (5 rows) > > 7. 
Let's test the old DELETE RULE (the one that doesn't call a function, > hence works): > > delete from crypto_view where crypted_content = ; > DELETE 1 > > Check that the DELETE RULE worked (there is one less row of data now): > # select * from crypto_view; > id | title | crypted_content > +---+- > 1 | test1 | > 2 | test2 | > 3 | test3 | > 4 | test4 | > (4 rows) > > It works! Pretty straight forward, right? > > Now let's make things a little more interesting... > > 8. Now here is the following (working) function that I want my rule to call: > > CREATE OR REPLACE FUNCTI
Re: [SQL] Numeric and CSV under 8.0.1 ?
Stef:

Why is iuserid numeric? Are you going to do any math on the field? If not, change it to varchar. In the long run you'll be happier.

Stef wrote:

Hello Everyone,

Currently, here at work, I am doing the whole 'advocacy' part of PostgreSQL. It's not really hard to do, as the other databases are MySQL and Sybase ;) There is obviously a whole spat of data munging going on in the background, and I noticed that psql in 8.0.1 now accepts CSV! Great. Except it doesn't really appear to be 100% there. Numerics wrapped in '...' don't appear to want to go in. Is this a 'known problem'?

        Table "public.visitor_main"
  Column   |     Type      | Modifiers
-----------+---------------+-----------
 iuserid   | numeric(12,0) | not null
 firstname | character(25) | not null
 lastname  | character(25) | not null

Sample data line:

'3236','Alonzo','Peter'

ERROR:  invalid input syntax for type numeric: "'3236'"
CONTEXT:  COPY visitor_main, line 1, column iuserid: "'3236'"

Thoughts?

Regards
Steph
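If the column really must stay numeric, 8.0's CSV mode can also be told that this file quotes fields with single quotes instead of the default double quotes. A sketch, with a hypothetical file path:

COPY visitor_main FROM '/tmp/visitor_main.csv' WITH CSV QUOTE AS '''';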
Re: [SQL] funny update, say update 1, updated 1 added 2nd.
works fine for me. Do you have any triggers on the tables or other rules? Can you provide a complete SQL script that starts from an empty database. Jim -- Original Message --- From: Neil Dugan <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Thu, 16 Jun 2005 13:38:58 +1000 Subject: [SQL] funny update, say update 1, updated 1 added 2nd. > I have been having some trouble with a particular table view. An UPDATE > command is not only changing the applicable record it is also creating a > new record as well. > > wholesale=# select * from accounts_supplier; > id | name | contact | addr| addr2 | town | > postcode | state | phone | fax | account_type > +--+-+---+---++--+---+---+-+-- > 1 | ABC construction | TOM | | || >| NSW | | | Cash Only > 2 | test | | | || >| | | | 7 Day > 3 | build-4-U| boss| somewhere | | back of beyond | >| | | | 7 Day > (3 rows) > > wholesale=# update accounts_supplier set addr='nowhere' where id=3; > UPDATE 1 > wholesale=# select * from accounts_supplier; > id | name | contact | addr | addr2 | town | > postcode | state | phone | fax | account_type > +--+-+-+---++--+---+---+-+-- > 1 | ABC construction | TOM | | || > | NSW | | | Cash > Only 2 | test | | | || > | | | | 7 > Day 6 | build-4-U| boss| nowhere | | back of beyond | > | | | | 7 > Day 3 | build-4-U| boss| nowhere | | back of beyond | > | | | | 7 Day > (4 rows) > > Can anyone tell me why this is happening and how to fix it. > > Here are the table and view definitions. > > CREATE TABLE account_type ( > number smallint, > name character varying(20) > ); > > CREATE TABLE address ( > addr character varying(40), > addr2 character varying(40), > town character varying(20), > postcode character varying(10), > state character(4) > ); > > CREATE TABLE supplier ( > id bigserial NOT NULL, > name character varying(40), > phone character varying(20), > fax character varying(20), > contact character varying(40), > account_type smallint DEFAULT 0 > ) > INHERITS (address); > > CREATE VIEW accounts_supplier AS > SELECT supplier.id, > supplier.name, > supplier.contact, > supplier.addr, > supplier.addr2, > supplier.town, > supplier.postcode, > supplier.state, > supplier.phone, > supplier.fax, > account_type.name AS account_type > FROM supplier, account_type > WHERE (account_type.number = supplier.account_type); > > CREATE RULE accounts_supplier_update > AS ON UPDATE TO accounts_supplier > DO INSTEAD UPDATE supplier > SET name = new.name, > contact = new.contact, > addr = new.addr, > addr2 = new.addr2, > town = new.town, > postcode = new.postcode, > state = upper((new.state)::text), > phone = new.phone, > fax = new.fax, > account_type = (SELECT account_type.number > FROM account_type > WHERE ((account_type.name)::text = (new.account_type)::text)) > WHERE (supplier.id = new.id); > > wholesale=# select version(); > version > -- > PostgreSQL 7.4.8 on i386-redhat-linux-gnu, compiled by GCC > i386-redhat-linux-gcc (GCC) 3.4.3 20050227 (Red > Hat 3.4.3-22) > (1 row) > > wholesale=# select * from account_type; > number | name > +--- > 0 | Cash Only > 1 | 7 Day > 2 | 30 Day > 3 | 60 Day > 4 | 90 Day > (5 rows) > > Thanks for any help > Regards Neil. > > ---(end of broadcast)--- > TIP 5: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq --- End of Original Message --- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] multiple PK with a non UNIQUE field
if you need a multi column fk don't use the "references" keyword on your create table, instead use the "FOREIGN KEY" keyword for the table, see the "create table" help. so for example (untested) change CREATE TABLE appalto ( cod_op int not null references Opere, cod_com int not null references Opere, scadenza date not null, importoint not null, PRIMARY KEY (cod_op,cod_com) ); to CREATE TABLE appalto ( cod_op int not null, cod_com int not null, scadenza date not null, importoint not null, PRIMARY KEY (cod_op,cod_com), FOREIGN KEY (cod_op,cod_com) REFERENCES Opere(cod_op,cod_com) ); -- Original Message --- From: davide <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Sun, 26 Jun 2005 16:58:50 +0200 Subject: [SQL] multiple PK with a non UNIQUE field > Sorry, I 'm a beginner of postgres and I found a problem when I was > passed from MySQL: > I have to create a referential constraint table APPALTO from a table > OPERE that as a composed PK(cod_op,cod_com) where cod_op ISN'T unique. > in MySQL: > CREATE TABLE opere ( > cod_op int NOT NULL , > cod_com int NOT NULL , > costo int , > data_inizio date , > data_fine date , > tipo char(6) NOT NULL , > PRIMARY KEY (cod_op,cod_com) > ) ; > > CREATE TABLE committenti ( > cod_com int NOT NULL , > nome char(30) NOT NULL, > indirizzo char(60) NOT NULL, > CF char(16) unique, > P_IVA char(11) unique, > tipo char(8) NOT NULL , > PRIMARY KEY (cod_com) > ); > > CREATE TABLE appalto ( > cod_op int not null references Opere, > cod_com int not null references Opere, > scadenza date not null, > importoint not null, > PRIMARY KEY (cod_op,cod_com) > ); > > But when I try to insert it: > ERROR: number of referencing and referenced columns for foreign key > disagree > > another table connected at OPERE give instead another error: > > CREATE TABLE direzione ( > CF char(16) not null references Salariati(CF), > cod_op int not null references Opere (cod_op), > cod_com int not null references Opere (cod_com), > --opere_pkeyintreferences Opere, > --PRIMARY KEY (opere_pkey) > PRIMARY KEY (CF,cod_op,cod_com) > ) ; > ERROR: there is no unique constraint matching given keys for referenced > table "opere" > > If I try to use the index "opere_pkey" (automatic created) > NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index > "opere_pkey" for table "opere" > > Why MySQL let me do these and PostgreSQL no? > There's another way? > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster --- End of Original Message --- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [SQL] How can I simply substitute a value in a query?
try case, for example

select case when bool_column then 'Yes' else 'No' end from your_table;

-- Original Message ---
From: Roy Souther <[EMAIL PROTECTED]>
To: pgsql-sql@postgresql.org
Sent: Mon, 27 Jun 2005 11:16:58 -0600
Subject: [SQL] How can I simply substitute a value in a query?

> I want to do a simple substitution of a value in a query. For example I
> have a boolean field that will return t or f for True and False. I would
> like it to return Yes or No. I don't want to have a 2x2 table to look up
> the output. I don't want to use a stored procedure.
>
> I think there is a better way, something very simple, but I cannot
> remember what it is.
>
> Roy Souther
> www.SiliconTao.com
> Let Open Source help your business move beyond.
>
> For security this message is digitally authenticated by GnuPG.
--- End of Original Message ---
Re: [SQL] 'show full processlist' in postgres?
did you restart postgresql and use the pg_stat_activity view instead (just to save some typing). Jim -- Original Message --- From: Erik Wasser <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Fri, 1 Jul 2005 15:58:46 +0200 Subject: [SQL] 'show full processlist' in postgres? > Hallo pgsql-sql@postgresql.org, > > I'm looking for a way to display the active querys of postgres. The > mysql way is a 'show [full] processlist'. > > After I've read http://pgsqld.active-venture.com/monitoring-stats.html I > set STATS_COMMAND_STRING, STATS_BLOCK_LEVEL, STATS_ROW_LEVEL and > STATS_START_COLLECTOR to 'true'. > > Then I executed the query from the bottom of the page: > > > SELECT pg_stat_get_backend_pid(S.backendid) AS procpid, > >pg_stat_get_backend_activity(S.backendid) AS current_query > >FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS S; > > The result was: > > procpid | current_query > -+--- > 1367 | >27387 | > 930 | >28425 | > (4 rows) > > Why there isn't any query in the 'current_query' row? Calling the query > multiple times just changes the PID (sometimes) but 'current_query' was > always empty. How do I show up the a list of querys that will be > processed right in this moment? > > P.S: I'm using postgresql 7.4.7 and I was logged I as 'postgres'. > > -- > So long... Fuzz > > ---(end of broadcast)--- > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] --- End of Original Message --- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
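The equivalent query against the built-in view, using the 7.4/8.x column names (procpid and current_query were renamed in much later releases):

SELECT procpid, usename, current_query FROM pg_stat_activity;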
Re: [SQL] Generating a range of integers in a query
I use the following function which returns a date series. You can modify it to return an int series instead create or replace function alldates(date,date) returns setof date as ' declare s alias for $1; e alias for $2; d date; begin d := s; while d <= e LOOP return next d; select d + \'1 day\'::interval into d; END LOOP; return null; end; ' LANGUAGE 'plpgsql' ; select * from alldates('2004-07-01','2004-08-10'); -- Original Message --- From: Aaron Bingham <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Wed, 13 Jul 2005 11:13:06 +0200 Subject: [SQL] Generating a range of integers in a query > Hello, > > I've got an interesting problem: I need to select all possible values > of an attribute that do /not/ occur in the database. > > This would be easy (in my case at least) if there were a way to > generate a table containing all integers between 1 and n, where n is > the result of a subquery. In my case, n will be at most a few > hundred. I would like to be able to generate this table as a > subquery. Any ideas? > > Thanks, > > -- > > Aaron Bingham > Senior Software Engineer > Cenix BioScience GmbH > > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq --- End of Original Message --- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
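The same pattern for an integer series — a sketch; on 8.0 and later the built-in generate_series(1, n) does this without a custom function:

create or replace function allints(integer, integer) returns setof integer as '
declare
    s alias for $1;
    e alias for $2;
    i integer;
begin
    i := s;
    while i <= e loop
        return next i;
        i := i + 1;
    end loop;
    return;
end;
' language 'plpgsql';

select * from allints(1, 300);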
Re: [SQL] Dumping table definitions
use pg_dump with the --schema-only and --table= params -- Original Message --- From: "Mark Fenbers" <[EMAIL PROTECTED]> To: Pg SQL Discussion Group Sent: Mon, 18 Jul 2005 12:50:54 -0400 Subject: [SQL] Dumping table definitions > I am looking for a way to reformat the information that is generated from > \d mytable > into SQL syntax, such that the table can be recreated with 'psql -f > mytable.sql' complete with index and constraint definitions. I can do > awk and sed commands to do this if I need to, but first wanted to check > if Pg already had tools to export the table structure (without the > data). Does it? > > Also, Is there a way to export the structure of all tables at once > instead of one table at a time? > > Mark --- End of Original Message --- ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
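For example, from the shell (a sketch, assuming a database named mydb and a table named mytable; the second command answers the follow-up question by dumping every table's definition at once):

pg_dump --schema-only --table=mytable mydb > mytable.sql
pg_dump --schema-only mydb > all_definitions.sql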
Re: [SQL] Multi-column returns from pgsql
Mark, Instead of RETURN NEXT rec.txt1; RETURN NEXT rec.txt2; just use RETURN NEXT rec; then your select statement would be select * from my_func() as (txt1 text,txt2 text); Jim -- Original Message --- From: "Mark R. Dingee" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Fri, 22 Jul 2005 11:49:21 -0400 Subject: [SQL] Multi-column returns from pgsql > Hi Everyone, > > Does anyone know if/how it's possible to return multi-column sets from a > pgsql > function? Right now I'm using something like the following as a work around > > CREATE OR REPLACE FUNCTION my_func() returns SETOF TEXT AS ' > DECLARE > rec record; > BEGIN > FOR rec IN SELECT txt1, txt2 FROM mytable LOOP >RETURN NEXT rec.txt1; >RETURN NEXT rec.txt2; > END LOOP; > RETURN; > END;' language 'plpgsql'; > > which leaves me parsing multiple records to achieve the desired end result. > > Anyone have any thoughts? > > Thanks, > Mark > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match --- End of Original Message --- ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] Multi-column returns from pgsql
or just return setof RECORD (version 7.4 +) -- Original Message --- From: Tony Wasson <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Cc: "Mark R. Dingee" <[EMAIL PROTECTED]>, pgsql-sql@postgresql.org Sent: Fri, 22 Jul 2005 11:11:09 -0700 Subject: Re: [SQL] Multi-column returns from pgsql > On 7/22/05, Jim Buttafuoco <[EMAIL PROTECTED]> wrote: > > Mark, > > > > Instead of RETURN NEXT rec.txt1; RETURN NEXT rec.txt2; just use RETURN > > NEXT rec; > > > > then your select statement would be > > select * from my_func() as (txt1 text,txt2 text); > > > > Jim > > Besides a simple RETURN NEXT, you'll need to return a SETOF some > composite type. You can do something like > > CREATE TYPE twotexts_t AS (txt1 TEXT, txt2 TEXT); > > CREATE OR REPLACE FUNCTION my_func() returns SETOF twotexts_t AS ' > DECLARE >rec record; > BEGIN >FOR rec IN SELECT txt1, txt2 FROM mytable LOOP > RETURN NEXT >END LOOP; >RETURN; > END;' language 'plpgsql'; > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend --- End of Original Message --- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] using pg_tables and tablename in queries
do you mean UNION ALL instead of JOIN, if you mean UNION ALL , I would go with a set returning function passing it the necessary WHERE clause to be applied to all of your tables. You might be able to wrap the whole thing into a view -- Original Message --- From: solarsail <[EMAIL PROTECTED]> To: Tom Lane <[EMAIL PROTECTED]> Cc: pgsql-sql@postgresql.org Sent: Tue, 4 Oct 2005 14:40:54 -0400 Subject: Re: [SQL] using pg_tables and tablename in queries > The current behavior is by design. > > We use the table as a logging repository. It can get very large 250 000 > records. Because of the large number of records that we have in the table we > found it was much faster to perform inserts on a smaller table. Our current > system rolls the tables over every 12 hours or so, creating a new table with > the following behavior: > > CREATE TABLE mytable_temp {...} > > ALTER TABLE mytable RENAME TO mytable_back_datetime; > ALTER TABLE mytable_temp RENAME TO mytable; > > I want to join the mytable_back_datetime tables together in order to perform > queries against my huge set of data to generate some reports. I'm probably > going to create a temporary table with a few indexes to make the reports run > faster... however I need to join the tables all together first. > > On 10/4/05, Tom Lane <[EMAIL PROTECTED]> wrote: > > > > solarsail <[EMAIL PROTECTED]> writes: > > > I have a large number of tables with a common naming convention > > > > > mytable001, mytable002, mytable003 ... mytable00n > > > > > I would like to do a query across all of the tables, however I do not > > know > > > all of the tables before hand, and I do not want to ( cant ) manually > > > generate a query like > > > > > select * from mytable001, mytable002, mytable003 > > > > > I have a query that returns the names of the tables I want to query: > > > > > select tablename from pg_tables where tablename like 'mytable%' > > > > This looks to me like a situation in which you should rethink your > > data design. Those tables should all get merged into one big table, > > adding one extra column that reflects what you had been using to > > segregate the data into different tables. > > > > regards, tom lane > > --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] using pg_tables and tablename in queries
again, do you really want to join the tables or do a UNION ALL. From one of your posts you said the table were the same. you need to do something like select * from table_001 union all select * from table_002 ... select * from table_999 I would do this in a set returning function looping of an EXECUTE. If you need help, post the schema for a couple of your tables and I will help with the function Jim -- Original Message --- From: solarsail <[EMAIL PROTECTED]> To: Yasir Malik <[EMAIL PROTECTED]> Cc: PostgreSQL Sent: Tue, 4 Oct 2005 15:50:39 -0400 Subject: Re: [SQL] using pg_tables and tablename in queries > I managed to make this work as sub query before... I wish I had > written it down somewhere... > > Regarding the creation of a function. I do have a function that > almost does that. I'm having a hard time getting it to return a set > of records from the EXECUTE command ( more than one row returned by > the select * ...). > > If I generate a temporary table instead of returning the results how > long will that table exist for? Excuse the OOP terminology but would > it be correct to create a 'Singleton' to access the temporary table, > where if it exists and is less than 30 minutes old use that one, > otherwise drop the table and recreate it? > > Thanks > > -- sample function.. > > CREATE OR REPLACE FUNCTION testfunc_jointables() > RETURNS SETOF record AS > $BODY$ > DECLARE > query TEXT; > BEGIN > query := 'auditrecord'; > > FOR > atablename IN select * from pg_tables where tablename like > 'mytable_%' > LOOP > > query := query || ', ' || quote_ident(atablename.tablename); > > END LOOP; > > EXECUTE ' SELECT * from ' || query; > > END; > > On 10/4/05, Yasir Malik <[EMAIL PROTECTED]> wrote: > > > The current behavior is by design. > > > > > > We use the table as a logging repository. It can get very large 250 000 > > > records. Because of the large number of records that we have in the table > > > we > > > found it was much faster to perform inserts on a smaller table. Our > > > current > > > system rolls the tables over every 12 hours or so, creating a new table > > > with > > > the following behavior: > > > > > > CREATE TABLE mytable_temp {...} > > > > > > ALTER TABLE mytable RENAME TO mytable_back_datetime; > > > ALTER TABLE mytable_temp RENAME TO mytable; > > > > > > I want to join the mytable_back_datetime tables together in order to > > > perform > > > queries against my huge set of data to generate some reports. I'm probably > > > going to create a temporary table with a few indexes to make the reports > > > run > > > faster... however I need to join the tables all together first. > > > > > > > I would create a function that creates a string with a query that includes > > all the tables you need, and call execute on the string. You would loop > > through the all tables from pg_tables and keep on appending the table name > > you need. > > > > Regards, > > Yasir > > > > ---(end of broadcast)--- > > TIP 3: Have you checked our extensive FAQ? > > > >http://www.postgresql.org/docs/faq > > > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
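A minimal sketch of that approach, assuming the tables all share an identical column layout; col1 and col2 in the final SELECT are placeholders for the real column definition list the caller must supply:

create or replace function all_mytables() returns setof record as '
declare
    t record;
    r record;
begin
    for t in select tablename from pg_tables where tablename like ''mytable%'' loop
        for r in execute ''select * from '' || quote_ident(t.tablename) loop
            return next r;
        end loop;
    end loop;
    return;
end;
' language 'plpgsql';

select * from all_mytables() as t(col1 integer, col2 text);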
Re: [SQL] How to get a count() where column < ''?
Joost Kraaijeveld wrote: Hi Stephan, On Sun, 2005-12-04 at 13:33 -0800, Stephan Szabo wrote: SELECT COUNT(customers.objectid) FROM prototype.customers, prototype.addresses WHERE customers.contactaddress = addresses.objectid AND zipCode < '2716BN' ORDER By zipCode, houseNumber In a non-grouped query like the above, I don't think that the order by is meaningful. You only get one row back anyway without a group by, and there's no single zipCode or houseNumber to associate with the row. What do you mean by a non-grouped query? The query below gives the same error: SELECT zipcode, COUNT(*) FROM prototype.customers, prototype.addresses WHERE customers.contactaddress = addresses.objectid AND zipCode < '2716BN' Try SELECT COUNT(*) FROM prototype.customers, prototype.addresses WHERE customers.contactaddress = addresses.objectid AND zipCode < '2716BN' ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] FOREIGN KEYs ... I think ...
Marc Is the "assumption" that anytime there are comments the status changes? If I'm reading between the lines correctly, there could be a large number of comments before the status changes. So no need to change status until explicitly needed. If there is a specific "comment" that means a status change, you could code a trigger to check for the comment and then update the "status". I don't really like this solution but it could work. The front end/user should explicitly change the "status". Marc G. Fournier wrote: On Wed, 4 Jan 2006, Tom Lane wrote: "Marc G. Fournier" <[EMAIL PROTECTED]> writes: Now, what I want to do is add a FOREIGN KEY (again, I think) that when incident_summary.status is changed (either closed, or reopened), the associated records in incident_comments are changed to the same state ... Why not just get rid of the status column in incident_comments, and treat incident_summary.status as the sole copy of the state? When you need to get to it from incident_comments, you do a join. I may end up getting to that point ... The foreign key you really ought to have here is from incident_comments.incident_id to incident_summary.id (assuming that I've understood your schema correctly). 'k, where I'm getting lost here is how do I get status changed in _comments on UPDATE of incident_summary.id? There doesn't seem to be anything for ON UPDATE to 'run SQL query' or some such ... or I'm reading old docs :( This is the part that I'm having a bugger of a time wrapping my head around ... Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
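A sketch of that trigger, assuming both tables carry a status column and that incident_comments.incident_id references incident_summary.id as described above:

create or replace function sync_comment_status() returns trigger as '
begin
    if new.status is distinct from old.status then
        update incident_comments
           set status = new.status
         where incident_id = new.id;
    end if;
    return new;
end;
' language 'plpgsql';

create trigger incident_summary_status_sync
    after update on incident_summary
    for each row execute procedure sync_comment_status();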
Re: [SQL] new rule syntax?
Try this rule instead create rule checks_d0 as on delete to checks do delete from checkitems where ckid = OLD.ckid; -- Original Message --- From: "Milen A. Radev" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Sun, 05 Feb 2006 15:10:23 +0200 Subject: Re: [SQL] new rule syntax? [UTF-8?]> A. R. Van Hook напиÑа: > > I have two tables defined as: > > checks > >(ckidint NOT null PRIMARY KEY, > > payto text, > > notes text, > > ckdate date, > > printed int default 0, > > tdate timestamp not null) > > checkitems > >(itemint not null, > > ckidint NOT null references checks, > > itemtypeint not null, > > amt numeric(7,3), > > primary key (item, ckid)) > > > > in previous versions (<8.1) the following rule declaration seemed to > > work fine > > create rule checks_d0 as > > on delete to checks > >do delete from checkitems > > where ckid = checks.ckid; > > in 8.1.2 I get > > > > ERROR: missing FROM-clause entry from table "checks" > > > > any idea? > > May be you are bitten by the change of the default value of > "add_missing_from" setting > (http://www.postgresql.org/docs/8.1/static/runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-VERSION). > > -- > Milen A. Radev > > ---(end of broadcast)--- > TIP 1: if posting/reading through Usenet, please send an appropriate >subscribe-nomail command to [EMAIL PROTECTED] so that your >message can get through to the mailing list cleanly --- End of Original Message --- ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Sum If
try select ...,sum(case when sales.itemname = 'some' then sales.count else 0 end) as "Sales Candies" from your_table_here group by ... -- Original Message --- From: "Daniel Hernandez" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Thu, 23 Feb 2006 12:46:44 -0500 (EST) Subject: [SQL] Sum If > Hi Guys, I'm new on this group, and I have a question, is there a way to do > a "sum if" (kind of) > ?What i want to do is the following.Select customers.custid, if > (sales.itemname = 'candy', sum(sales.count)) > as "Sales candies", if (sales.itemname = 'some', sum(sales.count)) > as "Sales Some"from > ...join ...where .group by customers.custid ...Thanks in advanced, and > best regards,Daniel > Hernández.Tijuana, BC, México."More you learn, more you earn". > > ___ > Join Excite! - http://www.excite.com > The most personalized portal on the Web! --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Query from shell
or psql db < somefile.sql

-- Original Message ---
Sent: Thu, 6 Apr 2006 14:37:51 -0700
Subject: Re: [SQL] Query from shell

> Judith wrote:
> > Hi everybody, can somebody show me how to execute a
> > query from a shell
>
> echo QUERY HERE | psql databasename
>
> Or, if you want to run several queries, run psql and run your queries there.
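Two more shell-friendly variants, assuming a database named db and a file of statements named somefile.sql:

psql -d db -c "select now();"
psql -d db -f somefile.sql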
Re: [SQL] Problems Testing User-Defined Function
You will have to use the "CALLED ON NULL INPUT" option to "create function" (Postgresql 8.1, I don't know about other versions) if you expect NULL arguments. Jim -- Original Message --- From: "Rommel the iCeMAn" <[EMAIL PROTECTED]> To: "PostgreSQL SQL Mailing List" Sent: Fri, 9 Jun 2006 16:01:26 -0400 Subject: [SQL] Problems Testing User-Defined Function > Hi list, > > I'm a newbie and I have a problem. I've defined the following function using > pgAdmin but I am clueless as to how I can test it. I will eventually be > calling this function from a .NET application but I want to test it using > raw SQL first. Here's the function definition: > > CREATE OR REPLACE FUNCTION sp_insert_manifest(_sender varchar(255), >_sender_email varchar(255), >_reply_to varchar(255), >_filename varchar(255), >_file oid, >_datetime_sent timestamp) RETURNS integer AS $$ > > -- blah blah -- > > $$ LANGUAGE plpgsql; > > I am trying to pass the following values to the function but I have been so > far unsuccessful. > > SELECT sp_insert_manifest('me', [EMAIL PROTECTED]', '[EMAIL PROTECTED]', > 'test.txt', NULL, > '2006/06/09') > > Can anyone help me here? > > Thanks, > Rommel the iCeMAn. > > ---(end of broadcast)--- > TIP 5: don't forget to increase your free space map settings --- End of Original Message --- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Good examples of calling slony stored procedures
try www.slony.info -- Original Message --- From: "Mark Adan" <[EMAIL PROTECTED]> To: Sent: Wed, 14 Jun 2006 08:50:23 -0700 Subject: Re: [SQL] Good examples of calling slony stored procedures > Hi > > Can somebody direct me to the mailing list for slony. I couldn't find > it anywhere on the postgres.org website (which is where I found this > list to begin with). Thanks > > Mark > > -Original Message- > From: Tom Lane [mailto:[EMAIL PROTECTED] > Sent: Tuesday, June 13, 2006 20:27 > To: Mark Adan > Cc: pgsql-sql@postgresql.org > Subject: Re: [SQL] Good examples of calling slony stored procedures > > "Mark Adan" <[EMAIL PROTECTED]> writes: > > I was wondering if where can I find some examples of calling the slony > > stored procedures instead of using slonik? I want to be able to for > > example add a table into slony. Thanks > > This is likely the wrong bunch to ask --- there's a slony project > mailing list where the right people to ask hang out. Don't have > the address at hand. > > regards, tom lane > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq --- End of Original Message --- ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] create aggregate function 'count_bool( column_name, boolean )'
James, I know Postgresql doesn't have 2 arg aggregate functions. what you could do is the following (untested) select distict product_id, sum(case when purchased then 1 else 0 end) as purchased, sum(case when was_selected then 1 else 0 end) as was_selected from some_table group by product_id; Jim -- Original Message --- From: "James Moliere" <[EMAIL PROTECTED]> To: Sent: Fri, 7 Jul 2006 06:53:45 -0700 Subject: [SQL] create aggregate function 'count_bool( column_name, boolean )' > Hello, > I'd like to create a function called count_bool( column_name, boolean ) in > PostgreSQL. > > this function is similar to the count( column_name ) function but will only > count the trues or falses based on the other input parameter boolean. e.g. > if you pass in a 'true', all the trues will be counted but not the falses -- > it's the same but opposite if the 'false' was passed in. > > I'd like this aggregate function to be developed with the SQL language > instead of 'C' (for maintenance reasons). From the surface, it appears to > be an incredibly simple job only to find that it's difficult. > > In some respects, I can't believe this function doesn't exist in SQL > > so now I can create the statement > select distict product_id, count_bool(purchased, true), > count_bool(was_selected, true) from some_table group by product_id; > > instead of breaking the query into 3 seperate queries > > select distict product_id from some_table; > select count(purchased) from product_id where purchased = true; > select count(was_selected) from some_table where was_selected = true; > > Am I missing a detail with SQL based aggregate function development? > > Any help would be appreciated. > > Thanks! --- End of Original Message --- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
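The suggestion above written out as a runnable statement, assuming the poster's table is named some_table and that purchased and was_selected are boolean columns:

select product_id,
       sum(case when purchased    then 1 else 0 end) as purchased_count,
       sum(case when was_selected then 1 else 0 end) as selected_count
from some_table
group by product_id;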
Re: [SQL] SELECT substring with regex
use plperl -- Original Message --- From: T E Schmitz <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Fri, 07 Jul 2006 20:23:50 +0100 Subject: Re: [SQL] SELECT substring with regex > Rodrigo De Leon wrote: > > On 7/7/06, T E Schmitz <[EMAIL PROTECTED]> wrote: > > > >> But that takes me to the next problem: > >> > >> For the sake of the example I simplified the regular pattern. > >> In reality, BASE_NAME might be: > >> > >> 28mm > >> 28-70mm > >> > >> So the reg. expr. requires brackets: > >> > >> substring (NAME, '^(\\d+(-\\d+)?mm)' ) as BASE_NAME > >> > >> Actually, the pattern is more complex than that and I cannot see how I > >> can express it without brackets. > > > > > > Maybe: > > > > select > > substring ('28-70mm LD AD Asp XR Macro', '^[\\d-]*mm' ) as BASE_NAME > > Sorry, but that would also capture something like > 10-30-59mm > > The pattern describes either a single length (120 millimeters) or a > range (30 to 70 millimetres), hence: > > \\d+(-\\d+)?mm > > The ? quantifier refers to the combination of '-' and digits and has to > be bracketed. > > If the brackets cannot be avoided in the expression, your original > suggestion might come in handy though: > > SELECT > substring (NAME, '^\\d+(-\\d+)?mm') AS BASE_NAME , > substr( > NAME > , char_length( > substring (NAME, '^\\d+(-\\d+)?mm') > ) + 2 > ) AS SUFFIX > > Still, I'd be interested to know whether there is a 'more elegant' solution. > > -- > > Regards, > > Tarlika Elisabeth Schmitz > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
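For what it's worth, PostgreSQL's ARE regexes also support non-capturing groups, which keeps the optional quantifier without adding a second capture; a sketch against the sample value from the thread (the suffix pattern assumes a single space separates the base name from the rest):

SELECT substring(name, '^(\\d+(?:-\\d+)?mm)')      AS base_name,  -- '28-70mm'
       substring(name, '^\\d+(?:-\\d+)?mm (.*)$')  AS suffix      -- 'LD AD Asp XR Macro'
FROM (SELECT '28-70mm LD AD Asp XR Macro'::text AS name) AS t;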
Re: [SQL] DBD::Pg ... how would I format this prepare?
try now() + (? || ' day')::interval -- Original Message --- From: "Marc G. Fournier" <[EMAIL PROTECTED]> To: pgsql-sql@postgresql.org Sent: Wed, 26 Jul 2006 18:29:32 -0300 (ADT) Subject: [SQL] DBD::Pg ... how would I format this prepare? > I need to do: > > NOW() + '2 day'::interval > > where 2 is a variable ... > > if I do: > > NOW() + '? day'::interval > > it, of course, takes the ? as a literal ... so is there some way I can do > this such that I can do the placeholder? > > Thx > > > Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) > Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] > Yahoo . yscrappy Skype: hub.orgICQ . 7615664 > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
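The same trick at the SQL level with a server-side prepared statement (add_days is just a made-up name): the placeholder stays a plain value and the interval is assembled from it.

PREPARE add_days(text) AS
    SELECT now() + ($1 || ' day')::interval;

EXECUTE add_days('2');   -- same as now() + '2 days'::interval
DEALLOCATE add_days;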
Re: [SQL] finding unused indexes?
check out pg_stat_user_indexes, you will need to turn on the stats collection in your postgresql.conf file first. Jim -- Original Message --- From: "George Pavlov" <[EMAIL PROTECTED]> To: Sent: Tue, 1 Aug 2006 09:05:34 -0700 Subject: [SQL] finding unused indexes? > Anybody have a clever way to quickly find whether there are any unused > indexes in a PG DB? One way I have done is to take queries from the DB > log, prepend an explain to each and grep the results, but I am wondering > if there are either any index usage stats maintained somewhere inside > Postgres or if there is a slicker/less cumbersome way of doing it. Also > indexes used by functions are hard to simulate that way. > > George > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend --- End of Original Message --- ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
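Once the stats collector is running, a first cut at the list is a single query; indexes backing primary keys or unique constraints will also show zero scans, so review before dropping anything.

-- Indexes never scanned since the statistics were last reset.
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;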
Re: [SQL] Deleting Functions
Scott, I use the following query with psql \o option. Change the schema name from public to whatever. I am sure you could put this into a plpgsql function using execute as well. Jim \o drops.sql select 'drop function ' || nspname || '.' || proname || '(' || pg_catalog.oidvectortypes(p.proargtypes) || ');' from pg_proc p join pg_namespace b on (p.pronamespace=b.oid) where nspname='public'; \o \!vi drops.sql # < To: pgsql-sql@postgresql.org Sent: Wed, 23 Aug 2006 13:29:12 -0600 Subject: [SQL] Deleting Functions > Folks, > > I am using PGSQL do do all of my schema changes, but have run into a > problem. I need to be able to DROP all of the USER FUNCTIONS that are > defined in a schema. The problem is that I do not know the names of all > of the functions and parameters. I know that I could use '\df' and then > use perl or some other scripting language to parse this, but I need to > be able to perform this action on multiple platforms (without rewriting > the process for each platform). > > I have written a script that incorporates '\df' and appropriate parsing > to perform this action for LINUX and it works fine/lasts a long time. I > need to make this more transparent so it will run on all platforms (in > particular windows). > > I know that in the past (version 7) there was a way to drop the > functions by directly accessing some of the system tables. The > question/problem is that we have updated to version 8.1 and need the > same functionality. > > Any help or directions would be greatly appreciated. > > Thanks Scott. > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend --- End of Original Message --- ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
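The same idea wrapped in a plpgsql function with EXECUTE, as suggested above. This is a sketch only: it drops every plain function in the given schema (aggregates would need DROP AGGREGATE and are not handled), so use with care.

CREATE OR REPLACE FUNCTION drop_schema_functions(_schema text) RETURNS integer AS $$
DECLARE
    r record;
    dropped integer := 0;
BEGIN
    FOR r IN
        SELECT nspname, proname,
               pg_catalog.oidvectortypes(p.proargtypes) AS args
        FROM pg_proc p
        JOIN pg_namespace b ON (p.pronamespace = b.oid)
        WHERE nspname = _schema
    LOOP
        EXECUTE 'DROP FUNCTION '
                || quote_ident(r.nspname) || '.' || quote_ident(r.proname)
                || '(' || r.args || ')';
        dropped := dropped + 1;
    END LOOP;
    RETURN dropped;
END;
$$ LANGUAGE plpgsql;

-- SELECT drop_schema_functions('public');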
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals with to_char)
On Oct 5, 2006, at 11:50 AM, Tom Lane wrote: regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 09:30:41'::timestamp); ?column? -- 14 days 14:28:19 (1 row) should be reporting '350:28:19' instead. This is a hack that was done to minimize the changes in the regression test expected outputs when we changed type interval from months/ seconds to months/days/seconds. But I wonder whether it wasn't a dumb idea. It is certainly inconsistent, as noted in the code comments. I'm tempted to propose that we remove the justify_hours call, and tell anyone who really wants the old results to apply justify_hours() to the subtraction result for themselves. Not sure what the fallout would be, though. I suspect there's applications out there that are relying on that being nicely formated for display purposes. I agree it should be removed, but we might need a form of backwards compatibility for a version or two... -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
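For anyone who wants the total-hours form today, the workaround is short: the epoch of the interval gives it directly, and justify_hours() converts the other way.

-- Total hours in the difference, however the interval is normalised:
SELECT extract(epoch FROM '2006-09-15 23:59:00'::timestamp
                        - '2006-09-01 09:30:41'::timestamp) / 3600.0 AS total_hours;

-- And the reverse direction, days+time from a plain-hours interval:
SELECT justify_hours('350:28:19'::interval);   -- 14 days 14:28:19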
Re: [HACKERS] [SQL] Case Preservation disregarding case
On Nov 14, 2006, at 2:42 PM, Simon Riggs wrote: On Thu, 2006-11-02 at 10:51 -0500, Tom Lane wrote: "Simon Riggs" <[EMAIL PROTECTED]> writes: We have namespaces to differentiate between two sources of object names, so anybody who creates a schema where MyColumn is not the same thing as myColumn is not following sensible rules for conceptual distance. I'd agree that that is not a good design practice, but the fact remains that they *are* different per spec. Would be better to make this behaviour a userset switchable between the exactly compliant and the more intuitive. That's certainly not happening --- if you make any changes in the semantics of equality of type name, it would have to be frozen no later than initdb time, for exactly the same reasons we freeze locale then (hint: index ordering). [Re-read all of this after Bruce's post got me thinking.] My summary of the thread, with TODO items noted: 1. PostgreSQL doesn't follow the spec, but almost does, with regard to comparison of unquoted and quoted identifiers. DB2 does this per spec. 2. TODO: We could follow the spec, but it would need an initdb option; some non-SQL:2003 standard PostgreSQL programs would not work as they do now. This is considered a minor, low priority item, though. 3. TODO: We could set column headers better if we wanted to (rather than ?column? we could use e.g. Sum_ColumnName etc) Did the idea of preserving the original case and using that for output column names, /d, etc. get shot down? I thought it would be a useful addition... -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] [pgadmin-support] Groups and Roles and Users
On Nov 6, 2006, at 12:18 PM, Ezequias Rodrigues da Rocha wrote: Could someone suggest me how to get a documentation about Groups and Roles and Users ? It would be nice to see a simple and easy documentation (a tutorial could be better, with pgadmin much better). Have you looked in the PostgreSQL docs? Roles aren't pgAdmin specific, afterall. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Rule for multiple entries
Use a trigger instead, the rule is only run once per insert/update/delete while the trigger is run for each row. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of William Scott Jordan Sent: Wednesday, December 13, 2006 9:05 PM To: pgsql-sql@postgresql.org Subject: [SQL] Rule for multiple entries Hi all! I have a rule in place that is supposed to adjust a value in one table based on how many rows are added or deleted to another table, but I'm not getting the results that I hoped for. If a single sql statement adds or deletes multiple entries, the rule still only adjusts the value by one, when I want it to adjust the value by the number of rows that were added or deleted. To recreate this problem, first set up the schema: CREATE TABLE counter ( counter_key int, counter_value int ) ; INSERT INTO counter (counter_key, counter_value) VALUES (1, 0) ; CREATE TABLE entries ( entry_key int ) ; CREATE RULE entries_insert AS ON INSERT TO entries DO UPDATE counter SET counter_value = counter_value + 1 WHERE counter_key = 1 ; CREATE RULE entries_delete AS ON DELETE TO entries DO UPDATE counter SET counter_value = counter_value - 1 WHERE counter_key = 1 ; Then enter some values: INSERT INTO entries (entry_key) VALUES (1) ; INSERT INTO entries (entry_key) VALUES (2) ; INSERT INTO entries (entry_key) VALUES (3) ; INSERT INTO entries (entry_key) VALUES (4) ; INSERT INTO entries (entry_key) VALUES (5) ; At this point the counter table should show a counter_value of 5, because there are 5 entries in the entries table. Now, if you delete 3 of those entries, the counter table should show a value of 2, but instead it only counts it as a single transaction. DELETE FROM entries WHERE entry_key > 2 ; Why doesn't the entries_delete rule hit for each row that's being adjusted. Or, more importantly, how can I adjust the rule so that it will count each row that's be added/deleted? Any suggestions would be appreciated. -Scott ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
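A sketch of the trigger version (drop the two rules first, or the counter will be bumped twice per row):

CREATE OR REPLACE FUNCTION entries_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE counter SET counter_value = counter_value + 1 WHERE counter_key = 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE counter SET counter_value = counter_value - 1 WHERE counter_key = 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER entries_count
    AFTER INSERT OR DELETE ON entries
    FOR EACH ROW EXECUTE PROCEDURE entries_count_trig();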
Re: [SQL] Help with quotes in plpgsql
Try select into a now() - interval ($1 || ' day') -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Richard Ray Sent: Tuesday, December 19, 2006 3:10 PM To: pgsql-sql@postgresql.org Subject: [SQL] Help with quotes in plpgsql How should this be properly quoted create or replace function test(integer) returns setof text as $$ declare a record; begin select into a now() - interval '$1 day'; return next a; return; end $$ language 'plpgsql'; I'm not having a lot of luck Thanks Richard ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
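One way to write it that parses everywhere is to build the interval text from the parameter and cast it (the interval '...' literal form only works with constants). A minimal working sketch, assuming the intent is simply to return now() minus N days as text:

CREATE OR REPLACE FUNCTION test(integer) RETURNS setof text AS $$
DECLARE
    a text;
BEGIN
    SELECT INTO a (now() - ($1 || ' day')::interval)::text;
    RETURN NEXT a;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM test(3);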
Re: [SQL] Related tables to a view
Try this query select a.relname as base,a.relkind from pg_class a join pg_depend d on (a.oid = d.refobjid) join pg_class c on (d.classid = c.oid) join pg_rewrite r on (objid = r.oid) join pg_class v on (ev_class = v.oid) where a.relkind in('r', 'v') and a.relname <> v.relname and v.relname='YOUR VIEW NAME HERE' order by 1 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jan Meyland Andersen Sent: Tuesday, December 26, 2006 3:11 PM To: Michael Fuhr Cc: Jan Meyland Andersen; pgsql-sql@postgresql.org Subject: Re: [SQL] Related tables to a view > You could query pg_depend to find out which tables and columns the > view's rewrite rule depends on but that's not as specific as what > you're requesting. I'm not aware of a way to associate a particular > table column with a particular view column short of parsing the view > definition or rule action; doing so would have to allow for the > possibility of a view column deriving its value from an arbitrarily > complex expression involving multiple tables, subqueries, etc. I have been thinking the problem through again, and I think I am able to solve the problem if I just know which tables the view is using. But how do I get this information from the pg_depend table? Regards Jan ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[SQL] Selecting different views from a Boolean expression
Hi there, I have a web based reporting system that I am coding. I need to be able to view statistics by the calendar year or the fiscal year depending on what the user selects. My issue is how do I change my select statements to change the column output? Example: Incident Jan Feb Mar Apr May June July Aug Sep Oct Nov Dec === Falls 1 1 0 0 0 0 0 0 0 0 0 0 Overdose 0 1 0 1 1 1 0 0 0 1 1 0 But if the user selects fiscal year the output could be: Incident Jun Jul Aug Sept Oct Nov Dec Jan Feb Mar Apr May === Falls 1 1 0 0 0 0 0 0 0 0 0 0 Overdose 0 1 0 1 1 1 0 0 0 1 1 0 I have created select views for each month possibility, but I don't know how to select the view I need from an IF/Then/else statement!! Any help would be appreciated. Thanks
[SQL] How to select different views using an IF/THEN/ELSE procedure?
Below is a select statement that select incidents by month. I need a function or a method to select differents views that will show the Month columns is a different order. Say Apr - Mar for a fiscal year. I need to do something like an if/then/else statement that selects the correct view to use by a variable check. Any ideas what I can use or do ? select case public.incident.gender_code_id WHEN 31 THEN 'Male' WHEN 32 THEN 'Female' ELSE 'Non-Person' END, count (case extract ( month from public.incident.incident_date )WHEN 01 then 1 Else NULL END) as Jan, count (case extract ( month from public.incident.incident_date )WHEN 02 then 1 Else NULL END) as Feb, count (case extract ( month from public.incident.incident_date )WHEN 03 then 1 Else NULL END) as Mar, count (case extract ( month from public.incident.incident_date )WHEN 04 then 1 Else NULL END) as Apr, count (case extract ( month from public.incident.incident_date )WHEN 05 then 1 Else NULL END) as May, count (case extract ( month from public.incident.incident_date )WHEN 06 then 1 Else NULL END) as Jun, count (case extract ( month from public.incident.incident_date )WHEN 07 then 1 Else NULL END) as Jul, count (case extract ( month from public.incident.incident_date )WHEN 08 then 1 Else NULL END) as Aug, count (case extract ( month from public.incident.incident_date )WHEN 09 then 1 Else NULL END) as Sep, count (case extract ( month from public.incident.incident_date )WHEN 10 then 1 Else NULL END) as Oct, count (case extract ( month from public.incident.incident_date )WHEN 11 then 1 Else NULL END) as Nov, count (case extract ( month from public.incident.incident_date )WHEN 12 then 1 Else NULL END) as Dec, count (extract ( month from public.incident.incident_date )) as Total from public.incident GROUP BY public.incident.gender_code_id
[SQL] Using a variable as a view name in a select
I have created a view, called april_may. I need to select this view by combining two fields in the database to create the view name etc ... Create view as select * from table_X; I need to do something like this ... Select * from (select table.start_month||_||table.end_month); == Start_month = april End_month = May What I want to pass to the select is the combination of the 2 fields as the view name. Any ideas ?
Re: [SQL] Using a variable as a view name in a select
Almost, in the table there are multiple different incidents. Incident April May June July Aug === Falls1 0 1 0 0 Roof Area 0 1 0 0 2 Complaints.. 1 2 3 2 2 Etc ... What I need to do is to be able to change the column heading to have a different start and finish month Etc ... Incident Feb Mar Apr May June == Falls1 0 1 0 0 Roof Area 0 1 0 0 2 Complaints.. 1 2 3 2 2 The only way I can think of is to create 12 differents views with the months in order and then concatenating the start_month and end_month fields in the database to create the view name. Then do a select with the created view name. Select * from May_June; . . . From: Hilary Forbes [mailto:[EMAIL PROTECTED] Sent: April 3, 2007 12:45 PM To: Wilkinson, Jim Cc: pgsql-sql@postgresql.org Subject: RE: [SQL] Using a variable as a view name in a select Jim So let's suppose you have a "master" table of incidents incident_no (serial) incident_date (timestamp) other fields My understanding is that you now want to eg count the incidents starting in a given month and going forwards for 12 months, grouping the results by month. Have I understood the problem? If so here goes: Set up a table hftest incident serial incdate timestamp SELECT * from hftest; incident | incdate --+- 1000 | 2006-05-03 00:00:00 1001 | 2006-04-03 00:00:00 1002 | 2006-04-01 00:00:00 1003 | 2006-12-08 00:00:00 1004 | 2007-02-28 00:00:00 1005 | 2007-08-03 00:00:00 Now: SELECT max(to_char(incdate,'Mon')) ,count(incident) from hftest WHERE date_trunc('month',incdate) >='2006/04/01' AND date_trunc('month',incdate)<=date_trunc('month',date '2006/04/01' + interval '12 months') GROUP BY date_trunc('month',incdate) ORDER BY date_trunc('month',incdate); max | count -+--- Apr | 2 May | 1 Dec | 1 Feb | 1 which is almost what you want. To get the missing months with zeroes, I think you probably need a table of months and to use a left outer join but you may have found a better way by now! Now I have NO idea on the efficiency of this as I rather suspect all those date_trunc functions may have an adverse effect! Best regards Hilary At 16:44 03/04/2007, you wrote: Hi Hilary, I am trying to produce reports where the user can select a different fiscal year starting month. From this I would select the correct table view to produce the reports in the correct month order by column Select * from table_view; Incident April May June July Aug === Falls 1 0 1 0 0 . . . . Can you think of another way to do this ? From: Hilary Forbes [ mailto:[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]> ] Sent: April 3, 2007 10:14 AM To: Wilkinson, Jim; pgsql-sql@postgresql.org Subject: Re: [SQL] Using a variable as a view name in a select Jim My initial reaction is what are you trying to achieve? Surely you could have one underlying table with dates in it and SELECT * from mytable WHERE date1>='2007/04/01' AND date2<='2007/05/01'; but otherwise, like John, I would use an external scripting language to create the table name. Hilary At 14:04 03/04/2007, Wilkinson, Jim wrote: I have created a view, called april_may. I need to select this view by combineing to fields in the database to create the view name etc ... Create view as select * from table_X; I need to do something like this ... Select * from (select table.start_month||_||table.end_month); == Start_month = april End_month = May What I what to pass to the select is the combination of the 2 fields as the view name. Any ideas ? 
[SQL] EXECUTE in a function to return a VIEW object ID
Hi there, I have tried many ideas to get this working but no luck. Can someone show me or explain what is happening EXAMPLE == I am trying to read 2 text fields to combine them together to form the name of a VIEW. example SELECT * FROM ( 'april'||'may') ; I have tried the EXECUTE in a function to PREPARE a dynamic select call; EXECUTE 'SELECT * FROM ' || 'select tablename.text_field1 from tablename' || 'select tablename.text_field2 from tablename'; If tablename.text_field1 = "May" and tablename.text_field2 = "Aprl", this function only returns "MayApril" and not the columns and data that I expected. What am I doing wrong here? Has anyone done this before or have any ideas on how to do it ? Thanks
[SQL] Function to return a multiple column table or view
I am new to psql, so please be patient! Can someone please provide a small quick example of a function that takes 1 parameter and, based on that parameter, returns a table or view? Etc Note this is just an abstract of the function, not a working function call !!! Create function viewtest( start_month) If $1 = 'April' Then Select * from april_view; Return ; Let me know how to do this. Thanks
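A sketch of one way to do it, with hypothetical names throughout (month_report as a shared row type, and views named like april_view); plpgsql has to EXECUTE and RETURN NEXT because the view name is only known at run time:

-- Hypothetical row type that all of the monthly views share.
CREATE TYPE month_report AS (incident text, cnt integer);

CREATE OR REPLACE FUNCTION viewtest(start_month text) RETURNS SETOF month_report AS $$
DECLARE
    r month_report;
BEGIN
    -- Build the view name at run time, e.g. 'april_view', and stream its rows back.
    FOR r IN EXECUTE 'SELECT * FROM ' || quote_ident(lower(start_month) || '_view') LOOP
        RETURN NEXT r;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM viewtest('April');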
Re: [SQL] hi
Select sno AS "SNO", SELECT value AS "VALUE", get_sum(value) as SUM from temp; From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Penchalaiah P. Sent: April 24, 2007 8:09 AM To: pgsql-sql@postgresql.org Subject: [SQL] hi Hi I have the data like this in temp table SQL> Select sno, value from temp; SNO Value 1 650.00 2 850.00 3 640.00 3 985.00 5 987.00 9 9864.00 7 875.00 Now, I want display the result like this... SQL> Select sno, value, get_sum(value) as sum from temp; SNO Value SUM 1 650.00 650.00 2 850.00 1500.00 --650+850 3 640.00 2140.00 --1500+640 3 985.00 3125.00 -- 2140+985 5 987.00 4112.00 9 9864.00 13976.00 7 875.00 14851.00 Any one can tell me query for this .. I don't want any procedure or function... Thanks & Regards Penchal Reddy Information transmitted by this e-mail is proprietary to Infinite Computer Solutions and / or its Customers and is intended for use only by the individual or the entity to which it is addressed, and may contain information that is privileged, confidential or exempt from disclosure under applicable law. If you are not the intended recipient or it appears that this mail has been forwarded to you without proper authority, you are notified that any use or dissemination of this information in any manner is strictly prohibited. In such cases, please notify us immediately at [EMAIL PROTECTED] and delete this email from your records.
Re: [SQL] Dynamic prepare possible in plpgsql?
On May 1, 2007, at 12:16 PM, Jonah H. Harris wrote: On 5/1/07, Collin Peters <[EMAIL PROTECTED]> wrote: Is it faster to use PREPARE for the various INSERT statements inside a plpgsql function? Perhaps I am wrong and it does its PREPARE work when the function is parsed. IIRC, PLpgSQL automagically prepares each statement behind the scenes on the first use. BTW, a good use for a version of EXECUTE that accepted parameters is the trigger on a partitioned table to direct inserts to the appropriate partition. Currently, you have to quote_literal(coalesce (NEW.field, 'NULL')) in the dynamic statement. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Doing a conditional aggregate (e.g. count(*) if x=y) in postgres?
Use case statement and sum to get a count where status=20... For example Select sum(case when status=20 then 1 else 0 end) as status20, Sum(case when status=30 then 1 else 0 end) as status30 -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Bryce Nesbitt Sent: Monday, May 14, 2007 6:56 PM To: pgsql-sql@postgresql.org Subject: [SQL] Doing a conditional aggregate (e.g. count(*) if x=y) in postgres? All; Is there a way to get a conditional aggregate? I have this two column view: SELECT count(*) AS count, xx_plan.plan_name FROM xx_membership JOIN xx_account USING (account_id) JOIN xx_plan USING (plan_id) WHERE xx_membership.status = 10 GROUP BY xx_plan.plan_name; And would like to add additional columns (not rows) breaking out "status=20" and "status=30" totals. Is this possible without a stored procedure? ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] [PERFORM] Performance on writable views
On Aug 11, 2007, at 8:58 AM, Joshua D. Drake wrote: Heikki Linnakangas wrote: Enrico Weigelt wrote: I'm often using writable views as interfaces to clients, so they only see "virtual" objects and never have to cope with the actual storage, ie. to give some client an totally denormalized view of certain things, containing only those information required for certain kind of operations. Now I've got the strange feeling that this makes updates slow, since it always has to run the whole view query to fetch an record to be updated (ie. to get OLD.*). There is some overhead in rewriting the query, but it shouldn't be significantly slower than issuing the statements behind the view directly. I wouldn't worry about it, unless you have concrete evidence that it's causing problems. I don't know about that, at least when using rules for partitioning the impact can be significant in comparison to triggers. That's because you have to re-evaluate the input query for each rule that's defined, so even if you only have rules for 2 partitions in a table (which is really about the minimum you can have, at least for some period of overlap surrounding the time when you switch to a new partition), you're looking at evaluating every input query twice. In this case, the rules presumably are just simply re-directing DML, so there'd only be one rule in play at a time. That means the only real overhead is in the rewrite engine. -- Decibel!, aka Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] [PHP] [ADMIN] Data insert
On Sun, Aug 21, 2005 at 06:35:22AM +0100, Aldor wrote: > if you want to insert biiig data volumes try either using COPY instead > of INSERT - it will run much much faster And if for some reason you have to stick with inserts, group them into transactions; it will perform much better than individual transactions. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Softwarehttp://pervasive.com512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
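A sketch of both suggestions, with made-up table and file names (COPY FROM a server-side file needs superuser rights; \copy in psql reads a client-side file instead):

-- Bulk load straight from a file: by far the fastest path.
COPY big_table FROM '/tmp/big_table.dat';

-- If it has to be INSERTs, batch them inside one transaction instead of
-- paying a commit for every row:
BEGIN;
INSERT INTO big_table VALUES (1, 'first row');
INSERT INTO big_table VALUES (2, 'second row');
-- ... many more ...
COMMIT;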
Re: [SQL] insert only if conditions are met?
SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND work_date = '8-15-2005'::date will give you the hours. So... INSERT INTO table SELECT blah WHERE (SELECT sum(hours) FROM table WHERE emp_name = 'JSMITH' AND work_date = '8-15-2005'::date) != 8 Should do what you want. On Wed, Aug 31, 2005 at 12:49:14PM -0400, Henry Ortega wrote: > Ok. Here's TABLE A > > emp date hours type > JSMITH 08-15-2005 5 WORK > JSMITH 08-15-2005 3 WORK > JSMITH 08-25-2005 6 WORK > > I want to insert the ff: > 1.) JSMITH 08-15-2005 8 VAC > 2.) DOE 08-16-2005 8 VAC > > #1 should fail because there is already 8 hours entered as being > Worked on 08-15-2005 (same date). > > Any suggestions? > > > > On 8/31/05, Ragnar Hafsta? <[EMAIL PROTECTED]> wrote: > > > > On Wed, 2005-08-31 at 11:49 -0400, Henry Ortega wrote: > > > What I am trying to do is > > > * Insert a record for EMPLOYEE A to TABLE A > > > IF > > > the sum of the hours worked by EMPLOYEE A on TABLE A > > > is not equal to N > > > > > > Is this possible? > > > > Sure, given a suitable schema > > > > It is not clear to me, if the hours worked are > > to be found in the same table you want to insert > > into, or not. > > > > gnari > > > > > > > > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Softwarehttp://pervasive.com512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Help with multistage query
On Wed, Sep 07, 2005 at 05:37:47PM -0400, Matt Emmerton wrote: > > - Original Message - > From: Russell Simpkins > To: pgsql-sql@postgresql.org > Sent: Wednesday, September 07, 2005 4:05 PM > Subject: Re: [SQL] Help with multistage query > > I have a perl script that issues a series of SQL statements to perform > some queries. The script works, but I believe there must be a more elegant > way to do this. > > > > The simplified queries look like this: > > > > SELECT id FROM t1 WHERE condition1; ;returns about 2k records which are > stored in @idarray > > > > foreach $id (@idarray) { > >SELECT x FROM t2 WHERE id=$id; ; each select returns about 100 > records which are saved in a perl variable > > } > > how about > select t1.id from t1, t2 where t1.id = t2.id and t2.id = x > > or more correctly, based on the OP's example: > > select t2.x from t1, t2 where t1.id = t2.id and t1.id = Actually, I think you want AND t2.x , not t1.id. BTW, I recommend not using id as a bareword field name. Very easy to get confused when you start joining a bunch of stuff together. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Why doesn't the SERIAL data type automatically have a
On Tue, Sep 27, 2005 at 10:33:14AM -0500, Scott Marlowe wrote: > On Mon, 2005-09-26 at 20:03, Tom Lane wrote: > > Ferindo Middleton Jr <[EMAIL PROTECTED]> writes: > > > Is there some reason why the SERIAL data type doesn't automatically have > > > a UNIQUE CONSTRAINT. > > > > It used to, and then we decoupled it. I don't think "I have no use for > > one without the other" translates to an argument that no one has a use > > for it ... > > I have to admit, right after the change was made, I was of the opinion > that no one would ever need that. Then, a few months later, it was > exactly what I needed for some project... :) Arguably it would have been better to make the default case add either UNIQUE or PRIMARY KEY with a way to over-ride. If newbies are getting burned maybe it would be useful to toss a NOTICE or maybe even WARNING when a serial is created without a unique constraint of some kind? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Why doesn't the SERIAL data type automatically have a
On Tue, Oct 04, 2005 at 07:50:28PM -0400, Ferindo Middleton Jr wrote: > Based on the feedback I received after I made that original post, it > seemed most people don't use SERIAL with a unique constraint or primary > key and I was blasted for making such a suggestion. I'm sorry... It I don't think either assertion is true. I'd bet most of the developers actually do normally use an index on a serial, since it's normally used as a PK. And while people can be a bit terse with their replies, I wouldn't say you were blasted. :) -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [DOCS] [SQL] Update timestamp on update
On Wed, Oct 12, 2005 at 10:52:04PM -0400, Tom Lane wrote: > Jeff Williams <[EMAIL PROTECTED]> writes: > > Thanks. Triggers was my first thought, but chapter 35 on Triggers didn't > > really indicate a way I could do this easily and scared me with a lot of > > c code. > > Yeah. This is a documentation issue that's bothered me for awhile. > The problem is that we treat the PL languages as add-ons and therefore > the documentation of the "core" system shouldn't rely on them ... but > that leaves us presenting C-code triggers as the only examples in > chapter 35. There is a paragraph in there suggesting you go look at > the PL languages first, but obviously it's not getting the job done. Chapter 35 is plpgsql.. do you mean chapter 32.4? > Anybody have a better idea? What about a See Also section ala man pages that links to trigger info for other languages? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] How to speed up the database query?
Have you taken a look at http://www.postgresql.org/docs/8.0/interactive/performance-tips.html ? On Thu, Oct 27, 2005 at 03:03:36PM +0800, Abdul Wahab Dahalan wrote: > Hi everyone! > > I'm looking for solution to speed up the database query, means that to get > resultset as quicker as we can. > > For example if I've 700 records in the table it will take longer time > compared if I've only 20 records. How do we speed up the query?. Any query > technique that can be applied?. > > Thus wild card query like : select * from tableA will cause query time > increased compare to say select a,b from tableA. > > any help, prettymuch appreciated. > > > ---(end of broadcast)--- > TIP 5: don't forget to increase your free space map settings > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] why vacuum
On Thu, Oct 27, 2005 at 02:21:15PM +0100, Richard Huxton wrote: > So - if your statement contains something non-deterministic that isn't > catered for in Mysql's code then it will break. > > At it's simplest - if I write a function my_random() and then do: > UPDATE foo SET a=1 WHERE b < my_random(); > IF my_random() returns different results on different machines, then the > replication will be broken. See the manual entry below: > http://dev.mysql.com/doc/refman/5.0/en/replication-features.html > > That's not to say the system is worthless - it works fine for many > people. But it does have limitations. And you can easily have multi-master syncronous replication in PostgreSQL using the same idea; just see pgCluster. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] [GENERAL] A Not Join
On Tue, Nov 01, 2005 at 04:27:01PM +0200, L van der Walt wrote: > I have three table: > Users - Contains username, ID etc... > Permissions - A permission name and ID > Link up table - The user.id and permission.id > > If a user.id and a permission.id row exists in the linkuptable the user > have that permission granted. > > With the statement below I can see the permissions a user have. > > SELECT users.username, permissions.name > FROM users INNER JOIN linkuptable > ON (users.id = linkuptable.userid) > INNER JOIN permissions > ON (permissions.id = linkuptable.permissionid) > WHERE users.username = 'DummyUser' > > How do I see the permissions that user DON'T have with a fast SQL statement. > > Thus, a NOT the statement for the above SQL statement LEFT JOIN permissions ON (...) WHERE permissions.id IS NULL You might have to do the NULL check in a HAVING clause instead... try it. BTW, this is probably better asked on pgsql-sql. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
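Spelled out against the three tables described above (a sketch): collect the permission ids the user already has, then keep the permissions that do not join to that set.

SELECT p.name
FROM permissions p
LEFT JOIN (SELECT l.permissionid
             FROM linkuptable l
             JOIN users u ON (u.id = l.userid)
            WHERE u.username = 'DummyUser') AS granted
       ON (p.id = granted.permissionid)
WHERE granted.permissionid IS NULL;

A NOT EXISTS subquery against linkuptable works just as well and often reads more clearly.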
Re: [SQL] Nested Table in PostgreSQL or some alternative Variants
What do you mean by 'nested table'? Maybe arrays will do what you want? Typically (and this applies to other databases as well), this is done using two tables and refferential integrity. IE: CREATE TABLE purchase_order( po_id serial CONSTRAINT purchase_order__po_id PRIMARY KEY , customer_id int CONSTRAINT purchase_order__customer_RI REFERENCES customer(id) , more fields... ) CREATE TABLE po_lines ( po_id int CONSTRAINT po_lines__po_id_RI REFERENCES purchase_order(po_id) , line_number smallintNOT NULL , ... , CONSTRAINT po_lines__po_id_line_number PRIMARY KEY( po_id, line_number) ) On Mon, Oct 31, 2005 at 02:22:05PM +0100, Thomas Zuberbuehler wrote: > Hello there > > I've a problem. I can't find some information about nested tables in > PostgreSQL. Is this Features possible in pgsql or not? > > * When yes, how i can use and create nested tables with pgsql? > * When no, which alternative are there (for same problem definition)? > > Thank you for help. > Greetings from Zurich, Switzerland. > Thomas Zuberbuehler > > ---(end of broadcast)------- > TIP 2: Don't 'kill -9' the postmaster > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Does VACUUM reorder tables on clustered indices
On Sun, Dec 18, 2005 at 07:01:39PM -0300, Alvaro Herrera wrote: > Martin Marques escribi?: > > On Sun, 18 Dec 2005, frank church wrote: > > > > > > > >Does VACUUMing reorder tables on clustered indices or is it only the > > >CLUSTER > > >command that can do that? > > > > Cluster does that. Vacuum only cleans dead tuples from the tables. > > Note that while reordering, CLUSTER also gets rid of dead tuples, so if > you cluster you don't need to vacuum. It also does a REINDEX... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [SQL] Does VACUUM reorder tables on clustered indices
On Wed, Dec 21, 2005 at 12:34:12AM +0100, [EMAIL PROTECTED] wrote: > Hi, > > Utilize CLUSTER; (after vacuum) to reorder the data. Why would you vacuum when cluster is just going to wipe out the dead tuples anyway? > >>Note that while reordering, CLUSTER also gets rid of dead tuples, so if > >>you cluster you don't need to vacuum. > > > >It also does a REINDEX... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] regarding grant option
Though, it is pretty easy to do something like: select 'GRANT ALL ON ' || table_name || ' TO public;' from information_schema.tables where table_schema='blah'; You can feed the output of that to psql, ei: psql -qc "select 'GRANT ALL ON ' || table_name || ' TO public;' from information_schema.tables where table_schema='blah'" | psql On Wed, Mar 01, 2006 at 12:00:16PM -0300, Alvaro Herrera wrote: > AKHILESH GUPTA wrote: > > thank you very much sir for your valuable suggestion, > > but i am talking about direct database query...! > > There is none that can help you here, short of making a function in > PL/pgSQL or other language ... > > > On 3/1/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote: > > > > > > AKHILESH GUPTA wrote: > > > > > > > here i have to grant permissions to that user individually for each and > > > > every table by using: > > > > :->> grant ALL ON to ; > > > > GRANT > > > > and all the permissions are granted to that user for that particular > > > table. > > > > > > Yes. If you are annoyed by having to type too many commands, you can > > > write a little shell script to do it for you. > > -- > Alvaro Herrerahttp://www.CommandPrompt.com/ > The PostgreSQL Company - Command Prompt, Inc. > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] Replication - state of the art?
You could also use WAL shipping and some PITR trickery to keep a 'warm standby' database up to date. How far behind it falls is up to you, since you'll be periodically syncing the current WAL file to the backup machine. Do the sync once a minute, and at most you lose 60 seconds of data. On Wed, Mar 01, 2006 at 02:49:18PM -0500, Andrew Sullivan wrote: > On Wed, Mar 01, 2006 at 11:28:06AM -0800, Bryce Nesbitt wrote: > > Actually let me loosen that a bit: we don't need two phase commit. We > > can loose the most recent transaction, or even the last few seconds of > > transactions. What we can't survive is -- on the day of the emergency > > -- a long and complicated DB rebuild with mistakes and hard-to-debug > > data issues. > > Then I suggest you use Slony-I. While it is not plug and play, the > thing it _is_ designed to handle reasonably well is failover and > (better) switchover. Most systems plan to solve that piece of > functionality later, with a script or something, at which point it is > apparent that setting up failover or swichover to be anything > approaching safe is actually very tricky. (Log shipping is probably > not in this category, but AFAIK the promote-to-live support for a > standby database copy is still not all built by anyone. If you like > rolling your own, however, it might be your answer.) > > > There's no fire creating demand for replication, so there is little time > > budget. > > So is there a sort of padded, no-sharp-corners, playroom that gets us > > 90% of the way there? > > The "no budget" remark here is what makes me strike CMD's Mammoth > Replicator off the list. But I'm sure their administration tools are > far sweeter than the admittedly hackish ones that Slony currently > delivers out of the box. > > > nightly) into something more reasonable (like 500 milliseconds). But > > risk -- of data corruption -- > > and time --too much-- will can the project. > > Another big reason to use a live-standby system like Slony is that > once you have the extra database online, you suddenly think of all > sorts of nifty queries you can move there without destroying your > production performance. Be careful not to get addicted, is all. > > A > > -- > Andrew Sullivan | [EMAIL PROTECTED] > Information security isn't a technological problem. It's an economics > problem. > --Bruce Schneier > > ---(end of broadcast)--- > TIP 9: In versions below 8.0, the planner will ignore your desire to >choose an index scan if your joining column's datatypes do not >match > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] Problem with query on history table
Probably the easiest way is to switch to using table partitioning and switch to using start_timestamp and end_timestamp, so that when you modify a row you update the old one setting end_timestamp to now() and insert the new row (all within one transaction). There are other ways to do it, but they'll probably be much slower. I don't think they require a lot of CASE statements though. Show us what you were planning on doing and maybe I'll have more ideas. On Mon, Feb 27, 2006 at 08:19:30AM +0100, Andreas Joseph Krogh wrote: > Hi all! > > I don't know if there's a standard solution to the kind of problem I'm trying > to solve, but I will appreciate your thougts(and maybe solution:) on this > problem of mine: > > I have 2 tables: hist and curr which hold numbers for "history-data" and > "current-data" respectivly. Here is a simplified version of the schema: > > CREATE TABLE curr ( > id integer NOT NULL, > etc integer NOT NULL, > created timestamp without time zone NOT NULL, > modified timestamp without time zone > ); > > CREATE TABLE hist ( > id serial NOT NULL, > curr_id integer NOT NULL REFERENCES curr(id), > etc integer NOT NULL, > modified timestamp without time zone NOT NULL > ); > > andreak=# SELECT * from curr; > id | etc | created | modified > +-+-+- > 1 | 5 | 2006-02-01 00:00:00 | > 2 | 10 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 > 3 | 10 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 > (3 rows) > > andreak=# SELECT * from hist; > id | curr_id | etc | modified > ++-+- > 1 | 3 | 30 | 2006-01-16 00:00:00 > 2 | 3 | 20 | 2006-01-25 00:00:00 > 3 | 2 | 20 | 2006-01-26 00:00:00 > (3 rows) > > Now - I would like to get a report on what the "ETC" is on a given entry in > "curr" in a given "point in time". Let me explain. If I want status for 17. > jan.(17.01.2006) I would like to get these numbers out from the query: > > id | created |curr_modified|hist_modified| etc > +-+-+-+- > 3 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 | 2006-01-16 00:00:00 | 30 > 2 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 | 2006-01-26 00:00:00 | 20 > 1 | 2006-02-01 00:00:00 | | | 5 > > > That is; If the entry is modified after it's created, a snapshot of the "old > version" is copied to table "hist" with the hist.modified field set to the > "modified-timestamp". So there will exist several entries in "hist" for each > time an entry in "curr" is modified. > > If I want status for the 27. jan. I would like the query to return the > following rows: > > id | created |curr_modified|hist_modified| etc > +-+-+-+- > 3 | 2006-01-08 00:00:00 | 2006-01-25 00:00:00 | 2006-01-25 00:00:00 | 10 > 2 | 2006-01-15 00:00:00 | 2006-01-26 00:00:00 | 2006-01-26 00:00:00 | 10 > 1 | 2006-02-01 00:00:00 | | | 5 > > select curr.id, curr.created, curr.modified as curr_modified, hist.modified > as > hist_modified, coalesce(hist.etc, curr.etc) as etc FROM curr LEFT OUTER JOIN > hist ON(curr.id = hist.curr_id) WHERE ... > > I'm really stuck here. It seems to me that I need a lot of > CASE...WHEN...ELSE.. statements in the query, but is there an easier way? > > -- > Andreas Joseph Krogh <[EMAIL PROTECTED]> > > ---(end of broadcast)--- > TIP 3: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
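With the start_timestamp/end_timestamp layout suggested above, the point-in-time report becomes a single range test; a sketch with a hypothetical table name:

-- Every row version that was current at the given instant.
SELECT id, etc, start_timestamp, end_timestamp
FROM curr_versions
WHERE start_timestamp <= '2006-01-17'
  AND (end_timestamp > '2006-01-17' OR end_timestamp IS NULL);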
Re: [SQL] Expressing a result set as an array (and vice versa)?
On Sat, Mar 25, 2006 at 12:17:08AM +0100, PFC wrote: > > > CREATE OR REPLACE FUNCTION foreach( liste INTEGER[] ) RETURNS SETOF > INTEGER AS $$ > DECLARE > i INTEGER; > BEGIN > FOR i IN 1..icount(liste) LOOP > RETURN NEXT liste[i]; > END LOOP; > END; > $$ LANGUAGE plpgsql; Seems like this should really exist in the backend... > CREATE AGGREGATE array_accum ( > sfunc = array_append, > basetype = anyelement, > stype = anyarray, > initcond = '{}' > ); > > SELECT array_accum( DISTINCT list_id ) FROM bookmarks; > array_accum > --- > {1,2,3,4,5,7} Couldn't you just use array()? -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
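For reference, the ARRAY(subquery) form mentioned at the end looks like this:

SELECT ARRAY(SELECT DISTINCT list_id FROM bookmarks ORDER BY list_id);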
Re: [SQL] SQL Query Newbie Help
On Fri, Mar 24, 2006 at 02:29:10PM -0800, Stephan Szabo wrote: > > On Fri, 24 Mar 2006, Julie Robinson wrote: > > > This works, but is there a better solution? > > > > select * > > from quality_control_reset T > > where date = ( > > select max(date) > > from quality_control_reset > > where qualitycontrolrange = T.qualitycontrolrange); > > If you can use PostgreSQL extensions (and don't care that you might not > get two rows if two ids had the same date equaling the max date for a > given range), maybe something like: > > select distinct on (qualitycontrolrange) id, date, qualitycontrolrange > from quality_control_reset order by qualitycontrolrange,date desc; > > > Otherwise, you might see how the above compares in plan to something like > (not really tested): > > select T.* from quality_control_reset T inner join > (select qualitycontrolrange, max(date) as date from quality_control_reset > group by qualitycontrolrange) T2 > on (T.qualitycontrolrange = T2.qualitycontrolrange and T.date=T2.date); BTW, I believe the new row operator fixes in 8.2 make it possible to use them to do this kind of thing as well... -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] Question about One to Many relationships
On Fri, Mar 24, 2006 at 06:29:25PM +0100, PFC wrote: > > > >>And I want to link the band to the album, but, if the album is a > >>compilation it'll be linked to multiple band.ids, so i can't just add > >>a column like: > > For a compilation, you should link a band to a track, not an album. > This opens another can of worms... > > I would use the following tables : BTW, if you're going to be writing code to manage stuff like this, you should absolutely check out the source for http://musicbrainz.org/, which uses PostgreSQL as it's backend. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Find min and max values across two columns?
On Fri, Mar 24, 2006 at 04:00:35PM -0500, Tom Lane wrote: > Amos Hayes <[EMAIL PROTECTED]> writes: > > I'm trying to build a query that among other things, returns the > > minimum and maximum values contained in either of two columns. > > I think you might be looking for > > select greatest(max(columnA), max(columnB)) from tab; > select least(min(columnA), min(columnB)) from tab; > > greatest/least are relatively new but you can roll your own in > older PG releases. And if you care about performance you might also try: SELECT max(greatest(column_a, column_b) ... SELECT min(least(column_a, column_b) ... There may be a difference in performance between the two. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] unique names in variables and columns in plsql functions
On Mon, Mar 27, 2006 at 04:33:55PM +0200, Wiebe Cazemier wrote: > Hi, > > In a plpgsl function, consider the following excerpt: > > DECLARE > provider_id INTEGER; > BEGIN > provider_id := (SELECT provider_id FROM investment_products WHERE id = > my_new.investment_product_id); > END; > > After a lot of trouble, I found out this line doesn't work correctly > with the variable name as it is. It doesn't give an error or anything, > it just retrieves some wrong value (probably NULL). When I change the > variable name to anything other than "provider_id", it works OK. > > I was somewhat surprised to discover this. Can't Postgres determine that > the provider_id in the SELECT statement is not the same one as the variable? Sadly, overloading variable names between plpgsql and SQL is *highly* problematic. Because of this I *always* prefix plpgsql variables with something, such as p_ for parameters and v_ for general variables. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [SQL] unique names in variables and columns in plsql functions
On Mon, Mar 27, 2006 at 10:02:24AM -0500, Tom Lane wrote: > Wiebe Cazemier <[EMAIL PROTECTED]> writes: > > DECLARE > > provider_id INTEGER; > > BEGIN > > provider_id := (SELECT provider_id FROM investment_products WHERE id = > > my_new.investment_product_id); > > END; > > > After a lot of trouble, I found out this line doesn't work correctly > > with the variable name as it is. It doesn't give an error or anything, > > it just retrieves some wrong value (probably NULL). > > It'll retrieve whatever the current value of the plpgsql variable > provider_id is. plpgsql always assumes that ambiguous names refer > to its variables (indeed, it isn't even directly aware that there's > any possible ambiguity here). > > > I was somewhat surprised to discover this. Can't Postgres determine that > > the provider_id in the SELECT statement is not the same one as the variable? > > How and why would it determine that? In general it's perfectly normal > to use plpgsql variable values in SQL commands. I don't think it'd make > the system more usable if the parser tried to apply a heuristic rule > about some occurrences being meant as variable references and other ones > not. If the rule ever got it wrong, it'd be even more confusing. BTW, I believe SELECT investment_products.provider_id would work here, but I'm too lazy to test that theory out. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
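Both workarounds from the two replies, shown against the same excerpt as a sketch:

-- (a) Rename the variable so it cannot capture the column name:
DECLARE
    v_provider_id integer;
BEGIN
    SELECT INTO v_provider_id provider_id
      FROM investment_products
     WHERE id = my_new.investment_product_id;
END;

-- (b) Or qualify the column with its table name inside the query:
--     SELECT investment_products.provider_id FROM investment_products WHERE ...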
Re: [SQL] Migrating a Database to a new tablespace
That means that the tablespace directory isn't empty. On Mon, Apr 24, 2006 at 01:34:33PM +0200, Markus Schaber wrote: > Hello, > > I now pulled the plug, migrated all databases via "create database ... > tempate olddatabase tablespace newts" to new tablespaces, one for each > database, and dropped all old databases that contained references to the > tablespace. Pgadmin3 also shows that the tablespace is not referenced by > anything. > > But I cannot drop it, I get the following message: > > postgres=# drop TABLESPACE foo; > ERROR: tablespace "foo" is not empty > > > It seems that the whole tablespace thing is not yet 100% waterproof, > good that this did happen on a developer machine, and not on a > production machine. > > Thanks for your patience, > Markus > > -- > Markus Schaber | Logical Tracking&Tracing International AG > Dipl. Inf. | Software Development GIS > > Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [SQL] Migrating a Database to a new tablespace
On Wed, Apr 26, 2006 at 12:35:39PM -0400, Bruce Momjian wrote: > Tom Lane wrote: > > Markus Schaber <[EMAIL PROTECTED]> writes: > > > As I said the leftovers are likely to be caused by hard kills and > > > backend crashes, so I would not go into deeper analysis, but maybe the > > > finding and possibly removing of such leftovers should be half-automated > > > to assist server admins. > > > > It's been discussed. Personally I'm afraid of the idea of automatically > > deleting files that seem unreferenced, but having a tool to find them > > for manual deletion isn't a bad idea. I think Bruce had a prototype > > patch at one point --- not sure what the status is. > > > > I have work someone did in the past. I just need to be updated to deal > with tablespaces. > > ftp://candle.pha.pa.us/pub/postgresql/mypatches/checkfile.* > > Let me know if you want details. Is it able to also delete the cruft? Seems to be a useful extension, especially on windows, which AFAIK doesn't have an equivalent to ``. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [SQL] LinkedList
decibel=# select * from t; a | b ---+--- 1 | 0 3 | 1 5 | 3 7 | 5 2 | 0 4 | 2 6 | 4 8 | 6 (8 rows) decibel=# select * from t x join t y on(x.a=y.b) where y.a=7; a | b | a | b ---+---+---+--- 5 | 3 | 7 | 5 (1 row) decibel=# select * from t x join t y on(x.a=y.b) where y.a=8; a | b | a | b ---+---+---+--- 6 | 4 | 8 | 6 (1 row) decibel=# As you can see, it selects the right data, but you'll need to step through it somehow. You might be able to do it with a generate_series(), or you can use a function. If we get WITH support/recursion in 8.2 you'd use that. I think that "SQL For Smarties" by Joe Celko might have an example of how to do this without using a function. Even if it doesn't it's a book any serious database developer should own. On Wed, Apr 26, 2006 at 10:35:15AM -0700, Ray Madigan wrote: > Scott, > > Thanks for your reply, I tried what you said, worked around a few things > but I am still stuck. The main reason is I didn't do an adequate job of > explaining the situation. The table implements many linked lists and I want > to traverse one of them given the end of the list. > > Say the table contains > > h | v | j > 1 0 100 > 3 1 300 > 5 3 500 > 7 5 700 > > 2 0 200 > 4 2 400 > 6 4 600 > 8 6 800 > > If I specify t.h = 8 I want to traverse the even part of the table > If I specify t.h = 7 I want to traverse the odd part of the table > > If you can send me to a book to read I am willing > > Thanks > > -Original Message- > From: [EMAIL PROTECTED] > [mailto:[EMAIL PROTECTED] Behalf Of Scott Marlowe > Sent: Wednesday, April 26, 2006 8:59 AM > To: Ray Madigan > Cc: pgsql-sql@postgresql.org > Subject: Re: [SQL] LinkedList > > > On Wed, 2006-04-26 at 11:09, Ray Madigan wrote: > > I have a table that I created that implements a linked list. I am not an > > expert SQL developer and was wondering if there are known ways to traverse > > the linked lists. Any information that can point me in the direction to > > figure this out would be appreciated. The table contains many linked > lists > > based upon the head of the list and I need to extract all of the nodes > that > > make up a list. The lists are simple with a item and a link to the > history > > item so it goes kind of like: > > > > 1, 0 > > 3, 1 > > 7, 3 > > 9, 7 > > ... > > > > Any suggestions would be helpful, or I will have to implement the table > > differently. > > You should be able to do this with a fairly simple self-join... > > select a.id, b.aid, a.field1, b.field1 > from mytable a > join mytable b > on (a.id=b.aid) > > Or something like that. > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster > > > ---(end of broadcast)--- > TIP 6: explain analyze is your friend > -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
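For the record, on a release that does have recursive queries (they eventually arrived as WITH RECURSIVE in 8.4), the traversal reads roughly like this, using the h/v/j columns from the example table and starting from the end node h = 8:

WITH RECURSIVE chain AS (
    SELECT h, v, j FROM t WHERE h = 8      -- end of the list
    UNION ALL
    SELECT p.h, p.v, p.j
      FROM t p
      JOIN chain c ON p.h = c.v            -- follow the link back toward the head
)
SELECT * FROM chain;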
Re: [SQL] (Ab)Using schemas and inheritance
Moving to -general, where it's more likely that others will have input. On Tue, May 23, 2006 at 05:16:54PM -0300, Jorge Godoy wrote: > I'm modelling an application that will have data -- financial data, human > resources, etc. -- for several hundred (even thousands) of companies. This > is for an accounting office. > > I could put some kind of "company_id" column in all of my tables to separate > data in a more standard way, I could create a separate schema for each client > and then create all needed tables in there (could I? I'd be abusing schemas > here and this is part of my doubt) and, finally, I could create a "base" > schema, define all my standard tables and create an individual schema for > each client where I'd inherit from those base.tables. > > This would allow me to separate all information with a "SET search_path TO > company" without having to make the restriction "by hand" (on "company_id", > for example). It would also allow me to view some complete statistics > grouping all clients by SELECTing data from the base schema. We're testing > views and functions to see how they behave with inherited tables and changes > on "search_path", and it looks like we can do that for, at least, a small > number of schemas. > > Of course, this has implications on permissions as well, so there will be a > large number of groups -- probably at least one per schema + some common > groups -- and roles as well... > > > Is this a good idea? Would this be too bad, performance-wise, if I had > thousands of schemas to use like that? Any advice on better approaches? Any > expected problems? One issue is that you'll probably be breaking new ground here a bit; I suspect there's very few people that are using more than a handful of schemas. Shouldn't pose any issues, but you never know; although any issues you do run into should only be performance problems. Another consideration is that the free space map doesn't care too much for tracking space info on tons of small tables. Perhaps the biggest issue is: what happens when you need to do DDL? If you have 1000 schemas that should be identical, you'll need to perform any DDL 1000 times. But as you point out, there's some interesting advantages to using schemas like this. -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
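As a rough sketch of the layout being discussed (schema, table, and column names are all invented):

-- Canonical definitions live in one base schema.
CREATE SCHEMA base;
CREATE TABLE base.invoices (
    client  text    NOT NULL,
    amount  numeric NOT NULL
);

-- Each client gets its own schema whose tables inherit from the base ones.
CREATE SCHEMA client_acme;
CREATE TABLE client_acme.invoices () INHERITS (base.invoices);

-- Per-client work is isolated by the search path...
SET search_path TO client_acme;
INSERT INTO invoices (client, amount) VALUES ('acme', 100);

-- ...while office-wide statistics read the parent, which includes all children.
SELECT count(*) FROM base.invoices;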
Re: [SQL] pgxml & xpath_table
There's a good chance the author isn't on this list. You'd be better off on pgsql-general, or just emailing the author of pgxml directly. On Thu, Jun 08, 2006 at 06:00:42PM +0200, Philippe Lang wrote: > Hi, > > I'm playing with the contrib/pgxml library under PG 8.1.4, and I'm not sure > if what I found with pgxml is a feature of a bug: > > I've got the following table: > > > CREATE TABLE test > ( > id int4 NOT NULL, > xml varchar(200), > CONSTRAINT pk PRIMARY KEY (id) > ) > WITHOUT OIDS; > > INSERT INTO test VALUES (1, ' num="L1">123 num="L2">112233'); > > INSERT INTO test VALUES (2, ' num="L1">111222333 num="L2">111222333'); > > > > If I launch this query: > > > select * from > > xpath_table('id','xml','test', > '/doc/@num|/doc/line/@num|/doc/line/a|/doc/line/b|/doc/line/c','1=1') AS t(id > int4, doc_num varchar(10), line_num varchar(10), val1 int4, val2 int4, val3 > int4) > > where id = 1 > > order by doc_num, line_num > > > I get: > > > iddoc_num line_numval1val2val3 > 1 C1 L1 1 2 3 > 1 L2 11 22 33 > > > I was expecting doc_num would receive twice the C1 value, just like with a > normal sql join. > > Regards, > > -- > Philippe Lang, Ing. Dipl. EPFL > Attik System > rte de la Fonderie 2 > 1700 Fribourg > Switzerland > http://www.attiksystem.ch > > Tel: +41 (26) 422 13 75 > Fax: +41 (26) 422 13 76 -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.comwork: 512-231-6117 vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] files or DataBase
On Fri, Sep 29, 2006 at 12:37:56AM +0200, tomcask o_o wrote: > Hi > > Apologies in advance for my English. > > For a web server, which is the better option: > > querying the database to show the contents, or serving static files that scripts regenerate whenever the database content changes? > > The usual approach is to work directly with the database and build the page from the query results. > > But I believe it would be much faster to serve the files directly, with scripts running locally on the server that check an auxiliary table for changes and republish the files that have changed. > > How do the two compare in terms of server performance in the different scenarios? > > Greetings and thanks. The more code that sits between you and the data, the longer it takes to get at the data. Filesystems try and minimize that code path so that you can get at your data as fast as possible. Databases on the other hand provide you with a rich query environment that makes it easy to filter, format and manipulate data. So... if you're just talking about spewing data out a pipe without having to process it, nothing will beat a filesystem. So if your web content almost never changes, you'll probably be better off with static files that are re-generated from the database as needed. If most of your page content is static, with a small amount that's dynamic, you might want to mix and match the two somehow. Here's what you don't want to do: have each page issuing 100 queries to the database. That's a great way to kill your performance and/or scalability. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals with to_char)
On Mon, Oct 09, 2006 at 02:57:28PM -0500, Aaron Bono wrote: > On 10/5/06, Jim Nasby <[EMAIL PROTECTED]> wrote: > > > >On Oct 5, 2006, at 11:50 AM, Tom Lane wrote: > >> regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 > >> 09:30:41'::timestamp); > >> ?column? > >> -- > >> 14 days 14:28:19 > >> (1 row) > >> > >> should be reporting '350:28:19' instead. > >> > >> This is a hack that was done to minimize the changes in the regression > >> test expected outputs when we changed type interval from months/ > >> seconds > >> to months/days/seconds. But I wonder whether it wasn't a dumb idea. > >> It is certainly inconsistent, as noted in the code comments. > >> > >> I'm tempted to propose that we remove the justify_hours call, and tell > >> anyone who really wants the old results to apply justify_hours() to > >> the > >> subtraction result for themselves. Not sure what the fallout would > >> be, > >> though. > > > >I suspect there's applications out there that are relying on that > >being nicely formated for display purposes. > > > >I agree it should be removed, but we might need a form of backwards > >compatibility for a version or two... > > I am personally of the opinion that display logic should never be put into > the database. Applications that rely on the database formatting - that is > tightly coupling your application to the database which does not follow good > programming principles. > > None-the-less, the feature would be nice and may be very valuable for > reporting. I agree in general, except most languages have terrible support for time/date data, so I can see a much bigger case for the database being able to do it (and it's not like we'll be removing justify_*). Be that as it may, there are probably apps out there that will break if this is just changed. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
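To make the proposal concrete: today the subtraction itself day-justifies the result, and if that internal call were removed, callers wanting the day-normalised form would apply justify_hours() themselves, along these lines:

SELECT '2006-09-15 23:59:00'::timestamp - '2006-09-01 09:30:41'::timestamp;
--  14 days 14:28:19   (current behaviour; the proposal would yield 350:28:19)

SELECT justify_hours('2006-09-15 23:59:00'::timestamp
                   - '2006-09-01 09:30:41'::timestamp);
--  14 days 14:28:19   (what an application would call to keep the old format)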
Re: [SQL] Temp tables and functions
On Tue, Oct 10, 2006 at 03:21:36PM -0400, [EMAIL PROTECTED] wrote: > Everyone, > > I've written a function (language = sql) which uses a temporary table to > simplify the process; however, when I go to load the function I get: > > /var/lib/pgsql$cat thm.sql | psql test > ERROR: relation "lost_bus" does not exist > > where "lost_bus" is the name of my temp table. The function is just for a > report that is run monthly, so I can create a permanent table if necessary, > but I'd rather not. Create the temp table in your script that creates the function. You don't need to populate it or anything, you just need it to exist (preferably with the correct definition). -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
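A sketch of that ordering, with an invented column list and function name for lost_bus (each session that runs the report would recreate the temp table before calling the function):

-- The temp table only needs to exist, with the right shape, when the
-- function body is parsed at CREATE FUNCTION time.
CREATE TEMP TABLE lost_bus (bus_id integer, lost_on date);

CREATE FUNCTION monthly_lost_bus_count() RETURNS bigint AS $$
    SELECT count(*) FROM lost_bus;
$$ LANGUAGE sql;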
Re: [SQL] [HACKERS] Bug?
Moving to -sql. On Wed, Oct 18, 2006 at 06:53:46PM +0530, Indira Muthuswamy wrote: > Hai, > > I have encountered a problem with PostgreSQL.I have created a table > 'tab1' with a column 'a' with serial type.I entered 20 records into the > table.So the query > select max(a) from tab1; > returned 20.When I tried the same query after the command > truncate table tab1; > I found that the output of the first query as > > max > - > > (1 row) > When I tried to insert a new row into the table tab1 I found that the > value at column a incremented to the value 21. > But I heard from my friends that the value of the serial column gets > decremented whenever we issue the 'truncate table' command (in MS SQL > SERVER).Is the concept same with PostgreSQL or not?Can anyone clarify me on > this? Your friends are mistaken. TRUNCATE does nothing to affect sequences. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] hi i want help on levels
Search the archives for hierarchical query. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] [HACKERS] Bug?
See section 9.12 of the docs. On Thu, Oct 19, 2006 at 12:28:58PM +0530, Indira Muthuswamy wrote: > Then how do we clear the values of a serial column(is it done only by > dropping the column?)? > > Regards, > M.Indira > > > > On 10/19/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote: > > > >Moving to -sql. > > > >On Wed, Oct 18, 2006 at 06:53:46PM +0530, Indira Muthuswamy wrote: > >> Hai, > >> > >> I have encountered a problem with PostgreSQL.I have created a table > >> 'tab1' with a column 'a' with serial type.I entered 20 records into the > >> table.So the query > >> select max(a) from tab1; > >> returned 20.When I tried the same query after the command > >> truncate table tab1; > >> I found that the output of the first query as > >> > >> max > >> - > >> > >> (1 row) > >> When I tried to insert a new row into the table tab1 I found that the > >> value at column a incremented to the value 21. > >> But I heard from my friends that the value of the serial column gets > >> decremented whenever we issue the 'truncate table' command (in MS SQL > >> SERVER).Is the concept same with PostgreSQL or not?Can anyone clarify me > >on > >> this? > > > >Your friends are mistaken. TRUNCATE does nothing to affect sequences. > >-- > >Jim Nasby[EMAIL PROTECTED] > >EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) > > -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
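From that section of the docs, resetting the serial after a TRUNCATE looks like this (the sequence name assumes the default tab1_a_seq that a serial column a on tab1 gets):

TRUNCATE TABLE tab1;
-- With is_called = false, the next nextval() returns 1 again.
SELECT setval('tab1_a_seq', 1, false);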
Re: [SQL] Grouping by day, limiting amounts
On Thu, Oct 19, 2006 at 01:51:55PM +0200, Mezei Zoltán wrote: > > Hi, > I didn't really know what subject I should give. > I have a table like this one: > 2006.10.01. Bela 10 > 2006.10.01. Aladar 9 > 2006.10.01. Cecil 8 > 2006.10.01. Dezso 7 > 2006.10.01. Elemer 6 > 2006.10.02. Bela 11 > 2006.10.02. Aladar 10 > 2006.10.02. Cecil 9 > 2006.10.02. Dezso 8 > 2006.10.02. Ferenc 7 > 2006.10.03. Bela 6 > 2006.10.03. Aladar 5 > 2006.10.03. Cecil 4 > 2006.10.03. Dezso 3 > 2006.10.03. Jozef 2 > The first column is a date, the second is a name, the third is the > number of votes that the name received on that day. > I would like to select the 3 (or 10) names with the most votes for > each day. > Any suggestions on how it can be done easily? It'd be easy with windowing functions, but unfortunately we don't have those... SELECT * FROM (SELECT DISTINCT date FROM table) AS dates , (SELECT date, name, votes FROM table WHERE table.date = dates.date ORDER BY votes DESC LIMIT 3 ) ; Note that this has to scan the table twice (well, the second subquery will likely use an index on date). If you have another table that has the dates in it already, you can use that instead of the first subquery. If you know that every day has a row, you could also replace the first subquery with a generate_series(). -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 6: explain analyze is your friend
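On a release that does have window functions (8.4 and later), the per-day top three falls out of rank() directly; a sketch, with votes standing in for the real table name:

SELECT date, name, votes
  FROM (SELECT date, name, votes,
               rank() OVER (PARTITION BY date ORDER BY votes DESC) AS r
          FROM votes) ranked
 WHERE r <= 3
 ORDER BY date, votes DESC;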
Re: [SQL] Can we convert from Postgres to Oracle !!???
On Sun, Oct 22, 2006 at 12:03:38AM +0300, Devrim GUNDUZ wrote: > On Tue, 2006-10-17 at 14:21 +0530, Sandeep Kumar Jakkaraju wrote: > > Can we convert from Postgres to Oracle !!??? You can also run our software and get Oracle syntax for 1/25th the cost. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [SQL] [ADMIN] Is there anyway to...
Moving to -general (and please start a new thread instead of hijacking an existing one). On Thu, Nov 02, 2006 at 01:14:22PM -0500, louis gonzales wrote: > Hello all, > Is there an existing mechanism is postgresql that can automatically > increment/decrement on a daily basis w/out user interaction? The use > case I'm considering is where a student is in some type of contract with > an instructor of some sort, and that contract puts a time limit on the > student requiring her to pay a fee by a certain day. IF that day comes > to pass - or a certain number of days elapse - and that payment > requirement hasn't been met, I want to trigger a function. > > The one requirement I want to impose is, that no end user of the DB > application, needs to do anything to set the trigger, other than the > initialization of making the student of this type. > > An example would be: > Day1 - Application user(typically the instructor) creates a profile for > a new student - John Doe, which sets a 30 day time limit for John Doe to > pay $100.00 > Day2 -> Day31 - John Doe didn't make the payment > Day 31 - Trigger of event occurs when the instructor logs in. > > Basically on Day 1 when John Doe's profile was created, I want a > decrement counter to occur daily on his profile(some attribute/timer) > and nothing should happen until day 31 when he doesn't pay. While you could setup a cron job to decrement some counter every day, I think that's not the best approach. Instead, I'd run a query once a day that finds all students that are past-due and takes some kind of action. -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
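A sketch of that daily job (table and column names are invented); cron, or any other scheduler, would simply run it once a day:

-- Find every student whose payment window has closed without payment;
-- the application can then flag the profile, notify the instructor, etc.
SELECT student_id, student_name
  FROM student_contracts
 WHERE NOT paid
   AND due_date < current_date;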
Re: [HACKERS] timestamp subtraction (was Re: [SQL] formatting intervals with to_char)
Yes, but if it was '2004-01-02 01:00:00'-'2004-01-01 00:00:00' it should return 25:00:00, not 1 day 1:00. I agree with Tom that this should be changed; I'm just arguing that we might well need a backwards-compatibility solution for a while. At the very least we'd need to make this change very clear to users. On Tue, Feb 20, 2007 at 08:07:11PM -0500, Bruce Momjian wrote: > > One problem with removing justify_hours() is that this is going to > return '24:00:00', rather than '1 day: > > test=> select '2004-01-02 00:00:00'::timestamptz - '2004-01-01 > 00:00:00'::timestamptz; >?column? > -- >24:00:00 > (1 row) > > --- > > Jim Nasby wrote: > > On Oct 5, 2006, at 11:50 AM, Tom Lane wrote: > > > regression=# select ('2006-09-15 23:59:00'::timestamp - '2006-09-01 > > > 09:30:41'::timestamp); > > > ?column? > > > -- > > > 14 days 14:28:19 > > > (1 row) > > > > > > should be reporting '350:28:19' instead. > > > > > > This is a hack that was done to minimize the changes in the regression > > > test expected outputs when we changed type interval from months/ > > > seconds > > > to months/days/seconds. But I wonder whether it wasn't a dumb idea. > > > It is certainly inconsistent, as noted in the code comments. > > > > > > I'm tempted to propose that we remove the justify_hours call, and tell > > > anyone who really wants the old results to apply justify_hours() to > > > the > > > subtraction result for themselves. Not sure what the fallout would > > > be, > > > though. > > > > I suspect there's applications out there that are relying on that > > being nicely formated for display purposes. > > > > I agree it should be removed, but we might need a form of backwards > > compatibility for a version or two... > > -- > > Jim Nasby[EMAIL PROTECTED] > > EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) > > > > > > > > ---(end of broadcast)--- > > TIP 3: Have you checked our extensive FAQ? > > > >http://www.postgresql.org/docs/faq > > -- > Bruce Momjian <[EMAIL PROTECTED]> http://momjian.us > EnterpriseDB http://www.enterprisedb.com > > + If your life is a hard drive, Christ can be your backup. + > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster -- Jim Nasby[EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
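For reference, justify_hours() folds whole days out of the hours field, so an hours-style result is easy to re-normalise after the fact:

SELECT justify_hours(interval '25:00:00');   -- 1 day 01:00:00
SELECT justify_hours(interval '350:28:19');  -- 14 days 14:28:19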
Re: [SQL] Adding "on delete cascade" to all foreign key constraints
On Tue, May 15, 2007 at 12:19:54PM -0500, Peter Hanson wrote: > Hello, > > I was wondering if there's a fast way I can add "on delete cascade" to all > foreign key constraints in my database? Maybe a quick update I can make > against the catalog possibly? Or is there a way I can query for all foreign > key constrains in the database and then I could write up a quick script to do > the updates for me. You shouldn't go mucking about with the system tables unless absolutely necessary. Instead, write a SELECT that outputs the appropriate syntax. You could do that by querying the catalogs directly, but I think you'll find the pg_user_foreign_keys view defined by http://pgfoundry.org/projects/newsysviews to be most helpful. -- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
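A sketch of such a generating query against the catalogs (it assumes the existing foreign keys carry no ON DELETE clause of their own; review the generated statements before running them):

SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname)
       || ' DROP CONSTRAINT ' || quote_ident(con.conname)
       || ', ADD CONSTRAINT ' || quote_ident(con.conname) || ' '
       || pg_get_constraintdef(con.oid) || ' ON DELETE CASCADE;'
  FROM pg_constraint con
  JOIN pg_class c ON c.oid = con.conrelid
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE con.contype = 'f';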