Re: [GENERAL] [SQL] Database Design tool
On Tue, 2002-11-05 at 15:00, Thomas Good wrote:
> Is there a page at PostgreSQL.org where all these links are listed?
> I've been saving each bit of mail that goes by on this thread... but
> a page on Pg would be very nice. ;-)

You mean like techdocs.postgresql.org? The page that is mentioned on the PostgreSQL web site? 'Nuff said?

Tony

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[SQL] (No Subject)
Hello,

I have visited your web site and found it very interesting and informative. I would like to know: what is a BLOB field, and can you give an example of one?

I hope to hear from you soon.
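For context, a BLOB (Binary Large OBject) field is a column that stores arbitrary binary data such as images or files. PostgreSQL's closest equivalents are the bytea type and large objects; a minimal bytea sketch follows (the table and column names are hypothetical, not from any particular application):

```sql
-- A BLOB-style column in PostgreSQL using bytea (hypothetical schema)
CREATE TABLE images (
    id      SERIAL PRIMARY KEY,
    name    VARCHAR(64) NOT NULL,
    picture BYTEA               -- raw binary data, e.g. the bytes of a JPEG
);

-- Binary data can be written as escaped octets
-- (client libraries usually handle this encoding for you)
INSERT INTO images (name, picture) VALUES ('tiny', '\\000\\001\\377');
```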
Re: [SQL] Combine query views into one SQL string
I don't get it. What's the difference between an inner join and an equijoin?

"Nils Zonneveld" <[EMAIL PROTECTED]> wrote:
>
> Maik wrote:
> >
> > Its clear, union concats the two results.
> >
> > But he can also use this join version, if that is the intention:
> >
> > select t1.id, sum(t1.amount), t2.id, sum(t2.amount) from table1 as t1,
> > table2 as t2 where t1.id=t2.id;
>
> Yep, that's another way to write an inner join :-)
>
> Mazzel,
>
> Nils
> --
> Alles van waarde is weerloos (everything of value is defenseless)
> Lucebert
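For what it's worth, an equijoin is simply an inner join whose join condition is an equality test; every equijoin is an inner join, but an inner join may also use some other comparison. A sketch, reusing the table names from the quoted query:

```sql
-- Equijoin: an inner join on an equality condition (the common case)
SELECT t1.id, t2.amount
FROM table1 AS t1
JOIN table2 AS t2 ON t1.id = t2.id;

-- Still an inner join, but not an equijoin: a non-equality condition
SELECT t1.id, t2.amount
FROM table1 AS t1
JOIN table2 AS t2 ON t1.id < t2.id;
```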
Re: [SQL] Foreign character struggles
On Fri, 2002-10-25 at 15:33, Tim Perdue wrote:
> I compiled postgres with --enable-multibyte and --enable-recode, and it
> doesn't appear to help with my problem.

createdb my_db_name -E LATIN1

worked just fine for me, but the client wanted to be able to search with accents, so I turned the to_ascii stuff off. See www.3continents.com/base_de_donnees.htm and search for "Amnésie"; if you want the English search to work, you search for "Amnesia"... The client wants the user to check spelling... Before, it worked just the way you wanted, _but_ I am using a JDBC request via JSP.

Cheers

Tony Grant
--
www.tgds.net - Library management software toolkit,
RedHat Linux on Sony Vaio C1XD,
Dreamweaver MX with Tomcat and PostgreSQL
[SQL] help: triggers
hello,

I'm trying to update a column on a table with a trigger, but it doesn't work on PostgreSQL 7.2 (i686-pc-linux-gnu, compiled by GCC 2.96). When I update a row, the trigger does nothing. What can I do?

thanks in advance
tony

here is my trigger:

CREATE FUNCTION update_my_objekt() RETURNS OPAQUE AS '
BEGIN
    RAISE NOTICE ''My Trigger is updating the table'';
    IF OLD.status = ''Activo'' THEN
        NEW.tempo  := 10;
        NEW.actual := now();
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER update_online BEFORE UPDATE ON iobjects
    FOR EACH ROW EXECUTE PROCEDURE update_my_objekt();
Re: [SQL] help: triggers
Wei Weng wrote:
> It would be better if you could provide the source of that trigger and
> the involved table schemas?

CREATE SEQUENCE "iobjects_id_seq" start 1 increment 1
    maxvalue 9223372036854775807 minvalue 1 cache 1;
GRANT ALL on "iobjects_id_seq" to "jantos";

CREATE TABLE "iobjects" (
    "id" integer DEFAULT nextval('"iobjects_id_seq"'::text) NOT NULL,
    "status" character varying(10) DEFAULT 'Activo' NOT NULL,
    "local" character varying(80) NOT NULL,
    "fig_1" character varying(80),
    "fig_2" character varying(80),
    "fig_3" character varying(80),
    "introduzido" timestamp with time zone DEFAULT now() NOT NULL,
    "actual" timestamp with time zone DEFAULT now() NOT NULL,
    "tempo" integer DEFAULT 0 NOT NULL,
    "deleted" character(1) DEFAULT '0',
    Constraint "pk_iobjects" Primary Key ("id")
);
GRANT ALL on "iobjects" to "jantos";

Wei
[SQL] help
hello,

I've a query which needs too much time, ca. 12-15 sec. How can I get better performance? My table has less than 2300 rows.

thanks in advance
tony

explain select o.id from ioobeject o, dist_vertron v
where macro_lid=1123 and (o.id=v.id) and (o.deleted<>'1') and (o.status='activo')
order by o.ort;

QUERY PLAN
---------------------------------------------------------------------------
 Sort  (cost=790.49..791.63 rows=457 width=66)
   Sort Key: o.ort
   ->  Hash Join  (cost=228.88..770.31 rows=457 width=66)
         Hash Cond: ("outer".id = "inner".id)
         ->  Append  (cost=0.00..502.35 rows=3337 width=4)
               ->  Seq Scan on dist_vertron v  (cost=0.00..0.00 rows=1 width=4)
               ->  Seq Scan on disposicao v  (cost=0.00..136.87 rows=987 width=4)
               ->  Seq Scan on oponente v  (cost=0.00..0.00 rows=1 width=4)
               ->  Seq Scan on novinho v  (cost=0.00..5.14 rows=14 width=4)
               ->  Seq Scan on colagem_livre v  (cost=0.00..194.69 rows=1369 width=4)
               ->  Seq Scan on jardim_contribuicao v  (cost=0.00..149.08 rows=808 width=4)
               ->  Seq Scan on jardim_comardia v  (cost=0.00..16.57 rows=157 width=4)
         ->  Hash  (cost=228.81..228.81 rows=27 width=62)
               ->  Append  (cost=0.00..228.81 rows=27 width=62)
                     ->  Seq Scan on ioobeject o  (cost=0.00..0.00 rows=1 width=62)
                           Filter: ((macro_lid = 1123) AND (deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using iwohnung_macro_lid_index on iwohnung o  (cost=0.00..28.71 rows=3 width=17)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using izoologicowohnung_macro_lid_index on izoologicowohnung o  (cost=0.00..14.70 rows=1 width=19)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using icolagem_macro_lid_index on icasamento o  (cost=0.00..21.06 rows=2 width=18)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using iwohn_geschaefts_colagem_macro_lid__index on iwohn_geschaefts_casamento o  (cost=0.00..7.80 rows=1 width=16)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using ialinhadocolagem_macro_lid_index on ialinhadocasamento o  (cost=0.00..8.30 rows=1 width=18)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using idobro_colagem_macro_lid_index on idobro_casamento o  (cost=0.00..6.08 rows=1 width=17)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using ibauernhof_mediador_index on ibauernhof o  (cost=0.00..8.53 rows=1 width=18)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using imehrfamcolagem_mediador_index on imehrfamcasamento o  (cost=0.00..11.49 rows=1 width=17)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using izoologicocolagem_macro_lid_index on izoologicocasamento o  (cost=0.00..8.03 rows=1 width=17)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) AND (status = 'activo'::character varying))
                     ->  Index Scan using igartenbungalow_macro_lid_index on igartenbungalow o  (cost=0.00..9.20 rows=1 width=19)
                           Index Cond: (macro_lid = 1123)
                           Filter: ((deleted <> '1'::bpchar) A
Re: [SQL] help
Josh, thanks for your help.

Josh Berkus wrote:
> Tony,
>
> > I've a query which needs too much time, ca. 12-15 sec. How can I get
> > better performance?
>
> First of all, please take this to the PGSQL-PERFORMANCE list. Second, see
> this web page:
> http://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines

I read some tips on the above pages, and now my database and all my queries run very well. I've optimized the queries from WHERE ... IN (SELECT ...) to EXISTS, and I've done a vacuumdb full. My requests now take between 0.009 and 0.2 sec, compared with 12-15 sec before. I'm very happy. Thanks very much.

tony

> > explain select o.id from ioobeject o, dist_vertron v where macro_lid=1123
> > and (o.id=v.id) and (o.deleted<>'1') and (o.status='activo') order by o.ort;
>
> Third, from your explain, ioobject and dist_vertron are obviously somewhat
> complex views. We need those view definitions, possibly plus schema for the
> underlying tables (including indexes), or we can't help you.
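The IN-to-EXISTS rewrite mentioned above can be sketched as follows (the table and column names here are hypothetical, not from the thread; on old releases such as 7.x, an uncorrelated IN subquery often planned much worse than the equivalent EXISTS):

```sql
-- Original form: IN with a subquery
SELECT o.id
FROM objects o
WHERE o.owner_id IN (SELECT id FROM owners WHERE active = 't');

-- Rewritten form: a correlated EXISTS, often far faster on PostgreSQL 7.x
SELECT o.id
FROM objects o
WHERE EXISTS (
    SELECT 1 FROM owners w
    WHERE w.id = o.owner_id AND w.active = 't'
);
```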
[SQL] Help with COPY command
I'm attempting to use the COPY command through the libpq C functions. In the psql command-line editor, I'd have the following commands:

COPY testtable FROM stdin WITH DELIMITER ',';
1, 2, 'a', 3, 4
5, 6, 'b', 7, 8
9, 10, 'c', 11, 12
\.

My question is: does this all have to be passed as one big command through libpq? For example,

res = PQexec(conn, "COPY testtable FROM stdin WITH DELIMITER ',';\n"
                   "1, 2, 'a', 3, 4\n5, 6, 'b', 7, 8\n9, 10, 'c', 11, 12\n\\.");

Or can it be broken up into separate lines? For example,

res = PQexec(conn, "COPY testtable FROM stdin WITH DELIMITER ',';");
res = PQexec(conn, "1, 2, 'a', 3, 4");
res = PQexec(conn, "5, 6, 'b', 7, 8");
res = PQexec(conn, "9, 10, 'c', 11, 12");
res = PQexec(conn, "\\.");

-Tony
Re: [SQL] Help with COPY command
OK, I found the libpq syntax for COPY in the Programmer's manual. I've got a working version, but wanted to verify something.

PQexec(conn, "COPY foo FROM STDIN");
PQputline(conn, "3\thello world\t4.5\n");
PQputline(conn, "4\tgoodbye world\t7.11\n");
...
PQputline(conn, "\\.\n");
PQendcopy(conn);

1. I'm assuming that I can put in as many PQputline statements as I want between the PQexec("COPY ... FROM ...") and the terminator line. Is that correct? No limit?

2. Do any of these lines need to be followed by a PQclear(res)? What about the first PQexec?

-Tony
[SQL] SELECTing part of a matrix
I've got a 3D matrix stored in a table. I'd like to pull back just 2 of the dimensions (specifying a constant value for the 3rd dimension), e.g.

table1 == matrix1[5][10][20]

I'd like to get matrix1[1][all][all] and have it pulled back as a 2D matrix (new_matrix1[all][all]). Any way to do this?

-Tony
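Assuming the matrix is stored as a PostgreSQL array column, a slice of the form matrix1[1][all][all] can be pulled back with array-slice syntax; a sketch using the names from the question:

```sql
-- Hypothetical table holding a 5 x 10 x 20 integer matrix
CREATE TABLE table1 (matrix1 INTEGER[][][]);

-- Fix the first dimension at 1 and take full slices of the other two;
-- the result is a 1 x 10 x 20 array, effectively a 2D matrix
SELECT matrix1[1:1][1:10][1:20] FROM table1;
```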
Re: [SQL] best method to copy data across databases
[EMAIL PROTECTED] (ctrl) wrote in message news:<[EMAIL PROTECTED]>...
> I need to copy data that I have in a table in one Postgres database
> into another table, in a different database. The destination table is
> not identical to the source table, so I need to do some mapping
> between the 2 tables. What would be the best (and quickest) way to
> transfer the data? (There are over 500k rows.)
>
> thanks!

If the 2 tables have different arrangements, then I'm not sure there is a quick way. The safest way is probably to do a pg_dump --attribute-inserts.

From the pg_dump manpage:

--attribute-inserts
    Dump data as INSERT commands with explicit column names
    (INSERT INTO table (column, ...) VALUES ...). This will make
    restoration very slow, but it is necessary if you desire to
    rearrange the column ordering.

HTH,
-Tony
[SQL] help with scheme changes to live system.
Hi Gurus,

Please forgive this naive question. Say I have a table (containerId, itemId) where for each containerId there are several rows (different itemId values). Now I want to give those rows (with the same containerId) a sequence, so I add one column to make it (containerId, itemId, sequence). How do I make such changes through SQL? I know how to alter the table to add that new column, but how do I populate the existing rows in that new column? Note the sequence does not need to conform to any sorting of other columns, i.e., as long as there is a sequence it is OK.

For example, if I already have:

containerId | itemId
------------+--------
          1 |   1001
          1 |   1002
          1 |   1003
          2 |   2001
          2 |   2002

then either

containerId | itemId | sequence
------------+--------+----------
          1 |   1001 |        0
          1 |   1002 |        1
          1 |   1003 |        2
          2 |   2001 |        1
          2 |   2002 |        0

or

containerId | itemId | sequence
------------+--------+----------
          1 |   1001 |        2
          1 |   1002 |        1
          1 |   1003 |        0
          2 |   2001 |        0
          2 |   2002 |        1

are all OK.

Thanks,
Gnale
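One possible approach, given that any per-container numbering is acceptable: add the column, then number the rows within each containerId. The sketch below assumes a hypothetical table name container_items and requires window functions (PostgreSQL 8.4 and later):

```sql
-- Add the new column
ALTER TABLE container_items ADD COLUMN sequence INTEGER;

-- Populate it: number the rows within each containerId starting at 0
UPDATE container_items ci
SET sequence = numbered.seq
FROM (
    SELECT containerId, itemId,
           row_number() OVER (PARTITION BY containerId ORDER BY itemId) - 1 AS seq
    FROM container_items
) AS numbered
WHERE ci.containerId = numbered.containerId
  AND ci.itemId = numbered.itemId;
```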
[SQL] Looking for a way to sum integer arrays....
I'd like to be able to sum up an integer array. Like so:

  {3,2,1}
+ {0,2,2}
---------
  {3,4,3}

The following solution I've been hacking on works, although I think it is far from "ideal". Is there a built-in way to sum up arrays? If not, is there a better way than my crude method? I have tested this on 7.4 and 8.0.

I'd also appreciate any insight on why my aggregate fails to work when I have an empty initcond.

P.S. I have never written an aggregate, and I was lost trying to follow the complex_sum example in the docs.

CREATE OR REPLACE FUNCTION sum_intarray(INTEGER[], INTEGER[]) RETURNS INTEGER[]
LANGUAGE 'plpgsql' AS '
/*
|| Author: Tony Wasson
||
|| Overview: Experiment with arrays and aggregates
||   3,2,1
|| + 0,2,2
||   -----
||   3,4,3
||
|| Revisions: (when, who, what)
||   2005/04/21 -- TW - Create function
*/
DECLARE
    inta1         ALIAS FOR $1;
    inta2         ALIAS FOR $2;
    out_arr       INTEGER[];
    out_arr_text  TEXT := '''';
    i             INTEGER;
    nextnum       INTEGER;
BEGIN
    FOR i IN array_lower(inta1, 1)..array_upper(inta1, 1) LOOP
        RAISE NOTICE ''looking at element %'', i;
        nextnum := COALESCE(inta1[i],0) + COALESCE(inta2[i],0);
        RAISE NOTICE ''nextnum %'', nextnum;
        out_arr_text := out_arr_text || nextnum::TEXT || '','';
        RAISE NOTICE ''text %'', out_arr_text;
    END LOOP;
    RAISE NOTICE ''text %'', out_arr_text;
    -- drop the last comma
    IF SUBSTRING(out_arr_text, length(out_arr_text), 1) = '','' THEN
        out_arr_text := substring(out_arr_text, 1, length(out_arr_text)-1);
    END IF;
    out_arr_text := ''{'' || out_arr_text || ''}'';
    RAISE NOTICE ''text %'', out_arr_text;
    out_arr := out_arr_text;
    RAISE NOTICE ''out_arr %'', out_arr;
    RETURN out_arr;
END
';

SELECT sum_intarray('{1,2}','{2,3}');
SELECT sum_intarray('{3,2,1}','{0,2,2}');

--- Now I make a table to demonstrate an aggregate on

CREATE TABLE arraytest (
    id character varying(10) NOT NULL,
    somearr integer[]
);

INSERT INTO arraytest (id, somearr) VALUES ('a', '{1,2,3}');
INSERT INTO arraytest (id, somearr) VALUES ('b', '{0,1,2}');

CREATE AGGREGATE sum_integer_array (
    sfunc = sum_intarray,
    basetype = INTEGER[],
    stype = INTEGER[],
    initcond = '{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}'
);

# SELECT sum_integer_array(somearr) FROM arraytest;
                                sum_integer_array
---------------------------------------------------------------------------------
 {1,3,5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}

Thanks in advance to anyone who reads this far.

Tony Wasson
[EMAIL PROTECTED]
Re: [SQL] Looking for a way to sum integer arrays....
Thank you for the responses! To recap: pl/r array support works very well, but in my case I am looking for a pl/pgsql solution.

I also got this nice function from dennisb on the #postgresql IRC channel, which seems extremely "clean" and works with 7.4/8.0. My original function didn't handle a blank initcond in the aggregate gracefully.

CREATE OR REPLACE FUNCTION array_add(int[], int[]) RETURNS int[] AS '
DECLARE
    x ALIAS FOR $1;
    y ALIAS FOR $2;
    a int;
    b int;
    i int;
    res int[];
BEGIN
    res = x;
    a := array_lower(y, 1);
    b := array_upper(y, 1);
    IF a IS NOT NULL THEN
        FOR i IN a .. b LOOP
            res[i] := coalesce(res[i],0) + y[i];
        END LOOP;
    END IF;
    RETURN res;
END;
' LANGUAGE plpgsql STRICT IMMUTABLE;

--- then this aggregate lets me sum integer arrays...

CREATE AGGREGATE sum_integer_array (
    sfunc = array_add,
    basetype = INTEGER[],
    stype = INTEGER[],
    initcond = '{}'
);

Here's how my sample table looked with my new array-summing aggregate and function:

# SELECT * FROM arraytest;
 id | somearr
----+---------
 a  | {1,2,3}
 b  | {0,1,2}
(2 rows)

# SELECT sum_integer_array(somearr) FROM arraytest;
 sum_integer_array
-------------------
 {1,3,5}
(1 row)

Tony Wasson
Re: [SQL] default value for select?
On 5/9/05, Mark Fenbers <[EMAIL PROTECTED]> wrote:
> I want to update a column in myTable. The value this column is set to
> depends on a nested select statement which sometimes returns 0 rows instead
> of 1. This is a problem since the column I'm trying to update is set to
> refuse nulls. Here's a sample:
>
> update myTable set myColumn = (Select altColumn from altTable where
> altColumn != 'XXX' limit 1) where myColumn = 'XXX';
>
> MyColumn cannot accept nulls, but sometimes "Select altColumn ..." returns
> 0 rows, and thus the query fails.
>
> Is there a way to set a default value to be inserted into myColumn if and
> when "select altColumn ..." returns zero rows?
>
> Mark

Mark,

You can work around this by using a CASE expression. In this case, test for a NULL from your subquery. This is not elegant at all, but it should do what you are wanting:

update myTable set myColumn =
    (CASE WHEN (Select altColumn from altTable where altColumn != 'XXX' limit 1) IS NULL
          THEN 'some default value'
          ELSE (Select altColumn from altTable where altColumn != 'XXX' limit 1)
     END)
where myColumn = 'XXX';

Hope this helps...

Tony
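A more compact alternative, sketched against the same tables from the question: COALESCE returns its first non-NULL argument, so it can supply the default when the subquery returns no rows without repeating the subquery:

```sql
UPDATE myTable
SET myColumn = COALESCE(
        (SELECT altColumn FROM altTable WHERE altColumn != 'XXX' LIMIT 1),
        'some default value')
WHERE myColumn = 'XXX';
```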
Re: [SQL] Help: Function for splitting VARCHAR column and migrating its data to 2 new tables
able('%'); The input goes to a SIMILAR TO on id, and % means all elements. You can match a single row by using something like SELECT build_keyword_table('123');

I also used this as my test data... It worked for me!

CREATE TABLE user_data (
    id SERIAL,
    user_id INTEGER,
    keywords VARCHAR(256) NOT NULL,
    add_date TIMESTAMP,
    PRIMARY KEY(id)
);

INSERT INTO user_data (keywords) VALUES ('new york,san francisco, dallas, food');
INSERT INTO user_data (keywords) VALUES ('phoenix, hot, summer, fun');

CREATE TABLE keyword (
    name VARCHAR(64) NOT NULL,
    id SERIAL,
    add_date TIMESTAMP,
    PRIMARY KEY(name)
);
-- todo: put a UNIQUE INDEX on keyword (id)

I hope this helps.

Tony Wasson
Re: [SQL] 3 tables, slow count(*), order by Seq Scan in Query Plan
On 5/26/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have 3 tables (2 tables + 1 lookup table that ties them), and running
> a straight-forward aggregate count(*) query with a couple of joins
> takes about 10 seconds (and I need it to be sub-second or so).
> Also, I am wondering if this approach is scalable with my row counts
> and my hardware (below).
>
> My slow query is this:
>
> SELECT keyword.name, count(*)
> FROM user_data, user_data_keyword, keyword
> WHERE (user_data.user_id = 1)
> AND (user_data.id = user_data_keyword.user_data_id)
> AND (user_data_keyword.keyword_id = keyword.id)
> GROUP BY keyword.name
> ORDER BY COUNT(*) DESC LIMIT 10;
>
> Is there any way of speeding up my query?
>
> Also, given the number of rows expected in those tables:
> user_data: 10M
> user_data_keyword: 40M
> keyword: 4M

This sounds like a perfect candidate for a summary table. You should read Jonathan Gardner's writeup about materialized views. Depending on your requirements, you'll either need to build triggers or a periodic summarization you run. This sounds like a "top N" report, so a periodic update ought to work.

http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html

Tony Wasson
Re: [SQL] left joins
On 7/6/05, Ragnar Hafstað <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-07-06 at 11:33 +0100, Nick Stone wrote:
> > I've had exactly the same problem - try changing the query to:
> >
> > select count(*)
> > from h left join p using (r,pos) and p.r_order=1
> > where h.tn > 20
> > and h.tn < 30
>
> really? is this legal SQL? is this an 8.0 feature?
> I get syntax error at or near "and" at character 41

SQL like this works for me when I write it with an ON clause instead of USING:

select count(*)
from h left join p on (h.r = p.r and h.pos = p.pos and p.r_order = 1)
where h.tn > 20
and h.tn < 30

Filtering within the join condition is very useful when doing a left outer join. Here's another example "from the book" doing this type of filter within the join:

http://www.postgresql.org/docs/8.0/interactive/queries-table-expressions.html

Tony Wasson
Re: [SQL] funstions for parsing words
On 7/19/05, John Kopanas <[EMAIL PROTECTED]> wrote:
> I have a table called Phrases that holds the text of a phrase. I want to
> write a query that will return all the words found in all the text of
> the Phrases. Like so:
>
> Phrases:
>
> "Hello World"
> "Goodbye World"
> "I like candy"
>
> Words (select statement result):
>
> "Hello"
> "World"
> "Goodbye"
> "I"
> "Like"
> "Candy"
>
> Is anything like this possible?
>
> Thanks a lot.
>
> Your Friend,
>
> John Kopanas

You can do this by using array_to_string and using a space as your delimiter. If you need to trim the quotes, use the trim function also. You can also see the split_on_commas example below -- you'd want to split on a space. I would also rewrite this to use array_to_string, or use pl/perl if you can.

http://archives.postgresql.org/pgsql-sql/2005-05/msg00204.php

Hope this helps.

Tony Wasson
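On releases well after this thread (8.4 and later), there is a built-in that does this directly; a sketch, assuming a phrases table with a phrase text column (names hypothetical):

```sql
-- Split every phrase on whitespace, producing one word per row
SELECT regexp_split_to_table(phrase, E'\\s+') AS word
FROM phrases;
```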
Re: [SQL] echo/printf function in plpgsql
On 7/19/05, Andreas Joseph Krogh <[EMAIL PROTECTED]> wrote:
> On Tuesday 19 July 2005 17:18, Richard Huxton wrote:
> > Andreas Joseph Krogh wrote:
> > > Hi all!
> > >
> > > Is there a way of echoing a string (like "raise notice 'this is id %',
> > > id") from plpgsql? I want to echo/print it to STDOUT because the
> > > notice mechanism produces too much noise IMHO.
> >
> > Your function is running in the backend. You don't have a STDOUT
> > (although you might have redirected STDERR for logging).
>
> I see. Can I make the output somehow less verbose? It spits out a lot of
> noise for each "NOTICE":

You can control the severity of messages sent to your client by setting client_min_messages first. Try:

SET client_min_messages = WARNING;

http://www.postgresql.org/docs/8.0/interactive/runtime-config.html

Tony
Re: [SQL] Multi-column returns from pgsql
On 7/22/05, Jim Buttafuoco <[EMAIL PROTECTED]> wrote:
> Mark,
>
> Instead of RETURN NEXT rec.txt1; RETURN NEXT rec.txt2; just use
> RETURN NEXT rec;
>
> then your select statement would be
> select * from my_func() as (txt1 text, txt2 text);
>
> Jim

Besides a simple RETURN NEXT, you'll need to return a SETOF some composite type. You can do something like:

CREATE TYPE twotexts_t AS (txt1 TEXT, txt2 TEXT);

CREATE OR REPLACE FUNCTION my_func() RETURNS SETOF twotexts_t AS '
DECLARE
    rec record;
BEGIN
    FOR rec IN SELECT txt1, txt2 FROM mytable LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;' language 'plpgsql';
Re: [SQL] psql client: technique for applying default values to :variables?
On 1/11/06, Jeff Boes <[EMAIL PROTECTED]> wrote:
> Stumped: is there any way to set up default values for psql variables
> within the .SQL file itself? Obviously, I can do something like:
>
> $ psql -f my_script -v MYVAR=${myvar:-mydefault}
>
> but I would prefer to have the value stored with the .SQL file, e.g. (if
> this actually worked):
>
> \set MYVAR COALESCE(:MYVAR,'mydefault')

Stuff like this works for me in a SQL file:

\set edate 'CURRENT_DATE::DATE'
SELECT * FROM some_table WHERE update_date = :edate;
Re: [SQL] Function Dependency
On 2/1/06, Padam J Singh <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am maintaining an application that has over 400 procedures and functions
> written in plsql, and around 100 tables. I want to generate a function
> dependency chart to depict the following:
>
> 1. Inter function/procedure dependencies
> 2. Function-table dependencies
> 3. Function-sequence dependencies
>
> Is there a standard method of doing this?
>
> Thanks in advance,
> Padam.

I too would be interested in a standardized tool to do this. I had a similar situation, and I ended up writing a Perl script to parse my SQL and make a Graphviz dot file. I then used Graphviz to make a function dependency chart. I can't promise it would catch every single case, but I can provide you with the code if you wish to give it a whirl.

Tony Wasson
Re: [SQL] help..postgresql mulyiple return values
On 5/12/06, Michael Joseph Tan <[EMAIL PROTECTED]> wrote:
> hi, im new in postgresql, generally new in databases. im trying to make a
> function using PGAdminIII which returns several types. example, my query is:
> "select count(id) as requests, organization from connection_requests group
> by organization"
>
> id is of type int8, organization is of type varchar(50). basically the
> query would return count(id), and a varchar(50) which is organization.
> i really dont know what to put on the return type. what would be the best
> solution?

If you are on 8.1 you can follow this example from the documentation:

http://www.postgresql.org/docs/8.1/interactive/plpgsql-declarations.html

CREATE FUNCTION sum_n_product(x int, y int, OUT sum int, OUT prod int) AS $$
BEGIN
    sum := x + y;
    prod := x * y;
END;
$$ LANGUAGE plpgsql;

Then run it like:

SELECT sum, prod FROM sum_n_product(1,2);
Re: [SQL] How to pass array of values to a stored procedure
On 7/18/06, Curtis Scheer <[EMAIL PROTECTED]> wrote:
> Does anyone have any examples of how I would make a stored procedure in
> plpgsql that would allow for passing a list or arrays of values to be used
> in an sql IN clause? Like so: select * from table where field1 in (values).

Here's a very simple example. However, I think passing arrays of INTs around is dirty. I wasn't able to do this without a FOR ... IN EXECUTE statement.

CREATE TABLE ids (
    id INTEGER,
    PRIMARY KEY (id)
);

INSERT INTO ids VALUES (1);
INSERT INTO ids VALUES (2);
INSERT INTO ids VALUES (3);

CREATE OR REPLACE FUNCTION example_array_input(INT[]) RETURNS SETOF ids
AS $BODY$
DECLARE
    in_clause ALIAS FOR $1;
    clause TEXT;
    rec RECORD;
BEGIN
    -- convert the array to a text string and make it LOOK like an IN statement
    clause := in_clause;
    clause := trim(leading '{' FROM clause);
    clause := trim(trailing '}' FROM clause);
    FOR rec IN EXECUTE 'SELECT id FROM ids WHERE id IN (' || clause || ');' LOOP
        RETURN NEXT rec;
    END LOOP;
    -- final return
    RETURN;
END
$BODY$ language plpgsql;

SELECT * FROM example_array_input('{1,2,4,5,6}'::INT[]);
Re: [SQL] How to pass array of values to a stored procedure
On 7/18/06, Tony Wasson <[EMAIL PROTECTED]> wrote:
> On 7/18/06, Curtis Scheer <[EMAIL PROTECTED]> wrote:
> > Does anyone have any examples of how I would make a stored procedure in
> > plpgsql that would allow for passing a list or arrays of values to be
> > used in an sql IN clause? Like so: select * from table where field1 in
> > (values).

Ahhh... Here's an example using Tom's recommended field = ANY (arrayvalue) SQL.

CREATE TABLE ids (
    id INTEGER,
    PRIMARY KEY (id)
);

INSERT INTO ids VALUES (1);
INSERT INTO ids VALUES (2);
INSERT INTO ids VALUES (3);

CREATE OR REPLACE FUNCTION example_array_input(INT[]) RETURNS SETOF ids
AS $BODY$
DECLARE
    in_clause ALIAS FOR $1;
    rec RECORD;
BEGIN
    FOR rec IN SELECT id FROM ids WHERE id = ANY(in_clause) LOOP
        RETURN NEXT rec;
    END LOOP;
    -- final return
    RETURN;
END
$BODY$ language plpgsql;

SELECT * FROM example_array_input('{1,2,4,5,6}'::INT[]);
Re: [SQL] Remote monitoring of Postgres w/minimal grants
On Wed, Mar 10, 2010 at 12:26 AM, Bryce Nesbitt wrote: > I'm setting up remote monitoring of postgres, but running into an > uncomfortable situation with permissions. > Basically it seems hard to set up a secure "read only" role, yet also allow > proper monitoring. > > A brief writeup of that is here: > > http://help.logicmonitor.com/installation-getting-started/notes-for-monitoring-specific-types-of-hosts/databases/postgresql/postgresql-credentials/ > In order to get accurate server busy stats and max query time, the > LogicMonitor user needs to be a superuser "alter role logicmonitor > superuser;". Without the SuperUser privilege, all servers will appear busy, > and maximum query time will always be 0. > > Is there a way to grant the type of permission needed to view stats, > without superuser? > Seems like you could get around most of these cases by making a function or set returning function to return the data and making it "security definer" and then grant your monitoring user access to that. Tony
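A sketch of that approach (the function name is an assumption, and this is not a LogicMonitor-specific recipe): wrap the superuser-only data in a SECURITY DEFINER function owned by a superuser, then grant EXECUTE to the monitoring role mentioned above:

```sql
-- Run as a superuser. A SECURITY DEFINER function executes with its
-- owner's privileges, so the monitoring role can see other sessions'
-- activity without being a superuser itself.
CREATE OR REPLACE FUNCTION monitoring_activity()
RETURNS SETOF pg_stat_activity AS $$
    SELECT * FROM pg_stat_activity;
$$ LANGUAGE sql SECURITY DEFINER;

-- Lock the function down to just the monitoring role
REVOKE ALL ON FUNCTION monitoring_activity() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION monitoring_activity() TO logicmonitor;
```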
[SQL] Avoiding cycles in a directed graph
I'm using the following table to represent an acyclic directed graph:

CREATE TABLE edge(
    id SERIAL PRIMARY KEY,
    child INTEGER NOT NULL,
    parent INTEGER,
    UNIQUE (child, parent)
);

I see there is an example in the online docs for detecting cycles in recursive queries, and I've adapted the example query to the table above:

WITH RECURSIVE search_graph(parent, child, id, depth, path, cycle) AS (
    SELECT e.parent, e.child, e.id, 1, ARRAY[e.id], false
    FROM edge e
    UNION ALL
    SELECT e.parent, e.child, e.id, sg.depth + 1, path || e.id, e.id = ANY(path)
    FROM edge e, search_graph sg
    WHERE e.parent = sg.child AND NOT cycle
)
SELECT * FROM search_graph;

That's great to avoid looping forever on queries, but what about preventing anyone from inserting edges that would create cycles in the first place? I reckon I'll need a trigger of some sort, but nothing I've tried has enabled me to do the cycle check as part of the trigger to avoid inserting an edge that would create a cycle. I tried having the non-recursive SELECT use NEW.parent, NEW.child, etc., but that isn't working. Is there any way to do this, or do I have to just insert the edge, check if it cycles, and delete it if it does?

Thanks.

-Tony

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Avoiding cycles in a directed graph
On 3/16/10 4:34 PM, Tom Lane wrote:
> The same kind of problem exists for unique and foreign key constraints,
> both of which use low-level locking mechanisms to catch such cases.
> There's no way that I can see to express the "no cycle" constraint as a
> uniqueness constraint unfortunately. You could solve limited forms of
> it using the exclusion-constraint mechanism that's new in 9.0, but
> AFAICS not the general case :-(

Are you saying there's no way to avoid cycles at all, or just no way to do it using uniqueness constraints? I'm okay with running the big, fat WITH RECURSIVE query in my insert trigger if I have to; it won't be great for performance, but I don't expect this to be a frequent operation, so I'll accept the performance hit if it works.

Unfortunately I can't even get that working. Here's the (not at all functional) trigger I've got right now, which only detects the cycle *after* it's been inserted, which is of no help at all. Any way I can modify this to do the right thing?

CREATE OR REPLACE FUNCTION check_edge_cycle() RETURNS trigger AS $$
DECLARE
    cycles INTEGER;
BEGIN
    WITH RECURSIVE search_graph(parent, child, id, depth, path, cycle) AS (
        SELECT NEW.parent, NEW.child, NEW.id, 1, ARRAY[NEW.id], false
        UNION ALL
        SELECT g.parent, g.child, g.id, sg.depth + 1, path || g.id,
               g.id = ANY(path)
        FROM catalog_edge g, search_graph sg
        WHERE g.parent = sg.child AND NOT cycle
    )
    SELECT INTO cycles COUNT(*) FROM search_graph WHERE cycle = true;
    RAISE NOTICE 'cycles: %', cycles;
    IF cycles > 0 THEN
        RAISE EXCEPTION 'cycle';
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;
Re: [SQL] Avoiding cycles in a directed graph
On 3/16/10 6:22 PM, Tom Lane wrote: > Richard Huxton writes: > > > Um, what if the cycle is being formed from whole cloth? For instance > T1 inserts an edge A->B while T2 is inserting B->A. There are no > pre-existing rows to lock, but there will still be a cycle after they > both commit. For what it's worth, I did give the "FOR UPDATE" strategy a try, with the recursive query rather than the simpler approach Richard suggested, and I got the following error (v8.4.2): ERROR: FOR UPDATE/SHARE in a recursive query is not implemented I'm sticking with the recursive query, because it seems to me the only way to ensure there are no cycles is to check the whole graph for cycles, and the only way I know how to do that is the recursive approach. Since "FOR UPDATE" isn't implemented for recursive queries, I'll just lock the entire table for now. Thanks, all! -Tony -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
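The combination settled on in this thread -- serialize concurrent edge inserts with a table lock, then run the recursive reachability check in a BEFORE INSERT trigger -- might look something like the sketch below. This is not code from the thread; the trigger body and lock mode are assumptions, using the "edge" table from the first post.

```sql
-- Sketch: reject an edge before insert if it would close a cycle.
CREATE OR REPLACE FUNCTION check_edge_cycle() RETURNS trigger AS $$
DECLARE
    has_cycle BOOLEAN;
BEGIN
    -- Block the concurrent case Tom describes (T1: A->B while T2: B->A).
    LOCK TABLE edge IN SHARE ROW EXCLUSIVE MODE;

    -- Walk everything reachable from the new edge's child through the
    -- existing edges; if the new edge's parent is reachable, then
    -- adding parent -> child would create a cycle.
    WITH RECURSIVE reachable(node) AS (
        SELECT NEW.child
        UNION ALL
        SELECT e.child
        FROM edge e
        JOIN reachable r ON e.parent = r.node
    )
    SELECT EXISTS (SELECT 1 FROM reachable WHERE node = NEW.parent)
    INTO has_cycle;

    IF has_cycle THEN
        RAISE EXCEPTION 'edge % -> % would create a cycle',
            NEW.parent, NEW.child;
    END IF;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER edge_no_cycles
    BEFORE INSERT ON edge
    FOR EACH ROW EXECUTE PROCEDURE check_edge_cycle();
```

Because the trigger keeps the table acyclic, the recursive walk itself cannot loop, so no cycle/path bookkeeping is needed inside it.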
[SQL] concatenate question
Here's my table:

plsql_dw=# \d tmpsv_parent_master
          Table "staging.tmpsv_parent_master"
     Column     |            Type             | Modifiers
----------------+-----------------------------+-----------
 memberid       | numeric                     |
 addeddate      | timestamp without time zone |
 sourceid       | numeric                     |
 regcomplete    | numeric(1,0)                |
 optoutdate     | date                        |
 bouncedate     | date                        |
 websiteid      | numeric                     |
 emailbounced   | numeric(2,0)                |
 emailok        | numeric(2,0)                |
 emailaddress   | character varying(50)       |
 srcwebsiteid   | numeric                     |
 srcmemberid    | numeric                     |
 sitetype       | character varying           |
 commissionpct  | numeric                     |
 pricepermember | numeric                     |
 acceptrate     | numeric(3,2)                |
 mktgcenterid   | numeric                     |
 label          | character varying(32)       |

Why won't this work?

plsql_dw=# select memberid || addeddate from tmpsv_parent_master limit 10;
ERROR:  operator does not exist: numeric || timestamp without time zone
LINE 1: select memberid || addeddate from tmpsv_parent_master limit ...
                        ^
HINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.

Thanks.

-- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] concatenate question
Ok, that worked. Why did I need to cast both as text though? plsql_dw=# select memberid::text||addeddate::text from tmpsv_parent_master limit 5; ?column? -- 4005941032010-11-16 19:32:17 4005941952010-11-16 19:33:29 4005942842010-11-16 19:34:32 4005943492010-11-16 19:35:22 4005943662010-11-16 19:35:37 (5 rows) Thanks. On Tue, 2010-12-07 at 16:43 -0500, Peter Steinheuser wrote: > I think the HINT is what you need to look at. > > Cast both columns to text. > > On Tue, Dec 7, 2010 at 4:37 PM, Tony Capobianco > wrote: > Here's my table: > >plsql_dw=# \d tmpsv_parent_master > Table "staging.tmpsv_parent_master" > Column |Type | > Modifiers > > +-+--- > memberid | numeric | > addeddate | timestamp without time zone | > sourceid | numeric | > regcomplete| numeric(1,0)| > optoutdate | date| > bouncedate | date| > websiteid | numeric | > emailbounced | numeric(2,0)| > emailok| numeric(2,0)| > emailaddress | character varying(50) | > srcwebsiteid | numeric | > srcmemberid| numeric | > sitetype | character varying | > commissionpct | numeric | > pricepermember | numeric | > acceptrate | numeric(3,2)| > mktgcenterid | numeric | > label | character varying(32) | > > >Why won't this work? >plsql_dw=# select memberid || addeddate from > tmpsv_parent_master > limit >10; >ERROR: operator does not exist: numeric || timestamp > without > time zone >LINE 1: select memberid || addeddate from > tmpsv_parent_master > limit ... >^ >HINT: No operator matches the given name and argument > type(s). > You >might need to add explicit type casts. > >Thanks. > > > -- > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-sql > > > > -- > Peter Steinheuser > psteinheu...@myyearbook.com -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
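For what it's worth, casting only one side is usually enough to resolve the ambiguity on a reasonably recent server, and to_char() gives control over the timestamp format and a separator. A sketch using the columns from the table above (the space separator is my own choice, not from the thread):

```sql
-- One explicit cast is typically sufficient: once one operand is text,
-- the text concatenation operator can handle the other side.
SELECT memberid::text || addeddate FROM tmpsv_parent_master LIMIT 5;

-- For readable output, format the timestamp and add a separator:
SELECT memberid || ' ' || to_char(addeddate, 'YYYY-MM-DD HH24:MI:SS')
FROM tmpsv_parent_master LIMIT 5;
```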
Re: [SQL] concatenate question
Thanks so much to everyone for your responses. You've been very helpful. I'm running PostGres 8.4 and we're migrating our datawarehouse from Oracle 10.2. I guess datatype is implicitly cast in oracle under these circumstances: SQL> create table tony_test as select memberid||addeddate "data" from TMPSV_PARENT_MASTER where rownum < 5; Table created. SQL> desc tony_test Name Null?Type - data VARCHAR2(59) SQL> select * from tony_test; data --- 3812482212010-06-23 13:53:38 3812510902010-06-23 14:12:25 3812622482010-06-23 15:24:45 3812725152010-06-23 16:35:24 Thanks! On Tue, 2010-12-07 at 16:54 -0500, Peter Steinheuser wrote: > I don't know what Postgres version you're using but check out the doc > related to String Functions and Operators. > Cheers, >Peter > > > On Tue, Dec 7, 2010 at 4:47 PM, Tony Capobianco > wrote: > Ok, that worked. Why did I need to cast both as text though? > > plsql_dw=# select memberid::text||addeddate::text from > tmpsv_parent_master limit 5; > ?column? > -- > 4005941032010-11-16 19:32:17 > 4005941952010-11-16 19:33:29 > 4005942842010-11-16 19:34:32 > 4005943492010-11-16 19:35:22 > 4005943662010-11-16 19:35:37 > (5 rows) > > Thanks. > > On Tue, 2010-12-07 at 16:43 -0500, Peter Steinheuser wrote: > > I think the HINT is what you need to look at. > > > > Cast both columns to text. 
> > > > On Tue, Dec 7, 2010 at 4:37 PM, Tony Capobianco > > wrote: > > Here's my table: > > > >plsql_dw=# \d tmpsv_parent_master > > Table > "staging.tmpsv_parent_master" > > Column |Type > | > > Modifiers > > > > > +-+--- > > memberid | numeric > | > > addeddate | timestamp without time zone > | > > sourceid | numeric > | > > regcomplete| numeric(1,0) > | > > optoutdate | date > | > > bouncedate | date > | > > websiteid | numeric > | > > emailbounced | numeric(2,0) > | > > emailok| numeric(2,0) > | > > emailaddress | character varying(50) > | > > srcwebsiteid | numeric > | > > srcmemberid| numeric > | > > sitetype | character varying > | > > commissionpct | numeric > | > > pricepermember | numeric > | > > acceptrate | numeric(3,2) > | > > mktgcenterid | numeric > | > > label | character varying(32) > | > > > > > >Why won't this work? > >plsql_dw=# select memberid || addeddate from > > tmpsv_parent_master > > limit > >10; > >ERROR: operator does not exist: numeric || > timestamp > > without > > time zone > >LINE 1: select memberid || addeddate from > > tmpsv_parent_master > > limit ... > >^ > >HINT: No operator matches the given name and > argument > > type(s). > > You > >might need to add explicit type casts. > > > >Thanks. > > > > > > -- > > Sent via pgsql-sql mailing list > (pgsql
[SQL] sqlplus reporting equivalent in postgres?
We're in the process of porting our Oracle 10.2 data warehouse over to Postgres 8.4. One thing we rely on is daily/hourly reports that are run and formatted in sqlplus (break on report, compute sum, etc.). Is there an equivalent available in Postgres? Thanks. Tony -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
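There is no direct psql counterpart to sqlplus's BREAK ON REPORT / COMPUTE SUM, but the usual workaround is to produce the totals in the query itself and let psql's \pset options handle the cosmetics. A rough sketch with assumed table and column names (daily_sales, region, amount):

```sql
-- Emulate "compute sum ... on report": append a total row in SQL.
SELECT region, sum(amount) AS amount
FROM daily_sales
GROUP BY region
UNION ALL
SELECT 'TOTAL', sum(amount)
FROM daily_sales;
```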
[SQL] create role
Hi, I'm successfully executing the below: create role developer login; alter role developer set default_tablespace=dev; alter role developer set search_path=dev,staging, esave, support, email, public; grant select on members to developer; grant create on schema dev to developer; However, when I do this: psql (8.4.5, server 8.4.2) Type "help" for help. esave_dw=> \d members Did not find any relation named "members". esave_dw=> esave_dw=> \d esave.members Table "esave.members" Column|Type | Modifiers -+-+--- memberid| numeric | not null etc How can I get this so I don't have to preface the \d with the schema name every time? Thanks. Tony -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
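One thing worth checking here: ALTER ROLE ... SET only affects sessions started after the change, and only sessions of that role. A quick diagnostic sketch (assumes you are connected as developer):

```sql
-- See what search_path the current session actually has:
SHOW search_path;

-- ALTER ROLE ... SET takes effect only for *new* sessions of that role.
-- To change the current session immediately:
SET search_path TO dev, staging, esave, support, email, public;

-- Unqualified names (and \d members) now resolve through the path:
SELECT current_schemas(true);
```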
[SQL] Determine length of numeric field
I'm altering datatypes in several tables from numeric to integer. In doing so, I get the following error:

dw=# \d uniq_hits
    Table "support.uniq_hits"
   Column   |  Type   | Modifiers
------------+---------+-----------
 sourceid   | numeric |
 hitdate    | date    |
 total      | numeric |
 hitdate_id | integer |
Indexes:
    "uniq_hits_hitdateid_idx" btree (hitdate_id), tablespace "support_idx"
Tablespace: "support"

esave_dw=# alter table uniq_hits alter sourceid type int;
ERROR:  integer out of range

Sourceid should not be more than 5 digits long. I'm able to perform this query on Oracle and would like something similar on postgres 8.4:

delete from uniq_hits where sourceid in (select sourceid from uniq_hits where length(sourceid) > 5);

I haven't had much luck with the length or char_length functions on postgres.

Thanks.
Tony

-- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Determine length of numeric field
Pavel, That's perfect! Thanks. Tony On Tue, 2011-02-15 at 22:04 +0100, Pavel Stehule wrote: > Hello > > probably you have to use a explicit cast > > postgres=# select length(10::numeric::text); > length > > 2 > (1 row) > > Regards > > Pavel Stehule > > 2011/2/15 Tony Capobianco : > > I'm altering datatypes in several tables from numeric to integer. In > > doing so, I get the following error: > > > > dw=# \d uniq_hits > >Table "support.uniq_hits" > > Column | Type | Modifiers > > +-+--- > > sourceid | numeric | > > hitdate| date| > > total | numeric | > > hitdate_id | integer | > > Indexes: > >"uniq_hits_hitdateid_idx" btree (hitdate_id), tablespace > > "support_idx" > > Tablespace: "support" > > > > esave_dw=# alter table uniq_hits alter sourceid type int; > > ERROR: integer out of range > > > > Sourceid should not be more than 5 digits long. I'm able to perform > > this query on Oracle and would like something similar on postgres 8.4: > > > > delete from uniq_hits where sourceid in (select sourceid from uniq_hits > > where length(sourceid) > 5); > > > > I haven't had much luck with the length or char_length functions on > > postgres. > > > > Thanks. > > Tony > > > > > > -- > > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) > > To make changes to your subscription: > > http://www.postgresql.org/mailpref/pgsql-sql > > > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] Determine length of numeric field
Tom, That's a frighteningly easy solution. Thanks. Tony On Tue, 2011-02-15 at 16:10 -0500, Tom Lane wrote: > Tony Capobianco writes: > > I'm altering datatypes in several tables from numeric to integer. In > > doing so, I get the following error: > > > dw=# \d uniq_hits > > Table "support.uniq_hits" > >Column | Type | Modifiers > > +-+--- > > sourceid | numeric | > > hitdate| date| > > total | numeric | > > hitdate_id | integer | > > Indexes: > > "uniq_hits_hitdateid_idx" btree (hitdate_id), tablespace > > "support_idx" > > Tablespace: "support" > > > esave_dw=# alter table uniq_hits alter sourceid type int; > > ERROR: integer out of range > > > Sourceid should not be more than 5 digits long. I'm able to perform > > this query on Oracle and would like something similar on postgres 8.4: > > > delete from uniq_hits where sourceid in (select sourceid from uniq_hits > > where length(sourceid) > 5); > > That seems like a pretty bizarre operation to apply to a number. Why > not "where sourceid > 9"? Or maybe "where abs(sourceid) > 9" > would be better. > > regards, tom lane > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
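Putting Tom's suggestion together with the original goal, the cleanup before the ALTER can be done with a purely numeric comparison, no text casts at all. A sketch (the 5-digit cutoff, hence 99999, is taken from the original post):

```sql
-- Remove rows whose sourceid exceeds the intended 5-digit range,
-- then retype the column.
DELETE FROM uniq_hits WHERE abs(sourceid) > 99999;

ALTER TABLE uniq_hits ALTER COLUMN sourceid TYPE integer;
```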
[SQL] replace " with nothing
We are converting from Oracle to Postgres. An Oracle script contains this line: select replace(firstname,'"'), memberid, emailaddress from members; in an effort to replace the " with nothing. How can I achieve the same result with Postgres? Here's the Postgres error I get: select replace(firstname,'"'), memberid, emailaddress from members; ERROR: function replace(character varying, unknown) does not exist LINE 1: select replace(firstname,'"'), memberid, emailaddress from m... Thanks. -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] replace " with nothing
Ok, I think I found it: select translate(firstname,'"','') from members; gives me what I want. Thanks. On Wed, 2011-05-11 at 16:29 -0400, Tony Capobianco wrote: > We are converting from Oracle to Postgres. An Oracle script contains > this line: > > select replace(firstname,'"'), memberid, emailaddress from members; > > in an effort to replace the " with nothing. How can I achieve the same > result with Postgres? > > Here's the Postgres error I get: > > select replace(firstname,'"'), memberid, emailaddress from members; > ERROR: function replace(character varying, unknown) does not exist > LINE 1: select replace(firstname,'"'), memberid, emailaddress from m... > > Thanks. > > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
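Note that Postgres does have replace() -- it just requires all three arguments, whereas Oracle's two-argument form implicitly substitutes an empty string. Either of these should work for this case:

```sql
-- Three-argument replace(): replace every " with the empty string.
SELECT replace(firstname, '"', ''), memberid, emailaddress FROM members;

-- translate() also works here, but it maps characters one-for-one,
-- so it is best suited to single-character substitutions like this one.
SELECT translate(firstname, '"', ''), memberid, emailaddress FROM members;
```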
Re: [SQL] Add one column to another
Use the concat || operator. On Thu, 2011-08-25 at 15:21 +0100, gvim wrote: > I have to deal with a table which contains: > > first_name > surname > email1 > email2 > > ... and I would like to create a view which combines both email columns thus: > > first_name > surname > email > > It looks simple but I can't think of an obvious query. > > gvim > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
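If the goal is one email address per row (rather than both addresses concatenated into a single value), a UNION may be closer to what's wanted. A sketch against the columns described above; the table name "people" is assumed, since the original post doesn't give one:

```sql
-- Turn (first_name, surname, email1, email2) into one row per address.
CREATE VIEW person_emails AS
    SELECT first_name, surname, email1 AS email
    FROM people
    WHERE email1 IS NOT NULL
    UNION ALL
    SELECT first_name, surname, email2 AS email
    FROM people
    WHERE email2 IS NOT NULL;
```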
[SQL] compare table names
I have these 3 tables: tablename tmp_staging0109 tmp_staging1229 tmp_staging0108 I'd like this query: select tablename from pg_tables where tablename like 'tmp_staging%' and tablename < 'tmp_staging1230'; To return this result: tablename tmp_staging1229 However, I'm receiving: tablename tmp_staging0109 tmp_staging1229 tmp_staging0108 How can I write this correctly? Thanks. Tony -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] compare table names
I see what you're saying: pg=# select tablename from pg_tables where tablename like 'tmp_staging%' and tablename < 'tmp_staging1230' and tablename > 'tmp_staging1228'; tablename tmp_staging1229 This query is part of a larger script where I want to dynamically select tablenames older than 10 days and drop them. The tables are created in a tmp_stagingMMDD format. I know postgres does not maintain object create times, how can I write this to select tables from pg_tables that are older than 10 days? Thanks. Tony On Mon, 2012-01-09 at 08:19 -0800, Adrian Klaver wrote: > On Monday, January 09, 2012 8:12:18 am Tony Capobianco wrote: > > I have these 3 tables: > > > > tablename > > > > tmp_staging0109 > > tmp_staging1229 > > tmp_staging0108 > > > > > > I'd like this query: > > > > select tablename from pg_tables where tablename like 'tmp_staging%' and > > tablename < 'tmp_staging1230'; > > > > To return this result: > > > > tablename > > > > tmp_staging1229 > > > > However, I'm receiving: > > > > tablename > > > > tmp_staging0109 > > tmp_staging1229 > > tmp_staging0108 > > > > How can I write this correctly? > > As far as I can tell it is correct. 0108,0109 and 1229 are all less than > 1230. > What happens if you do?: > > select tablename from pg_tables where tablename like 'tmp_staging%' and > tablename < 'tmp_staging1230' and tablename > 'tmp_staging1228; > > > > > Thanks. > > Tony > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] compare table names
This is an old system that we converted over from Oracle just this past year. The first 10 days of this year bit us when needed warehouse tables were removed erroneously. I'm going to push uphill against management to try and create the tables as table_nameMMDD. Wish me luck! Thanks for all your responses. Tony On Wed, 2012-01-11 at 07:43 +0100, Brice André wrote: > Just my 2 cents... > > Why don't you use a date column type instead of a string ? In this > case, at insertion, you could simply do this : > > INERT INTO tablename (insertion_time, ...) VALUES (now(), ...) > > and, for the select, you could simply write : > > SELECT * FROM tablename WHERE insertion_time >= (now() - interval '1 > day') > > > > 2012/1/9 Adrian Klaver > On Monday, January 09, 2012 8:28:43 am Tony Capobianco wrote: > > I see what you're saying: > > > > pg=# select tablename from pg_tables where tablename like > 'tmp_staging%' > > and tablename < 'tmp_staging1230' and tablename > > 'tmp_staging1228'; > > tablename > > > > tmp_staging1229 > > > > > > This query is part of a larger script where I want to > dynamically select > > tablenames older than 10 days and drop them. The tables are > created in > > a tmp_stagingMMDD format. I know postgres does not maintain > object > > create times, how can I write this to select tables from > pg_tables that > > are older than 10 days? > > > Well with out a year number(i.e. YYMMDD) that is going to be > difficult around the > year break. > > As an example: > > test(5432)aklaver=>select * from name_test; > fld_1 > - > tmp_staging0109 > tmp_staging0108 > tmp_staging1229 > (3 rows) > > > test(5432)aklaver=>SELECT fld_1 from name_test where fld_1 < > 'tmp_staging'|| > to_char(current_date-interval '10 days','MMDD') and fld_1 > > 'tmp_staging0131'; > fld_1 > - > tmp_staging1229 > > > > > > > Thanks. 
> > Tony > > > > > -- > Adrian Klaver > adrian.kla...@gmail.com > > > > -- > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-sql > > -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
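If management does approve a year component in the table names, the 10-day cutoff becomes a simple string comparison, since a four-digit-year prefix makes lexical order match date order. A sketch assuming a tmp_stagingYYYYMMDD naming scheme:

```sql
-- With the year in the name, "older than 10 days" is a plain < test.
SELECT tablename
FROM pg_tables
WHERE tablename LIKE 'tmp_staging%'
  AND tablename < 'tmp_staging'
      || to_char(current_date - interval '10 days', 'YYYYMMDD');
```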
[SQL] pivot query with count
The following is my code and results: select '1' "num_ads", (case when r.region_code = 1000 then ( select count(*) from ( select userid from user_event_stg2 where userid in ( select userid from user_region where region_code = 1000) and messagetype = 'impression' group by userid having count(userid) = 1) as foo) else 0 end) as "NorthEast", (case when r.region_code = 2000 then ( select count(*) from ( select userid from user_event_stg2 where userid in ( select userid from user_region where region_code = 2000) and messagetype = 'impression' group by userid having count(userid) = 1) as foo) else 0 end) as "NorthWest", (case when r.region_code = 3000 then ( select count(*) from ( select userid from user_event_stg2 where userid in ( select userid from user_region where region_code = 3000) and messagetype = 'impression' group by userid having count(userid) = 1) as foo) else 0 end) as "SouthEast", (case when r.region_code = 4000 then ( select count(*) from ( select userid from user_event_stg2 where userid in ( select userid from user_region where region_code = 4000) and messagetype = 'impression' group by userid having count(userid) = 1) as foo) else 0 end) as "SouthWest", (case when r.region_code = 5000 then ( select count(*) from ( select userid from user_event_stg2 where userid in ( select userid from user_region where region_code = 5000) and messagetype = 'impression' group by userid having count(userid) = 1) as foo) else 0 end) as "Middle of Nowhere" from user_region u, region r where u.region_code = r.region_code group by r.region_code; num_ads | NorthEast | NorthWest | SouthEast | SouthWest | Middle of Nowhere -+---+---+---+---+--- 1 | 0 | 0 | 3898 | 0 | 0 1 | 3895 | 0 | 0 | 0 | 0 1 | 0 | 3873 | 0 | 0 | 0 1 | 0 | 0 | 0 | 3915 | 0 How can I get this output on to a single line? num_ads | NorthEast | NorthWest | SouthEast | SouthWest | Middle of Nowhere -+---+---+---+---+--- 1 |3895 |3873 | 3898 |3915 | 0 Thanks.
Re: [SQL] pivot query with count
Thank you very much for your response. However, I'm unclear what you want me to substitute for sum(...)? select '1' as "num_ads", sum(...) from (select a.userid from user_event_stg2 a, user_region b where a.userid = b.userid and b.region_code = 1000 and a.messagetype = 'impression' group by a.userid having count(a.userid) = 1) group by num_ads; I was able to eliminate that sub-select per your recommendation. That makes things a bit easier. Thanks. On Fri, Apr 12, 2013 at 11:29 PM, David Johnston wrote: > My prior comment simply answers your question. You likely can rewrite > your > query so that a separate grouping layer is not needed (or rather the group > by would exist in the main query and you minimize the case/sub-select > column > queries and use aggregates and case instead). > > David J. > > > > > -- > View this message in context: > http://postgresql.1045698.n5.nabble.com/pivot-query-with-count-tp5752072p5752078.html > Sent from the PostgreSQL - sql mailing list archive at Nabble.com. > > > -- > Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-sql >
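The sum(CASE ...) rewrite David is pointing at collapses the five correlated sub-selects into a single grouped pass, with one output row. A sketch using the tables from the original query (this is my reading of the intended logic -- count each region's users with exactly one impression -- not code from the thread):

```sql
-- One pass: find single-impression users per region, then pivot the
-- per-region counts onto one row with sum(CASE ...).
SELECT '1' AS num_ads,
       sum(CASE WHEN region_code = 1000 THEN 1 ELSE 0 END) AS "NorthEast",
       sum(CASE WHEN region_code = 2000 THEN 1 ELSE 0 END) AS "NorthWest",
       sum(CASE WHEN region_code = 3000 THEN 1 ELSE 0 END) AS "SouthEast",
       sum(CASE WHEN region_code = 4000 THEN 1 ELSE 0 END) AS "SouthWest",
       sum(CASE WHEN region_code = 5000 THEN 1 ELSE 0 END) AS "Middle of Nowhere"
FROM (
    SELECT b.region_code
    FROM user_event_stg2 a
    JOIN user_region b ON a.userid = b.userid
    WHERE a.messagetype = 'impression'
    GROUP BY b.region_code, a.userid
    HAVING count(*) = 1
) AS per_user;
```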
Re: [SQL] accounting schema
On Feb 6, 2008 6:08 PM, Medi Montaseri <[EMAIL PROTECTED]> wrote: > I am learning my way into Accounting and was wondering how Accounting > applications are designed. perhaps you could point the way > As a DBA, (and keeping it simple) I am thinking I need a table for every > account which migh look like > > id, description, credit, debit, validated, created_on, created_by, > modified_on, modified_by > > Is that pretty match it ? > Please let me know if you have seen some accounting or DB book that > addresses this problem domain. Another codebase to look at is http://www.sql-ledger.org/. It uses postgresql. Regards, Tony ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [SQL] Can COPY update or skip existing records?
On Tue, Sep 30, 2008 at 5:16 AM, Glenn Gillen <[EMAIL PROTECTED]> wrote: > Hey all, > > I've got a table with a unique constraint across a few fields which I > need to regularly import a batch of data into. Is there a way to do it > with COPY without getting conflicts on the unique contraint? I have no > was of being certain that some of the data I'm trying to load isn't in > the table already. > > Ideally I'd like it to operate like MySQL's on_duplicate_key_update > option, but for now I'll suffice with just ignoring existing rows and > proceeding with everything else. I ran into a similar problem. I'm using these merge_by_key functions: http://pgfoundry.org/projects/mbk Here's a quick example... CREATE TEMP TABLE foo (LIKE dst INCLUDING DEFAULTS); COPY foo (c1, c2) FROM STDIN; (your copy data here) \. SELECT * FROM merge_by_key( 'public', -- table schema 'dst', -- table name 'mnew.c2 < mold.c2', -- merge condition 'select c1,c2 FROM foo' ); Disclaimer: The author is a friend of mine. :-) -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
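If pulling in an extra project isn't an option, the plain "ignore existing rows" behavior can also be had with just a staging table and an anti-join (there was no ON CONFLICT in servers of this vintage). A sketch reusing the dst/c1/c2 names from the example above, and assuming c1 is the unique key:

```sql
-- Load into a staging table first, then copy over only the rows whose
-- key is not already present in the destination.
CREATE TEMP TABLE staging (LIKE dst INCLUDING DEFAULTS);

COPY staging (c1, c2) FROM STDIN;

INSERT INTO dst (c1, c2)
SELECT s.c1, s.c2
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM dst d WHERE d.c1 = s.c1
);
```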
Re: [SQL] grouping/clustering query
On Fri, Oct 24, 2008 at 8:04 AM, Steve Midgley <[EMAIL PROTECTED]> wrote: > At 11:28 AM 10/23/2008, Joe wrote: >> >> Steve Midgley wrote: # (invoiceid, txid) (A, 1) (A, 3) (B, 1) (B, 2) (C, 5) (D, 6) (D, 7) (E, 8) (F, 8) For journalling, I need to group/cluster this together. Is there a SQL query that can generate this output: # (journal: invoiceids, txids) [A,B] , [1,2,3] [C], [5] [D], [6,7] [E,F], [8] >>> >>> Hi Dave, >>> >>> I'm not following the logic here. A has 1,3 and B has 1,2. So why does >>> the first line print: >>> [A,B] , [1,2,3] >>> >>> What's the rule that tells the query to output this way? Is it that all >>> of B's values are between A's values? >> >> From a purely accounting standpoint, since transaction 1 was applied to >> both invoices A and B, you need to group the invoices so that you can >> compare total invoiced against total paid. > > I tinkered around briefly but didn't come up with a good idea, but I bet > someone on this list can. However, I did create a CREATE script for your > table design which, in my experience, makes it more likely that a real > expert will take on your problem.. > > Hope this helps, > > Steve > > DROP TABLE IF EXISTS trans; > > CREATE TABLE trans > ( > id serial NOT NULL, > inv_id character varying, > tx_id character varying, > CONSTRAINT pk_id PRIMARY KEY (id) > ) > WITH (OIDS=FALSE); > > insert into trans (inv_id, tx_id) values('A','1'); > insert into trans (inv_id, tx_id) values('A','3'); > insert into trans (inv_id, tx_id) values('B','1'); > insert into trans (inv_id, tx_id) values('B','2'); > insert into trans (inv_id, tx_id) values('C','5'); > insert into trans (inv_id, tx_id) values('D','6'); > insert into trans (inv_id, tx_id) values('D','7'); > insert into trans (inv_id, tx_id) values('E','8'); > insert into trans (inv_id, tx_id) values('F','8'); This is as close as I can get the data. I think I'd need a custom array grouping aggregate to get the results to match completely. 
Notice how ABC are on their own lines? test=# SELECT inv_array, tx_array FROM ( SELECT tx_id, array_accum(inv_id) AS inv_array FROM trans GROUP BY tx_id ORDER BY tx_id ) AS t JOIN ( SELECT inv_id, array_accum(tx_id) AS tx_array FROM trans GROUP BY inv_id ORDER BY inv_id ) AS i ON (t.tx_id = ANY(i.tx_array) OR i.inv_id =ANY(t.inv_array)) GROUP BY tx_array,inv_array ; inv_array | tx_array ---+-- {A,B} | {1,2} {B} | {1,2} {A} | {1,3} {A,B} | {1,3} {C} | {5} {D} | {6,7} {E,F} | {8} (7 rows) -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
Re: [SQL] grouping/clustering query
On Fri, Oct 24, 2008 at 10:24 AM, Tony Wasson <[EMAIL PROTECTED]> wrote: > On Fri, Oct 24, 2008 at 8:04 AM, Steve Midgley <[EMAIL PROTECTED]> wrote: >> At 11:28 AM 10/23/2008, Joe wrote: >>> >>> Steve Midgley wrote: >>>>> >>>>> # (invoiceid, txid) >>>>> (A, 1) >>>>> (A, 3) >>>>> (B, 1) >>>>> (B, 2) >>>>> (C, 5) >>>>> (D, 6) >>>>> (D, 7) >>>>> (E, 8) >>>>> (F, 8) >>>>> >>>>> For journalling, I need to group/cluster this together. Is there a SQL >>>>> query that can generate this output: >>>>> >>>>> # (journal: invoiceids, txids) >>>>> [A,B] , [1,2,3] >>>>> [C], [5] >>>>> [D], [6,7] >>>>> [E,F], [8] >>>> >>>> Hi Dave, >>>> >>>> I'm not following the logic here. A has 1,3 and B has 1,2. So why does >>>> the first line print: >>>> >>>>> [A,B] , [1,2,3] >>>> >>>> What's the rule that tells the query to output this way? Is it that all >>>> of B's values are between A's values? >>> >>> From a purely accounting standpoint, since transaction 1 was applied to >>> both invoices A and B, you need to group the invoices so that you can >>> compare total invoiced against total paid. >> >> I tinkered around briefly but didn't come up with a good idea, but I bet >> someone on this list can. However, I did create a CREATE script for your >> table design which, in my experience, makes it more likely that a real >> expert will take on your problem.. 
>> >> Hope this helps, >> >> Steve >> >> DROP TABLE IF EXISTS trans; >> >> CREATE TABLE trans >> ( >> id serial NOT NULL, >> inv_id character varying, >> tx_id character varying, >> CONSTRAINT pk_id PRIMARY KEY (id) >> ) >> WITH (OIDS=FALSE); >> >> insert into trans (inv_id, tx_id) values('A','1'); >> insert into trans (inv_id, tx_id) values('A','3'); >> insert into trans (inv_id, tx_id) values('B','1'); >> insert into trans (inv_id, tx_id) values('B','2'); >> insert into trans (inv_id, tx_id) values('C','5'); >> insert into trans (inv_id, tx_id) values('D','6'); >> insert into trans (inv_id, tx_id) values('D','7'); >> insert into trans (inv_id, tx_id) values('E','8'); >> insert into trans (inv_id, tx_id) values('F','8'); Here's a stab at a custom aggregate attempting to explain what's going on inside. CREATE OR REPLACE FUNCTION varchar_array_accum_unique (x VARCHAR[], y VARCHAR) RETURNS VARCHAR[] AS $$ DECLARE res VARCHAR[]; BEGIN RAISE NOTICE 'input state is %',x; RAISE NOTICE 'input variable is %',y; IF x = '{}' THEN RAISE NOTICE 'x is empty, returning input variable %',y; res[1] := y; ELSE RAISE NOTICE 'input array is not empty, checking if input variable is a member %',y; res := x; IF y = ANY(res) THEN RAISE NOTICE 'y is already in array %, skipping',res; ELSE res := array_append(res, y); RAISE NOTICE 'appending input variable %',y; END IF; END IF; RETURN res; END $$ LANGUAGE plpgsql STRICT; DROP AGGREGATE array_accum_unique(VARCHAR); CREATE AGGREGATE array_accum_unique ( basetype = VARCHAR , sfunc = varchar_array_accum_unique , stype = VARCHAR[] , initcond = '{}' ); SELECT array_accum_unique(inv_id) AS invoiceids , array_accum_unique(tx_id) AS transactionids FROM ( SELECT tx_id, inv_id FROM trans WHERE inv_id IN ( SELECT inv_id FROM trans WHERE id IN ( SELECT id FROM trans WHERE tx_id=1 ) ) ORDER BY tx_id, inv_id ) AS ss ; Returns this result for transaction id 1. invoiceids | transactionids + {A,B} | {1,2,3} (1 row) Hope this helps! 
Tony -- Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-sql
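The grouping Dave asks for is really a connected-components computation: invoices belong in the same journal group whenever they are linked through a shared transaction. Outside the database, that can be sketched with a small union-find in Python, using the thread's sample data (the helper functions and names here are mine, not something from the thread):

```python
# Group invoices that are linked by shared transactions (connected
# components over the invoice/transaction bipartite graph).
# Data is from the thread; the union-find approach is a sketch of the
# grouping rule, not the SQL the list settled on.

from collections import defaultdict

pairs = [("A", 1), ("A", 3), ("B", 1), ("B", 2),
         ("C", 5), ("D", 6), ("D", 7), ("E", 8), ("F", 8)]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# Tie each invoice to its transactions; invoices sharing a tx merge.
for inv, tx in pairs:
    union(("inv", inv), ("tx", tx))

groups = defaultdict(lambda: (set(), set()))
for inv, tx in pairs:
    invs, txs = groups[find(("inv", inv))]
    invs.add(inv)
    txs.add(tx)

journal = sorted((sorted(i), sorted(t)) for i, t in groups.values())
for invs, txs in journal:
    print(invs, txs)
# Produces the four groups from the thread:
# [A,B]/[1,2,3], [C]/[5], [D]/[6,7], [E,F]/[8]
```

Note this computes the full transitive closure for all groups at once, whereas the SQL above chases links for a single starting transaction.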
[SQL] Performance problem with row count trigger
I was looking to speed up a count(*) query, as per the recommendations on the postgres wiki: http://wiki.postgresql.org/wiki/Slow_Counting

I decided to use the trigger approach to get an accurate count that doesn't depend on VACUUM being run recently. I've got it working, but the addition of the trigger slows things down so badly that it's not a viable option. I was hoping for advice on how to speed things up, or at least an explanation of why it gets so slow.

The relevant tables are as follows:

---
CREATE TABLE dataset (
    dataset_id    SERIAL PRIMARY KEY,
    catalog_id    INTEGER REFERENCES catalog (catalog_id) ON DELETE CASCADE,
    t_begin       TIMESTAMP WITHOUT TIME ZONE NULL,
    t_end         TIMESTAMP WITHOUT TIME ZONE NULL,
    "ctime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "mtime"       TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT now(),
    "assoc_count" BIGINT NOT NULL DEFAULT 0
);

CREATE TABLE assoc (
    dataset_id INTEGER REFERENCES dataset (dataset_id) ON DELETE CASCADE,
    range      ip4r NOT NULL,
    label_id   INTEGER NULL,
    value      BIGINT NULL,
    PRIMARY KEY (dataset_id, range),
    UNIQUE (dataset_id, range, label_id)
);
---

What I want to do is update the assoc_count field in the dataset table to reflect the count of related records in the assoc table. To do so, I added the following trigger:

---
CREATE OR REPLACE FUNCTION update_assoc_count_insert() RETURNS TRIGGER AS '
BEGIN
    UPDATE dataset
    SET assoc_count = assoc_count + 1
    WHERE dataset_id = NEW.dataset_id;
    RETURN NEW;
END
' LANGUAGE plpgsql;

CREATE TRIGGER assoc_update_assoc_count_insert
    AFTER INSERT ON assoc
    FOR EACH ROW EXECUTE PROCEDURE update_assoc_count_insert();
---

(I also have triggers for UPDATE/DELETE, left out for brevity.)

The slowness I'm talking about doesn't show up for a single insert, but arises when doing thousands of them in a transaction.
Here are some test runs of 10,000 inserts without the trigger in place:

---
1000 (2231.540142/s)
2000 (2341.849077/s)
3000 (2234.332303/s)
4000 (2311.247629/s)
5000 (2366.171695/s)
6000 (2400.028800/s)
7000 (2407.147716/s)
8000 (2416.419084/s)
9000 (2401.476107/s)
10000 (2406.870943/s)
---

The number in parens is the number of inserts per second for each batch of 1,000 inserts. As you can see, performance isn't too terrible, and is pretty constant from start to finish. Now I add the trigger, and here's what happens:

---
1000 (1723.216901/s)
2000 (1613.529119/s)
3000 (1526.081496/s)
4000 (1431.907261/s)
5000 (1340.159570/s)
6000 (1269.746140/s)
7000 (1191.374990/s)
8000 (1117.332012/s)
9000 (1056.309389/s)
10000 (1001.051003/s)
---

The throughput of the first batch of 1,000 is diminished but still tolerable; after 10,000 inserts, though, it's gotten much worse. This pattern continues, to the point where performance is unacceptable after 20k or 30k inserts.

To rule out the performance of the trigger mechanism itself, I swapped the trigger out for one that does nothing. The results were the same as without the trigger (the first set of numbers), which leads me to believe there's something about the UPDATE statement in the trigger that is causing this behavior. I then tried setting assoc_count to a constant number instead of incrementing it with assoc_count = assoc_count + 1, but performance was just as bad as with the proper UPDATE statement.

I'm not really sure where to go next. I can update the assoc_count once at the end of the transaction (without a trigger), but this could lead to inconsistencies if another client does inserts without updating the count. I would really prefer to use the trigger solution recommended on the PGsql wiki, but can't do so until I solve this performance problem.

I greatly appreciate any and all help. Thanks.
-Tony
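One way to see why the per-row trigger decays like this: every UPDATE of the same dataset row within the transaction leaves behind a dead row version, and later statements still have to step over those versions. The following is a toy Python model of that effect, an illustration only, and not a description of PostgreSQL's actual storage code:

```python
# Toy model of MVCC-style version accumulation: one logical row,
# updated once per "insert". Each update marks the current version
# dead and appends a new one; finding the live version costs a visit
# per dead version, so total work grows quadratically.

versions = [{"assoc_count": 0, "dead": False}]  # one dataset row
work = 0  # tuple versions examined across all updates

def update_count():
    global work
    live = None
    for v in versions:          # every dead version costs a visit
        work += 1
        if not v["dead"]:
            live = v
    live["dead"] = True
    versions.append({"assoc_count": live["assoc_count"] + 1, "dead": False})

for _ in range(1000):           # 1000 "inserts", each firing the trigger
    update_count()

print(len(versions))  # 1001 versions for a single logical row
print(work)           # 500500 visits: n*(n+1)/2, hence the slowdown
```

The real engine is far smarter than this (HOT pruning, page-level locality), but the shape of the curve, slower with every batch, matches the benchmark above.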
Re: [SQL] Performance problem with row count trigger
Hi Craig, thanks for your help.

Craig Ringer wrote:
> MVCC bloat from the constant updates to the assoc_count table, maybe?

That's what a coworker suggested might be happening. The fact that a no-op trigger performs fine but the UPDATE trigger doesn't would seem to confirm that it's something in the trigger SQL and not in the trigger mechanism itself.

> If you're using 8.3, I'd expect HOT to save you here. Are you using an
> older version of PostgreSQL? If not, have you by any chance defined an
> index on assoc_count?

I'm running 8.3.7, which is the most recent version from Macports. There's no index of any kind on dataset.assoc_count. Having read up on HOT, it sounds like it would be helpful. Is there anything I need to do to enable HOT in 8.3.7, or is it always used?

> Also, try to keep records in your `dataset' table as narrow as possible.
> If the catalog_id, t_begin, t_end, ctime and mtime fields do not change
> almost as often as the assoc_count field, split them into a separate
> table with a foreign key referencing dataset_id, rather than storing
> them directly in the dataset table.

ctime is the creation time of the dataset, so it's never supposed to be updated. mtime is the last time the dataset was changed, and there's another trigger to update that timestamp whenever the dataset table changes. So, at best, I'd be able to remove the ctime column from the dataset table, but I'd have to add the foreign key, so I don't think that would be helpful.

Your mention of the ctime and mtime columns made me think the update-timestamp trigger may be contributing to the problem as well (since the assoc INSERT trigger would presumably cause the dataset UPDATE trigger to fire), but dropping that trigger yielded no improvement, so I think it's something else.
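For the HOT question, the statistics views can show whether HOT updates are actually happening and whether dead row versions are piling up. A hedged sketch, my suggestion rather than something from the thread, with column names as in the 8.3-era pg_stat_user_tables view:

```sql
-- Check (while the bulk insert runs or after it) whether updates to
-- "dataset" are taking the HOT path and how many dead versions exist.
SELECT n_tup_upd,      -- total row updates
       n_tup_hot_upd,  -- updates that took the HOT path
       n_dead_tup      -- dead row versions awaiting vacuum
FROM pg_stat_user_tables
WHERE relname = 'dataset';
```

If n_tup_hot_upd tracks n_tup_upd closely, HOT is doing its job; a large and growing n_dead_tup would point at the MVCC-bloat theory.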
Re: [SQL] Performance problem with row count trigger
Greg Sabino Mullane wrote:
> A few things spring to mind:
>
> 1) Use a separate table, rather than storing things inside of
> dataset itself. This will reduce the activity on the dataset table.

A separate table just for that one column? Would that really help, given that I'd have to add the foreign key dataset_id to the related table? How does splitting activity across dataset and, say, dataset_counts help things?

> 2) Do you really need bigint for the counts?

Probably not. Still, changing to INTEGER hasn't changed the performance in any measurable way.

> 3) If you do want to do this, you'll need a different approach, as
> Tom mentioned. One way to do this is to have a special method for
> bulk loading that gets around the normal updates and requires that
> the user take responsibility for knowing when and how to call the
> alternate path. The basic scheme is this:
>
> 1. Disable the normal triggers
> 2. Enable special (perl) triggers that keep the count in memory
> 3. Do the bulk changes
> 4. Enable normal triggers, disable special perl one
> 5. Do other things as needed
> 6. Commit the changes to the assoc_count field.

I gave this a shot, and my initial testing looks very promising. Using your scheme, the performance looks to be just as good as without the trigger. I haven't done any kind of edge-case testing to see if weird things happen when multiple datasets are added simultaneously, or when someone inserts an assoc record out-of-band while a bulk dataset load is happening, but you've certainly got me well on my way to a workable solution. Many thanks!

There's one part I don't get, though...

> Number 6 can be done anytime, as long as you are in the same session. The
> danger is in leaving the session without calling the final function. This
> can be solved with some deferred FK trickery, or by careful scripting of
> the events. All this doesn't completely remove the pain, but it may shift
> it around enough in useful ways for your app.
I'm not sure I understand the danger you're talking about here. Doesn't putting the whole start_bulkload_assoc_count(), bulk insert, and end_bulkload_assoc_count() process in a transaction save me from any of these problems? Or is there some more subtle problem I'm not seeing?
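For reference, the bulk-load idea can be sketched in plain SQL, trading Greg's in-memory perl trigger for a single recount at the end. Table and trigger names are from the thread, but this simplified variant is my sketch, not Greg's exact scheme:

```sql
-- Hypothetical bulk-load path, wrapped in one transaction so a failed
-- load rolls back both the rows and the trigger state.
BEGIN;

ALTER TABLE assoc DISABLE TRIGGER assoc_update_assoc_count_insert;

-- ... thousands of INSERTs into assoc go here ...

-- Recompute the counts once, instead of once per inserted row:
UPDATE dataset d
SET assoc_count = (SELECT count(*)
                   FROM assoc a
                   WHERE a.dataset_id = d.dataset_id);

ALTER TABLE assoc ENABLE TRIGGER assoc_update_assoc_count_insert;

COMMIT;
```

Note that ALTER TABLE ... DISABLE TRIGGER takes a heavy lock on assoc for the duration of the transaction, so this only suits loads where blocking concurrent writers is acceptable; Greg's session-level bookkeeping avoids that but carries the leave-the-session danger he describes.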
[SQL] Supported Encoding
Hello people, I need an answer to a simple question that I couldn't find a definite answer for: does PostgreSQL support only EUC? Basically, I am trying to save international fonts to PostgreSQL, so I'm trying to find out what exactly I need to do. It seems to work fine when I use PHP to insert/retrieve data without any encoding conversions. I don't think my PostgreSQL was given any special encoding at the time of make. Do I need to convert everything to EUC before I store data, and if so, why would I need to do that?

Thanks in advance!!

Tony
[SQL] Trouble porting a Oracle PL/SQL procedure to PL/pgSQL
Hello. I'm using PostgreSQL to learn SQL and SQL-based languages. The text I'm using (SAMS Teach Yourself SQL in 21 Days, 3rd ed.) references Oracle in its examples, though I can usually get PostgreSQL to work almost as well. Well, I'm almost to the end of the book and I'm trying to port some of the book's PL/SQL examples to PL/pgSQL, with mixed success. In this case, I translated

BEGIN
DECLARE
    UnknownPayType EXCEPTION;
    CURSOR pay_cursor IS
        SELECT name, pay_type, pay_rate, eff_date, sysdate, rowid
        FROM pay_table;
    IndRec pay_cursor%ROWTYPE;
    . . .
END;
/

to this:

CREATE OR REPLACE FUNCTION update_pay() RETURNS VOID AS '
DECLARE
    pay_cursor CURSOR IS
        SELECT name, pay_type, pay_rate, eff_date, current_date, oid
        FROM pay_table;
    IndRec pay_cursor%ROWTYPE;
    cOldDate DATE;
    . . .
END;
' LANGUAGE 'plpgsql';

The problem is, when I call the function, I get:

sql-practice=# select update_pay();
ERROR:  pay_cursor: no such class
WARNING:  plpgsql: ERROR during compile of update_pay near line 2
WARNING:  plpgsql: ERROR during compile of update_pay near line 2
ERROR:  pay_cursor: no such class

I tried several ways of defining pay_cursor, with the same result. What am I to do?

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [SQL] Trouble porting a Oracle PL/SQL procedure to PL/pgSQL--THANKS!
On Tue, 17 Jun 2003 00:42:54 -0400 Tom Lane <[EMAIL PROTECTED]> writes:
> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > On Mon, 16 Jun 2003, Tony G. Harris wrote:
> >> The problem is, when I call the function, I get:
> >> sql-practice=# select update_pay();
> >> ERROR:  pay_cursor: no such class
> >> WARNING:  plpgsql: ERROR during compile of update_pay near line 2
> >> ERROR:  pay_cursor: no such class
>
> > I think it's complaining because you can't use pay_cursor%ROWTYPE.
>
> Yeah, you're right. I didn't believe that theory at first because
> the line number reference didn't point at the line with %ROWTYPE ...
> but upon digging into it I find that plpgsql's code for determining
> the line number to report is flat wrong for this case. (I've committed
> a quick fix into CVS tip, but I wonder whether the whole mechanism
> shouldn't be rethought. Calling plpgsql_scanner_lineno() all over the
> place doesn't seem real clean.)
>
> > Maybe declaring IndRec as being of type record may work (don't know
> > for certain).
>
> That's what I'd try. We don't consider that declaring a cursor creates
> a named rowtype ... I'm surprised that Oracle seems to think it does.
>
> regards, tom lane

Stephan, Tom,

Thank you both for your insights. You were right, Stephan: declaring IndRec as a RECORD type worked! That and a couple other no doubt "quick and dirty" fixes, and the function ran exactly how it was supposed to.

Again, thank you both, and everyone on the pgsql-SQL list for all the help.

Tony

Tony G. Harris
928 Sibley Street
Hammond, IN 46320-1649
Phone: (219) 931-3546
FAX: (646) 405-8569
email: [EMAIL PROTECTED]

I returned, and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all. -- Ecclesiastes 9:11

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
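For anyone finding this thread later, the working shape of the function might look like the sketch below: IndRec declared as RECORD rather than pay_cursor%ROWTYPE, and the cursor bound with an explicit query (plpgsql accepts FOR here, with IS allowed for Oracle compatibility). The loop body is elided just as in the original, and the quoted-string function body matches the era's syntax:

```sql
-- Sketch of the fix discussed above; column list and table name are
-- from the book example quoted earlier in the thread.
CREATE OR REPLACE FUNCTION update_pay() RETURNS VOID AS '
DECLARE
    pay_cursor CURSOR FOR
        SELECT name, pay_type, pay_rate, eff_date, current_date, oid
        FROM pay_table;
    IndRec RECORD;   -- RECORD, not pay_cursor%ROWTYPE
BEGIN
    OPEN pay_cursor;
    LOOP
        FETCH pay_cursor INTO IndRec;
        EXIT WHEN NOT FOUND;
        -- per-row logic from the book example goes here
    END LOOP;
    CLOSE pay_cursor;
END;
' LANGUAGE 'plpgsql';
```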
Re: [SQL] Supported Encoding
Volker,

Thank you. I was using UTF8 because I am accessing the database via JDBC, and Java's default encoding is UTF8. I wasn't sure whether it was the right thing to do or not. Thank you very much, I really appreciate your help!

Tony Nakamura

- Original Message -
From: Volker Paul <[EMAIL PROTECTED]>
To: Tony Nakamura <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, July 11, 2000 2:38 AM
Subject: Re: [SQL] Supported Encoding

> > Does Postgresql support only EUC? Basically, I am trying to save
> > international fonts to Postgresql, so I'm trying to find out what
> > exactly I need to do.
>
> I'd suggest you try the Unicode UTF-8 encoding. It supports many
> languages, including Japanese of course, and needs no modification
> of existing programs, if you don't rely on character counting
> for output formatting.
>
> Volker Paul
>
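The reason the encoding choice matters at all is that the same characters become different bytes under different encodings; if the client and the database disagree, stored text comes back garbled. A quick illustration using Python's stdlib codecs (my addition, nothing to do with PostgreSQL itself):

```python
# The same Japanese string has different byte representations in
# UTF-8 and EUC-JP, which is why client and server encodings must
# agree (or be converted between).

text = "日本語"  # "Japanese language"

utf8 = text.encode("utf-8")
eucjp = text.encode("euc-jp")

print(utf8.hex())   # e697a5e69cace8aa9e  (3 bytes per character)
print(eucjp.hex())  # c6fccbdcb8ec        (2 bytes per character)

# Bytes stored under one encoding are garbage when read as the other,
# but each round-trips correctly under its own codec:
assert utf8 != eucjp
print(eucjp.decode("euc-jp") == text)  # True
```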
Re: [SQL] Help with COPY command
Thanks for the reply.

-Tony

- Original Message -
From: "Christoph Haller" <[EMAIL PROTECTED]>
To: "Tony Reina" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, April 14, 2004 5:42 PM
Subject: Re: [SQL] Help with COPY command

> >
> > Ok. I found the libpq syntax for COPY in the Programmer's manual. I've
> > got a working version, but wanted to verify something.
> >
> > PQexec(conn, "COPY foo FROM STDIN");
> > PQputline(conn, "3\thello world\t4.5\n");
> > PQputline(conn, "4\tgoodbye world\t7.11\n");
> > ...
> > PQputline(conn, "\\.\n");
> > PQendcopy(conn);
> >
> > 1. I'm assuming that I can put in as many PQputline statements as I
> > want to between the PQexec("COPY ... FROM ...") and the terminator
> > line. Is that correct? No limit?
> >
> > 2. Do any of these lines need to be followed by a PQclear(res)? What
> > about the first PQexec?
> >
> > -Tony
> >
> ad 1. Yes, no limit.
> ad 2. I use PQclear following the PQexec.
>
> Regards, Christoph
>

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your message
can get through to the mailing list cleanly
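The strings fed to PQputline above follow COPY's text format: tab-separated columns, backslash escapes for special characters, and a final `\.` line as terminator. A small Python sketch of that formatting (the helper name and the escaping subset shown are my own; COPY's full text format has additional rules not covered here):

```python
# Build COPY-text-format lines like the ones passed to PQputline.

def copy_line(*cols):
    out = []
    for c in cols:
        if c is None:
            out.append(r"\N")          # COPY's default NULL marker
            continue
        s = str(c)
        s = s.replace("\\", "\\\\")    # escape backslash first
        s = s.replace("\t", "\\t").replace("\n", "\\n")
        out.append(s)
    return "\t".join(out) + "\n"

print(repr(copy_line(3, "hello world", 4.5)))
# '3\thello world\t4.5\n'  -- same bytes as the first PQputline call

print(repr(copy_line(4, None, "a\tb")))
# embedded tab and NULL come out escaped, not as column separators
```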