[SQL] Temp tbl Vs. View
Hello,

I have an interesting problem relating to SQL and performance, and am looking at ways I can increase the performance I get from Postgres. Currently I have a view created from two tables. All the selects are done against the view, which normally does not take a lot of time. But because my web app filters on things such as symbol ~ '^.*$', side, date, etc., the select from the view is taking a long time (7000 ms according to EXPLAIN ANALYZE). Both the primary and secondary tables have about 400,000 rows.

I noticed that it is doing a sequential scan on the primary table, which is joined to the secondary table in the view query. I just read that when I use filters like this, Postgres will do a seq scan on the table.

My question is: how can I fix this? Would it be better to create a temporary table for just the daily data and keep the view for the more extended queries? Any other design ideas?

Thanks,
Radhika

--
It is all a matter of perspective. You choose your view by choosing where to stand.
Larry Wall
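[Editor's note: for illustration only, here is a minimal sketch of the temporary-table idea described above. The names trade_view, daily_trades, symbol, side and trade_date are assumptions, not the poster's actual schema. The daily slice is materialized once and indexed, so the filtered web-app queries hit a small indexed table instead of the full two-table join.]

-- Hypothetical: trade_view is the existing view over the two tables
CREATE TEMP TABLE daily_trades AS
    SELECT *
    FROM   trade_view
    WHERE  trade_date = current_date;

CREATE INDEX daily_trades_symbol_idx ON daily_trades (symbol);
ANALYZE daily_trades;

-- The web app's filtered queries then run against the small daily table:
SELECT * FROM daily_trades WHERE symbol = 'ABC' AND side = 'B';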
Re: [SQL] Temp tbl Vs. View
> select from the view is taking a lot of time (7000 ms) as per explain
> analyze. Both the primary and secondary tables have about 400,000 rows.
>
> My question is how can I fix this?
> Would it be better to create a temporary table for just daily data and
> have the view for more extended queries? Any other design ideas?

Perhaps tsearch2 would work for you. I am just starting to learn about it. Maybe you can create a tsearch2 GIN index on each table in the view, and then use tsearch2 queries to quickly retrieve your results.

Regards,
Richard Broersma Jr.
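[Editor's note: a rough illustration of the suggestion above, not from the original post. The table and column names are assumptions, and the functions shown are the built-in text search of PostgreSQL 8.3 and later, which replaced the tsearch2 contrib module; on 8.2 the tsearch2 module itself would have to be installed first.]

-- Hypothetical table: trades(symbol text, ...)
CREATE INDEX trades_symbol_fts_idx
    ON trades
    USING gin (to_tsvector('simple', symbol));

SELECT *
FROM   trades
WHERE  to_tsvector('simple', symbol) @@ to_tsquery('simple', 'ABC');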
Re: [SQL] Temp tbl Vs. View
On 3/29/07, Radhika Sambamurti <[EMAIL PROTECTED]> wrote:
> Currently I have a view created from two tables. All the selects are being
> done on the view - which normally does not take a lot of time, but because
> my web app uses filtering such as symbol ~ '^.*$', side, date etc, the
> select from the view is taking a lot of time (7000 ms) as per explain
> analyze. Both the primary and secondary tables have about 400,000 rows.
>
> My question is how can I fix this? Would it be better to create a
> temporary table for just daily data and have the view for more extended
> queries? Any other design ideas?
>
> Thanks,
> Radhika

CREATE TABLE T_ONE AS SELECT S.X AS ID, 'DATA'||S.X AS VAL FROM GENERATE_SERIES(1,20) S(X);
CREATE UNIQUE INDEX I01 ON T_ONE(ID);
CREATE UNIQUE INDEX I02 ON T_ONE(VAL);
ANALYZE T_ONE;

CREATE VIEW V_ONE AS SELECT * FROM T_ONE UNION ALL SELECT * FROM T_ONE;

EXPLAIN ANALYZE SELECT * FROM V_ONE WHERE ID = 1;

 Result  (cost=0.00..16.58 rows=2 width=17) (actual time=0.035..0.060 rows=2 loops=1)
   ->  Append  (cost=0.00..16.58 rows=2 width=17) (actual time=0.030..0.049 rows=2 loops=1)
         ->  Index Scan using i01 on t_one  (cost=0.00..8.29 rows=1 width=17) (actual time=0.026..0.029 rows=1 loops=1)
               Index Cond: (id = 1)
         ->  Index Scan using i01 on t_one  (cost=0.00..8.29 rows=1 width=17) (actual time=0.006..0.008 rows=1 loops=1)
               Index Cond: (id = 1)
 Total runtime: 0.153 ms

EXPLAIN ANALYZE SELECT * FROM V_ONE WHERE VAL LIKE 'DATA123%';

 Result  (cost=6.67..1248.85 rows=466 width=17) (actual time=0.565..14.790 rows= loops=1)
   ->  Append  (cost=6.67..1248.85 rows=466 width=17) (actual time=0.560..9.449 rows= loops=1)
         ->  Bitmap Heap Scan on t_one  (cost=6.67..624.42 rows=233 width=17) (actual time=0.556..2.253 rows= loops=1)
               Filter: (val ~~ 'DATA123%'::text)
               ->  Bitmap Index Scan on i02  (cost=0.00..6.61 rows=233 width=0) (actual time=0.537..0.537 rows= loops=1)
                     Index Cond: ((val >= 'DATA123'::text) AND (val < 'DATA124'::text))
         ->  Bitmap Heap Scan on t_one  (cost=6.67..624.42 rows=233 width=17) (actual time=0.531..2.168 rows= loops=1)
               Filter: (val ~~ 'DATA123%'::text)
               ->  Bitmap Index Scan on i02  (cost=0.00..6.61 rows=233 width=0) (actual time=0.517..0.517 rows= loops=1)
                     Index Cond: ((val >= 'DATA123'::text) AND (val < 'DATA124'::text))
 Total runtime: 17.436 ms

EXPLAIN ANALYZE SELECT * FROM V_ONE WHERE VAL ~ '^DATA123.*$';

 Result  (cost=6.67..1248.85 rows=466 width=17) (actual time=0.606..23.212 rows= loops=1)
   ->  Append  (cost=6.67..1248.85 rows=466 width=17) (actual time=0.600..17.460 rows= loops=1)
         ->  Bitmap Heap Scan on t_one  (cost=6.67..624.42 rows=233 width=17) (actual time=0.597..6.090 rows= loops=1)
               Filter: (val ~ '^DATA123.*$'::text)
               ->  Bitmap Index Scan on i02  (cost=0.00..6.61 rows=233 width=0) (actual time=0.521..0.521 rows= loops=1)
                     Index Cond: ((val >= 'DATA123'::text) AND (val < 'DATA124'::text))
         ->  Bitmap Heap Scan on t_one  (cost=6.67..624.42 rows=233 width=17) (actual time=0.542..6.266 rows= loops=1)
               Filter: (val ~ '^DATA123.*$'::text)
               ->  Bitmap Index Scan on i02  (cost=0.00..6.61 rows=233 width=0) (actual time=0.523..0.523 rows= loops=1)
                     Index Cond: ((val >= 'DATA123'::text) AND (val < 'DATA124'::text))
 Total runtime: 26.121 ms

EXPLAIN ANALYZE SELECT * FROM V_ONE WHERE VAL LIKE '%DATA123%';

 Result  (cost=0.00..7922.00 rows=26 width=17) (actual time=0.079..161.078 rows= loops=1)
   ->  Append  (cost=0.00..7922.00 rows=26 width=17) (actual time=0.073..155.990 rows= loops=1)
         ->  Seq Scan on t_one  (cost=0.00..3961.00 rows=13 width=17) (actual time=0.069..71.904 rows= loops=1)
               Filter: (val ~~ '%DATA123%'::text)
         ->  Seq Scan on t_one  (cost=0.00..3961.00 rows=13 width=17) (actual time=0.054..79.065 rows= loops=1)
               Filter: (val ~~ '%DATA123%'::text)
 Total runtime: 163.722 ms

EXPLAIN ANALYZE SELECT * FROM V_ONE WHERE VAL ~ '^.*DATA123.*$';

 Result  (cost=0.00..7922.00 rows=16 width=17) (actual time=0.828..2385.027 rows= loops=1)
   ->  Append  (cost=0.00..7922.00 rows=16 width=17) (actual time=0.823..2379.641 rows= loops=1)
         ->  Seq Scan on t_one  (cost=0.00..3961.00 rows=8 width=17) (actual time=0.819..1216.405 rows= loops=1)
               Filter: (val ~ '^.*DATA123.*$'::text)
         ->  Seq Scan on t_one  (cost=0.00..3961.00 rows=8 width=17) (actual time=0.666..1156.561 rows= loop
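[Editor's note: an aside not in the original message. The last two plans show that patterns with a leading wildcard ('%DATA123%', '^.*DATA123.*$') cannot use a plain btree index and fall back to a seq scan. For left-anchored patterns in a database with a non-C locale, an index built with the text_pattern_ops operator class lets LIKE and regex prefix matches use the index; a minimal sketch, reusing the names from the example above:]

-- Only helps when the pattern is anchored at the start (e.g. '^DATA123'
-- or LIKE 'DATA123%'); a leading '.*' or '%' still forces a seq scan.
CREATE INDEX i03 ON t_one (val text_pattern_ops);

EXPLAIN ANALYZE SELECT * FROM v_one WHERE val LIKE 'DATA123%';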
[SQL] Empty Table
Hi list,

I have many tables with many constraints and I would like to empty all my tables. Is there any simple way to do it?

I have pgAdmin here but not psql.

Any help would be appreciated.

Regards,
Ezequias

--
Ezequias Rodrigues da Rocha
Re: [SQL] Empty Table
psql> TRUNCATE TABLE <table_name>;

If you have too many tables, generate the above commands with a query against the information_schema tables view.

Hope it helps.

On 3/29/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I have many tables with many constraints and I would like to empty all my
> tables. Is there any simple way to do it?
>
> I have pgAdmin here but not psql.
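[Editor's note: a minimal sketch of the "generate the commands" idea above, assuming all user tables live outside the system schemas. Review the generated statements before running them, since TRUNCATE ... CASCADE also empties tables that reference the truncated ones. The query itself can be run from pgAdmin's query tool, and its output pasted back in and executed.]

SELECT 'TRUNCATE TABLE '
       || quote_ident(table_schema) || '.' || quote_ident(table_name)
       || ' CASCADE;'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');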
Re: [SQL] Empty Table
Thank you all for such fast replies.

I did a PLAIN backup with pgAdmin and had to delete the information about the current database (the database I used to make the backup) from it. I think that is why pgAdmin does not work with plain backups on Restore.

What happens to the information schema when I restore a backup taken from a database with one name into a database with another name?

Regards,
Ezequias

On Thu, 29 Mar 2007 23:46:31 +0530, "Rajesh Kumar Mallah" <[EMAIL PROTECTED]> wrote:
> psql> TRUNCATE TABLE <table_name>;
>
> If you have too many tables, generate the above commands with a query
> against the information_schema tables view.

--
Ezequias Rodrigues da Rocha
Re: [SQL] Empty Table
I don't think I clearly understand your requirement.

Are you wanting to restore the "PLAIN" backup of a database under a different database name?

What do you mean by "deleting information of the current database..."?

I have not used pgAdmin, as I prefer the command line mostly.

Eager to help (expect my next reply after about 6 hours, I am about to crash now); hope someone else helps you in the meantime.

Regards

On 3/30/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> I did a PLAIN backup with pgAdmin and had to delete the information about
> the current database (the database I used to make the backup) from it. I
> think that is why pgAdmin does not work with plain backups on Restore.
>
> What happens to the information schema when I restore a backup taken from
> a database with one name into a database with another name?
Re: [SQL] Empty Table
> I don't think I clearly understand your requirement.
>
> Are you wanting to restore the "PLAIN" backup of a database
> with a different database name?

Yes.

Are you getting any particular error? What platform are you on?

> What do you mean by "deleting information of current database..."?

Clear all lines where the name of my origin database is mentioned in the .sql file.
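[Editor's note: a side remark, not from the thread. A plain-format dump taken without pg_dump's -C/--create option contains no CREATE DATABASE or \connect commands, so it can be loaded into a database of any name without editing the file. A rough sketch; the database and file names below are made up:]

-- connect to a maintenance database (e.g. postgres) and create the target:
CREATE DATABASE newdbname;
-- then, connected to newdbname, load the plain dump, e.g. from psql:
--   \i mybackup.sql
-- A dump taken with -C will instead try to create and connect to the
-- original database, which is when its name starts to matter.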
Re: [SQL] Track query status
More methods:

1. select * from pg_stat_activity;
   See everything that is running, including the pid of your current query.

2. SELECT h.pid AS blocker, w.pid AS blockee
   FROM ONLY pg_locks h, ONLY pg_locks w
   WHERE h."granted"
     AND NOT w."granted"
     AND (h.relation = w.relation AND h."database" = w."database"
          OR h."transaction" = w."transaction");

   Make sure the pid from (1) is not listed as a blockee in the result of
   the above query, i.e. the update is not waiting on anything.

3. Go to the shell, su - postgres, and strace -p the backend pid obtained
   in (1). Observe the system calls and see if anything looks awkward;
   there should be a lot of writes.

4. iostat, of course - unless the above SQL is the only SQL (the update)
   running in the server.

my 4 cents
regds
mallah.

On 3/29/07, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
> On Wed, Mar 28, 2007 at 04:59:24PM -0400, Sumeet wrote:
> > Hi all,
> >
> > I have run an update query on a dataset which has about 48 million
> > records, and the query has already been running for the third day. I am
> > so tempted to kill this query now. Is there a way to know if the query
> > is still running? Here is the query I ran.
>
> 48 million records is a lot. You oughta see activity with iostat or
> something.
>
> A
>
> --
> Andrew Sullivan | [EMAIL PROTECTED]
> When my information changes, I alter my conclusions. What do you do, sir?
> --attr. John Maynard Keynes
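[Editor's note: not part of the reply above, but if the decision is to stop the update, cancelling it from SQL is safer than killing the process from the shell. pg_cancel_backend() is a built-in function; the pid below is a made-up placeholder for the one found via pg_stat_activity in (1).]

-- cancel the running statement of one backend; the update is rolled back
SELECT pg_cancel_backend(12345);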
Re: [SQL] Foreign Unique Constraint
> use a pre-insert trigger - one for each table. include something like

I ended up going this route and it seems to work. Thanks for the help from all. I figured I'd post the solution to the list so it shows up when googled. Also, if my solution can be simplified, I'd appreciate knowing how.

This would be the trigger for table1 in my example.

CREATE FUNCTION function_name() RETURNS trigger AS '
DECLARE
    result RECORD;
BEGIN
    SELECT INTO result * FROM table2 WHERE extension = NEW.extension;
    IF FOUND THEN
        RAISE EXCEPTION ''The extension % is already in use'', NEW.extension;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER function_name BEFORE INSERT OR UPDATE ON table1
    FOR EACH ROW EXECUTE PROCEDURE function_name();

Again, thanks for the help.

Jon.
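[Editor's note: for completeness, and not part of Jon's message, the mirror-image trigger on table2 would presumably look like the sketch below. The function and trigger names are made up; table1, table2 and extension follow the example above. Note that concurrent inserts into both tables can still slip past this check unless the transactions serialize or lock the rows involved.]

CREATE FUNCTION table2_extension_check() RETURNS trigger AS '
DECLARE
    result RECORD;
BEGIN
    -- reject the row if the extension already exists in the other table
    SELECT INTO result * FROM table1 WHERE extension = NEW.extension;
    IF FOUND THEN
        RAISE EXCEPTION ''The extension % is already in use'', NEW.extension;
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER table2_extension_check BEFORE INSERT OR UPDATE ON table2
    FOR EACH ROW EXECUTE PROCEDURE table2_extension_check();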
Re: [SQL] olympics ranking query
Several years ago someone posted a question about how to achieve a running total of columns in SQL. I have been trying to find a solution to a similar problem for some time and just came up with something that works great for me, so I thought I'd post it for the benefit of the list.

The problem is to produce a query like the following:

select date, amount, run_sum(amount) from ledger order by date;

Date         Amount   Sum
-----------  -------  ------
2007-Jan-01    10.00   10.00
2007-Jan-02    20.00   30.00
2007-Jan-05     5.00   35.00
2007-Jan-10    -3.00   32.00
. . .

Using pl/tcl, I made the following function:

#Keep a running sum on a column where tag and trans are used to keep the
#results distinct from other calls to the same function
#Usage: run_sum(value,tag,trans)
#---
function run_sum(numeric,text,text) {} {
    returns numeric language pltcl called on null input as $$
        if {![info exists GD(sum.$2.$3)]} {
            set GD(sum.$2.$3) 0.00
        }
        if {[argisnull 1]} {
            return $GD(sum.$2.$3)
        } else {
            return [set GD(sum.$2.$3) [expr $GD(sum.$2.$3) + $1]]
        }
    $$;
}

Then I added a wrapper function to automatically produce a separate namespace for each statement instance that uses the query:

#Keep a running sum on a column where tag is used to keep the results distinct
#Automatically keeps results distinct for each separate statement
#Usage: run_sum(value,tag)
#---
function run_sum(numeric,text) {run_sum(numeric,text,text)} {
    returns numeric language sql as $$
        select run_sum($1,$2,statement_timestamp()::text);
    $$;
}

Now you can do:

select date, amount, run_sum(amount,'amount') from ledger;

to get an initial result. The only problem is now ordering the data. If you just put an 'order by' clause on the end, you don't get what you might expect, because the ordering happens after the function has produced its result. So I do the following to order and sum it correctly:

select date, amount, run_sum(amount,'amount')
from (select * from ledger order by date) ss;

The use of the "tag" parameter allows you to use this on multiple columns, such as:

select date, debit, credit,
       run_sum(debit,'debit')::numeric(14,2) as debits,
       run_sum(credit,'credit')::numeric(14,2) as credits
from (select * from ledger order by date) ss;

Enjoy,
Kyle
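[Editor's note: an aside not in the original post. PostgreSQL 8.4 and later can express the same running total with a built-in window function and no server-side state, which also makes the ordering explicit; the ledger table and columns are those from the example above.]

select date,
       amount,
       sum(amount) over (order by date) as run_sum
from ledger
order by date;

-- and the debit/credit variant:
select date, debit, credit,
       sum(debit)  over (order by date)::numeric(14,2) as debits,
       sum(credit) over (order by date)::numeric(14,2) as credits
from ledger
order by date;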