[GENERAL] changing language of system messages
since the update from 8.3.0 to 8.3.1, postgresql system error messages as well as all logfile entries appear in german. i've already tried to change lc_messages in postgresql.conf, without luck. the db's LC_COLLATE is 'German_Switzerland', and i don't want to change it. i'd just like to have the system/error messages in english... it's a windows 2003 system (english os with German_Switzerland locale). thanks, thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
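for reference, a sketch of the setting being discussed: the parameter is lc_messages (plural), it lives in postgresql.conf, and it is separate from LC_COLLATE (which is fixed at initdb time anyway). on windows the locale names differ from unix; 'C' usually falls back to the built-in english messages, assuming the build shipped with english message catalogs:

```
# postgresql.conf -- controls message language only; collation is untouched
lc_messages = 'C'            # or e.g. 'English_United States' on windows
```

a reload (pg_ctl reload, or restarting the service) is needed for the change to take effect.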
Re: [GENERAL] Killing a session in windows
wasn't the OP asking for a way to kill active connections to a db? afaik pgAdmin3 does provide this functionality: pgAdmin3 > Tools > Server Status. there you can easily terminate connections/transactions to a particular db. works pretty well in my test: i can kill active connections and drop the db afterwards. regards, thomas

> The pgadmin "terminate connections" never did anything on any of my windows
> servers. I always assumed it was something geared towards *nix servers that
> never quite worked on windows???

i've been using pgAdmin3 on windows 2003 and win32 pgsql 8.x for quite a while now, and the terminate feature has worked just fine a few times over the last years. - thomas

---(end of broadcast)---
TIP 6: explain analyze is your friend
Re: [GENERAL] Killing a session in windows
On Tue, Dec 11, 2007 at 05:50:46PM -0500, Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> Howard Cole wrote:
>>> I take it from the lack of response that nobody knows how to kill a
>>> connection from the postgresql side on windows?
>> You can't, short of sending a signal to the process or restarting the
>> service.
> Which you can do, no? I thought pg_ctl's kill option was invented
> specifically to make this less painful on Windows.

It does, and it should work. But it's just as dangerous as using kill directly on the backends on Unix, of course.

wasn't the OP asking for a way to kill active connections to a db? afaik pgAdmin3 does provide this functionality: pgAdmin3 > Tools > Server Status. there you can easily terminate connections/transactions to a particular db. works pretty well in my test: i can kill active connections and drop the db afterwards. regards, thomas
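for completeness, a sketch of the pg_ctl kill route being discussed (the database name 'mydb' is made up; the column names are from the 8.2/8.3-era pg_stat_activity):

```sql
-- find the pid of the backend holding the connection you want to kill
SELECT procpid, datname, usename, current_query
FROM pg_stat_activity
WHERE datname = 'mydb';

-- then, from a windows command prompt (not SQL):
--   pg_ctl kill TERM <procpid>
```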
[GENERAL] subselect field problem
hi list

i was writing the query below, containing a subquery. by mistake, i referenced a field from the main table in the subquery, leading to a very strange (but working??) result. the planner announced an insanely high startup cost, but the query itself finished pretty quickly. nevertheless, shouldn't pgsql warn the user if he's referencing a non-existing field in a subquery? the field referenced in the subquery's WHERE clause doesn't exist in the subquery's table, so i don't even understand why that doesn't throw an error and how the result should be interpreted:

SELECT * FROM titles
WHERE tit_id IN (
    SELECT DISTINCT nam_tit_id FROM names
    WHERE lower(tit_name) LIKE '%best%'
)

the field tit_name is in titles. the field i intended to use was nam_name from the table names...

regards, thomas
Re: [GENERAL] subselect field problem
> i was writing the query below containing a subquery. by mistake, i
> referenced a field from the main table in the subquery, leading to a very
> strange (but working??) result. the planner announced an insanely high
> startup cost, but the query itself finished pretty quickly.

Pick up any SQL book and read up on correlated subqueries. Have a nice day,

thanks! i didn't know this term. the result is pretty obvious now, and so is the high cost prediction. regards, thomas
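to spell out what the correlated version actually does (table and column names taken from the original query):

```sql
-- accidental correlated subquery: tit_name belongs to the OUTER table
-- titles, so for every outer row where lower(tit_name) matches, the
-- subquery returns ALL nam_tit_id values from names -- which is why it
-- "works" and why the planner predicted such a huge cost
SELECT * FROM titles
WHERE tit_id IN (
    SELECT DISTINCT nam_tit_id FROM names
    WHERE lower(tit_name) LIKE '%best%'   -- refers to titles.tit_name!
);

-- the intended, uncorrelated version filters on the subquery's own column
SELECT * FROM titles
WHERE tit_id IN (
    SELECT DISTINCT nam_tit_id FROM names
    WHERE lower(nam_name) LIKE '%best%'
);
```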
Re: [GENERAL] Let's play bash the search engine
> http://search.postgresql.org/search?q=HAVING says "An error occured while
> searching."

I bet HAVING is a stop-word, so the actual message is 'NOTICE: query contains only stopword(s) or doesn't contain lexeme(s), ignored'. I think we should add to the pg_dict dictionary the line: having having

just a thought... wouldn't it make sense for a documentation search index to *not* have stop words at all? potentially every word that is being searched for could be contained in a query example, code piece etc. and thus seems important to me... for example keywords like AND, OR etc. - thomas
Re: [GENERAL] Let's play bash the search engine
> I think it would be good to make it more prominent. Maybe have all the
> search forms integrated on a single page and put a link to it in the top
> menu, next to Support.

well, why not add a dropdown (or even better, a multi-select input) on the search page where users can choose what to search in:

All (default)
Documentation
 \_ Most Recent
 \_ 8.2
 \_ 8.1
 \_ ...
Mailing Lists
 \_ Beginners
 \_ General
 \_ Hackers
 \_ ODBC
 \_ ...

and so on... or one could provide options to narrow search results by specifying parameters to the search, for example "select query url:documentation/8.2" would return only results whose paths contain the provided url parameter and whose pages contain the words "select" and "query"... - thomas
Re: [GENERAL] Let's play bash the search engine
> Take a look at let us know what you think and how it performs for you.

i would love an advanced search where you can limit the results to a particular version of the documentation. the query for SELECT returns too many results from too many versions, obviously. it's quick tho :-) regards, thomas
Re: [GENERAL] Vote for your favorite database
> Are there more firebird users in hungary (.hr right?) than postgresql
> maybe? I wonder.

Well, it looks like they have a following in Europe in general:

> Dateline Prague 12-NOV-2006 18:00 GMT
> The Firebird Project today officially released the much-anticipated
> version 2.0 of its open source Firebird relational database software
> during the opening session of the fourth international Firebird Conference
> in Prague, Czech Republic.

one more reason to have a pgsql 8.2 release party over here in europe as well :-) - thomas
Re: [GENERAL] PostgreSQL doesn't accept connections when Windows
> How might I determine this?

netstat -a -n

this will list all ports that are currently being listened on. you could also try "telnet localhost 5432" to see if something is listening.

> Would this make a difference when the system is rebooted vs manually
> restarting the server?

when another program is occupying the port, it would probably not make much difference, except if the pgsql service is started before the other tool. maybe pgsql is already running? ;-) cheers, thomas
Re: [GENERAL] PostgreSQL doesn't accept connections when Windows
> If I understand what you're suggesting, perhaps upon reboot PostgreSQL is
> starting before another app that listens on the same port, which could
> cause the problem, but if I then restart the PostgreSQL service this
> solves the problem?

if you can start it manually, then it's probably not a problem with another tool using the same port.

a) do you have a software firewall installed on that machine that might be interfering here?

b) is your windows in a domain? if so, make sure you manually set the "log on as a service" privilege on the domain controller for the local postgres user. the installer only sets local privileges, which are overwritten upon reboot. you find the setting here:

start > run > mmc > file > add snap-in > add > group policy editor > browse > default domain policy

then browse to: computer configuration > windows settings > security settings > local policies > user rights > "log on as a service". add the user there.

regards, - thomas
Re: [GENERAL] DB crashed
post some more information, i.e. version, latest entries in the logfile before the crash, etc. - thomas

- Original Message - From: rbaisak To: pgsql-general@postgresql.org Sent: Monday, November 27, 2006 8:26 PM Subject: [GENERAL] DB crashed

> I have been using postgres for a long time. Recently the number of users
> in my portal has increased drastically, and because of that the load on
> the database server has increased. Suddenly the postgres database crashed
> and I needed to restart it. I am not able to find the root cause of this
> problem. I need help getting any clue regarding what query caused the DB
> to crash. Any utility/program that can help me debug the DB will be well
> appreciated. - R
[GENERAL] have pg_restore create a new database?
i'm trying to have pg_restore create a new db and load the data. unfortunately, pg_restore always uses the database name specified in the dump. --create and --dbname=newdb do not seem to have any effect (pg_restore complains about the database being non-existent). what is the proper method to restore data from a database previously named olddb into a fresh database named newdb? the old database still exists and i don't want to overwrite its data. regards, thomas
Re: [GENERAL] have pg_restore create a new database?
Thomas H. [EMAIL PROTECTED] writes:
> what is the proper method to restore data from a database previously named
> olddb into a fresh database named newdb? the old database still exists and
> i don't want to overwrite the data.

Create newdb by hand and then specify it as the connection target for pg_restore. *Don't* use --create, as that tells pg_restore to create and (re)connect to the database named in the dump.

i see. but then failures will pop up: when creating a new db, it already contains the system languages, functions and tables (for example tsearch2). as the dump contains these as well, i'm getting lots of errors and, more importantly, the tsearch2 custom config won't be restored. how can i create an *empty* database? - thomas
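for the record, what usually solves this is cloning template0 instead of the default template1 - template0 carries no site-local additions such as contrib modules installed into template1 (the names newdb and olddb.dump are from this thread / made up):

```sql
-- template1 may already contain tsearch2 etc.; template0 has no
-- site-local additions, so the restore won't collide with anything
CREATE DATABASE newdb TEMPLATE template0;

-- then, from the command line (not SQL):
--   pg_restore -d newdb olddb.dump
```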
Re: [GENERAL] Postgres scalability and performance on windows
i'm seeing the opposite here on our win32 pgsql instance. while mssql server used ~50% cpu constantly (AND consumed lots of memory), pgsql runs at a low 1-5% and gives better performance even under relatively high load.

is the high cpu load coming from one particular postgres.exe process, or is it distributed among all the processes?

try raising work_mem. we have set it to 30MB. - thomas

- Original Message - From: Gopal To: pgsql-general@postgresql.org Sent: Thursday, November 23, 2006 11:31 PM Subject: [GENERAL] Postgres scalability and performance on windows

> Hi all,
> I have a postgres installation that's running under 70-80% CPU usage while
> an MSSQL7 installation did 'roughly' the same thing with 1-2% CPU load.
> Here's the scenario: 300 queries/second
> Server: Postgres 8.1.4 on win2k server
> CPU: Dual Xeon 3.6 GHz, Memory: 4GB RAM
> Disks: 3 x 36GB, 15K RPM SCSI
> C# based web application calling postgres functions using npgsql 0.7. It's
> an almost completely read-only db apart from fortnightly updates.
> Table 1 - About 300,000 rows with simple rectangles
> Table 2 - 1 million rows
> Total size: 300MB
> Functions: Simple coordinate reprojection and intersection query + inner
> join of table1 and table2. I think I have all the right indexes defined
> and indeed the performance for queries under low loads is fast.
>
> == postgresql.conf has the following settings ==
> max_connections = 150
> shared_buffers = 2               # min 16 or max_connections*2, 8KB each
> temp_buffers = 2000              # min 100, 8KB each
> max_prepared_transactions = 25   # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> work_mem = 512                   # min 64, size in KB
> #maintenance_work_mem = 16384    # min 1024, size in KB
> max_stack_depth = 2048
> effective_cache_size = 82728     # typically 8KB each
> random_page_cost = 4             # units are one sequential page fetch
> ==
>
> SQL server caches all the data in memory, which makes it faster (it uses
> about 1.2GB of memory, which is fine). But postgres has everything spread
> across 10-15 processes, with each process using about 10-30MB, not nearly
> enough to cache all the data, and ends up doing a lot of disk reads. I've
> read that postgres depends on the OS to cache the files; I wonder if this
> is not happening on windows. In any case I cannot believe that having
> 15-20 processes running on windows helps. Why not spawn threads instead of
> processes, which might be far less expensive and more efficient. Is there
> any way of doing this?
> My question is, should I just accept the performance I am getting as the
> limit on windows, or should I be looking at some other params that I might
> have missed?
> Thanks, Gopal
Re: [GENERAL] MSSQL to PostgreSQL : Encoding problem
>>>> Or go via MS-Access/Perl and ODBC/DBI perhaps?
>>> Yes, I think it would work. The problem is that the DB is too big for
>>> this kind of export. Using DTS from MSSQL to export directly to
>>> PostgreSQL using the psqlODBC Unicode Driver, I exported ~1000 rows per
>>> second in a 2-column table with ~20M rows. That means several days just
>>> for this table, and I have bigger ones!
>> Well it's about 0.25 days, but if it's too long, it's too long.
> Sure, sorry for the confusion, the problem is with the other tables (same
> number of rows but a lot of columns, some very large).

well, if it's too slow, then you will have to dump the db to a textfile (DTS does this for you) and then convert the textfile to utf8 manually before importing it into pgsql. iconv for win32 will help you there. i found tho that it removes some wanted special characters, so watch out. a less scientific approach would be using a unicode-aware text editor to convert it (ultraedit does this pretty nicely, for example). i've had good results with it. loading several million rows will always take some time, tho. - thomas
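a sketch of the iconv step; the source encoding is an assumption (a DTS unicode text export is typically UTF-16 little-endian), and the filenames are made up:

```shell
# create a small UTF-16LE sample standing in for the DTS export
printf 'Zürich\n' | iconv -f UTF-8 -t UTF-16LE > export.txt
# convert it to UTF-8 so postgres can load it
iconv -f UTF-16LE -t UTF-8 export.txt > export_utf8.txt
cat export_utf8.txt
```

check the result in a text editor before loading it, since (as noted above) some special characters may not survive the round trip.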
[GENERAL] tsearch trigger: function public.tsearch2(tsvector, text) does not exist
hi list

i'm trying to write a custom tsearch2 trigger that checks on update whether the column value has changed. here's what i did so far:

CREATE OR REPLACE FUNCTION forum.tsearch2_trigger_posts () RETURNS trigger AS
$body$
BEGIN
    IF (TG_OP = 'UPDATE') THEN
        IF (NEW.p_msg_clean != OLD.p_msg_clean) THEN
            EXECUTE public.tsearch2(NEW.idxfti, NEW.p_msg_clean);
        END IF;
        RETURN NEW;
    ELSIF (TG_OP = 'INSERT') THEN
        EXECUTE public.tsearch2(NEW.idxfti, NEW.p_msg_clean);
        RETURN NEW;
    END IF;
    RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

CREATE TRIGGER posts_ts_update BEFORE INSERT OR UPDATE ON forum.posts
FOR EACH ROW EXECUTE PROCEDURE forum.tsearch2_trigger_posts(idxfti, p_msg_clean);

unfortunately, the EXECUTE public.tsearch2 part does not work:

Error while executing the query;
ERROR: function public.tsearch2(tsvector, text) does not exist at character 9
HINT: No function matches the given name and argument types. You may need to add explicit type casts.
QUERY: SELECT public.tsearch2( $1 , $2 )
CONTEXT: PL/pgSQL function "tsearch2_trigger_posts" line 4 at execute statement

UPDATE forum.posts SET p_msg_clean = 'test' WHERE p_t_id = 4468

when using the public.tsearch2 function directly as a trigger, it works fine:

CREATE TRIGGER posts_ts_update BEFORE INSERT OR UPDATE ON forum.posts
FOR EACH ROW EXECUTE PROCEDURE public.tsearch2(idxfti, p_msg_clean);

when i try to use EXECUTE PROCEDURE instead of EXECUTE within the function, it won't compile:

ERROR: syntax error at or near "tsearch2" at character 19
QUERY: SELECT PROCEDURE public.tsearch2( $1 , $2 )
CONTEXT: SQL statement in PL/PgSQL function "tsearch2_trigger_news" near line 4

what am i missing? thanks, thomas
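the core problem, as far as i can tell: public.tsearch2() is a trigger function, so it can only be fired via CREATE TRIGGER and never called from SQL or EXECUTE'd - hence the "does not exist" error for a (tsvector, text) signature. a sketch of a workaround, assuming the tsearch2 contrib's to_tsvector() is installed (the 'default' configuration name is an assumption):

```sql
CREATE OR REPLACE FUNCTION forum.tsearch2_trigger_posts() RETURNS trigger AS
$body$
BEGIN
    -- recompute the tsvector ourselves instead of trying to call the
    -- tsearch2() trigger function
    IF TG_OP = 'INSERT' THEN
        NEW.idxfti := to_tsvector('default', NEW.p_msg_clean);
    ELSIF NEW.p_msg_clean IS DISTINCT FROM OLD.p_msg_clean THEN
        NEW.idxfti := to_tsvector('default', NEW.p_msg_clean);
    END IF;
    RETURN NEW;  -- BEFORE trigger: the modified row is what gets stored
END;
$body$ LANGUAGE plpgsql;

-- the trigger itself takes no arguments now
CREATE TRIGGER posts_ts_update BEFORE INSERT OR UPDATE ON forum.posts
FOR EACH ROW EXECUTE PROCEDURE forum.tsearch2_trigger_posts();
```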
Re: [GENERAL] Cannot rename file pg_xlog\0000.....
hi mike

you seem to have run into the same (or a similar) bug as i did with 8.2 (also w2k3 here).

> I have noticed in the event logs that I am again seeing errors about the
> renaming:
> LOG: could not rename file pg_xlog/000100690021 to
> pg_xlog/000100690030, continuing to try

there is some discussion of it in [BUGS], and there is a patch by tom / magnus for the win32 code that fixed the problem for me, but it is not yet in the 8.2beta3 win32 executable. - thomas
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index) when adding an additional join
> 1. You MUST sequence scan dvds, as there is no way to do an index search
> on a LIKE with % at the beginning.
> 2. You are asking for a left join on dvds, which means you want all
> records, so you must sequence scan dvds. The filters are all OR, so you
> can't say that a record is excluded until AFTER the join is completed.

true, but that's fast (200ms).

> 3. The join estimates that 39900 rows will come out, but only 2 do. that's
> out by a large factor, 10^4, which means that it's not going to join
> movies after the filter is applied.

20866 is the total number of rows in the dvd table. the planner is now showing the accurate rate after another (auto)vacuum run. of course it can't know the estimate of a '%...' comparison, so estimating the full result set is ok.

now, an additional table (containing 600k records) is added through a left join. all of a sudden the query takes 24sec. although there are indices on both dvds.dvd_ean and data_soundmedia.sm_info_ean, the planner does not make use of the indices but rather chooses to do 2 seq-scans.

> The 2 items from the first query are still true. You just have a lot more
> records to play with now by joining in a 3rd table. Which means there is
> no way to reduce the possible output set before the join.

well, under normal circumstances, the output set would be determined by index lookups (see bottom)

> > - Sort (cost=286162.37..287781.38 rows=647601 width=660) (actual
> > time=19336.011..20328.247 rows=646633 loops=1)
> This sort here is where nearly all of the 24 seconds goes. I am assuming
> at this point that the Merge Join is chosen because of the stats problem
> listed next;

exactly. but that sort shouldn't happen, as there is an index on the join field, and that is usually pretty fast (~400ms), but not here...

> > Merge Left Join (cost=309190.05..313899.09 rows=159086 width=1722)
> > (actual time=19876.552..21902.007 rows=20866 loops=1)
> This estimate is out by a factor of 10.
> I'd increase the statistics on the join columns to see if it decides to
> use a different join method. If the estimate for the Merge join is fixed
> to be closer, then it's likely an index scan would be chosen. 159000 is
> about 25% of the table. Assuming it's small on disk, then it's faster to
> do a seq_scan than all the random I/O to read the index, then the heap, to
> produce results.

the discs are not the fastest, but there is plenty of free ram available. seq_page_cost is in its default state (1.0). should i raise this?

> How big is data_soundmedia? MB size, not rows. How much is your
> sort_mem/work_mem?

data_soundmedia is 195mb + 105mb for indices (6 fields indexed). work_mem = 30MB, sort_mem is undefined in 8.2's postgresql.conf.

what troubles me is that it's only slow with this table (data_soundmedia). we have other tables (400k and 200k entries) that are joined with the same query in under 400ms total. these tables have the exact same structure and indices defined, but in these joins the planner properly uses an Index Scan...
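a sketch of the statistics suggestion above, using the column names from this thread (the target value 100 is arbitrary; the 8.2-era default is 10):

```sql
-- larger per-column statistics samples give the planner a better
-- estimate for the merge/hash join on the EAN columns
ALTER TABLE shop.data_soundmedia ALTER COLUMN sm_info_ean SET STATISTICS 100;
ALTER TABLE shop.dvds ALTER COLUMN dvd_ean SET STATISTICS 100;
ANALYZE shop.data_soundmedia;
ANALYZE shop.dvds;
```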
if i force enable_seqscan = off, the planner makes use of the index, resulting in acceptable query speed:

Nested Loop Left Join  (cost=8402.16..257761.36 rows=83223 width=1067) (actual time=361.931..713.405 rows=2 loops=1)
  ->  Hash Left Join  (cost=8402.16..11292.37 rows=20873 width=407) (actual time=322.085..666.519 rows=2 loops=1)
        Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
        Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR (lower((dvds.dvd_name)::text) ~~ '%superman re%'::text) OR (lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
        ->  Seq Scan on dvds  (cost=0.00..804.73 rows=20873 width=193) (actual time=11.781..329.672 rows=20866 loops=1)
        ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual time=200.823..200.823 rows=37418 loops=1)
              ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 width=214) (actual time=0.070..155.178 rows=37418 loops=1)
  ->  Index Scan using data_soundmedia_info_ean_idx on data_soundmedia  (cost=0.00..11.76 rows=4 width=660) (actual time=23.424..23.428 rows=1 loops=2)
        Index Cond: ((data_soundmedia.sm_info_ean)::text = (dvds.dvd_ean)::text)
Total runtime: 716.988 ms
2 rows fetched (821 ms)

could it be that the index somehow gets corrupted? but on the other hand, if i do a TRUNCATE before loading new data, it should be rebuilt anyway, shouldn't it?

thanks, thomas
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)
> Try putting your conditions as part of the join:
> SELECT * FROM shop.dvds
> LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id AND (
>     lower(mov_name) LIKE '%superman re%'
>     OR lower(dvd_name) LIKE '%superman re%'
>     OR lower(dvd_edition) LIKE '%superman re%' )
> LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean

unfortunately it's getting optimized into the same plan :-)

> I'd also be tempted to look at a tsearch2 setup for the word searches.

tsearch2 doesn't work that well for exact matches (including special chars). but the culprit here isn't the '%...%' seqscan, but rather the additional joined table (where no lookup except for the join column takes place) that makes the query go from 200ms to 24sec.

regards, thomas
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)
> OK - in that case try explicit subqueries:
> SELECT ... FROM (SELECT * FROM shop.dvds LEFT JOIN shop.oldtables.movies
> WHERE lower(mov_name) LIKE ... ) AS bar LEFT JOIN shop.data_soundmedia

same result, have tried this as well (22sec). it's the LEFT JOIN shop.data_soundmedia for which the planner picks a seqscan instead of an index scan, no matter what...

> > > I'd also be tempted to look at a tsearch2 setup for the word searches.
> > tsearch2 doesn't work that well for exact matches (including special
> > chars). but the culprit here isn't the '%...%' seqscan, but rather the
> > additional joined table (where no lookup except for the join column
> > takes place) that makes the query go from 200ms to 24sec.
> Agreed, but I'd still be inclined to let tsearch do a first filter then
> limit the results with LIKE.

would be a way to probably speed up the seqscan on shop.dvds that now takes 200ms. unfortunately, tsearch2 is broken for me in 8.2 (filling tsearch2 tsvector columns crashes the backend). but that's a different story :-) - thomas
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index) when adding an additional join
> > SELECT * FROM shop.dvds
> > LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
> > LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
> > WHERE (lower(mov_name) LIKE '%superman re%'
> >     OR lower(dvd_name) LIKE '%superman re%'
> >     OR lower(dvd_edition) LIKE '%superman re%')
> Um, what's the datatype of sm_info_ean and dvd_ean exactly?

varchar(15) and varchar(14). i can make them the same width if that could help - i just saw that the same field on the other tables is limited to 14...

regards, thomas
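a sketch of aligning the two columns, in case the width difference matters (it may well not - both sides are varchar, so the join compares text to text - but it rules the difference out):

```sql
-- narrow sm_info_ean from varchar(15) to varchar(14); note this fails
-- if any existing value is longer than 14 characters
ALTER TABLE shop.data_soundmedia ALTER COLUMN sm_info_ean TYPE varchar(14);
ANALYZE shop.data_soundmedia;
```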
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index) when adding an additional join
I wrote:
> This didn't matter a whole lot back when the planner couldn't reorder
> outer joins, but now that it can, it's more important that the estimates
> be accurate. I'm not sure if this is feasible to fix before 8.2, but I'll
> take a look.

Actually, the changes were far more localized than I first feared. Please apply the attached patch to your copy and see what you get for your problem query.

thanks for the quick patch. unfortunately i'm stuck with the win32 version for now and haven't got the possibility to compile from source (yet)... maybe magnus can provide me a custom win32 executable of b3 that contains this patch [and the one for the xlog lockup]? best wishes, thomas
[GENERAL] planner picks a bad plan (seq-scan instead of index) when adding an additional join
hi list.

as soon as i left-join an additional table, the query takes 24sec instead of 0.2sec, although the added fields have no impact on the result set:

SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
WHERE (lower(mov_name) LIKE '%superman re%'
    OR lower(dvd_name) LIKE '%superman re%'
    OR lower(dvd_edition) LIKE '%superman re%')

Hash Left Join  (cost=8402.16..10733.16 rows=39900 width=1276) (actual time=260.712..260.722 rows=2 loops=1)
  Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
  Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR (lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
  ->  Seq Scan on dvds  (cost=0.00..1292.00 rows=39900 width=1062) (actual time=0.036..23.594 rows=20866 loops=1)
  ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual time=168.121..168.121 rows=37417 loops=1)
        ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 width=214) (actual time=0.024..131.401 rows=37417 loops=1)
Total runtime: 264.193 ms
2 rows fetched

now, an additional table (containing 600k records) is added through a left join. all of a sudden the query takes 24sec. although there are indices on both dvds.dvd_ean and data_soundmedia.sm_info_ean, the planner does not make use of the indices but rather chooses to do 2 seq-scans.
SELECT * FROM shop.dvds
LEFT JOIN oldtables.movies ON mov_id = dvd_mov_id
LEFT JOIN shop.data_soundmedia ON sm_info_ean = dvd_ean
WHERE (lower(mov_name) LIKE '%superman re%'
    OR lower(dvd_name) LIKE '%superman re%'
    OR lower(dvd_edition) LIKE '%superman re%')

Hash Left Join  (cost=317592.21..326882.92 rows=159086 width=1936) (actual time=21021.023..22242.253 rows=2 loops=1)
  Hash Cond: (dvds.dvd_mov_id = movies.mov_id)
  Filter: ((lower((movies.mov_name)::text) ~~ '%superman re%'::text) OR (lower((dvds.dvd_name)::text) ~~ '%superman re%'::text) OR (lower((dvds.dvd_edition)::text) ~~ '%superman re%'::text))
  ->  Merge Left Join  (cost=309190.05..313899.09 rows=159086 width=1722) (actual time=19876.552..21902.007 rows=20866 loops=1)
        Merge Cond: ("outer"."?column20?" = "inner"."?column29?")
        ->  Sort  (cost=23027.68..23127.43 rows=39900 width=1062) (actual time=507.886..520.143 rows=20866 loops=1)
              Sort Key: (dvds.dvd_ean)::text
              ->  Seq Scan on dvds  (cost=0.00..1292.00 rows=39900 width=1062) (actual time=0.047..100.415 rows=20866 loops=1)
        ->  Sort  (cost=286162.37..287781.38 rows=647601 width=660) (actual time=19336.011..20328.247 rows=646633 loops=1)
              Sort Key: (data_soundmedia.sm_info_ean)::text
              ->  Seq Scan on data_soundmedia  (cost=0.00..31080.01 rows=647601 width=660) (actual time=0.074..2834.831 rows=647601 loops=1)
  ->  Hash  (cost=8194.93..8194.93 rows=82893 width=214) (actual time=177.033..177.033 rows=37417 loops=1)
        ->  Seq Scan on movies  (cost=0.00..8194.93 rows=82893 width=214) (actual time=0.118..129.716 rows=37417 loops=1)
Total runtime: 24419.939 ms
2 rows fetched

shouldn't the planner join the additional table *after* filtering? even if it does first joining then filtering, why isn't the existing index used? pgsql is 8.2beta2. thanks, thomas
Re: [GENERAL] planner picks a bad plan (seq-scan instead of index)
> Have you run analyze on all three tables since creating the database?

yes. even a forced ANALYZE FULL after the table loads: the tables were TRUNCATE'd, refilled and ANALYZE FULL'ed some minutes before the tests. there were no UPDATEs after the INSERTs... - thomas
Re: [GENERAL] Can PostgreSQL reside on the same server as MSDE?
[EMAIL PROTECTED] wrote:
> > Any issues with running PostgreSQL on a Windows 2003 server that already
> > has Microsoft SQL Server Desktop Engine? What about with MS SQL Server?
> No issues on Windows XP; 2003 Server should be similar.

no issues on 2003 server either. just make sure you have enough ram for both database systems... - thomas
Re: [GENERAL] missing FROM-clause entry
you didn't reference the table replica... this should work:

UPDATE model_timemap
SET map = updatemap(
    CASE WHEN replica = 32188 THEN 328787
         WHEN replica = 32191 THEN 5739
         ELSE -1
    END, 1161642129, map)
FROM replica
WHERE replica.replica_id = model_timemap.replica
  AND replica.proxy = 32189

- Original Message - From: Brandon Metcalf [EMAIL PROTECTED] To: pgsql-general@postgresql.org Sent: Tuesday, October 24, 2006 12:35 AM Subject: [GENERAL] missing FROM-clause entry

> Just upgraded to 8.1.5 and the following UPDATE causes the "missing
> FROM-clause entry" error:
>
> UPDATE model_timemap
> SET map = updatemap(
>     CASE WHEN replica = 32188 THEN 328787
>          WHEN replica = 32191 THEN 5739
>          ELSE -1
>     END, 1161642129, map)
> WHERE replica.replica_id = model_timemap.replica
>   AND replica.proxy = 32189
>
> From what I've read, this typically results from referring to a table
> instead of its alias, but I don't see how that applies here.
>
> -- Brandon