Re: [PERFORM] Are bitmap index scans slow to start?

2013-03-08 Thread Carlo Stonebanks
ays, thanks. From: Jeff Janes [mailto:jeff.ja...@gmail.com] Sent: March 5, 2013 4:21 PM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] Are bitmap index scans slow to start? On Wed, Feb 27, 2013 at 1:38 PM, Carlo Stonebanks wrote: >> Is the original

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-28 Thread Carlo Stonebanks
<> Technically, yes. That would really help, but the issue is scheduling. Although the logs are closed off for writes, they aren't closed off for reads, ref PG documentation: "When a table is being clustered, an ACCESS EXCLUSIVE lock is acquired on it. This prevents any other database operatio

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-27 Thread Carlo Stonebanks
<< I had thought you were saying that any one ETL procedure into one database used 14 concurrent threads. But really, each ETL procedure is single-threaded, and there can be up to 5 (or theoretically up to 14) of them running at a time into different databases? >> Sorry, just caught this.

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-27 Thread Carlo Stonebanks
<< pgbouncer is more for making connections line up single-file when the line is moving at a very fast clip, say 0.01 second per turn. If I were trying to make tasks that can each last for hours or days line up and take turns, I don't think pgbouncer would be the way to go. >> The recommendati

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-26 Thread Carlo Stonebanks
<> All writes are single row. All DB's have exactly the same structure, only the content is different. Currently the server is hosting five active DB's - although there 14 DB's actually on the host, the balance are backups and or testing environments. When a feed comes in, it can be anythi

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-25 Thread Carlo Stonebanks
Hi Jeff, thanks for the insight. << And then the next question would be, once they are in the cache, why don't they stay there? For that you would have to know what other types of activities are going on that might be driving the data out of the cache. >> To give you an idea of the activ

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-22 Thread Carlo Stonebanks
ke 36 seconds to set up the general index caches? 2) What can I do about it (what stats do I need to look at)? 3) How can I force these caches to expire so I can tell if the strategy worked? From: Nikolas Everett [mailto:nik9...@gmail.com] Sent: February 22, 2013 2:05 PM To

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-22 Thread Carlo Stonebanks
very month). There are 89 session_id values in the January log (log_2013_01) so this would quickly get out of control. But - like I said - an interesting idea for more specific challenges. From: Marc Mamin [mailto:m.ma...@intershop.de] Sent: February 21, 2013 2:41 PM To: Jeff Janes; Carlo Sto

Re: [PERFORM] Are bitmap index scans slow to start?

2013-02-22 Thread Carlo Stonebanks
Hi Jeff, thanks for the reply. << What is going on during the interregnum? Whatever it is, it seems to be driving the log_2013_01_session_idx index out of the cache, but not the log_2013_01 table. (Or perhaps the table visit is getting the benefit of effective_io_concurrency?) . Rebuilding

[PERFORM] Are bitmap index scans slow to start?

2013-02-21 Thread Carlo Stonebanks
(Sorry moderators for any double posts, I keep making subscription errors. Hopefully this one gets through) Hi speed freaks, Can anyone tell me why the bitmap heap scan takes so long to start for this query? (SQL and EXPLAIN ANALYZE follows). The big culprit in this appears to be: -> Bitmap In

Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-30 Thread Carlo Stonebanks
Pavel, thank you very much for your explanation. Is it possible to define under what conditions that sql procs will outperform plpgsql ones, and vice-versa? -Original Message- From: Pavel Stehule [mailto:pavel.steh...@gmail.com] Sent: January 30, 2012 2:57 AM To: Carlo Stonebanks Cc

Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-30 Thread Carlo Stonebanks
urn result; end; $BODY$ LANGUAGE plpgsql IMMUTABLE COST 100; -Original Message- From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Pavel Stehule Sent: January 28, 2012 1:38 AM To: Carlo Stonebanks Cc: Merlin Moncure; pgsql-perfo

Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-29 Thread Carlo Stonebanks
THEN true ELSE false END ) AS result? 2) Does that not bypass the benefits of IMMUTABLE? -Original Message- From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Pavel Stehule Sent: January 28, 2012 1:38 AM To: Carlo Stonebanks C

Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Carlo Stonebanks
Was I even right in thinking I would gain any performance by converting to SQL? -Original Message- From: Deron [mailto:fecas...@gmail.com] Sent: January 27, 2012 2:29 PM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] pl/pgsql functions outperforming sql

Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Carlo Stonebanks
e SQL is emulating the straight non-set-oriented procedural logic of the original plpgsql. -Original Message- From: Merlin Moncure [mailto:mmonc...@gmail.com] Sent: January 27, 2012 10:47 AM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] pl/pgsql func

[PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-26 Thread Carlo Stonebanks
Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of stored functions in straight SQL. Each stored proc was calling the next, so to get the full effect I had to track down all the pl/pgsql stored functions and convert them to sql. However, I was surprised to find after all of th

Re: [PERFORM] Performance costs of various PL languages

2011-12-27 Thread Carlo Stonebanks
nt: December 27, 2011 5:54 PM To: Pavel Stehule Cc: Carlo Stonebanks; pgsql-performance@postgresql.org Subject: Re: [PERFORM] Performance costs of various PL languages On Tue, Dec 27, 2011 at 4:20 PM, Pavel Stehule wrote: > Hello > > 2011/12/27 Carlo Stonebanks : >> We are curre

[PERFORM] Performance costs of various PL languages

2011-12-27 Thread Carlo Stonebanks
We are currently using pltclu as our PL of choice AFTER plpgSql. I'd like to know if anyone can comment on the performance costs of the various PL languages BESIDES C. For example, does pltclu instantiate faster than pltcl (presumably because it uses a shared interpreter?) Is Perl more lightweight

Re: [PERFORM] Composite keys

2011-10-11 Thread Carlo Stonebanks
the left. correct? _ From: Dave Crooke [mailto:dcro...@gmail.com] Sent: October 11, 2011 9:28 PM To: Claudio Freire Cc: Carlo Stonebanks; pgsql-performance@postgresql.org Subject: Re: [PERFORM] Composite keys Claudio is on point, I'll be even more pointed If pkey

[PERFORM] Composite keys

2011-10-11 Thread Carlo Stonebanks
Excuse the noob question, I couldn't find any reading material on this topic. Let's say my_table has two fields, pkey_id and another_id. The primary key is pkey_id and of course indexed. Then someone adds a composite index on btree(pkey_id, another_id). Question 1) Is there any benefit
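The question in this post can be made concrete with a sketch (hypothetical table and index names, not from the thread):

```sql
-- Hypothetical schema illustrating the question above.
CREATE TABLE my_table (
    pkey_id    integer PRIMARY KEY,   -- implicitly creates a unique btree index
    another_id integer
);

-- The composite index being asked about:
CREATE INDEX my_table_pkey_another_idx ON my_table (pkey_id, another_id);

-- A lookup on pkey_id alone can be served by either index, since a btree
-- is usable through its leading column. The composite index mainly pays
-- off for queries touching both columns, e.g.:
SELECT * FROM my_table WHERE pkey_id = 42 AND another_id = 7;
```

Since pkey_id is unique on its own, the composite index adds little for filtering; its main benefit is index-only retrieval of another_id for a given pkey_id.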

Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-14 Thread Carlo Stonebanks
esql.org > Subject: Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config > (re-post) > > On 09/14/2011 02:56 AM, Carlo Stonebanks wrote: > > > Even for 300 stateful applications that can remain connected for up to a > > week, continuously distilling data (imports)?

Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-13 Thread Carlo Stonebanks
commendation on the forum? Carlo > Date: Tue, 13 Sep 2011 16:13:00 -0500 > From: kevin.gritt...@wicourts.gov > To: pgsql-performance@postgresql.org; stonec.regis...@sympatico.ca > Subject: RE: [PERFORM] Migrated from 8.3 to 9.0 - need to update config >

Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-13 Thread Carlo Stonebanks
FORM] Migrated from 8.3 to 9.0 - need to update config > (re-post) > > Carlo Stonebanks wrote: > > > this is a full-time ETL system, with only a handful of actual > > *users* and automated processes over 300 connections running > > *import* programs 24/7 >

Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-10 Thread Carlo Stonebanks
om: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] Sent: September 9, 2011 2:16 PM To: pgsql-performance@postgresql.org; Carlo Stonebanks Subject: Re: [PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post) Carlo Stonebanks wrote: > this is a full-time ETL system, with onl

[PERFORM] Migrated from 8.3 to 9.0 - need to update config (re-post)

2011-09-09 Thread Carlo Stonebanks
Hello performance wizards! (Sorry for the re-post if this appears twice - I see no evidence e-mailing to pgsql-performance is working yet.) My client has migrated his 8.3 hosted DB to new machines running PG 9.0. It’s time to look at the config settings. Immediately below are the confi

Re: [PERFORM] Very bad plan when using VIEW and IN (SELECT...*)

2010-08-13 Thread Carlo Stonebanks
didn't want to throw too much info as my concern was actually whether views were as klunky as other DB platforms. Carlo -Original Message- From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] Sent: August 13, 2010 9:29 AM To: pgsql-performance@postgresql.org; Carlo Stonebanks Subjec

[PERFORM] Very bad plan when using VIEW and IN (SELECT...*)

2010-08-12 Thread Carlo Stonebanks
Ref these two queries against a view: -- QUERY 1, executes < 0.5 secs SELECT * FROM mdx_core.vw_provider AS p WHERE provider_id IN (13083101) -- QUERY 2, executes > 13.5 secs SELECT * FROM mdx_core.vw_provider AS p WHERE provider_id IN (SELECT 13083101) I am using the simple IN (SELECT n) in QU
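The gap between the two queries comes down to how the planner treats a literal IN-list versus an uncorrelated subquery. A hedged sketch of the distinction, using the names from the thread (the workaround form is an assumption, not something the thread confirms):

```sql
-- Form 1: a literal IN-list folds to provider_id = 13083101, so the
-- predicate can be pushed down into the view and use an index.
SELECT * FROM mdx_core.vw_provider AS p
WHERE provider_id IN (13083101);

-- Form 2: IN (SELECT ...) is planned as a join against a subquery result,
-- which can block predicate push-down through the view and force the
-- view's full output to be materialized first.
SELECT * FROM mdx_core.vw_provider AS p
WHERE provider_id IN (SELECT 13083101);

-- If the IDs genuinely arrive as a set, passing them as an array keeps
-- the scalar-comparison shape (a sketch, not from the thread):
SELECT * FROM mdx_core.vw_provider AS p
WHERE provider_id = ANY (ARRAY[13083101]);
```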

[PERFORM] Does FILTER in SEQSCAN short-circuit AND?

2010-05-27 Thread Carlo Stonebanks
Sample code: SELECT * FROM MyTable WHERE foo = 'bar' AND MySlowFunc('foo') = 'bar' Let's say this required a SEQSCAN because there were no indexes to support column foo. For every row where foo <> 'bar' would the filter on the SEQSCAN short-circuit the AND return false right away, or would it
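PostgreSQL does not promise left-to-right evaluation of AND; it orders quals by estimated cost, so the cheap equality is usually (but not guaranteeably) checked first. A common sketch for forcing the ordering, using placeholder names in the spirit of the post:

```sql
-- Tell the planner the function is expensive so it is evaluated last:
-- ALTER FUNCTION my_slow_func(text) COST 10000;

-- Or make the ordering structurally explicit with a materialized subquery;
-- OFFSET 0 is a classic fence that blocks subquery flattening:
SELECT *
FROM (
    SELECT * FROM my_table WHERE foo = 'bar' OFFSET 0
) AS cheap_rows
WHERE my_slow_func(foo) = 'bar';
```

The inner scan applies the cheap filter; the expensive function then runs only over the surviving rows.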

Re: [PERFORM] Function scan/Index scan to nested loop

2010-05-11 Thread Carlo Stonebanks
Thanks Scott, This is almost always due to caching. First time the data aren't in the cache, second time they are. << I had assumed that it was caching, but I don't know from where because of the inexplicable delay. Hardware? O/S (Linux)? DB? From the function, which is IMMUTABLE? I am co

[PERFORM] Function scan/Index scan to nested loop

2010-05-10 Thread Carlo Stonebanks
Hello all, A query ran twice in succession performs VERY poorly the first time as it iterates through the nested loop. The second time, it rips. Please see SQL, SLOW PLAN and FAST PLAN below. I don't know why these nested loops are taking so long to execute. " -> Nested Loop (cost=0.00..42

[PERFORM] Got that new server, now it's time for config!

2010-03-22 Thread Carlo Stonebanks
Here we go again! Based on recommendations made here, I got my client to migrate off of our Windows 2003 Server x64 box to a new Linux box. # CENTOS 5.4 # Linux mdx_octo 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux # pgsql 8.3.10, 8 CPUs, 48GB RAM # RAID 1

Re: [PERFORM] default_statistics_target

2010-03-22 Thread Carlo Stonebanks
_counts = on vacuum_cost_delay = 5 # 0-1000 milliseconds wal_buffers = 4MB wal_sync_method = open_sync work_mem = 64MB Carlo "Greg Smith" wrote in message news:4b9e33af.2020...@2ndquadrant.com... Carlo Stonebanks wrote: The whole topic of messing with stats makes my head spin but I am concerned

[PERFORM] default_statistics_target

2010-03-14 Thread Carlo Stonebanks
Hi people, The whole topic of messing with stats makes my head spin but I am concerned about some horridly performing queries that have had bad rows estimates and others which always choose seq scans when indexes are available. Reading up on how to improve planner estimates, I have seen refere
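Rather than raising default_statistics_target globally, the stats target can be raised only for the columns with bad row estimates — a sketch with placeholder names:

```sql
-- Collect a larger sample for one skewed column instead of every column
-- in the database (hypothetical table/column):
ALTER TABLE big_table ALTER COLUMN skewed_col SET STATISTICS 500;

-- The new target takes effect at the next analyze:
ANALYZE big_table;
```

This keeps ANALYZE and planning cheap everywhere else while giving the planner a finer histogram where it demonstrably mis-estimates.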

Re: [PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-22 Thread Carlo Stonebanks
Hi Greg, As a follow up to this suggestion: I don't see effective_cache_size listed there. If that's at the default, I wouldn't be surprised that you're seeing sequential scans instead of indexed ones far too often. I found an article written by you http://www.westnet.com/~gsmith/content/p

Re: [PERFORM] Re: New server to improve performance on our large and busy DB - advice? (v2)

2010-01-20 Thread Carlo Stonebanks
* A database that is of small to medium size (5 - 10 GB)? * Around 10 clients that perform constant write operations to the database (UPDATE/INSERT) * Around 10 clients that occasionally read from the database * Around 6000 tables in your database * A problem with tuning it all * Migration to ne

Re: [PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-20 Thread Carlo Stonebanks
yeah, the values are at the end. Sounds like your vacuum settings are too non-aggressive. Generally this is the vacuum cost delay being too high. Of course, I have to ask: what's the down side? Yes! You can run vacuum verbose against the regular old postgres database (or just create one for

Re: [PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-19 Thread Carlo Stonebanks
Hi Scott, Sorry for the very late reply on this post, but I'd like to follow up. The reason that I took so long to reply was due to this suggestion: < My first thought was, does he mean against the entire DB? That would take a week! But, since it was recommended, I decided to see what woul

Re: [PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-14 Thread Carlo Stonebanks
Pretty much everyone thinks their requirements are exceptional. It's funny how infrequently that's actually true. The techniques that favor index-use aren't that unique: collect better stats, set basic parameters correctly, adjust random_page_cost, investigate plans that don't do what you wa

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-14 Thread Carlo Stonebanks
My bad - I thought I had, so it has been re-posted with a (v2) disclaimer in the title... like THAT will stop the flaming! Thanks for your patience! "Craig James" wrote in message news:4b4f8a49.7010...@emolecules.com... Carlo Stonebanks wrote: Guys, I want to thank you for

[PERFORM] New server to improve performance on our large and busy DB - advice? (v2)

2010-01-14 Thread Carlo Stonebanks
My client just informed me that new hardware is available for our DB server. . Intel Core 2 Quads Quad . 48 GB RAM . 4 Disk RAID drive (RAID level TBD) I have put the ugly details of what we do with our DB below, as well as the postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-14 Thread Carlo Stonebanks
-in-hand, looking for advice under the PERFORM post: "New server to improve performance on our large and busy DB - advice?" Thanks again! Carlo "Scott Marlowe" wrote in message news:dcc563d11001071740q572cdae2re410788fe790d...@mail.gmail.com... On Thu, Jan 7, 20

[PERFORM] New server to improve performance on our large and busy DB - advice?

2010-01-14 Thread Carlo Stonebanks
My client just informed me that new hardware is available for our DB server. . Intel Core 2 Quads Quad . 48 GB RAM . 4 Disk RAID drive (RAID level TBD) I have put the ugly details of what we do with our DB below, as well as the postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-08 Thread Carlo Stonebanks
I thought that post mentioned that the plan was one statement in an iteration, and that the cache would have been primed by a previous query checking whether there were any rows to update. If that was the case, it might be worthwhile to look at the entire flow of an iteration. This is the only

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-08 Thread Carlo Stonebanks
crank it up more and delay the checkpoints as much as possible during these updates. 64 segments is already 1024M. We have 425M rows, total table size is 78GB, so we can imagine a worst case UPDATE write is less than 200 bytes * number of rows specified in the update (is that logic correct?).
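The arithmetic quoted above (64 segments × 16 MB per segment = 1024 MB of WAL between checkpoints) corresponds to a postgresql.conf fragment along these lines — a sketch, with checkpoint_completion_target assuming 8.3 or later:

```
# Spread checkpoints out during the bulk UPDATE
checkpoint_segments = 64             # 64 × 16 MB = 1024 MB of WAL
checkpoint_timeout = 30min           # don't let the timer force early checkpoints
checkpoint_completion_target = 0.9   # smear checkpoint I/O over the interval
```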

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Carlo Stonebanks
runtime: 372.141 ms ""Kevin Grittner"" wrote in message news:4b46256302250002d...@gw.wicourts.gov... "Carlo Stonebanks" wrote: An interesting idea, if I can confirm that the performance problem is because of the WHERE clause, not the UPDATE. If you cou

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Carlo Stonebanks
It might well be checkpoints. Have you tried cranking up checkpoint segments to something like 100 or more and seeing how it behaves then? No I haven't, although it certainly makes sense - watching the process run, you get this sense that the system occasionally pauses to take a deep, long bre

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Carlo Stonebanks
What is the rationale behind this? How about doing 10k rows in 1 update, and committing every time? When we did 10K updates, the application would sometimes appear to have frozen, and we were concerned that there was a deadlock condition because of the number of locked rows. While we may have

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Carlo Stonebanks
If it is possible to lock this audit table exclusively (may be during off peak hours) I would look into - create new_audit_table as select col1, col2, col3 ... col9, 'new_col_value' from old_audit_table; - create all indexes - drop old_audit_table - rename new_audit_table to old_audit_table That
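The quoted rewrite recipe, fleshed out as a hedged sketch (placeholder column names; indexes, constraints, triggers, and grants on the old table would all need recreating as well):

```sql
-- Rewrite-instead-of-update: write every row once, sequentially,
-- instead of updating 500M rows in place.
BEGIN;
LOCK TABLE old_audit_table IN ACCESS EXCLUSIVE MODE;

CREATE TABLE new_audit_table AS
    SELECT col1, col2, col3,  -- ... through col9
           'new_col_value' AS new_col
    FROM old_audit_table;

-- recreate indexes here, while the table is still exclusively locked

DROP TABLE old_audit_table;
ALTER TABLE new_audit_table RENAME TO old_audit_table;
COMMIT;
```

The appeal is that a sequential rewrite plus index builds is typically far cheaper than 500M in-place updates, each of which dead-tuples its old version and dirties indexes.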

Re: [PERFORM] Massive table (500M rows) update nightmare

2010-01-07 Thread Carlo Stonebanks
Got an explain analyze of the delete query? UPDATE mdx_core.audit_impt SET source_table = 'mdx_import.'||impt_name WHERE audit_impt_id >= 31941 AND audit_impt_id <= 319400010 AND coalesce(source_table, '') = '' Index Scan using audit_impt_pkey on audit_impt (cost=0.00..92.63 rows=1 width=

[PERFORM] Massive table (500M rows) update nightmare

2010-01-06 Thread Carlo Stonebanks
Our DB has an audit table which is 500M rows and growing. (FYI the objects being audited are grouped semantically, not individual field values). Recently we wanted to add a new feature and we altered the table to add a new column. We are backfilling this varchar(255) column by writing a TCL sc

Re: [PERFORM] How to avoid hashjoin and mergejoin

2007-11-08 Thread Carlo Stonebanks
16MB) = 3.6GB total RAM eaten up under peak load for these two values alone. If we wanted to get more aggressive, we can raise work_mem. Carlo -Original Message- From: Scott Marlowe [mailto:[EMAIL PROTECTED] Sent: November 1, 2007 5:39 PM To: Carlo Stonebanks Cc: pgsql-perfor

Re: [PERFORM] How to avoid hashjoin and mergejoin

2007-11-01 Thread Carlo Stonebanks
:[EMAIL PROTECTED] Sent: November 1, 2007 5:42 PM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] How to avoid hashjoin and mergejoin "Carlo Stonebanks" <[EMAIL PROTECTED]> writes: > Still, the Linux server did not create the same, fast

[PERFORM] How to avoid hashjoin and mergejoin

2007-11-01 Thread Carlo Stonebanks
I am comparing the same query on two different PG 8.2 servers, one Linux (8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPU's. The Windows postgresql.conf is pretty well tuned but it looks like someone had wiped out the Linux config so the default one was re-installed. A

Re: [PERFORM] REPOST: Nested loops row estimates always too high

2007-09-24 Thread Carlo Stonebanks
a great way to place the enterprise's db-centric business logic at the server. Carlo -Original Message- From: Ow Mun Heng [mailto:[EMAIL PROTECTED] Sent: September 24, 2007 8:51 PM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] REPOST: Nested loops

[PERFORM] Acceptable level of over-estimation?

2007-09-24 Thread Carlo Stonebanks
Is there a rule of thumb about when the planner's row estimates are too high? In particular, when should I be concerned that planner's estimated number of rows estimated for a nested loop is off? By a factor of 10? 100? 1000? Carlo ---(end of broadcast)-

Re: [PERFORM] REPOST: Nested loops row estimates always too high

2007-09-24 Thread Carlo Stonebanks
Has anyone offered any answers to you? No one else has replied to this post. "Ow Mun Heng" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote: (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR A

[PERFORM] REPOST: Performance improves only after repeated VACUUM/ANALYZE

2007-09-20 Thread Carlo Stonebanks
(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE) My client "publishes" an "edition" of their DB from his production site to his hosted web/db server. This is done by FTPing a backup of the DB to his hosting provider. Immediately after a "publication" (restore to we

[PERFORM] REPOST: Nested loops row estimates always too high

2007-09-20 Thread Carlo Stonebanks
(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE) I am noticing that my queries are spending a lot of time in nested loops. The table/index row estimates are not bad, but the nested loops can be off by a factor of 50. In any case, they are always too high. Are the o

[PERFORM] Performance improves only after repeated VACUUM/ANALYZE

2007-09-18 Thread Carlo Stonebanks
My client "publishes" an "edition" of their DB from his production site to his hosted web/db server. This is done by FTPing a backup of the DB to his hosting provider. Immediately after a "publication" (restore to web/db server) we immediately run VACUUM ANALYZE to make sure the statistics and row

[PERFORM] Nested loops row estimates always too high

2007-09-18 Thread Carlo Stonebanks
I am noticing that my queries are spending a lot of time in nested loops. The table/index row estimates are not bad, but the nested loops can be off by a factor of 50. In any case, they are always too high. If this is always occurring, is this an indication of a general configuration problem?

Re: [PERFORM] Query works when kludged, but would prefer "best practice" solution

2007-09-17 Thread Carlo Stonebanks
ep, I have a dream -- and in this dream Tom writes a brilliant three line code sample that makes it all clear to me, and I wake up a PostgreSQL guru) ;-) Carlo -Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED] Sent: September 17, 2007 11:30 PM To: Merlin Moncure Cc: Carlo

Re: [PERFORM] Query works when kludged, but would prefer "best practice" solution

2007-09-17 Thread Carlo Stonebanks
Thanks, it worked. Client happy. Big bonus in the mail. -Original Message- From: Merlin Moncure [mailto:[EMAIL PROTECTED] Sent: September 17, 2007 8:18 PM To: Carlo Stonebanks Cc: pgsql-performance@postgresql.org Subject: Re: [PERFORM] Query works when kludged, but would prefer "

Re: [PERFORM] Query works when kludged, but would prefer "best practice" solution

2007-09-17 Thread Carlo Stonebanks
Well, there goes my dream of getting a recommendation that will deliver a blinding insight into how to speed up all of my queries a thousand-fold. Thanks Merlin! -Original Message- From: Merlin Moncure [mailto:[EMAIL PROTECTED] Sent: September 17, 2007 8:03 PM To: Carlo Stonebanks Cc

[PERFORM] Query works when kludged, but would prefer "best practice" solution

2007-09-17 Thread Carlo Stonebanks
Hi all, Please see the section marked as "PROBLEM" in "ORIGINAL QUERY" plan below. You can see it's pretty slow. Oddly enough, an index for facility_address_id is available but not being used, but I suspect it's questionable whether it would be an improvement. I knew that the filter was best

[PERFORM] random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1

2007-09-10 Thread Carlo Stonebanks
Can anyone answer this for me: Although I realize my client's disk subsystem (SCSI/RAID Smart Array E200 controller using RAID 1) is less than impressive - is the default setting of 4.0 realistic or could it be lower? Thanks!

Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-06 Thread Carlo Stonebanks
<< If what you mean is that pg has a design that's heavily oriented towards things that tend to be cheap on POSIX and doesn't use the core Win32 features effectively, then let's track that as an optimisation opportunity for the Win32 port. >> Isn't it just easier to assume that Windows Server ca

[PERFORM] How planner decides left-anchored LIKE can use index

2007-09-06 Thread Carlo Stonebanks
Exactly when does the planner decide that a left-anchored like can use the index? I have replaced a WHEN lower(last_name) = 'smith' with WHEN lower(last_name) like 'smith%' There is an index on lower(last_name). I have seen the planner convert the LIKE to lower(last_name) >= 'smith' and lower(
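What the post is observing can be sketched directly (hypothetical table; the locale caveat is the key condition the planner applies):

```sql
-- Expression index backing the rewritten predicate:
CREATE INDEX person_lower_last_name_idx ON person (lower(last_name));

-- Left-anchored LIKE on the indexed expression:
SELECT * FROM person WHERE lower(last_name) LIKE 'smith%';
-- The planner can turn this into a btree range scan, internally:
--   lower(last_name) >= 'smith' AND lower(last_name) < 'smiti'
-- but ONLY when the index's collation sorts compatibly (e.g. the C locale).

-- In a non-C locale, the index must use the pattern operator class
-- for LIKE to qualify:
CREATE INDEX person_lower_last_name_pat_idx
    ON person (lower(last_name) text_pattern_ops);
```

So "exactly when" is: left-anchored pattern, no leading wildcard, and either C-locale ordering or a text_pattern_ops index on the matched expression.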

Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-06 Thread Carlo Stonebanks
Wow - it's nice to hear someone say that... out loud. Thanks, you gave me hope! -Original Message- From: James Mansion [mailto:[EMAIL PROTECTED] Sent: September 6, 2007 4:55 PM To: Carlo Stonebanks Cc: Scott Marlowe; Alvaro Herrera; pgsql-performance@postgresql.org Subject: Re: [PE

Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-05 Thread Carlo Stonebanks
>> Large shared_buffers and Windows do not mix. Perhaps you should leave the shmem config low, so that the kernel can cache the file pages. << Is there a problem BESIDES the one that used to cause windows to fail to allocate memory in blocks larger than 1.5GB? The symptom of this problem was tha

Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-05 Thread Carlo Stonebanks
Right, additionally NTFS is really nothing to use on any serious disc array. Do you mean that I will not see any big improvement if I upgrade the disk subsystem because the client is using NTFS (i.e. Windows)?

Re: [PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-05 Thread Carlo Stonebanks
ing RAID 1. -Original Message- From: Scott Marlowe [mailto:[EMAIL PROTECTED] Sent: September 4, 2007 7:15 PM To: Alvaro Herrera Cc: Carlo Stonebanks; pgsql-performance@postgresql.org Subject: Re: [PERFORM] Performance on 8CPU's and 32GB of RAM On 9/4/07, Alvaro Herrera <[EMAIL PROTECTE

[PERFORM] Performance on 8CPU's and 32GB of RAM

2007-09-04 Thread Carlo Stonebanks
A client is moving their postgresql db to a brand new Windows 2003 x64 server with 2 quad cores and 32GB of RAM. It is a dedicated server to run 8.2.4. The server typically will have less than 10 users. The primary use of this server is to host a database that is continuously being updated by

Re: [PERFORM] Fast tsearch2, trigram matching on short phrases

2007-08-22 Thread Carlo Stonebanks
>> The problem is in idea, not in performance. Oh, I think we both agree on that! ;-D This is why I didn't post any EXPLAINs or anything like that. I thought the problem was in the entire method of how to best zero in on the set of records best suited for closer analysis by my phrase-matching f

Re: [PERFORM] Fast tsearch2, trigram matching on short phrases

2007-08-22 Thread Carlo Stonebanks
Hi Oleg, you didn't show us explain analyze of your select. I didn't because I didn't expect any reaction to it - my understanding is that trigram matching for phrases is not recommended because of the performance. Do you believe that I SHOULD expect good performance from trigram matching o

Re: [PERFORM] Fast tsearch2, trigram matching on short phrases

2007-08-22 Thread Carlo Stonebanks
was oriented towards word mathcing, not phrase matching. Carlo ""Steinar H. Gunderson"" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] On Wed, Aug 22, 2007 at 12:02:54PM -0400, Carlo Stonebanks wrote: Any suggestions on where to go with this project to im

[PERFORM] Fast tsearch2, trigram matching on short phrases

2007-08-22 Thread Carlo Stonebanks
I have read that trigram matching (similarity()) performance degrades when the matching is on longer strings such as phrases. I need to quickly match strings and rate them by similiarity. The strings are typically one to seven words in length - and will often include unconventional abbreviations
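A minimal pg_trgm setup for this kind of short-phrase matching looks roughly like the following (placeholder table and sample phrase; CREATE EXTENSION is the modern install syntax — on the 8.x releases contemporary with this thread the module was loaded from a contrib SQL script instead):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE phrases (phrase text);

-- A trigram GIN index lets the % operator find candidates via the index
-- instead of computing similarity() against every row:
CREATE INDEX phrases_trgm_idx ON phrases USING gin (phrase gin_trgm_ops);

-- Candidate retrieval, ranked by similarity; the % cutoff is controlled by
-- pg_trgm.similarity_threshold (set_limit() on older releases):
SELECT phrase,
       similarity(phrase, 'intl assoc of lung surgeons') AS sim
FROM phrases
WHERE phrase % 'intl assoc of lung surgeons'
ORDER BY sim DESC
LIMIT 20;
```

Trigram quality does degrade as phrases get longer and abbreviations multiply, which is why this is best treated as a candidate filter feeding a stricter phrase-matching pass, as the poster describes.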

Re: [PERFORM] Is Vacuum/analyze destroying my performance?

2006-12-04 Thread Carlo Stonebanks
""Matthew O'Connor"" wrote in message news:[EMAIL PROTECTED] > Just a wild guess, but the performance problem sounds like maybe as your > data changes, eventually the planner moves some query from an index scan > to a sequential scan, do you have any details on what queries are taking > so lon

Re: [PERFORM] Is Vacuum/analyze destroying my performance?

2006-12-03 Thread Carlo Stonebanks
Update on this issue, I "solved" my problem by doing the following: 1) Stopped the import, and did a checkpoint backup on my import target schema 2) Dropped the import target schema 3) Restored a backup from a previous checkpoint when the tables were much smaller 4) Performed a VACUUM/ANALYZE on

[PERFORM] Is Vacuum/analyze destroying my performance?

2006-12-03 Thread Carlo Stonebanks
I have always been frustrated by the wildly erratic performance of our postgresql 8 server. We run a program that does heavy data importing via a heuristics-based import program. Sometimes records being imported would just fly by, sometimes they would crawl. The import program imports records from

Re: [PERFORM] commit so slow program looks frozen

2006-10-28 Thread Carlo Stonebanks
> I do think we need some better instrumentation for this kind of thing. Well, one thing's for sure - I have little other information to offer. The problem is that the lockups occur after hours of operation and thousands of rows being digested (which is the nature of the program). If "better in

Re: [PERFORM] commit so slow program looks frozen

2006-10-28 Thread Carlo Stonebanks
"Ben Trewern" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > It might be worth turning off hyperthreading if your Xeons are using it. > There have been reports of this causing inconsistent behaviour with > PostgreSQL. Yes, this issue comes up often - I wonder if the Woodcrest Xe

Re: [PERFORM] commit so slow program looks frozen

2006-10-26 Thread Carlo Stonebanks
> when it happens, make sure to query pg_locks and see what is going on > there lock issues are not supposed to manifest on a commit, which > releases locks, but you never know. There aren't any pedning locks (assuming that pgAdmin is using pg_locks to display pendin glocks). > There have been r

Re: [PERFORM] commit so slow program looks frozen

2006-10-26 Thread Carlo Stonebanks
This is pretty interesting - where can I read more on this? Windows isn't actually hanging, one single command line window is - from its behaviour, it looks like the TCL postgresql package is waiting for pg_exec to come back from the commit (I believe the commit has actually gone through). It c

Re: [PERFORM] commit so slow program looks frozen

2006-10-26 Thread Carlo Stonebanks
> I can just see the postgresql group getting together at the next > O'Reilley's conference and creating that band. And it will all be your > fault. Finally, a chance for me to wear my black leather pants. > A context switch storm is when your machine spends more time trying to > figure out what

Re: [PERFORM] commit so slow program looks frozen

2006-10-25 Thread Carlo Stonebanks
>> I have a question for you: did you have a long running query keeping open a transaction?  I've just noticed the same problem here, but things cleaned up immediately when I aborted the long-running transaction. No, the only processes are from those in the import applications themselves

Re: [PERFORM] commit so slow program looks frozen

2006-10-25 Thread Carlo Stonebanks
> You may try to figure out what's the process doing (the backend > obviously, not the frontend (Tcl) process) by attaching to it with > strace. It's so sad when us poor Windows guys get helpful hints from people who assume that we're smart enough to run *NIX... ;-) > Maybe it's swamped by a context

[PERFORM] commit so slow program looks frozen

2006-10-25 Thread Carlo Stonebanks
(I tried this question on the interface forum and got no result, but I don't know how to tell if it's an interface issue or not) I have a TCL app which typically takes hours to complete. I found out that it is taking longer than it should because it occasionally stalls inexplicably (for tens of

Re: [PERFORM] Is ODBC that slow?

2006-10-24 Thread Carlo Stonebanks
> Try Command Prompt's ODBC driver. Lately it has been measured to be > consistently faster than psqlODBC. > > http://projects.commandprompt.com/public/odbcng Thanks, I tried this, but via Access it always reports a login (username/password) to db failure. However, this is an Alpha - is there a

Re: [PERFORM] Is ODBC that slow?

2006-10-21 Thread Carlo Stonebanks
> carlo: please, please, get your mail server to quit telling me your > mailbox is full :) Merlin, sorry about that. This is the first I've heard of it. Carlo ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster

[PERFORM] Is ODBC that slow?

2006-10-20 Thread Carlo Stonebanks
Our Windows-based db server has to integrate with users that work regularly with Access. When attempting to import users' data from Access MDB files to PostgreSQL, we try one of two things: either import using EMS SQL Manager's Data Import from Access utility, or export from Access to Postgresql

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-16 Thread Carlo Stonebanks
> you have a two-part key on facility(country code, postal code), > right? Well, I'm glad you pointed it out, because I THOUGHT I had created it, but apparently I haven't -- I only noticed that it was missing after I listed all the other indexes. Looks like this query is one of the victims
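The missing two-part key mentioned above would be created along these lines. The table and column names (facility, country_code, postal_code) are inferred from the thread, not confirmed by it; the ANALYZE afterwards refreshes planner statistics so the new index is actually considered.

```sql
-- Hypothetical names, matching the facility(country code, postal code)
-- key discussed in the thread.
CREATE INDEX facility_country_postal_idx
    ON facility (country_code, postal_code);

ANALYZE facility;
```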

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-16 Thread Carlo Stonebanks
of the union and you mentioned it. I was just under the impression that getting this sub-query to work would have produced the most clear, straightforward ANALYZE results. Carlo "Shaun Thomas" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On Monday 16 Octobe

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-16 Thread Carlo Stonebanks
>I think there's 2 things that would help this case. First, partition on > country. You can either do this on a table level or on an index level > by putting where clauses on the indexes (index method would be the > fastest one to test, since it's just new indexes). That should shrink > the size of
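The index-level partitioning suggested above means partial indexes: a WHERE clause on the index so each one covers a single country and stays small. A sketch, with illustrative names and predicate not taken from the thread:

```sql
-- One small index per country instead of one large index over all rows.
-- 'US' and the column names are placeholders for this example.
CREATE INDEX facility_us_postal_idx
    ON facility (postal_code)
    WHERE country_code = 'US';
```

Queries that include the matching `WHERE country_code = 'US'` predicate can then use this much smaller index; queries on other countries simply ignore it.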

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-16 Thread Carlo Stonebanks
EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On 10/15/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote: >> that contains full address data >> */ >> select >> f.facility_id, >> null as facility_address_id, >>

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-16 Thread Carlo Stonebanks
> Can you try temporarily disabling bitmap scans and see what comes up? Well, that's slowing everything down. I've got a couple of results, below: 1) Bitmap scan off, but seq scan enabled. 2) Bitmap scan and seq scan off. 3) Bitmap scan back on, seq scan back on, and a new index created. 4) VACUUM V
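The experiments numbered above toggle planner settings per session. A sketch of that procedure; the SELECT is a placeholder to be replaced with the problem query:

```sql
-- Session-local planner switches for diagnosis only; they do not
-- affect other connections and revert when the session ends.
SET enable_bitmapscan = off;
SET enable_seqscan = off;        -- for experiment 2; leave on for experiment 1

EXPLAIN ANALYZE
SELECT count(*) FROM facility;   -- substitute the query under test

SET enable_bitmapscan = on;      -- restore defaults afterwards
SET enable_seqscan = on;
```

Comparing the EXPLAIN ANALYZE output across the runs shows which access path the planner falls back to, and at what cost, when its preferred plan is forbidden.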

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-15 Thread Carlo Stonebanks
able, 772747 nonremovable row versions in 7969 pages DETAIL: 0 dead row versions cannot be removed yet. There were 0 unused item pointers. 0 pages are entirely empty. CPU 0.39s/0.18u sec elapsed 10.70 sec. Query returned successfully with no result in 10765 ms. "Tom Lane" <[EMAIL PRO

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-15 Thread Carlo Stonebanks
Hi Merlin, Well, I'm back. First of all, thanks for your dogged determination to help me out - it is much appreciated. I owe you a beer or twelve. The import has been running for a week. The import program got faster as I tuned things. I capture the dynamic SQL statements generated by the app,

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-06 Thread Carlo Stonebanks
> how did you determine that it is done every 500 rows? this is the The import program pages the import table - it is currently set at 500 rows per page. With each page, I run an ANALYZE. > default autovacuum parameter. if you followed my earlier > recommendations, you are aware that autovacuum
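One page of the paging-plus-ANALYZE loop described above might look like the following. The staging table and column names are illustrative, not from the thread:

```sql
-- Fetch the next 500-row page of the import table; the application
-- advances the offset by 500 on each iteration.
SELECT *
FROM   import_staging
ORDER  BY row_id
LIMIT  500 OFFSET 0;

-- After processing the page, refresh statistics on the target table so
-- subsequent lookups plan against current row counts. ANALYZE alone is
-- cheap compared to a full VACUUM.
ANALYZE facility;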

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-06 Thread Carlo Stonebanks
f for imports, how frequently should I VACUUM? ""Merlin Moncure"" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On 10/5/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote: >> > do we have an multi-column index on >> > facil

Re: [PERFORM] Performance Optimization for Dummies 2 - the SQL

2006-10-05 Thread Carlo Stonebanks
;[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On 10/5/06, Carlo Stonebanks <[EMAIL PROTECTED]> wrote: >> > do we have an multi-column index on >> > facility_address(facility_id, address_id)? did you run analyze? >> >> There is an index
