Re: [HACKERS] [PERFORM] Slow count(*) again...

2011-02-03 Thread David Wilson
On Thu, Feb 3, 2011 at 6:05 PM, Robert Haas wrote: > > If you want to randomly pick 10,000 rows out of all the rows that are > going to be inserted in the table without knowing in advance how many > there will be, how do you do that? > Reservoir sampling, as the most well-known option: http://en
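
Reservoir sampling keeps a uniform random sample of fixed size k from a stream whose length is unknown until it ends: the first k rows fill the reservoir, and each later row n replaces a random slot with probability k/n. A minimal server-side sketch in PL/pgSQL, assuming a hypothetical "events" table (in the original question the same logic would run client-side, over the rows as they are inserted):

    CREATE OR REPLACE FUNCTION reservoir_sample(k integer)
    RETURNS SETOF events AS $$
    DECLARE
        reservoir events[] := '{}';
        rec       events;
        n         bigint := 0;
    BEGIN
        FOR rec IN SELECT * FROM events LOOP
            n := n + 1;
            IF n <= k THEN
                -- fill phase: keep the first k rows unconditionally
                reservoir := array_append(reservoir, rec);
            ELSIF random() * n < k THEN
                -- replace phase: row n enters with probability k/n,
                -- evicting a uniformly chosen current member
                reservoir[1 + floor(random() * k)::int] := rec;
            END IF;
        END LOOP;
        RETURN QUERY SELECT * FROM unnest(reservoir);
    END;
    $$ LANGUAGE plpgsql;

SELECT * FROM reservoir_sample(10000) then yields 10,000 uniformly chosen rows in a single pass, however many rows the scan produces.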

Re: [PERFORM] Postgres 9.0 has a bias against indexes

2011-01-27 Thread David Wilson
On Thu, Jan 27, 2011 at 10:56 AM, Mladen Gogala wrote: > I even tried with an almost equivalent outer join: > > explain analyze select e1.empno,e1.ename,e2.empno,e2.ename > from emp e1 left outer join emp e2 on (e1.mgr=e2.empno); > QUERY PLAN …
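
For what it's worth, on a table as small as the classic emp demo table the sequential scan genuinely is cheaper than any index plan, which is why the planner "ignores" the index. One way to verify (a diagnostic setting only, not for production):

    SET enable_seqscan = off;   -- force the planner to price the index path
    EXPLAIN ANALYZE
    SELECT e1.empno, e1.ename, e2.empno, e2.ename
    FROM emp e1 LEFT OUTER JOIN emp e2 ON (e1.mgr = e2.empno);
    RESET enable_seqscan;

If the forced index plan is no faster in actual time, the "bias" is just correct costing.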

Re: [PERFORM] query has huge variance in execution times

2010-03-31 Thread David Wilson
On Wed, Mar 31, 2010 at 2:10 PM, Brian Cox wrote: > > > so the question still remains: why did it take > 20 mins? To see if it was > due to autovacuum running ANALYZE, I turned off autovacuum, created a table > using SELECT * INTO temp FROM ts_stats_tranunit_user_daily, added the index > on ts_in

Re: [PERFORM] query has huge variance in execution times

2010-03-30 Thread David Wilson
On Wed, Mar 31, 2010 at 12:11 AM, Brian Cox wrote: > > 2010-03-30 18:41:11.685261-07 | select b.ts_id from > ts_stats_tranunit_user_daily b, ts_stats_tranunit_user_interval c where > b.ts_transet_incarnation_id = c.ts_transet_incarnation_id and > b.ts_tranunit_id = c.ts_tranunit_id and b.ts_user

Re: [PERFORM] Forcing index scan on query produces 16x faster

2010-03-17 Thread David Wilson
On Wed, Mar 17, 2010 at 5:25 PM, Christian Brink wrote: > > -> Index Scan using sales_tranzdate_index on sales s > (cost=0.01..1825.27 rows=22530 width=12) (actual time=0.072..3.464 rows=225 > loops=1) > Have you tried increasing the statistics on that table (and then analyzing)?
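
The estimate above is off by a factor of 100 (22530 expected, 225 actual). A sketch of that suggestion, assuming from the index name that the relevant column is tranzdate:

    ALTER TABLE sales ALTER COLUMN tranzdate SET STATISTICS 1000;  -- well above the default
    ANALYZE sales;  -- rebuild the histogram with the larger sample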

Re: [PERFORM] optimizing query with multiple aggregates

2009-10-21 Thread David Wilson
On Wed, Oct 21, 2009 at 6:51 PM, Doug Cole wrote: > > repeated across many different x,y values and fields to build out several > histograms of the data. The main culprit appears to be the CASE statement, > but I'm not sure what to use instead. I'm sure other people have had > similar queries a
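
One common replacement for a stack of CASE branches that bin values into ranges is width_bucket(), which does the binning in a single expression. A hedged sketch with illustrative names (measurements, x):

    SELECT width_bucket(x, 0, 100, 10) AS bucket,  -- 10 equal-width buckets over [0, 100)
           count(*)                                -- out-of-range values land in buckets 0 and 11
    FROM measurements
    GROUP BY bucket
    ORDER BY bucket;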

Re: [PERFORM] More speed counting rows

2009-07-27 Thread David Wilson
On Mon, Jul 27, 2009 at 5:06 AM, Developer wrote: > Hello, > > I am trying to optimize the count of files when I am using filters > (select by some row/s parameter/s) > My question is: > Any method for indirect count like ordered indexes + quadratic count? > Any module? > Any suggestion? If all yo
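
If the hot filters are known in advance, a partial index keeps the set of rows the count has to visit small. A generic sketch with illustrative names (files, processed):

    CREATE INDEX files_processed_idx ON files (id) WHERE processed;
    SELECT count(*) FROM files WHERE processed;  -- can now scan only the matching subset

    -- For a rough whole-table count, the planner's own estimate is instant:
    SELECT reltuples::bigint AS approx_rows FROM pg_class WHERE relname = 'files';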

Re: [PERFORM] [BUGS] BUG #4919: CREATE USER command slows down system performance

2009-07-15 Thread David Wilson
On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen wrote: > From security standpoint, wasting more cycles on bad passwords is good, > as it decreases the rate bruteforce password scanning can happen. > > And I cannot imagine a scenario where performance on invalid logins > can be relevant.. DoS attack

Re: [PERFORM] Poor overall performance unless regular VACUUM FULL

2009-07-14 Thread David Wilson
On Mon, Jul 13, 2009 at 3:31 PM, Wayne Conrad wrote: > Howdy.  Some months back, when advised on one of these lists that it > should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit > this nightly "maintenance" practice.  We've been very happy to not > have to do that, since it locke
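
Before reinstating VACUUM FULL, it is worth checking whether plain VACUUM and autovacuum are keeping up; in 8.3+ the statistics views expose dead-tuple counts directly:

    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 20;

Steadily growing n_dead_tup on busy tables argues for more aggressive autovacuum settings rather than a nightly exclusive-lock rewrite.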

Re: [PERFORM] superlative missuse

2009-05-14 Thread David Wilson
On Thu, May 14, 2009 at 9:08 PM, Craig James wrote: > I disagree -- it's a glaring error.  "More optimized" or "better optimized" > are perfectly good, and correct, phrases.  Why not use them?  Every time I > read "more optimal," I am embarrassed for the person who is showing his/her > ignorance

Re: [PERFORM] superlative missuse

2009-05-12 Thread David Wilson
On Tue, May 12, 2009 at 5:53 PM, Angel Alvarez wrote: > we suffer a 'more optimal' superlative missuse > > there is  not so 'more optimal' thing but a simple 'better' thing. > > im not native english speaker but i think it still applies. > > Well this a superlative list so all of you deserve a be

Re: [PERFORM] difficulties with time based queries

2009-04-05 Thread David Wilson
On Sun, Apr 5, 2009 at 7:26 PM, Rainer Mager wrote: > Bitmap Heap Scan on ad_log  (cost=73372.57..3699152.24 rows=2488252 > width=32) (actual time=49792.862..64611.255 rows=2268490 loops=1) > >    Recheck Cond: ((date(start_time) < '2009-03-31'::date) AND > (date(start_time) >= '2009-03-30'::date)
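
One common rewrite for this pattern is to compare the raw timestamp against a half-open range instead of wrapping it in date(), so a plain btree index on start_time applies directly. A sketch (count(*) stands in for the real select list):

    CREATE INDEX ad_log_start_time_idx ON ad_log (start_time);

    SELECT count(*)
    FROM ad_log
    WHERE start_time >= DATE '2009-03-30'
      AND start_time <  DATE '2009-03-31';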

Re: [PERFORM] multiple threads inserting into the same table

2009-03-23 Thread David Wilson
On Mon, Mar 23, 2009 at 3:34 PM, Brian Cox wrote: > The application log shows that 99652 rows are being inserted into relation > ts_stats_transet_user_daily. 5 threads are doing the inserts. The schema is > lengthy, but it has a synthetic primary key (ts_id int8 not null) and the > following const

Re: [PERFORM] Problems with ordering (can't force query planner to use an index)

2009-03-03 Thread David Wilson
On Tue, Mar 3, 2009 at 12:05 PM, Sebastjan Trepca wrote: >         ->  Nested Loop  (cost=0.00..31883.13 rows=2861 width=860) > (actual time=0.089..543.497 rows=68505 loops=1) >               ->  Index Scan using core_accessor_fresh_idx on > core_accessor  (cost=0.00..5460.07 rows=2970 width=92)

Re: [PERFORM] Bad plan for nested loop + limit

2009-02-14 Thread David Wilson
On Sat, Feb 14, 2009 at 5:25 PM, Alexander Staubo wrote: > > Output from "explain analyze": > > Limit (cost=0.00..973.63 rows=4 width=48) (actual > time=61.554..4039.704 rows=1 loops=1) > -> Nested Loop (cost=0.00..70101.65 rows=288 width=48) (actual > time=61.552..4039.700 rows=1 loops=1) >
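
A well-known workaround when a small LIMIT lures the planner into a nested loop whose first match arrives far later than estimated is to fence the join behind OFFSET 0, so it is planned on its own. All names below are illustrative, not from the original post:

    SELECT *
    FROM (
        SELECT e.*
        FROM events e
        JOIN users u ON u.id = e.user_id
        WHERE u.flagged
        OFFSET 0   -- optimization fence: the LIMIT is not pushed into the join
    ) sub
    ORDER BY created_at
    LIMIT 4;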

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread David Wilson
On Mon, Feb 2, 2009 at 3:37 PM, Brian Cox wrote: > David Wilson [david.t.wil...@gmail.com] wrote: > >> Is this table the target of any foreign keys? > > There are 2 "on delete cascade" FKs that reference this table. I believe that's the source of your memory is
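
With ON DELETE CASCADE constraints pointing at the table, every deleted row queues per-row trigger events in memory, which is what a 48M-row DELETE exhausts. Two hedged escapes (ts_id as the key column is illustrative):

    -- If the table is being emptied anyway, TRUNCATE skips the per-row trigger
    -- queue entirely. Caution: CASCADE also truncates the referencing tables.
    TRUNCATE ts_defects CASCADE;

    -- Otherwise, delete in batches so the pending-trigger list stays bounded;
    -- repeat until zero rows are affected (10000 is an arbitrary batch size).
    DELETE FROM ts_defects
    WHERE ts_id IN (SELECT ts_id FROM ts_defects LIMIT 10000);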

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread David Wilson
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox wrote: > I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from > ts_defects; > Result: out of memory/Can't allocate size: 32 Is this table the target of any foreign keys?

Re: [PERFORM] Interesting query plan change linked to the LIMIT parameter

2009-01-20 Thread David Wilson
On Tue, Jan 20, 2009 at 10:45 AM, Yannick Le Guédart wrote: > > The second query scans the whole comment table which is very dangerous for > production servers. That's not quite true. The second does an index scan; the planner seems to be guessing that it'll fulfill the required limit early in t
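
Seeing the two plans side by side makes the trade-off concrete: with a LIMIT, the planner bets on satisfying the limit early in index order. Illustrative names only:

    EXPLAIN ANALYZE SELECT * FROM comment ORDER BY id DESC;            -- plan without LIMIT
    EXPLAIN ANALYZE SELECT * FROM comment ORDER BY id DESC LIMIT 10;   -- early-abort plan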

Re: [PERFORM] index

2009-01-13 Thread David Wilson
On Wed, Jan 14, 2009 at 12:53 AM, Maksim Sosnovskiy wrote: Will it be more efficient to also add index on IP > Address to speedup lookups by IP? Most likely, especially if the IP address is not the first column in your primary key index. Have you done an explain analyze of your ip lookup query?
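
This is cheap to test directly; the table and column names here are assumed:

    CREATE INDEX access_log_ip_idx ON access_log (ip_address);
    ANALYZE access_log;
    EXPLAIN ANALYZE
    SELECT * FROM access_log WHERE ip_address = '10.0.0.1';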

Re: [PERFORM] slow query

2009-01-12 Thread David Wilson
On Mon, Jan 12, 2009 at 5:59 PM, Scott Marlowe wrote: > I can't figure out where my time's going to. Looks like it's going to: -> Index Scan using users_orgid_idx on users u (cost=0.00..129.52 rows=5 width=271) (actual time=843.825..860.638 rows=0 loops=35) I'd guess the index/pages for users

Re: [PERFORM] Experience with HP Smart Array P400 and SATA drives?

2008-12-10 Thread David Wilson
On Wed, Dec 10, 2008 at 2:13 PM, Greg Smith <[EMAIL PROTECTED]> wrote: > 3ware 9650SE-4LPML is what I'd buy today if I wanted hardware SATA RAID. FWIW, I just put together a system with exactly that (4 320g drives in raid 10) and have been pleased with the results. I won't have any downtime to be

Re: [PERFORM] query plan with index having a btrim is different for strings of different length

2008-12-09 Thread David Wilson
On Tue, Dec 9, 2008 at 2:56 PM, Richard Yen <[EMAIL PROTECTED]> wrote: > In practice, the difference is 300+ seconds when $LASTNAME == 5 chars and <1 > second when $LASTNAME != 5 chars. > > Would anyone know what's going on here? Is there something about the way > btrim works, or perhaps with the
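
Plans over an expression index are driven by statistics gathered on the expression itself, so a first diagnostic is to inspect the collected histogram; pg_stats exposes expression-index statistics under the index's own name (the name below is illustrative):

    SELECT tablename, attname, n_distinct, most_common_vals
    FROM pg_stats
    WHERE tablename = 'name_btrim_idx';  -- the expression index, not the table

A most-common-values list that happens to capture the 5-character surnames and not the others would explain the plan flipping on string length.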

Re: [PERFORM] PostgreSQL OR performance

2008-11-07 Thread David Wilson
On Fri, Nov 7, 2008 at 4:14 AM, Віталій Тимчишин <[EMAIL PROTECTED]> wrote: > "Merge Join (cost=518771.07..62884559.80 rows=1386158171 width=32) (actual > time=30292.802..755751.242 rows=34749 loops=1)" Have you tried increasing the default_statistics_target? The planner is expecting 1.3 billion
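
The planner expects 1.3 billion rows and sees about 35 thousand, which points at too-coarse column statistics (the default default_statistics_target in 8.3 was only 10). A session-level test of that suggestion:

    SET default_statistics_target = 100;  -- session-only, for experimentation
    ANALYZE;                              -- re-sample with the larger target
    -- then re-run EXPLAIN ANALYZE and compare the row estimates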

Re: [PERFORM] inaccurate stats on large tables

2008-09-04 Thread David Wilson
On Thu, Sep 4, 2008 at 2:21 PM, Kiran Mukhyala <[EMAIL PROTECTED]> wrote: >Can someone please tell me how to improve the query planner >estimate? I did try vacuum analyze. Here are some details: Have you tried increasing the statistics target for that table (or in general)? -- -

Re: [PERFORM] Postgres not using array

2008-08-20 Thread David Wilson
On Wed, Aug 20, 2008 at 2:30 PM, André Volpato <[EMAIL PROTECTED]> wrote: > The CPU is 100% used since a few hours ago. Can anyone tell why? Sounds like you've just got a CPU bound query. The data may even all be in cache. Some information on database size, along with EXPLAIN results for your qu