Steve <[EMAIL PROTECTED]> writes:
> On Thu, 12 Apr 2007, Tom Lane wrote:
>> I'm still not having any luck reproducing the failure here. Grasping at
>> straws again, I wonder if it's got something to do with the order in
>> which the planner examines the indexes --- which is OID order. Could
you ...
I didn't notice anyone address this for you yet. There is a tool in
contrib/pg_buffercache whose purpose in life is to show you what the shared
buffer cache has inside it. The documentation in that directory leads
through installing it. The additional variable you'll likely never know is
what ...
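For anyone following along, the usual way to summarize that view is to join it against pg_class; a sketch along the lines of the sample query in the contrib README (relation names will differ, of course):

    SELECT c.relname, count(*) AS buffers
    FROM pg_class c
    JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
    JOIN pg_database d ON b.reldatabase = d.oid
                      AND d.datname = current_database()
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;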
Here you go:
detail_summary_b_record_status_idx
detail_summary_batch_id_idx
detail_summary_batchnum_idx
detail_summary_carrier_id_idx
detail_summary_duplicate_id_idx
detail_summary_e_record_status_idx
detail_summary_encounter_id_idx
detail_summary_encounternum_idx
detail_summary_export_d...
Steve <[EMAIL PROTECTED]> writes:
> Just dropping that index had no effect, but there's a LOT of indexes that
> refer to receipt. So on a hunch I tried dropping all indexes that refer
> to receipt date and that worked -- so it's the indexes that contain
> receipt date that are the problem.
I'm ...
On Tue, 10 Apr 2007, Steve wrote:
- I've set up a configuration (I'll show important values below), and I'm
wondering if there's any way I can actually see the distribution of memory in
the DB and how the memory is being used.
I didn't notice anyone address this for you yet. There is a tool ...
Steve <[EMAIL PROTECTED]> writes:
> Okay -- I started leaving indexes on one by one.
> ...
> So does this mean I should experiment with dropping those indexes?
No, I think this means there's a planner bug to fix. I haven't quite
scoped out what it is yet, though.
regards, tom lane
Okay -- I started leaving indexes on one by one.
The explain broke when the detail_summary_receipt_encounter_idx index was
left on (receipt, encounter_id).
Just dropping that index had no effect, but there's a LOT of indexes that
refer to receipt. So on a hunch I tried dropping all indexes that refer
to receipt date and that worked -- so it's the indexes that contain
receipt date that are the problem.
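Worth noting for anyone reproducing this: DDL is transactional in PostgreSQL, so the index-dropping experiment can be run inside a transaction and rolled back without rebuilding anything. A sketch (the index name is from this thread, but the query and id list are abbreviated stand-ins for the real ones):

    BEGIN;
    DROP INDEX detail_summary_receipt_encounter_idx;  -- holds an exclusive lock until ROLLBACK
    EXPLAIN
    SELECT encounter_id
    FROM detail_summary ds
    WHERE receipt >= '1998-12-30'
      AND encounter_id = ANY ('{8813186,8813187,8813188}');
    ROLLBACK;  -- the index is back, nothing was rebuilt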
Steve <[EMAIL PROTECTED]> writes:
> Either way, it runs perfectly fast. So it looks like the indexes are
> confusing this query like you suspected. Any advice?
Wow --- sometimes grasping at straws pays off. I was testing here with
just a subset of the indexes to save build time, but I bet that ...
If the other indexes are removed, with enable_seqscan=on:
Bitmap Heap Scan on detail_summary ds (cost=154.10..1804.22 rows=1099 width=4)
  Recheck Cond: (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813...
Steve <[EMAIL PROTECTED]> writes:
> With enable_seqscan=off I get:
> -> Bitmap Index Scan on detail_summary_receipt_encounter_idx (cost=0.00..4211395.17 rows=1099 width=0)
>      Index Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY ...
> The explain analyze is pending ...
Here's the explain analyze with seqscan = off:
Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 rows=1099 width=4) (actual time=121288.825..121305.908 rows=112 loops=1)
  Recheck Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY ('{8813186,8813187,8813188,88131...
Table size: 16,037,728 rows
With enable_seqscan=off I get:
Bitmap Heap Scan on detail_summary ds (cost=4211395.20..4213045.32 rows=1099 width=4)
  Recheck Cond: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,881319...
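(For reference, enable_seqscan is just a session setting, so the two plans can be compared back-to-back; the column name and id list below are abbreviated guesses at the real query:)

    SET enable_seqscan = off;  -- this session only; makes seqscans look prohibitively expensive
    EXPLAIN ANALYZE
    SELECT encounter_id
    FROM detail_summary ds
    WHERE receipt >= '1998-12-30'
      AND encounter_id = ANY ('{8813186,8813187,8813188}');
    RESET enable_seqscan;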
It's a redhat enterprise machine running AMD x64 processors.
Linux ers3.dddcorp.com 2.6.9-42.0.10.ELsmp #1 SMP Fri Feb 16 17:13:42 EST
2007 x86_64 x86_64 x86_64 GNU/Linux
It was compiled by me, straight up, nothing weird at all, no odd compiler
options or whatever :)
So yeah :/ I'm quite ba...
Steve <[EMAIL PROTECTED]> writes:
> ... even if I force it to use the indexes
> (enable_seqscan=off) it doesn't make it any faster really :/
Does that change the plan, or do you still get a seqscan?
BTW, how big is this table really (how many rows)?
regards, tom lane
Steve <[EMAIL PROTECTED]> writes:
> Here's my planner parameters:
I copied all these, and my 8.2.x still likes the bitmap scan a lot
better than the seqscan. Furthermore, I double-checked the CVS history
and there definitely haven't been any changes in that area in REL8_2
branch since 8.2.3. So ...
Here's my planner parameters:
seq_page_cost = 1.0 # measured on an arbitrary scale
random_page_cost = 1.5 # same scale as above
cpu_tuple_cost = 0.001 # same scale as above
cpu_index_tuple_cost = 0.0005 # same scale as above
cpu_operator_cost = ...
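These cost settings can also be tried per session before being committed to postgresql.conf, e.g.:

    SET seq_page_cost = 1.0;
    SET random_page_cost = 1.5;
    SET cpu_tuple_cost = 0.001;
    SET cpu_index_tuple_cost = 0.0005;
    SHOW random_page_cost;  -- confirm the session override took effect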
Scott Marlowe <[EMAIL PROTECTED]> writes:
> So there's a misjudgment of the number of rows returned by a factor of
> about 88. That's pretty big. Since you had the same number without the
> receipt date (I think...) then it's the encounter_id that's not being
> counted right.
I don't think that's ...
So there's a misjudgment of the number of rows returned by a factor of
about 88. That's pretty big. Since you had the same number without the
receipt date (I think...) then it's the encounter_id that's not being
counted right.
Try upping the stats target on that column and running analyze again ...
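In concrete terms that would be something like the following (table and column names are the ones from this thread; 1000 is the maximum statistics target in 8.2):

    ALTER TABLE detail_summary ALTER COLUMN encounter_id SET STATISTICS 1000;
    ANALYZE detail_summary;
    -- then re-run the EXPLAIN ANALYZE and compare the estimate (rows=1099)
    -- against the actual count (rows=112)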
[ itch... ] That code is just completely wrong, because the contents
of a TEXT datum aren't guaranteed null-terminated. It'd be better to
invoke bttextcmp and negate its result.
That's not relevant to your immediate problem, but if you've noticed
any strange behavior with your text_revop indexes ...
Steve <[EMAIL PROTECTED]> writes:
> Datum ddd_text_revcmp(PG_FUNCTION_ARGS){
> char* arg1=(char*)VARDATA(PG_GETARG_TEXT_P(0));
> char* arg2=(char*)VARDATA(PG_GETARG_TEXT_P(1));
> if((*arg1) != (*arg2)){
> PG_RETURN_INT32(*arg2 - *arg1);
> } ...
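For reference, Tom's bttextcmp suggestion would look roughly like this (a sketch, not compiled or tested against the 8.2 headers):

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"

    PG_MODULE_MAGIC;
    PG_FUNCTION_INFO_V1(ddd_text_revcmp);

    /* Reverse-order text comparator: let the built-in bttextcmp do the
     * work (it copes with TOAST and with data that is not null-terminated),
     * then negate its result. */
    Datum
    ddd_text_revcmp(PG_FUNCTION_ARGS)
    {
        Datum cmp = DirectFunctionCall2(bttextcmp,
                                        PG_GETARG_DATUM(0),
                                        PG_GETARG_DATUM(1));

        PG_RETURN_INT32(-DatumGetInt32(cmp));
    }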
On 12.04.2007, at 15:58, Jason Lustig wrote:
Wow! That's a lot to respond to. Let me go through some of the
ideas... First, I just turned on autovacuum, I forgot to do that.
I'm not seeing a major impact though. Also, I know that it's not
optimal for a dedicated server.
Hmm, why not? Have ...
On Thu, 2007-04-12 at 17:04, Steve wrote:
> >> Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
> >> Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197...
Seq Scan on detail_summary ds (cost=0.00..1902749.83 rows=9962 width=4)
Filter: ((receipt >= '1998-12-30'::date) AND (encounter_id = ANY ('{8813186,8813187,8813188,8813189,8813190,8813191,8813192,8813193,8813194,8813195,8813196,8813197,8813198,8813199,8813200,8813201,8813202,8813203,8813204...
Oy vey ... I hope this is a read-mostly table, because having that many
indexes has got to be killing your insert/update performance.
Hahaha yeah these are read-only tables. Nightly inserts/updates.
Takes a few hours, depending on how many records (between 4 and 10
usually). But during the ...
Steve <[EMAIL PROTECTED]> writes:
> Here's the table and its indexes. Before looking, a note; there's
> several 'revop' indexes, this is for sorting. The customer insisted on,
> frankly, meaninglessly complicated sorts. I don't think any of that
> matters for our purposes here though :)
Oy vey ...
Hi all,
Wow! That's a lot to respond to. Let me go through some of the
ideas... First, I just turned on autovacuum, I forgot to do that. I'm
not seeing a major impact though. Also, I know that it's not optimal
for a dedicated server. It's not just for postgres, it's also got our
apache server ...
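One thing worth double-checking on 8.2: autovacuum silently does nothing unless the stats-collector settings it depends on are enabled as well. A minimal postgresql.conf sketch:

    # 8.2: autovacuum requires the row-level statistics collector
    stats_start_collector = on
    stats_row_level = on
    autovacuum = on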
On Thu, 2007-04-12 at 16:03, Steve wrote:
> Hey there;
>
> On a Postgres 8.2.3 server, I've got a query that is running very slow in
> some cases. With some work, I've determined the 'slow part' of the query.
> :) This is a query on a table with like 10 million rows or something like
> that.
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that before.
Also, I assume the restriction on receipt date is very nonselective?
It doesn't seem to have changed the estimated rowcount ...
Steve <[EMAIL PROTECTED]> writes:
> On a Postgres 8.2.3 server, I've got a query that is running very slow in
> some cases.
Could we see the exact definition of that table and its indexes?
It looks like the planner is missing the bitmap scan for some reason,
but I've not seen a case like that before.
Hey there;
On a Postgres 8.2.3 server, I've got a query that is running very slow in
some cases. With some work, I've determined the 'slow part' of the query.
:) This is a query on a table with like 10 million rows or something like
that. encounter_id is an integer and receipt is of type 'date' ...
Jeff Frost wrote:
You know, I should answer emails at night...
Indeed you shouldN'T ;-)
Carlos
On Wednesday 04 April 2007 07:51, Arnau wrote:
> Hi Ansgar,
>
> > On 2007-04-04 Arnau wrote:
> >> Josh Berkus wrote:
> Is there anything similar in PostgreSQL? The idea behind this is how
> I can do in PostgreSQL to have tables where I can query on them very
> often something like e...
On Thu, 12 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:
On 12.04.2007, at 08:59, Ron wrote:
Depends. As I said - if the whole DB fits into the remaining space,
and a lot of website backend DBs do, it might just work out. But this
seems not to be the case ...
On Thu, 2007-04-12 at 10:19, Guido Neitzer wrote:
> On 12.04.2007, at 08:59, Ron wrote:
>
> Depends. As I said - if the whole DB fits into the remaining space,
> and a lot of website backend DBs do, it might just work out. But this
> seems not to be the case - either the site is chewing on se...
Steve <[EMAIL PROTECTED]> writes:
- What is temp_buffers used for exactly?
Temporary tables. Pages of temp tables belonging to your own backend
don't ever get loaded into the main shared-buffers arena, they are read
into backend-local memory. temp_buffers is the max amount (per backend)
of ...
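A small illustration of the per-backend behavior (the value is arbitrary; the setting can only be changed before the session touches its first temp table):

    SET temp_buffers = '64MB';           -- must happen before any temp-table access
    CREATE TEMP TABLE scratch (id int);  -- from here on, temp_buffers is frozen for this session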
On Thu, 12 Apr 2007, Jason Lustig wrote:
0 <-- BM starts here
10  0  180  700436  16420  91740  0  0  0  176  278  2923  59  41  0  0  0
11  0  180  696736  16420  91740  0  0  0    0  254  2904  57  43  0  0  0
12  0  180  691272  16420  91740  0  0  0    0  255  3043  60 ...
On 12.04.2007, at 08:59, Ron wrote:
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver.
Yep.
I know the typical IO demands of that scenario better than I
sometimes want to.
:-(
Yep. Same here. ;-)
2= 1GB of RAM + effectively 1 160GB HD = p*ss ...
At 10:08 AM 4/12/2007, Guido Neitzer wrote:
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful, to find out WHY he
has so much IO?
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver.
I ...
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful, to find out WHY he
has so much IO?
Have I missed that or has nobody suggested finding the slow queries
(when you have much IO on them, they might be slow at least with a
high ...
On 4/4/07, Alexandre Vasconcelos <[EMAIL PROTECTED]> wrote:
We have an application that is supposed to sign documents and store them
somewhere. The file sizes may vary from KB to MB. Developers are
arguing about the reasons to store files directly on the operating system
file system or in the database, as large objects ...
1= RAID 1 improves data =integrity=, not IO performance.
Your HD IO performance is essentially that of 1 160GB HD of whatever
performance one of those HDs have.
(what kind of HDs are they anyway? For instance 7200rpm 160GB HDs
are not particularly "high performance")
BEST case is streaming IO ...