Re: [PERFORM] Best hardware

2005-06-04 Thread Mischa Sandberg
Quoting Bernd Jagla <[EMAIL PROTECTED]>: > ... the speed of the head of the HD is actually > limiting. Also, I only experimented with RAID5, and heard that > RAID10 will be good for reading but not writing. Au contraire. RAID5 is worse than RAID10 for writing, because it has the extra implicit r
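The write penalty behind this point can be made concrete. For a small (partial-stripe) write, RAID5 must first read the old data block and the old parity block in order to compute the new parity, then write both back; RAID10 just writes both mirror copies. A worked count of disk I/Os per logical write:

```
RAID5 small write:  read(old data) + read(old parity)
                  + write(new data) + write(new parity)  = 4 disk I/Os
RAID10 write:       write(mirror copy 1) + write(mirror copy 2) = 2 disk I/Os
```

Full-stripe writes on RAID5 avoid the reads, but random small writes (the common database pattern) pay the 4-I/O penalty.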

[PERFORM] Best hardware

2005-06-04 Thread Bernd Jagla
Hi there, And sorry for bringing this up again, but I couldn't find any recent discussion on the best hardware, and I know it actually depends on what you are doing... So this is what I had in mind: Our database is going to consist of about 100 tables or so of which only a handful will be real

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread hubert lubaczewski
On Sat, Jun 04, 2005 at 02:07:52PM +0200, PFC wrote: > don't forget to mention all the index columns in the order by, or > the planner won't use it. of course. i understand the concept. actually i'm kind of ashamed i did not try it before. anyway - thanks for the great tip. depesz

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread PFC
select advert_id from acr_cache where category_id = ? and region_id = ? order by category_id, region_id, XXX limit 20; don't forget to mention all the index columns in the order by, or the planner won't use it.
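A sketch of the tip above, assuming one of the five sortable fields is a timestamp called added_at (the real column names are not given in the thread). Listing the leading index columns in ORDER BY lets the planner satisfy the sort with a single index scan and stop after 20 rows:

```sql
-- Hypothetical composite index for the added_at ordering:
CREATE INDEX acr_cache_cat_reg_added ON acr_cache (category_id, region_id, added_at);

-- The ORDER BY repeats the equality columns so the index order matches exactly:
SELECT advert_id
FROM acr_cache
WHERE category_id = 42 AND region_id = 7
ORDER BY category_id, region_id, added_at
LIMIT 20;
```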

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread hubert lubaczewski
On Sat, Jun 04, 2005 at 01:18:04PM +0200, PFC wrote: > Then write your query as : > select advert_id from acr_cache where category_id = ? and region_id = ? > order by category_id, region_id, XXX limit 20; this is a great idea - i'll check it out definitely. depesz

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread hubert lubaczewski
On Sat, Jun 04, 2005 at 07:17:17PM +0800, Christopher Kings-Lynne wrote: > Without reading too hard, I suggest having a quick look at contrib/ltree > module in the PostgreSQL distribution. It may or may not help you. acr_cache doesn't care about trees. and - since i have acr_cache - i don't have

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread Christopher Kings-Lynne
Without reading too hard, I suggest having a quick look at contrib/ltree module in the PostgreSQL distribution. It may or may not help you. Chris hubert lubaczewski wrote: hi first let me draw the outline. we have a database which stores "adverts". each advert is in one category, and one or m
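For readers unfamiliar with contrib/ltree: it stores a materialized path per tree node and answers ancestor/descendant queries with an indexable operator. A minimal sketch, with table and column names invented for illustration:

```sql
-- ltree stores each node's full path; '<@' means "is a descendant of (or equal to)".
CREATE TABLE category (
    id   integer PRIMARY KEY,
    path ltree NOT NULL          -- e.g. 'a.c.e' for node e under c under a
);
CREATE INDEX category_path_gist ON category USING gist (path);

-- Everything under category c (descendants of a.c, including a.c itself):
SELECT id FROM category WHERE path <@ 'a.c';
```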

Re: [PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread PFC
select advert_id from acr_cache where category_id = ? and region_id = ? order by XXX {asc|desc} limit 20; where XXX is one of 5 possible fields, timestamp, timestamp, text, text, numeric Create 5 indexes on ( category_id, region_id, a field ) where "a field" is one of your 5
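The suggestion above amounts to five composite indexes, one per sortable field, each led by the two equality columns. Column names here are assumptions (the thread only says two timestamps, two texts, and a numeric):

```sql
-- One index per ORDER BY field; PostgreSQL can scan any of these
-- backwards for the DESC variants, so 5 indexes cover all 10 orderings.
CREATE INDEX acr_cache_cr_created ON acr_cache (category_id, region_id, created_at);
CREATE INDEX acr_cache_cr_updated ON acr_cache (category_id, region_id, updated_at);
CREATE INDEX acr_cache_cr_title   ON acr_cache (category_id, region_id, title);
CREATE INDEX acr_cache_cr_descr   ON acr_cache (category_id, region_id, description);
CREATE INDEX acr_cache_cr_price   ON acr_cache (category_id, region_id, price);
```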

Re: [PERFORM] Query plan for very large number of joins

2005-06-04 Thread philb
>> Despite being fairly restricted in scope, >> the schema is highly denormalized hence the large number of tables. > >Do you mean normalized? Or do you mean you've pushed the superclass >details down onto each of the leaf classes? Sorry, I meant normalized, typing faster than I'm thinking here:)

[PERFORM] strategies for optimizing read on rather large tables

2005-06-04 Thread hubert lubaczewski
hi first let me draw the outline. we have a database which stores "adverts". each advert is in one category, and one or more "regions". regions and categories each form a tree structure. assume this category tree:

      a
     / \
    b   c
       / \
      d   e

if any given advert is in category "e". it means it
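The acr_cache mentioned in the replies appears to denormalize this containment: one row per (advert, ancestor-or-self category, region), so reads never walk the tree. A hypothetical sketch of that idea, with all names and ids invented:

```sql
-- Assumed shape of the poster's cache table, not the actual schema:
CREATE TABLE acr_cache (
    advert_id   integer NOT NULL,
    category_id integer NOT NULL,   -- the advert's category and each of its ancestors
    region_id   integer NOT NULL,
    PRIMARY KEY (advert_id, category_id, region_id)
);

-- An advert in category e (path a -> c -> e) and region 7 gets one row
-- per category on the path, so lookups by any ancestor find it directly:
INSERT INTO acr_cache (advert_id, category_id, region_id)
VALUES (123, 5, 7),   -- e
       (123, 3, 7),   -- c
       (123, 1, 7);   -- a
```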