On Wed, 26 Mar 2008 13:02:13 -0700
"Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> The slowness is likely attributed to Vacuum's use of I/O. When vacuum
> is running what does iostat -k 10 say?
Seems to be higher than normal - here is the output with vacuum run
without the other queries and the d
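If vacuum's I/O is what hurts, the cost-based delay settings can throttle it. A minimal sketch, assuming the `user_profile` table from the query later in the thread; `vacuum_cost_delay` and `vacuum_cost_limit` are real GUCs in 8.x-era PostgreSQL:

```sql
-- Sleep 20 ms each time the accumulated page-access cost reaches the
-- limit, trading vacuum speed for less interference with foreground I/O.
SET vacuum_cost_delay = 20;
SET vacuum_cost_limit = 200;
VACUUM ANALYZE user_profile;
```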
I have a query which is
prepare s_18 as select uid from user_profile where name like
$1::varchar and isactive=$2 order by name asc limit 250;
explain analyze execute s_18 ('atxchery%','t');
QUERY
PLAN
--
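One thing worth noting about the prepared statement above: the generic plan for `name LIKE $1` cannot use an ordinary btree index range scan, because the pattern is unknown at plan time. A common workaround (a sketch; the index name is made up) is a `text_pattern_ops` index combined with running the query unprepared so the planner sees the constant pattern:

```sql
-- A btree index with text_pattern_ops supports left-anchored LIKE
-- even in non-C locales (index name is illustrative):
CREATE INDEX user_profile_name_pattern_idx
    ON user_profile (name text_pattern_ops);

-- Executed directly rather than via PREPARE, the planner sees the
-- constant 'atxchery%' and can convert it into an index range scan:
SELECT uid FROM user_profile
WHERE name LIKE 'atxchery%' AND isactive = 't'
ORDER BY name ASC LIMIT 250;
```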
PFC wrote:
> Also, sometimes offline TOASTing is evil:
> Say you have a forum; you want the posts table to be CLUSTER'ed on
> (topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek,
> not 30 seeks. But CLUSTER doesn't touch the data that has been pushed
> offline in the toast
You might try turning "enable_bitmapscan" off; that will avoid the full
index scan and creation of the bitmap.
- Luke
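`enable_bitmapscan` is a per-session planner toggle, so the suggestion can be tried without touching the server config. A minimal sketch, re-using the prepared statement from earlier in the thread:

```sql
-- Turn off bitmap scans for this session only, re-check the plan,
-- then turn it back on; this is a diagnostic knob, not a permanent fix.
SET enable_bitmapscan = off;
EXPLAIN ANALYZE EXECUTE s_18 ('atxchery%', 't');
SET enable_bitmapscan = on;
```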
On 3/27/08 8:34 AM, "Jesper Krogh" <[EMAIL PROTECTED]> wrote:
> Hi
>
> I have a table with around 10 million entries. Rendering the webpage hits
> at most 200 records, which are
So your table is about 80 MB in size, or perhaps 120 MB if it fits in
shared_buffers. You can check it using "SELECT
pg_size_pretty(pg_relation_size('mytable'))"
- Luke
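With the quoting repaired, the size check looks like this; `pg_total_relation_size` additionally counts indexes and the toast table (both functions exist from 8.1 on):

```sql
-- Main heap only:
SELECT pg_size_pretty(pg_relation_size('mytable'));
-- Heap plus indexes plus toast:
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
```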
On 3/26/08 4:48 PM, "Peter Koczan" <[EMAIL PROTECTED]> wrote:
> FWIW, I did a select count(*) on a table with just over 300
In response to "Jesper Krogh" <[EMAIL PROTECTED]>:
> Hi
>
> I have a table with around 10 million entries. Rendering the webpage hits
> at most 200 records, which are distributed well across the 10M, with an
> average of 2 "references" per entry.
>
> Is there any way to speed this query up more than all
Also, sometimes offline TOASTing is evil:
Say you have a forum; you want the posts table to be CLUSTER'ed on
(topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek,
not 30 seeks. But CLUSTER doesn't touch the data that has been pushed
offline in the toast table. So, in tha
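The point about CLUSTER leaving toast data unordered can be observed directly. A sketch under stated assumptions: `posts` is the hypothetical forum table from the example and the index name is made up; the `CLUSTER ... USING` syntax is the 8.3-era form, and toast tables are named after the parent table's OID:

```sql
-- CLUSTER rewrites the main heap in index order...
CLUSTER posts USING posts_topic_post_idx;

-- ...but the associated toast table keeps its own physical order.
-- Compare the two relations' sizes to see how much data lives offline:
SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname IN ('posts', 'pg_toast_' || 'posts'::regclass::oid::text);
```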
Hm, so this table has 10 million entries and it does not fit in 32GB of
RAM ?
Could you investigate:
- the average size of rows in both tables
- a quick description of your table columns, especially the average size
of your TEXT fields, in particular the large one(s) like comments etc.
(don't bother
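The averages PFC asks for can be estimated with `pg_column_size`, which reports the size as actually stored (after compression/toasting). A sketch with placeholder table and column names:

```sql
-- Average on-disk size of a whole row and of a large text column:
SELECT avg(pg_column_size(t.*))      AS avg_row_bytes,
       avg(pg_column_size(comments)) AS avg_comments_bytes
FROM posts t;
```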
On Thu, 27 Mar 2008, Jesper Krogh wrote:
# explain analyze SELECT "me"."created", "me"."created_initials",
"me"."updated", "me"."updated_initials", "me"."start_time",
"me"."end_time", "me"."notes", "me"."id", "me"."sequence_id",
"me"."database", "me"."name", "numbers"."reference_id",
"numbers"."e
Hi
I have a table with around 10 million entries. Rendering the webpage hits
at most 200 records, which are distributed well across the 10M, with an
average of 2 "references" per entry.
Is there any way to speed this query up more than already... yes, running
it subsequently it is blazingly fast, but w
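The "blazingly fast on the second run" behaviour is the buffer cache at work: the first run pays for disk reads, later runs find the blocks in memory. One way to watch this (the `pg_statio` views exist in this era; the table name is a placeholder) is to compare block reads against cache hits:

```sql
-- heap_blks_read = blocks fetched from disk/OS cache,
-- heap_blks_hit  = blocks found already in shared_buffers;
-- run the query twice and watch hits climb while reads stay flat.
SELECT heap_blks_read, heap_blks_hit
FROM pg_statio_user_tables
WHERE relname = 'my_table';
```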
On Mar 26, 2008, at 3:31 PM, Scott Marlowe wrote:
On Wed, Mar 26, 2008 at 1:48 PM, p prince <[EMAIL PROTECTED]> wrote:
is this 'normal'? (loaded question, I know)
Should I be looking to offload expensive reporting queries to read-only
replicants of my database?
Yes, definitely look into se