On Feb 10, 2007, at 12:34 PM, Tom Lane wrote:

Arturo Perez <[EMAIL PROTECTED]> writes:
Saturday I changed a table to add a varchar(24) and a TEXT column.

You didn't actually say which of these tables you changed.

Sorry, I changed extended_user.


I'm not very good at reading these, but it looks like sort memory might
be too low?

The runtime seems to be entirely in the index scan on user_tracking.
I'm surprised it doesn't do something to avoid a full-table indexscan
--- in this case, hashing with extended_user as the inner relation would
seem like the obvious thing.  Is user_id a hashable datatype?

user_id is an integer. Here are the table definitions, since this seems like a problem
that won't go away anytime soon.

                Table "public.extended_user"
      Column       |            Type             | Modifiers
-------------------+-----------------------------+-----------
create_date       | timestamp without time zone | not null
email             | character varying(99)       |
first_name        | character varying(99)       | not null
last_name         | character varying(99)       | not null
license_agreement | boolean                     | not null
license_date      | timestamp without time zone |
password          | character varying(32)       | not null
subscription_id   | integer                     | not null
user_id           | integer                     | not null
user_name         | character varying(99)       | not null
active            | boolean                     | not null
phone             | character varying(24)       |
title             | text                        |
Indexes:
    "extended_user_pkey" PRIMARY KEY, btree (user_id) CLUSTER
    "user_name_uq" UNIQUE, btree (user_name)
    "extended_user_subscription_id_idx" btree (subscription_id)
Foreign-key constraints:
"extended_user_subscription_id_fkey" FOREIGN KEY (subscription_id) REFERENCES subscription(subscription_id) DEFERRABLE INITIALLY DEFERRED

\d user_tracking
                                  Table "public.user_tracking"
      Column      |            Type             |                        Modifiers
------------------+-----------------------------+---------------------------------------------------------
action           | character varying(255)      | not null
entry_date       | timestamp without time zone | not null
note             | text                        |
report_id        | integer                     |
session_id       | character varying(255)      | not null
user_id          | integer                     |
user_tracking_id | integer                     | not null default nextval('user_tracking_user_tracking_id_seq'::regclass)
Indexes:
    "user_tracking_pkey" PRIMARY KEY, btree (user_tracking_id)
"user_tracking_monthly_idx" btree (date_part('year'::text, entry_date), date_part('month'::text, entry_date)) "user_tracking_quarterly_idx" btree (date_part('year'::text, entry_date), date_part('quarter'::text, entry_date))
    "user_tracking_report_id_idx" btree (report_id)
    "user_tracking_user_id_idx" btree (user_id)
Foreign-key constraints:
"user_tracking_report_id_fkey" FOREIGN KEY (report_id) REFERENCES article(article_id) DEFERRABLE INITIALLY DEFERRED "user_tracking_user_id_fkey" FOREIGN KEY (user_id) REFERENCES extended_user(user_id) DEFERRABLE INITIALLY DEFERRED



It's possible that adding the columns affected the plan by
making it look like a sort or hash would take too much memory, but if
that were it, then your manual increase of work_mem should have fixed it.
Tis odd.  I don't suppose you know what plan was used before?

                        regards, tom lane

No, sorry. Some further information: on disk the user_tracking table is over 500MB,
and I can't increase shared_buffers (currently 20000) because of SHMMAX limits
(raising that means a scheduled outage and all that, so not for now).
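One thing that doesn't need an outage: unlike shared_buffers, work_mem can be raised per-session or even per-query. A sketch; the 32MB figure is only an illustrative value to be tuned against the actual sort/hash size:

```sql
SHOW work_mem;            -- current setting
SET work_mem = '32MB';    -- session-local; no restart required
EXPLAIN ANALYZE
SELECT ...;               -- re-run the problem query here
RESET work_mem;           -- back to the server default
```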

Any suggestions on how to improve the situation?

tias,
-arturo

