[SQL] Postgres entering zombie state once a week in production environment
We have a medium-scale installation of Postgres 8.3 that is freezing about once a week. I'm looking for any hints on how to diagnose the situation, as nothing is logged.

The system is a matched pair of Sunfire servers, running Debian Etch with a 2.6.18-6-amd64 kernel, PostgreSQL 8.3.4, and DRBD 8.0.13.

During a failed state, pg_stat_activity will show hundreds of statements pending. query_start will show the statements arriving at a normal rate (a few per second), but clearly they never complete. The bulk of these statements are a simple select that starts each web session, a statement that generally completes in tenths of a millisecond. Restarting postgres restores normal operation, at the loss of all chance of figuring out what was wrong.

postgresql.conf has these customizations:

  log_destination = 'syslog'
  log_min_error_statement = error
  log_min_duration_statement = 5000

Where can I go from here? I'm turning on log_checkpoints now, though all I see is "postgres[14412]: [4-1] LOG: checkpoint starting: time" with no actual time printed.
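Next time, before anyone restarts postgres, something along these lines should at least capture what each backend claims to be doing (a rough sketch; procpid, current_query and waiting are the 8.3 column names, later releases renamed them):

  -- List non-idle backends, oldest first, and whether they are stuck on a lock.
  SELECT procpid,
         usename,
         waiting,
         now() - query_start AS running_for,
         current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'
  ORDER BY query_start;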
Re: [SQL] Postgres entering zombie state once a week in production environment
On Tuesday 14 April 2009 09:25:54 Bryce Nesbitt wrote:
> We have a medium-scale installation of Postgres 8.3 that is freezing
> about once a week. I'm looking for any hints on how to diagnose the
> situation, as nothing is logged.
>
> The system is a matched pair of Sunfire servers, running Debian Etch with
> a 2.6.18-6-amd64 kernel, PostgreSQL 8.3.4, and DRBD 8.0.13.
>
> During a failed state, pg_stat_activity will show hundreds of statements
> pending. query_start will show the statements arriving at a normal rate
> (a few per second), but clearly they never complete. The bulk of these
> statements are a simple select that starts each web session, a statement
> that generally completes in tenths of a millisecond. Restarting postgres
> restores normal operation, at the loss of all chance of figuring out
> what was wrong.
>
> postgresql.conf has these customizations:
>   log_destination = 'syslog'
>   log_min_error_statement = error
>   log_min_duration_statement = 5000
>
> Where can I go from here? I'm turning on log_checkpoints now, though
> all I see is "postgres[14412]: [4-1] LOG: checkpoint starting: time"
> with no actual time printed.

Did you check for locks?

  SELECT * FROM pg_locks;

  SELECT l.locktype, c.relname, l.pid, l.mode
  FROM pg_locks l, pg_class c
  WHERE l.relation = c.oid;

Were you able to issue other select commands during the problem?
Were you able to start psql and do some simple command?
Have you ruled out the possibility of the application being locked for other reasons (limits, network failure, etc.)?

-- Achilleas Mantzios
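You could also pair each waiting backend with whoever already holds a conflicting lock on the same table, with something like this (a rough sketch; it only covers relation-level locks):

  -- For every ungranted lock, show which pid already holds a granted lock
  -- on the same relation.
  SELECT waiting.pid  AS waiting_pid,
         holder.pid   AS holding_pid,
         c.relname,
         waiting.mode AS wanted_mode,
         holder.mode  AS held_mode
  FROM pg_locks waiting
  JOIN pg_locks holder
    ON holder.database = waiting.database
   AND holder.relation = waiting.relation
   AND holder.granted
  JOIN pg_class c ON c.oid = waiting.relation
  WHERE NOT waiting.granted;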
Re: [SQL] Postgres entering zombie state once a week in production environment
On Tue, Apr 14, 2009 at 12:25 AM, Bryce Nesbitt wrote:
> We have a medium-scale installation of Postgres 8.3 that is freezing about
> once a week. I'm looking for any hints on how to diagnose the situation, as
> nothing is logged.
>
> The system is a matched pair of Sunfire servers, running Debian Etch with a
> 2.6.18-6-amd64 kernel, PostgreSQL 8.3.4, and DRBD 8.0.13.
>
> During a failed state, pg_stat_activity will show hundreds of statements
> pending. query_start will show the statements arriving at a normal rate (a
> few per second), but clearly they never complete. The bulk of these
> statements are a simple select that starts each web session, a statement
> that generally completes in tenths of a millisecond. Restarting postgres
> restores normal operation, at the loss of all chance of figuring out what
> was wrong.

What does pg_locks say during this time? Specifically about locks that aren't granted?
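Even something as simple as this (an untested sketch) would show whether anything is stuck waiting:

  -- Any row here is a backend blocked on a lock it has not been granted.
  SELECT pid, locktype, mode, relation::regclass AS relation
  FROM pg_locks
  WHERE NOT granted;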
Re: [SQL] ENUM vs DOMAIN vs FK'ed lookup table
Dirk Jagdmann wrote:
>> When you need to choose between enum types, domain types or lookup
>> tables with foreign keys, what do you usually choose?
>
> When I have a column with valid values that I know when writing my tables
> and that will *never* change, I use an enum. For example a human gender
> type (and remember that there are 4 values for human sex if you want to
> model it completely). Otherwise I use a simple table with a primary key of
> type 'text' that is used as a foreign key in the other table, so I can
> change/alter the valid values later. No join needed! Remember that PK/FK
> do not always have to be of type 'serial'.
>
> The reason is that for a user of the SQL language there is hardly any
> difference between using an ENUM and a text type, since both are strings
> which must be enclosed in single quotes. Of course under the hood, for the
> PostgreSQL language parser and interpreter, there is a difference, but we
> can ignore that. To revisit your example I would do it this way:
>
>   CREATE TABLE code_type (
>       t text not null primary key
>   );
>   insert into code_type values
>       ('Unapproved'), ('ApprovedByEmail'), ('ApprovedByAdmin');
>
>   CREATE TABLE codes (
>       code_id    integer,
>       code_value integer,
>       code_type  text not null references code_type
>   );

I did a little bit of testing. I created two types, one domain and one enum:

  CREATE DOMAIN domain_code_type AS character varying
      NOT NULL
      CONSTRAINT domain_code_type_check
      CHECK (VALUE IN ('ApprovedByAdmin', 'Unapproved', 'ApprovedByEmail'));

  CREATE TYPE enum_code_types AS ENUM
      ('Unapproved', 'ApprovedByEmail', 'ApprovedByAdmin');

And two tables:

  CREATE TABLE codes__enum (
      code_id   integer NOT NULL,
      code_type enum_code_types NOT NULL,
      CONSTRAINT codes__enum_pk PRIMARY KEY (code_id)
  );

  CREATE TABLE codes__domain (
      code_id   integer NOT NULL,
      code_type domain_code_type NOT NULL,
      CONSTRAINT codes_domain_pk PRIMARY KEY (code_id)
  );

I filled them with 12,000,000 rows of the very same data. Here are the size differences:

  enum_test=# select * from pg_size_pretty(pg_relation_size('codes__enum'));
   pg_size_pretty
  ----------------
   415 MB
  (1 row)

  enum_test=# select * from pg_size_pretty(pg_relation_size('codes__domain'));
   pg_size_pretty
  ----------------
   520 MB
  (1 row)

Inserting data into the _domain table is a bit slower than into the _enum table.

I also created two tables 'linked' with primary keys; one used code_type::varchar as the FK field, while the other used code_type::integer as the FK field (but then, in the referenced table, I need to have a code_type_description or whatever). I filled them with the same data. The size of the table where the FK is varchar is the same as the _domain table, while the table where the FK is integer is the same as the _enum table.

Enums are a pain to administer - changing an enum type definition for a table where I have 12,000,000 rows takes almost forever! Domains are a bit more flexible, yes. And, yes, 'person sex' is a very good example of when to use enums. My example is a poor fit for enums and domains. A separate table referenced with a foreign key seems like the best solution, because I have no idea whether the client might want to add 'ApprovedButThenRevoked' or 'ApprovedWhenNotDrunk' code types in the future.

Mike
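For what it's worth, the "takes almost forever" part is because 8.3 has no way to extend an enum in place (ALTER TYPE ... ADD VALUE only appeared in 9.1), so adding a value means converting the column to a new type, which rewrites all 12,000,000 rows. Roughly, using the type and table names from the test above (a sketch, not timed here):

  -- Create a replacement enum with the extra value...
  CREATE TYPE enum_code_types_v2 AS ENUM
      ('Unapproved', 'ApprovedByEmail', 'ApprovedByAdmin', 'ApprovedButThenRevoked');

  -- ...then switch the column over, which rewrites the whole table.
  ALTER TABLE codes__enum
      ALTER COLUMN code_type TYPE enum_code_types_v2
      USING code_type::text::enum_code_types_v2;

With the lookup-table approach the same change is a single-row INSERT into code_type.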
Re: [SQL] Postgres entering zombie state once a week in production environment
Thanks for the thoughts on what to check. Unfortunately, the priority of the people responding to the incidents has been to get the system live again. I will add these items to a list that, hopefully, will be run through prior to restarting Postgres.

Achilleas Mantzios wrote:
> Did you check for locks?
>
>   SELECT * FROM pg_locks;
>
>   SELECT l.locktype, c.relname, l.pid, l.mode
>   FROM pg_locks l, pg_class c
>   WHERE l.relation = c.oid;
>
> Were you able to issue other select commands during the problem?
> Were you able to start psql and do some simple command?
> Have you ruled out the possibility of the application being locked for
> other reasons (limits, network failure, etc.)?
Re: [SQL] changing multiple pk's in one update
Glenn Maynard wrote (Mon, 13 Apr 2009 17:09:49 -0400):
> (JMdict? I was playing with importing that into a DB a while back, but the
> attributes in that XML are such a pain--and then my email died while I was
> trying to get those changed, and I never picked it up again.)
>
> On Mon, Apr 13, 2009 at 1:20 PM, Stuart McGraw wrote:
>> 1 to the number of sentences in the entry) and the sentence text. The pk
>> is of course the entry id and the sense number. There are other tables
>> that have fk's to the senses.
>
> Your PK is a composite of (entry, order)? Won't your foreign keys elsewhere
> all break when you shift the order around?
>
>> I guess I could add an "order"[1] column and use the sense number as a
>> surrogate partial key to avoid the need for key renumbering, but all the
>> APIs (and the normal human way of thinking) are based on "sense number 1
>> of entry x", "sense number 2 of entry y", so one would need to maintain
>> "order" as a gapless sequence (or add a new mapping layer to map from/to
>> an arbitrary monotonic sequence to a 1,2,3,... sequence) -- the gain
>> doesn't seem that big.
>
> Why not do this in the straightforward way: three separate fields: a
> regular, sequential PK; an FK to the entry; and an order number. Add an
> index on (entry_key, order_number). It's a little more expensive since you
> have a new column and index (the PK), but in a table with a lot of plain
> text that's probably insignificant. Now you can use the plain PK for your
> FK's.

I'd agree with this approach (see the sketch below). I have a number of tables which are sensitive to arbitrary ordering, and they sound roughly similar to your use-case (though my tables are probably smaller). My approach is to create a string column in the table which permits defining arbitrary ordering. I use a string field because it's easier for me to stuff (by hand) new ordered records in between other existing records. But an integer would work just as well, so long as you make sure you keep enough space between the integers (e.g. 1000, 2000, 3000). Also, if your ordered list is truly "ordinal" (each record is either 1st, 2nd, 3rd, etc. in a single list) you could just use 1,2,3,4 for the ordering, but then you have to mess with two records in order to swap the positions of (say) items 2 and 3. Of course you can do this pretty easily inside a transaction, and you don't have to worry about the mess of moving PK's.

Steve
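A rough sketch of what Glenn describes, with made-up table and column names (your real schema will differ): a stable surrogate key for other tables to reference, plus a separate order column that can be renumbered freely.

  CREATE TABLE senses (
      sense_id   serial PRIMARY KEY,              -- stable key; point FKs at this
      entry_id   integer NOT NULL REFERENCES entries,
      sense_ord  integer NOT NULL,                -- 1, 2, 3 ... within the entry
      sense_text text NOT NULL
  );
  CREATE INDEX senses_entry_ord ON senses (entry_id, sense_ord);

  -- Renumbering senses within an entry touches only sense_ord; sense_id and
  -- every foreign key pointing at it stay put.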
Re: [SQL] Postgres entering zombie state once a week in production environment
Scott Marlowe wrote:
> What does pg_locks say during this time? Specifically about locks
> that aren't granted?

I don't know, yet. Though these events go on for 15-30 minutes before the postgres restart, and no deadlocks are detected, so I don't think it is locks.
Re: [SQL] Postgres entering zombie state once a week in production environment
On Tue, Apr 14, 2009 at 2:59 PM, Bryce Nesbitt wrote:
> Scott Marlowe wrote:
>> What does pg_locks say during this time? Specifically about locks
>> that aren't granted?
>
> I don't know, yet. Though these events go on for 15-30 minutes before the
> postgres restart, and no deadlocks are detected, so I don't think it is
> locks.

Ummm, deadlocks <> locks blocking other queries.
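A lock can block hundreds of queries indefinitely without ever producing a deadlock, and 8.3 won't log anything about it by default. During the next event, something along these lines (a sketch, using the 8.3 column names) would tell you immediately whether the piled-up backends are lock waits:

  -- Backends currently stuck waiting on a lock, oldest first.
  SELECT procpid, usename, now() - query_start AS waiting_for, current_query
  FROM pg_stat_activity
  WHERE waiting
  ORDER BY query_start;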
[SQL] Postgres process resident size does not drop after killing statement
Every so often our production Postgres 8.3 system will get a statement that runs for a few hours, or a few days, or more, and needs to be killed dead. We kill it with pg_cancel_backend(), CPU usage of the process immediately drops, and the process starts serving other statements. But the curious thing is that the resident size does not drop. Is this normal? Something to be concerned about?

Here's a mild example (only 2.8 gigs):

  # top
    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  20108 postgres  16   0 4305m 2.8g 2.8g D    4  8.6  3:12.50 postgres: production production 10.100.4.11(44200) SELECT
  16306 postgres  15   0 4248m 1.5g 1.5g S    0  4.7  0:47.70 postgres: jira jira 10.100.5.12(57517) idle
  28472 postgres  15   0 4249m 1.4g 1.4g S    2  4.6  0:16.18 postgres: production production 10.100.4.11(48457) idle

  # uname -a
  Linux 2.6.18-6-amd64 #1 SMP Thu Dec 25 21:08:54 UTC 2008 x86_64 GNU/Linux
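For reference, this is roughly how we go about cancelling the runaway statement (a sketch using the 8.3 column names procpid/current_query; the one-hour cutoff and the user filter are just for illustration):

  -- Cancel any statement from the production user that has been running
  -- for more than an hour. pg_cancel_backend() stops the statement but
  -- leaves the backend process (and its resident memory) alive.
  SELECT procpid, pg_cancel_backend(procpid)
  FROM pg_stat_activity
  WHERE usename = 'production'
    AND current_query <> '<IDLE>'
    AND now() - query_start > interval '1 hour';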