[GENERAL] Some Autovacuum Questions

2008-02-12 Thread Thomas Chille
Hi!

Some of our clients' databases perform noticeably worse after a while.
We are using autovacuum to vacuum and analyze the tables.

After some analysis of my own it looks like the tables or table
indexes are not being analyzed or vacuumed fully or correctly.

A count(*) query takes several times longer on these databases than on
a freshly dumped/restored copy of the same database on the same
machine.

During the query it looks like Postgres is scanning the hard disk the
whole time and reading in a lot more data than with the freshly
restored database. I can see this in the 'bi' column (blocks read in)
of vmstat and in the up to 99.9% I/O wait state of the CPU.
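A quick way to check whether this is plain table/index bloat is to compare
the on-disk size with the planner's row estimate. This is only a sketch
(pg_relation_size() is available from 8.1 on, and relpages/reltuples are only
as fresh as the last VACUUM or ANALYZE); run it on the slow database and on
the freshly restored copy and compare:

SELECT c.relname,
       c.relpages,                              -- blocks counted at last VACUUM/ANALYZE
       c.reltuples,                             -- estimated live rows at that time
       pg_relation_size(c.oid) / 8192 AS pages_now
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind IN ('r', 'i')
ORDER BY pg_relation_size(c.oid) DESC
LIMIT 20;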

My first question is:
do the known autovacuum bugfixes released after 8.1.4 address the
issues described above?

We are still using 8.1.4 because a database upgrade is a hefty step
for us and our product, involving a lot of customer databases. But if
it would help, we would consider upgrading to 8.1.11 or 8.3. What
would you suggest?

My second question is:
have I possibly configured something improperly?

To answer that, here is a brief overview of our database.

The database mainly stores historical data for a reporting
application. This data is consolidated per day at frequent intervals:
the data of one day is removed from the historical tables and
recalculated from several tables of raw data. Depending on the time of
day and the amount of data it normally takes up to 2 minutes to
summarize and compress one day's data. After a one-minute break it
starts again.

We are talking about up to 3,000 records per day out of up to 3 million
records (and growing) in the whole historical table.

Can autovacuum handle that much changing data with this configuration?

Settings that differ from the default configuration:

vacuum_cost_delay = 200 
vacuum_cost_page_hit = 6
vacuum_cost_limit = 100
autovacuum_naptime = 60 
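These cost-delay settings throttle vacuum quite hard (a 200 ms sleep per 100
cost points), which may explain slow progress on a table of this size. As a
rough sanity check, the trigger point autovacuum uses (per the documentation
of that era) is autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor
* reltuples; a sketch of computing it per table (8.3 additionally exposes the
dead-tuple count as pg_stat_user_tables.n_dead_tup, 8.1 does not):

SELECT c.relname,
       c.reltuples,
       current_setting('autovacuum_vacuum_threshold')::real
         + current_setting('autovacuum_vacuum_scale_factor')::real * c.reltuples
         AS dead_rows_needed_to_trigger_vacuum
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r' AND n.nspname = 'public'
ORDER BY c.reltuples DESC;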

Strangely enough, a manual ANALYZE and VACUUM makes the measured count(*)
query even less performant? See the attached vacuum log.

My third question is:
is it possible to read the vacuuming or analyzing state of a given
table manually? Are there any indicators in the statistics tables,
which the autovacuum daemon also uses, that can show me the progress
of a running autovacuum?
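(A sketch of such a check, assuming a server of 8.2 or later: the per-table
timestamps below do not exist in 8.1, where one is limited to watching
pg_stat_activity and the server log, e.g. the per-index DEBUG output that
VACUUM writes with log_min_messages = debug2.)

SELECT schemaname, relname,
       last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY schemaname, relname;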

We are using a customized Debian Linux on a Pentium 4 at 2.8 GHz.


Thanks for any help!

regards, thomas


vacuum.log
Description: Binary data



Re: [GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-27 Thread Thomas Chille
On Nov 24, 2007 6:20 AM, Tom Lane [EMAIL PROTECTED] wrote:
 What other indexes does that table have?

 regards, tom lane


Hi,

last night it happened again. In the log snippet you can see all indexes
of this table:

[9293 / 2007-11-26 21:46:28 CET]CONTEXT:  SQL statement UPDATE
hst_timerecording SET id_timerecording_join = NULL WHERE
id_timerecording_join = -1
PL/pgSQL function set_id_timerecording_join line 121 at SQL statement
SQL statement UPDATE hst_timerecording SET sales_volume =
NULL, sales_volume_commission = NULL WHERE business_day =  $1  AND
id_employee =  $2 
PL/pgSQL function compress_salaries_day line 168 at SQL statement
SQL statement SELECT  compress_salaries_day( $1 , NULL, NULL)
PL/pgSQL function compress line 460 at perform
[9293 / 2007-11-26 21:46:28 CET]LOCATION:  exec_stmt_raise, pl_exec.c:2110
[9317 / 2007-11-26 21:46:34 CET]DEBUG:  0: index
hst_timerecording_business_day_idx now contains 8640 row versions in
80 pages
[9317 / 2007-11-26 21:46:34 CET]DETAIL:  4469 index row versions were removed.
13 index pages have been deleted, 9 are currently reusable.
CPU 0.00s/0.00u sec elapsed 3.40 sec.
[9317 / 2007-11-26 21:46:34 CET]LOCATION:  lazy_vacuum_index, vacuumlazy.c:736
[9317 / 2007-11-26 21:46:39 CET]DEBUG:  0: index
hst_timerecording_id_employee_idx now contains 8640 row versions in
95 pages
[9317 / 2007-11-26 21:46:39 CET]DETAIL:  4469 index row versions were removed.
10 index pages have been deleted, 6 are currently reusable.
CPU 0.00s/0.00u sec elapsed 4.22 sec.
[9317 / 2007-11-26 21:46:39 CET]LOCATION:  lazy_vacuum_index, vacuumlazy.c:736
[9317 / 2007-11-26 21:46:43 CET]DEBUG:  0: index
hst_timerecording_id_timerecording_idx now contains 8640 row
versions in 97 pages
[9317 / 2007-11-26 21:46:43 CET]DETAIL:  4469 index row versions were removed.
11 index pages have been deleted, 5 are currently reusable.
CPU 0.00s/0.00u sec elapsed 4.40 sec.
[9317 / 2007-11-26 21:46:43 CET]LOCATION:  lazy_vacuum_index, vacuumlazy.c:736

These are again the last log entries of the blocking processes.
Apparently the three indexes were vacuumed successfully?

I think these are the relevant pg_locks entries:

 locktype | database | relation | transaction |  pid  |           mode           | granted
----------+----------+----------+-------------+-------+--------------------------+---------
 relation | 75685778 | 75686189 |     9017862 | 25467 | AccessShareLock          | f
 relation | 75685778 | 75686189 |     9009323 |  9317 | ShareUpdateExclusiveLock | t
 relation | 75685778 | 75686189 |     9009312 |  9293 | AccessShareLock          | t
 relation | 75685778 | 75686189 |     9009312 |  9293 | RowExclusiveLock         | t
 relation | 75685778 | 75686189 |     9009312 |  9293 | AccessExclusiveLock      | f
 relation | 75685778 | 75686189 |     9012978 | 28370 | AccessShareLock          | f

75686189 is the table hst_timerecording. To me it looks like autovacuum
is not releasing the blocking ShareUpdateExclusiveLock?
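A sketch of a query for pairing waiters with holders directly from pg_locks
(column names as in 8.1; rows with a NULL relation, i.e. transaction-id
locks, drop out of the join):

SELECT w.pid  AS waiting_pid,
       w.mode AS wanted_mode,
       h.pid  AS holding_pid,
       h.mode AS held_mode,
       c.relname
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation AND h.granted
JOIN pg_class c ON c.oid = w.relation
WHERE NOT w.granted
  AND w.pid <> h.pid;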

I hope this information helps.

regards,
thomas



Re: [GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-27 Thread Thomas Chille
On Nov 27, 2007 3:14 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
 What are the column headings?  I find this difficult to read.

 Please post the whole of pg_locks.  I may be missing something but I
 think we're missing part of the picture here.  Autovacuum does not seem
 to be locking on anything.

Unfortunately I logged the pg_locks state in a poorly formatted way. I have
now added the headings manually; maybe that is better, or I will have to
wait till tomorrow morning.

But what I think I can see is that the process with pid 9317 is
holding a ShareUpdateExclusiveLock:

 locktype | database | relation | transaction | pid  |           mode           | granted
----------+----------+----------+-------------+------+--------------------------+---------
 relation | 75685778 | 75686189 |     9009323 | 9317 | ShareUpdateExclusiveLock | t

and the process with pid 9293 is waiting to get an AccessExclusiveLock on
the same relation:

 relation | 75685778 | 75686189 |     9009312 | 9293 | AccessShareLock          | f

... and 9317 is the autovacuum process?

After killing it, everything is fine.

I attached the full pg_locks output.

regards,
thomas
   locktype    | database | relation | transactionid | classid | objid | objsubid | transaction |  pid  |           mode           | granted
---------------+----------+----------+---------------+---------+-------+----------+-------------+-------+--------------------------+---------
 relation      | 75685778 | 75686652 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 transactionid |          |          |       9017862 |         |       |          |     9017862 | 25467 | ExclusiveLock            | t
 relation      | 75685778 | 77862554 |               |         |       |          |     9009312 |  9293 | RowExclusiveLock         | t
 relation      | 75685778 | 75686561 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686037 |               |         |       |          |     9012978 | 28370 | AccessShareLock          | t
 relation      | 75685778 | 75686037 |               |         |       |          |     9012978 | 28370 | RowExclusiveLock         | t
 relation      | 75685778 | 75686371 |               |         |       |          |     9012978 | 28370 | AccessShareLock          | t
 relation      | 75685778 | 75686371 |               |         |       |          |     9012978 | 28370 | RowExclusiveLock         | t
 relation      | 75685778 | 75686671 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686189 |               |         |       |          |     9017862 | 25467 | AccessShareLock          | f
 relation      | 75685778 | 75686044 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686138 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686138 |               |         |       |          |     9009312 |  9293 | RowExclusiveLock         | t
 relation      | 75685778 | 75685984 |               |         |       |          |     9017862 | 25467 | AccessShareLock          | t
 relation      | 75685778 | 75686133 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686652 |               |         |       |          |     9012978 | 28370 | AccessShareLock          | t
 transactionid |          |          |       9018078 |         |       |          |     9018078 | 10605 | ExclusiveLock            | t
 relation      | 75685778 | 75685979 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686031 |               |         |       |          |     9012978 | 28370 | AccessShareLock          | t
 relation      | 75685778 | 75686031 |               |         |       |          |     9012978 | 28370 | RowExclusiveLock         | t
 relation      | 75685778 | 75686011 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686176 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 77862554 |               |         |       |          |     9009323 |  9317 | ShareUpdateExclusiveLock | t
 relation      | 75685778 | 75685833 |               |         |       |          |     9012978 | 28370 | AccessShareLock          | t
 relation      | 75685778 | 75686189 |               |         |       |          |     9009323 |  9317 | ShareUpdateExclusiveLock | t
 relation      | 75685778 | 75686727 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 75686199 |               |         |       |          |     9017862 | 25467 | AccessShareLock          | t
 relation      | 75685778 | 77862552 |               |         |       |          |     9009312 |  9293 | AccessShareLock          | t
 relation      | 75685778 | 77862552 |

Re: [GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-27 Thread Thomas Chille
yes, you are right.

These are the 3 involved indexes:

hst_timerecording_business_day_idx      on hst_timerecording
hst_timerecording_id_employee_idx       on hst_timerecording
hst_timerecording_id_timerecording_idx  on hst_timerecording

regards, t

On Nov 27, 2007 4:07 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
 Alvaro Herrera wrote:
  Thomas Chille wrote:
   On Nov 27, 2007 3:14 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
    What are the column headings?  I find this difficult to read.
   
Please post the whole of pg_locks.  I may be missing something but I
think we're missing part of the picture here.  Autovacuum does not seem
to be locking on anything.
  
   Unfortunately i logged the pg_locks-state not well formated. i added
   now the heading manually. maybe it is better or i have to wait till
   tomorrow morning.
  
   but what i think i can see, is that the process with pid 9317 is
   holding a ShareUpdateExclusiveLock
 
  You missed that 9317 is also holding an ExclusiveLock.

 Sorry, I misread it.  My analysis is bogus :-)  I suggest you upgrade
 anyway because there are nasty bugs in the version you are using.

 I assume 77862554, 77862552 and 77862553 are indexes on the 75686189
 table?

 --
 Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4

 "Experience tells us that man has peeled potatoes millions of times, but we
 had to admit the possibility that, in one case out of millions, the potatoes
 might peel the man" (Ijon Tichy)




Re: [GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-27 Thread Thomas Chille
On Nov 27, 2007 4:52 PM, Tom Lane [EMAIL PROTECTED] wrote:
 You didn't happen to note what 9293 was doing did you?  It's living
 fairly dangerously in any case by trying to acquire exclusive lock
 when it already holds a bunch of other lower-level locks; that's a
 recipe for deadlock if I ever saw one.

 regards, tom lane


Ah OK, 9293 is a triggered process that tries to ALTER TABLE ...
DISABLE TRIGGER (another trigger), thereby implicitly trying to acquire an
AccessExclusiveLock, and runs into a deadlock?

So it is better not to use ALTER TABLE ... in trigger functions,
because lower-level locks are always already held?
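As a side note, a sketch of an alternative that avoids the
AccessExclusiveLock entirely, assuming a later upgrade to 8.3 where
session_replication_role exists: setting it to 'replica' keeps ordinary
triggers from firing for this session only, and it requires superuser
rights. On 8.1 the usual workaround is a flag the triggers check
themselves (e.g. a one-row settings table) rather than ALTER TABLE from
inside a trigger.

BEGIN;
SET LOCAL session_replication_role = replica;
UPDATE hst_timerecording
   SET id_timerecording_join = NULL
 WHERE id_timerecording_join = -1;
COMMIT;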

regards,
thomas



[GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-23 Thread Thomas Chille
Hi anybody,

I have run into a kind of deadlock involving autovacuum in just one of our
otherwise identical customer databases.

One autovacuum process got stuck in the middle of the night, and it seemed
to be competing with another SELECT process for an index:

[14398 / 2007-11-21 00:52:04 CET]CONTEXT:  SQL statement UPDATE
hst_timerecording SET id_timerecording_join = NULL WHERE
id_timerecording_join = -1
PL/pgSQL function set_id_timerecording_join line 121 at SQL statement
SQL statement UPDATE hst_timerecording SET sales_volume =
NULL, sales_volume_commission = NULL WHERE business_day =  $1  AND
id_employee =  $2 
PL/pgSQL function compress_salaries_day line 168 at SQL statement
SQL statement SELECT  compress_salaries_day( $1 , NULL, NULL)
PL/pgSQL function compress line 460 at perform
[14398 / 2007-11-21 00:52:04 CET]LOCATION:  exec_stmt_raise, pl_exec.c:2110
[14391 / 2007-11-21 00:52:14 CET]DEBUG:  0: index
hst_timerecording_id_timerecording_idx now contains 8537 row
versions in 61 pages
[14391 / 2007-11-21 00:52:14 CET]DETAIL:  4454 index row versions were removed.
12 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 3.47 sec.
[14391 / 2007-11-21 00:52:14 CET]LOCATION:  lazy_vacuum_index, vacuumlazy.c:736

These are the last log entries for both processes. Over 9 hours
later, I can still see them running in the process list:

14391 ?S  0:00 postgres: autovacuum process
backoffice_db
14398 ?S  0:02 postgres: spoon backoffice_db office(39302)
SELECT waiting

This happens every night, and a dump/restore doesn't help.

I don't take any explicit locks, so I would expect that no deadlock
could occur? And if one did occur, I would expect it to be logged
together with the deadlocked relations?

Can anyone give me a tip, please?

Version: PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc
(GCC) 4.0.2 (Debian 4.0.2-2)

regards
thomas



Re: [GENERAL] autovacuum process blocks without reporting a deadlock

2007-11-23 Thread Thomas Chille
I have to wait till Monday; then I can provide these lines.

thanks,
thomas

On Nov 23, 2007 1:32 PM, Alvaro Herrera [EMAIL PROTECTED] wrote:
 Thomas Chille wrote:

  This are the last log entires for these both processes. Over 9 hours
  later, i can see them allready running  in the process list :
 
  14391 ?S  0:00 postgres: autovacuum process
  backoffice_db
  14398 ?S  0:02 postgres: spoon backoffice_db office(39302)
  SELECT waiting

 Can you paste the relevant lines from pg_locks?

 --
 Alvaro Herrera   http://www.PlanetPostgreSQL.org/
 "I agree with you that absolute truth does not exist...
 The problem is that lies do exist, and you are lying" (G. Lama)




Re: [GENERAL] simple query terminated by signal 11

2006-06-22 Thread Thomas Chille

Thanks for your tips!


Since it is repeatable in your machine, you can compile a new postgres
version with --enable-cassert (enable assertions in code) and
--enable-debug  (enable gcc debug support) configuration. Then run it on
your data and bt the core dump.


I will try to find out the reason for that behavior.

For now I was able to drop the damaged table and restore it from an older
backup, so everything works fine again.

regards,
thomas!



Re: [GENERAL] simple query terminated by signal 11

2006-06-20 Thread Thomas Chille

Hi Qingqing,

thanks for your reply!

The PostgreSQL version is 8.0.4 and it runs on a Debian-based Linux
server with kernel 2.6.11.2.

I have never dealt with a core dump before, but after setting ulimit -c
1024 I got one.

I don't know how to post it, because the size is 1.5 MB?! I'll try to
attach it as gzip.

I also could not install a debugger on the broken system, so I tried to
examine the core dump on another machine (Gentoo) with Postgres 8.0.4
and got the following output:

spoonpc01 ~ # gdb /usr/bin/postgres core
GNU gdb 6.4
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type show copying to see the conditions.
There is absolutely no warranty for GDB.  Type show warranty for details.
This GDB was configured as i686-pc-linux-gnu...(no debugging symbols found)
Using host libthread_db library /lib/tls/libthread_db.so.1.


warning: core file may not match specified executable file.
(no debugging symbols found)
Core was generated by `postgres: postgres backoffice_db [local] SELECT'
.
Program terminated with signal 11, Segmentation fault.
#0  0x080753c2 in DataFill ()
(gdb) where
#0  0x080753c2 in DataFill ()
#1  0xb74253d4 in ?? ()
#2  0x001d in ?? ()
#3  0x08356fa8 in ?? ()
#4  0x08379420 in ?? ()
#5  0x in ?? ()
(gdb)

What I can also say is that I can reproduce the error every time with
the same query.

thanks in advance

On 6/20/06, Qingqing Zhou [EMAIL PROTECTED] wrote:


Thomas Chille [EMAIL PROTECTED] wrote
 Hi List,

 i run in to an error while dumping a db.

 after investigating it, i found a possible corrupted table. but i am not
sure.
 and i dont know how i can repair it? could it be a harddrive error?


 # now the error: SELECT * FROM hst_sales_report WHERE id = 5078867

 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection received:
 host=[local] port=
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2679
 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection authorized:
 user=postgres database=backoffice_db
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2751
 [6216 / 2006-06-19 18:46:23 CEST]LOG:  0: statement: SELECT * FROM
 hst_sales_report WHERE id = 5078867
 [6216 / 2006-06-19 18:46:23 CEST]LOCATION:  pg_parse_query, postgres.c:526
 [3762 / 2006-06-19 18:46:23 CEST]LOG:  0: server process (PID
 6216) was terminated by signal 11
 [3762 / 2006-06-19 18:46:23 CEST]LOCATION:  LogChildExit,
postmaster.c:2358
 [3762 / 2006-06-19 18:46:23 CEST]LOG:  0: terminating any other
 active server processes
 [3762 / 2006-06-19 18:46:23 CEST]LOCATION:  HandleChildCrash,
postmaster.c:2251
 [3985 / 2006-06-19 18:46:23 CEST]WARNING:  57P02: terminating
 connection because of crash of another server process
 [3985 / 2006-06-19 18:46:23 CEST]DETAIL:  The postmaster has commanded
 this server process to roll back the current transaction and exit,
 because another server process exited abnormally and possibly
 corrupted shared memory.

Which version are you using? In any way, except a random hardware error, we
expect Postgres to be able to detect and report the problem instead of a
silent core dump.  So can you gather the core dump and post it here?

Regards,
Qingqing






core.gz
Description: GNU Zip compressed data



[GENERAL] simple query terminated by signal 11

2006-06-19 Thread Thomas Chille

Hi List,

I ran into an error while dumping a DB.

After investigating it, I found a possibly corrupted table, but I am not
sure, and I don't know how I can repair it? Could it be a hard drive error?
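As a side note, a sketch of the usual bisection approach for finding out how
many rows are affected, assuming an index on id so each probe only touches
the given range (SELECT * rather than count(*), so the server really has to
read out the row contents); shrink whichever range still crashes the backend:

SELECT * FROM hst_sales_report WHERE id BETWEEN 5078860 AND 5078866;
SELECT * FROM hst_sales_report WHERE id BETWEEN 5078868 AND 5078900;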

Here are the logs:

# all fine: SELECT * FROM hst_sales_report WHERE id = 5078866

[6208 / 2006-06-19 18:46:17 CEST]LOG:  0: connection received:
host=[local] port=
[6208 / 2006-06-19 18:46:17 CEST]LOCATION:  BackendRun, postmaster.c:2679
[6208 / 2006-06-19 18:46:17 CEST]LOG:  0: connection authorized:
user=postgres database=backoffice_db
[6208 / 2006-06-19 18:46:17 CEST]LOCATION:  BackendRun, postmaster.c:2751
[6208 / 2006-06-19 18:46:17 CEST]LOG:  0: statement: SELECT * FROM
hst_sales_report WHERE id = 5078866
[6208 / 2006-06-19 18:46:17 CEST]LOCATION:  pg_parse_query, postgres.c:526
[6208 / 2006-06-19 18:46:18 CEST]LOG:  0: duration: 117.638 ms
[6208 / 2006-06-19 18:46:18 CEST]LOCATION:  exec_simple_query, postgres.c:1076
[6208 / 2006-06-19 18:46:18 CEST]LOG:  0: disconnection: session
time: 0:00:00.12 user=postgres database=backoffice_db host=[local]
port=
[6208 / 2006-06-19 18:46:18 CEST]LOCATION:  log_disconnections, postgres.c:3447

# now the error: SELECT * FROM hst_sales_report WHERE id = 5078867

[6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection received:
host=[local] port=
[6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2679
[6216 / 2006-06-19 18:46:23 CEST]LOG:  0: connection authorized:
user=postgres database=backoffice_db
[6216 / 2006-06-19 18:46:23 CEST]LOCATION:  BackendRun, postmaster.c:2751
[6216 / 2006-06-19 18:46:23 CEST]LOG:  0: statement: SELECT * FROM
hst_sales_report WHERE id = 5078867
[6216 / 2006-06-19 18:46:23 CEST]LOCATION:  pg_parse_query, postgres.c:526
[3762 / 2006-06-19 18:46:23 CEST]LOG:  0: server process (PID
6216) was terminated by signal 11
[3762 / 2006-06-19 18:46:23 CEST]LOCATION:  LogChildExit, postmaster.c:2358
[3762 / 2006-06-19 18:46:23 CEST]LOG:  0: terminating any other
active server processes
[3762 / 2006-06-19 18:46:23 CEST]LOCATION:  HandleChildCrash, postmaster.c:2251
[3985 / 2006-06-19 18:46:23 CEST]WARNING:  57P02: terminating
connection because of crash of another server process
[3985 / 2006-06-19 18:46:23 CEST]DETAIL:  The postmaster has commanded
this server process to roll back the current transaction and exit,
because another server process exited abnormally and possibly
corrupted shared memory.
[3985 / 2006-06-19 18:46:23 CEST]HINT:  In a moment you should be able
to reconnect to the database and repeat your command.
[3985 / 2006-06-19 18:46:23 CEST]LOCATION:  quickdie, postgres.c:1945
[3762 / 2006-06-19 18:46:23 CEST]LOG:  0: all server processes
terminated; reinitializing
[3762 / 2006-06-19 18:46:23 CEST]LOCATION:  reaper, postmaster.c:2150
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: database system was
interrupted at 2006-06-19 18:42:49 CEST
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4094
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: checkpoint record is at
11/3E77AB1C
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4163
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: redo record is at
11/3E774940; undo record is at 0/0; shutdown FALSE
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4191
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: next transaction ID:
3899415; next OID: 46429694
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4194
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: database system was not
properly shut down; automatic recovery in progress
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4250
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: redo starts at 11/3E774940
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4287
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: record with zero length
at 11/3E77AD20
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  ReadRecord, xlog.c:2496
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: redo done at 11/3E77ACF8
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4345
[6217 / 2006-06-19 18:46:23 CEST]LOG:  0: database system is ready
[6217 / 2006-06-19 18:46:23 CEST]LOCATION:  StartupXLOG, xlog.c:4557

Can anyone help me, please?

regards,
thomas!



[GENERAL] initskript after db start; locks on transactions

2005-08-03 Thread Thomas Chille
Hi!

I have two questions:

1.
What is the best approach to trigger a service script which will clean
up something in the DB after every DB (re)start? Does PG have its own
mechanism for such things, or do I have to use my /etc/init.d/postgresql
script?

2.
Sometimes I read about locks on transactions. Is this only an
internal thing, or can I take such locks myself, and if so, what for? With
the LOCK command I can only lock tables, right?

Thanks for any help,
Thomas!



[GENERAL] 7.3.4 dump/restore error

2005-07-16 Thread Thomas Chille
Hi!

I dumped from a 7.3.4 server on Debian with kernel 2.6 using this command:

pg_dump -F c -Z 9 -f backoffice.dmp.c backoffice


I tried to restore on a similar system using:

pg_restore -vd backoffice backoffice.dmp.c


and ran into the following error:

ERROR:  copy: line 270892, Missing data for column id_tax
..
FATAL:  Socket command type 3 unknown

More detailed messages are attached.


Could somebody please help me fix this problem?

thanks in advance
thomas



Re: [GENERAL] 7.3.4 dump/restore error

2005-07-16 Thread Thomas Chille
I forgot to attach the error messages, but here they are now!

t

On 7/16/05, Thomas Chille [EMAIL PROTECTED] wrote:
 Hi!
 
 I dumped from a 7.3.4 server on debian with kernel 2.6  using this command:
 
 pg_dump -F c -Z 9 -f backoffice.dmp.c backoffice
 
 
 I tried to restore on a similiar system using:
 
 pg_restore -vd backoffice backoffice.dmp.c
 
 
 and stepped into an error:
 
 ERROR:  copy: line 270892, Missing data for column id_tax
 ..
 FATAL:  Socket command type 3 unknown
 
 more detailed messages are attached.
 
 
 could somebody help me please fixing this problem?
 
 thanks in advance
 thomas

client message:

...
pg_restore: restoring data for table tmp_order_data
pg_restore: [archiver (db)] error returned by PQputline
pg_restore: *** aborted because of error

server message (log):

...
2005-07-16 15:42:25 [17889]  DEBUG:  StartTransactionCommand
2005-07-16 15:42:25 [17889]  LOG:  query:
2005-07-16 15:42:25 [17889]  DEBUG:  ProcessUtility
2005-07-16 15:42:34 [17889]  ERROR:  copy: line 270892, Missing data for column 
id_tax
2005-07-16 15:42:34 [17889]  LOG:  statement:



--
-- Data for TOC entry 1754 (OID 1285883)
-- Name: tmp_order_data; Type: TABLE DATA; Schema: public; Owner: postgres
-- Data Pos: 9733254
--

COPY tmp_order_data (ss_id_transaction, ssd_transaction, id_transactiontype, 
business_day, transactiontime, id_company, id_operator, id_dining, 
id_sold_item, id_pos, id_pos_item, selling_price, quantity, id_invoice, 
invoice_number, reason, discount, id_tax, standard_price, id_dining_dest, 
id_pos_dest, id_invoice_dest, operator_name, item_name, plu, is_flow_item, 
pos_name, commodity_name, splitcode, id_sold_item_dest, id_sold_item_remain, 
state, processed, replication_id, id_price_level, id_gift_settle_cat, 
quantity_remain) FROM stdin;
2005-07-16 15:42:34 [17889]  DEBUG:  AbortCurrentTransaction
2005-07-16 15:42:34 [17889]  FATAL:  Socket command type 3 unknown
2005-07-16 15:42:34 [17889]  DEBUG:  proc_exit(0)
2005-07-16 15:42:34 [17889]  DEBUG:  shmem_exit(0)
2005-07-16 15:42:34 [17889]  DEBUG:  exit(0)
2005-07-16 15:42:34 [17771]  DEBUG:  child process (pid 17889) exited with exit 
code 0





[GENERAL] solaris 10 vs Linux Debian on SUN Fire v40z

2005-05-27 Thread Thomas Chille
Hi,

we are planning to use PostgreSQL 8.0 on a Sun Fire V40z server with 4
single-core Opterons. They will later be replaced by dual-core Opterons.

Would you prefer using Solaris or Linux?

Thanks in Advance,
Thomas



[GENERAL] Trigger Commandline Command from Postgresql

2005-04-19 Thread Thomas Chille
Hi List!

What is the best and easiest way to trigger a command-line command from
the database? We want to start a print job.

I think it could work with PL/TclU, but I am not familiar with that language.
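For what it is worth, a minimal sketch using the untrusted PL/Perl language
instead (any untrusted procedural language can shell out); the function name
and the lpr call are examples only, and plperlu has to be installed in the
database first (createlang plperlu <dbname>):

CREATE OR REPLACE FUNCTION print_file(text) RETURNS integer AS '
    # hand the file to lpr; system() returns the shell exit status (0 = ok)
    return system("lpr", $_[0]);
' LANGUAGE plperlu;

-- called from plain SQL or from inside a trigger function:
SELECT print_file('/tmp/printjob.txt');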

Thanks in Advance,
Thomas



Re: [GENERAL] difficult JOIN

2005-01-26 Thread Thomas Chille
Hi Jim,

thanks for your answer!

 Hrm. So for a given tour, employee, you want to pair the first record in
 plan with the first record in work, and the second record in plan with
 the second record in work?

Yes, you understood me correctly; that's what I'm trying to achieve.

 Doing that will be pretty tricky. I'm not sure you can even do it in a
 single SELECT.

I'm not sure anymore either.

 More important, does it even make sense? What if an employee ends up not
 working at all for one of his/her planned times? Every record after that
 would be completely skewed. Wouldn't it make much more sense to either
 assign an ID to each record in the plan table, and correlate records in
 the work table using that ID, or correlate based on begin and end time?

This report is not meant to be the final thing. It should only present the
matched times in an easy way, and the tour leader can then edit the work
times manually. He has to do this after every tour because the employees
often don't use the time-recording unit correctly (I'm talking about
restaurants).

In the meantime I gave every record a position counter (per tour and
employee), derived from begin_time, via a trigger, and merged the tables
this way:

SELECT * FROM work LEFT JOIN plan USING(id_tour, id_employee, counter)
UNION
SELECT * FROM work RIGHT JOIN plan USING(id_tour, id_employee, counter)

That works for me, but if the tour leader changes one work-time record,
the counters have to be recalculated and the order of the records will
change.

This is hard to handle, and I think I will do all the merging
programmatically in the app or with a PL/pgSQL function.
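Side note: the UNION of a LEFT and a RIGHT join above should be expressible
as a single FULL OUTER JOIN, assuming (id_tour, id_employee, counter) is
unique within each table, which also avoids scanning both tables twice:

SELECT *
FROM work
FULL OUTER JOIN plan USING (id_tour, id_employee, counter);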

 BTW, I've never seen the convention id_employee; people generally use
 employee_id. Is it more important to know that you're talking about an
 ID or that you're talking about an employee? Just food for thought.

I'm not the father of this strange naming convention :)

Thank you again,
Thomas



[GENERAL] difficult JOIN

2005-01-25 Thread Thomas Chille
Hi,
I have the following SQL problem:

We are using 2 tables. The first, called 'plan', holds the planned working
times for employees per tour:

plan.id_tour
plan.id_employee
plan.begin_time
plan.end_time

The second table, 'work', stores the actually worked times for employees per
tour:

work.id_tour
work.id_employee
work.begin_time
work.end_time

Employees can be assigned to one tour multiple times. One record is created
for every assignment. They can also work multiple times in one tour.

Now I want to merge this information into one report. I want to join the
first plan entry for an employee in one tour with the first work entry for
that employee in that tour, and so on.

How can I achieve that? A simple JOIN USING (id_tour, id_employee) will not
do it.
Thanks for any hints,
Thomas




[GENERAL] speaks psql unicode?

2005-01-14 Thread Thomas Chille
Hi,

If I try to import data via SQL INSERTs (exported with pgManager) from a
UTF-8 file, I always get parse errors in the first line.

After switching to ASCII and using SET client_encoding TO 'latin1' I can
import all lines, but some Unicode characters are, as expected, damaged.

Now my question: can psql process Unicode files?
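A sketch of what should work, assuming the database was created with a
Unicode-compatible server encoding; \encoding switches the client encoding
for the current psql session, and the file name is only an example:

\encoding UNICODE
\i utf8_dump.sql

or, put directly into the script itself:

SET client_encoding TO 'UNICODE';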

Thanks for any tip,
thomas



[GENERAL] Are subselects treated as atomic single commands?

2004-11-24 Thread Thomas Chille
Hi,
The documentation about the Read Committed transaction isolation level
points out: ... The point at issue here is whether or not within a single
command we see an absolutely consistent view of the database.

Because I don't want to use the Serializable transaction isolation level or
table locks if not necessary, I have one question:

Would the INSERT command with that subselect be treated as one single
command, and can I thereby prevent a race condition between multiple
function calls?

CREATE OR REPLACE FUNCTION public.count_parameter (name, integer)
RETURNS pg_catalog.void AS'
BEGIN
   INSERT INTO parameter (parameter_name, parameter_value)
   SELECT $1, $2 WHERE (
   SELECT COUNT(*) = 0 FROM parameter WHERE parameter_name = $1
   );
  IF NOT FOUND THEN
   UPDATE parameter SET parameter_value = parameter_value + $2
   WHERE parameter_name = $1;
  END IF;
  RETURN;
END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
best regards,
thomas 
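As a sketch of the alternative the PL/pgSQL documentation itself suggests
for this situation ("Trapping Errors"), assuming a server of 8.0 or later
(exception blocks do not exist before 8.0) and a UNIQUE constraint on
parameter_name, so that a concurrent insert raises unique_violation instead
of silently creating a duplicate:

CREATE OR REPLACE FUNCTION public.count_parameter (name, integer)
RETURNS pg_catalog.void AS'
BEGIN
    LOOP
        -- first try to add to an existing row
        UPDATE parameter SET parameter_value = parameter_value + $2
        WHERE parameter_name = $1;
        IF FOUND THEN
            RETURN;
        END IF;
        -- not there: try to insert it; a concurrent insert makes us retry
        BEGIN
            INSERT INTO parameter (parameter_name, parameter_value)
            VALUES ($1, $2);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- another session inserted the row first; loop and update it
        END;
    END LOOP;
END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;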





Re: [GENERAL] Drop Column with Foreign Key Problem

2004-07-03 Thread Thomas Chille
Hi Tom!

Thanks again for your fast help, and it helped indeed. You identified the
problem correctly! I was able to delete these triggers after scanning the
system tables by hand. Unfortunately this can happen again in the future,
and that's why I first tried applying the adddepend script on a freshly
restored DB. But it doesn't work (the output is at the end of this
message). I think it conflicts with the different foreign key styles on the
same relation?

I wrote a small function which can delete these zombie triggers by
constraint name automatically; it has to be called after dropping an
old-style foreign key. Maybe it will help someone else too:

CREATE OR REPLACE FUNCTION public.drop_fk_trigger (name) RETURNS pg_catalog.void AS'
DECLARE
    _FK ALIAS FOR $1;
    _ROW record;
BEGIN

    FOR _ROW IN

        SELECT tgname, relname
        FROM pg_trigger JOIN pg_class ON tgrelid = pg_class.oid
        WHERE tgconstrname = _FK

    LOOP

        -- quote_ident() because the RI trigger names are mixed-case
        EXECUTE ''DROP TRIGGER '' || quote_ident(_ROW.tgname) || '' ON '' || quote_ident(_ROW.relname);

    END LOOP;

    RETURN;
END;
'LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;



Regards
Thomas!



---
The Error Output of contrib/adddepend:

...
Upgrade the Unique Constraint style via:

DROP INDEX itm_prc_pk RESTRICT;
ALTER TABLE sd_item_price ADD CONSTRAINT itm_prc_pk UNIQUE (id_item_price);

DBD::Pg::st execute failed: ERROR:  Cannot drop index tax_pk because other objects 
depend on it at
adddepend line 351.
...
...
The below commands will upgrade the foreign key style.  Shall I execute them?

 DROP TRIGGER RI_ConstraintTrigger_76044427 ON sd_printer;
 DROP TRIGGER RI_ConstraintTrigger_76044426 ON sd_printer;
 DROP TRIGGER RI_ConstraintTrigger_76043914 ON sd_printer;
 DROP TRIGGER RI_ConstraintTrigger_76043913 ON sd_printer;
 DROP TRIGGER RI_ConstraintTrigger_76044425 ON sd_printer_of_production;
 DROP TRIGGER RI_ConstraintTrigger_76043912 ON sd_printer_of_production;

 ALTER TABLE sd_printer_of_production ADD CONSTRAINT fk_sd_print_fk_sd_pri_sd_print 
FOREIGN KEY
(id_printer)
   REFERENCES sd_printer(id_printer) MATCH SIMPLE ON UPDATE RESTRICT ON DELETE 
RESTRICT;

DBD::Pg::st execute failed: ERROR:  Cannot drop trigger RI_ConstraintTrigger_76043914 
on table
sd_printer because constraint fk_sd_print_fk_sd_pri_sd_print on table 
sd_printer_of_production
requires it at adddepend line 287.
...




[GENERAL] Drop Column with Foreign Key Problem

2004-07-01 Thread Thomas Chille (spoon)
Hello!

I have a problem.

A table with a foreign key reference exists. The name of the field with
this FK constraint is 'id_employee', because it references the field
'id_employee' in 'public.sd_employee'. I drop this column of the table with
CASCADE in a script:

ALTER TABLE public.sd_messaging_participant DROP COLUMN id_employee CASCADE;

In a later script I try to do an update on the referenced table:

UPDATE sd_employee SET leave = 1.5;

But it doesn't work. I always get this error:

ERROR:  constraint participant_employee: table sd_messaging_participant
does not have an attribute id_employee

The constraint 'participant_employee' should have been dropped too, due to
the use of CASCADE, but it seems that it is still alive.

Explicitly dropping the constraint 'participant_employee' before dropping
the field does not solve the problem either:

ALTER TABLE public.sd_messaging_participant DROP CONSTRAINT participant_employee CASCADE;

If I try to drop the constraint after dropping the field, Postgres says the
constraint does not exist anymore. But if I try to do the update, it still
produces this error.

If I dump the DB I cannot find the constraint 'participant_employee'
anymore, but 3 triggers belonging to this constraint are still in the DB:

...
CREATE CONSTRAINT TRIGGER participant_employee
AFTER INSERT OR UPDATE ON sd_messaging_participant
FROM sd_employee
NOT DEFERRABLE INITIALLY IMMEDIATE
FOR EACH ROW
EXECUTE PROCEDURE RI_FKey_check_ins ('participant_employee', 'sd_messaging_participant',
'sd_employee', 'UNSPECIFIED', 'id_employee', 'id_employee');
...

That one is for INSERT, and there is one for UPDATE and one for DELETE too.
I have absolutely no idea how this can happen, and I think I could solve
this problem by dropping these 3 triggers. But I don't know the syntax for
dropping such triggers?
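For reference, a sketch of the syntax in question; triggers are dropped per
table, and the names below are taken from the dump excerpt above (if the
leftover triggers carry other names, they can be looked up first via
pg_trigger.tgconstrname):

DROP TRIGGER participant_employee ON sd_messaging_participant;

SELECT t.tgname, c.relname
FROM pg_trigger t JOIN pg_class c ON c.oid = t.tgrelid
WHERE t.tgconstrname = 'participant_employee';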

And does anybody out there know how this could happen?

Thanks for your help,
Thomas!




Re: [GENERAL] PLpgSQL-Problem

2004-04-16 Thread Thomas Chille
Hello Tom,

thank you for your fast reply!

Now I understand that I cannot easily split my scripts into logical units,
but what I don't understand is: is the documentation not correct, or is my
problem not similar to the example in section 37.4.3 at
http://www.postgresql.org/docs/7.4/static/plpgsql-declarations.html ?

thank you for your help,
thomas

