Re: [ADMIN] Disk Utilization Increases And Time for Vacuum Increases.
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I have looked at the code and it seems VACUUM FULL does collect index
> free space into the FSM via index_vacuum_cleanup(). Is that true?

Yes.

			regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
    (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [ADMIN] Disk Utilization Increases And Time for Vacuum Increases.
I have looked at the code and it seems VACUUM FULL does collect index
free space into the FSM via index_vacuum_cleanup(). Is that true?

---

Bruce Momjian wrote:
> Bruce Momjian wrote:
> > > > AFAIK, VACUUM doesn't reclaim space taken up by indexes. In fact,
> > > > the more deletes you do, the larger the index space gets, and the
> > > > *slower* the index performs. A periodic REINDEX cleans up a lot
> > > > of problems.
> > >
> > > Note that in 7.4 the fix for this is in, so if you have a chance to
> > > test it out with your indexes and their growth problem please test
> > > it to see if it works right.
> > >
> > > I haven't tested 7.4 beta1 very hard yet, just on my workstation,
> > > with relatively low-level stuff.
> >
> > I am not sure we have completely dealt with index growth in 7.4. What
> > we have new in 7.4 is the ability for non-FULL VACUUM to collect info
> > on free index pages and reuse them.
> >
> > However, VACUUM FULL does not shrink the index unless those pages are
> > the last pages of the file. (Could it shift free pages to the end and
> > then truncate the index?) Also, does VACUUM FULL on an index put the
> > empty index pages in the FSM map? It doesn't do that for heap pages
> > because there are none after vacuum, but there might be free index
> > pages that we should record.
>
> I haven't seen anyone comment on this. I think we need to address this
> for 7.4. (Collecting index free space in FSM is new in 7.4.) I am
> concerned that people who are only running VACUUM FULL at night will
> have problems with index growth. Also, why can't we move totally-empty
> index pages to the end and truncate the file?
>
> > Interesting: I found a reference to doing an auto-reindex as part of
> > VACUUM FULL:
> >
> > #ifdef NOT_USED
> >     /*
> >      * reindex in VACUUM is dangerous under WAL. ifdef out until it
> >      * becomes safe.
> >      */
> >     if (reindex)
> >     {
> >         vac_close_indexes(nindexes, Irel);
> >         Irel = (Relation *) NULL;
> >         activate_indexes_of_a_table(onerel, false);
> >     }
> > #endif   /* NOT_USED */
>
> Will we ever reindex as part of VACUUM FULL?

--
  Bruce Momjian                   |  http://candle.pha.pa.us
  [EMAIL PROTECTED]               |  (610) 359-1001
  + If your life is a hard drive, |  13 Roberts Road
  + Christ can be your backup.    |  Newtown Square, Pennsylvania 19073
Re: [ADMIN] experimenting with coalesce, strange EXPLAIN results
Josh Goldberg <[EMAIL PROTECTED]> writes:
> I was experimenting with moving some of my case logic out of the
> application and into sql and found a strange EXPLAIN result with one of
> my COALESCE statements. Why is each subselect listed twice? This is in
> 7.3.4 on freebsd 4.8.

Prior to 7.4, COALESCE is just a parse-time macro for a CASE construct,
and so the items do appear twice in the CASE.

			regards, tom lane
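[Editor's note: a minimal illustration of the expansion Tom describes; the table and column names here are invented for the example, not taken from the thread.]

```sql
-- Before 7.4, the parser expands COALESCE(a, b) into a CASE construct,
-- repeating each argument: once in the WHEN test, once in the THEN
-- result.  So:
SELECT COALESCE(x, 0) FROM t;
-- is parsed as if you had written:
SELECT CASE WHEN x IS NOT NULL THEN x ELSE 0 END FROM t;
```

With a subselect as an argument, the subselect therefore appears twice in the parse tree, and EXPLAIN shows two InitPlans even though only one of each pair actually executes.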
[ADMIN] age of statistics
Is there a way to query the database to find out how old the current
table statistics are? That is, when was the ANALYZE command last run on
a given table? It seems like there ought to be a table somewhere in
pg_catalog, but I can't find it in the PostgreSQL Administrator
documentation, nor by searching Google.

Thanks,
--
colin | perfection is reached, not when there is no longer anything to add,
roald | but when there is no longer anything to take away.
      |                                        (antoine de saint-exupery)
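[Editor's note: the 7.x catalogs do not record ANALYZE times, but for readers on later PostgreSQL releases the statistics views expose this directly; the table name below is a placeholder.]

```sql
-- Later PostgreSQL releases (not 7.x) record per-table ANALYZE times
-- in the statistics collector views:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_all_tables
WHERE relname = 'my_table';   -- 'my_table' is a placeholder
```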
Re: [ADMIN] Seeking information about backup/recovery
Murthy Kambhampaty wrote:
> > Anyway, unfortunately, WAL doesn't contain enough information to
> > recover without having the file system files in some consistent
> > state, even if that state is old. In fact, the files have to be
> > consistent as of the last checkpoint.
>
> This, I'm not so sure of. On Linux, an xfs_freeze -f pauses the
> filesystem and flushes all writes to disk; a snapshot of $PGDATA's
> filesystem taken while it is frozen gives a consistent copy of $PGDATA
> as of that instant (similar functionality is obtained by compiling the
> VFS locking patch into the kernel).
>
> The discussion at
> http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshots&q=b
> includes log files from postmaster startup and shutdown on the backup
> $PGDATA, and AFAICT, WAL recovery does not roll back to the last
> checkpoint. If there is a better way to test this, let me know, and I'm
> happy to do it.

Yes, this has been discussed before. If you snapshot xfs, you can then
restore using that snapshot. Of course, you will be recovering from WAL
as though there was a crash, but that works fine.
Re: [ADMIN] Seeking information about backup/recovery
On Thursday, September 04, 2003 18:12, Bruce Momjian
[mailto:[EMAIL PROTECTED]] wrote:
>Murthy Kambhampaty wrote: ...
>I assume you are contrasting _any_ point-in-time recovery to recovery
>up to the crash point, right?

Right.

>Anyway, unfortunately, WAL doesn't contain enough information to
>recover without having the file system files in some consistent state,
>even if that state is old. In fact, the files have to be consistent as
>of the last checkpoint.

This, I'm not so sure of. On Linux, an xfs_freeze -f pauses the
filesystem and flushes all writes to disk; a snapshot of $PGDATA's
filesystem taken while it is frozen gives a consistent copy of $PGDATA
as of that instant (similar functionality is obtained by compiling the
VFS locking patch into the kernel).

The discussion at
http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshots&q=b
includes log files from postmaster startup and shutdown on the backup
$PGDATA, and AFAICT, WAL recovery does not roll back to the last
checkpoint. If there is a better way to test this, let me know, and I'm
happy to do it.

Thanks,
	Murthy

PS: From the man page for xfs_freeze: "The -f flag requests the
specified XFS filesystem to be frozen from new modifications. When this
is selected, all ongoing transactions in the filesystem are allowed to
complete, new write system calls are halted, other calls which modify
the filesystem are halted, and all dirty data, metadata, and log
information are written to disk. Any process attempting to write to the
frozen filesystem will block waiting for the filesystem to be
unfrozen."

When $PGDATA/pg_xlog/ is on different disks from $PGDATA's, the XFS
filesystem still allows online BAR with the following sequence:
1. rsync -avr --delete $PGDATA/ ::mirror_pgdata/
2. xfs_freeze -f $PGDATA/pg_xlog/
3. xfs_freeze -f $PGDATA
4. create snapshots and mount
5. xfs_freeze -u $PGDATA
6. xfs_freeze -u $PGDATA/pg_xlog/
7. rsync -avr --delete --exclude=pg_xlog/ $PGDATA/ ::mirror_pgdata/
8. rsync -avr --delete $PGDATA/pg_xlog/ ::mirror_pgdata/pg_xlog/
9. remove snapshots

By freezing both volumes ($PGDATA/pg_xlog/'s and $PGDATA's) during
snapshot creation, a "consistent" copy is assured. Freezing the pg_xlog
first, and unfreezing it last, makes sure that no CHECKPOINT operations
are missed, ensuring the consistency of the copy.
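[Editor's note: the sequence above, sketched as a shell script. The paths, LVM volume names, and the rsync destination are placeholders, and `run` merely echoes each command, so the ordering can be inspected without touching any filesystem; change `run`'s body to `"$@"` to execute for real.]

```shell
#!/bin/sh
# Sketch of the freeze/snapshot/rsync online-backup sequence above.
# All paths and volume names are placeholders.
PGDATA=${PGDATA:-/var/lib/pgsql/data}
XLOGFS=${XLOGFS:-$PGDATA/pg_xlog}
DEST=${DEST:-backup::mirror_pgdata}

run() { echo "$@"; }   # echo only; replace with "$@" to execute

backup() {
    run rsync -avr --delete "$PGDATA/" "$DEST/"            # 1. warm the mirror
    run xfs_freeze -f "$XLOGFS"                            # 2. freeze xlog first
    run xfs_freeze -f "$PGDATA"                            # 3. freeze data
    run lvcreate -s -L 1G -n pgdata_snap /dev/vg0/pgdata   # 4. snapshot (placeholder)
    run xfs_freeze -u "$PGDATA"                            # 5. thaw data
    run xfs_freeze -u "$XLOGFS"                            # 6. thaw xlog last
    run rsync -avr --delete --exclude=pg_xlog/ "$PGDATA/" "$DEST/"     # 7.
    run rsync -avr --delete "$PGDATA/pg_xlog/" "$DEST/pg_xlog/"        # 8.
    # 9. remove the snapshots once the rsyncs complete
}
backup
```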
Re: [ADMIN] max_fsm_pages
"Jouneau Luc" <[EMAIL PROTECTED]> writes:
> INFO: Pages 11944: Changed 11944, Empty 0; Tup 32: Vac 1931936, Keep 0, UnUsed 0.
> Total CPU 1.57s/1.90u sec elapsed 42.01 sec.
> It seems that FSM traced all of my deletes since it mentions removes in
> 8641+3303=11944 pages (>10000, which was set for max_fsm_pages).

The numbers output by VACUUM don't tell you anything about how many of
those pages will actually be remembered in the FSM. In practice,
max_fsm_relations=100 is too small; we've bumped the default to 1000
for 7.4.

> My first aim was to know if max_fsm_pages was set for each table or
> for a whole database.

Whole database. That default setting is very small, too (enough for a
database of maybe a couple hundred meg, at most).

			regards, tom lane
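[Editor's note: for reference, both knobs live in postgresql.conf and, as Tom says, apply cluster-wide rather than per table. The values below are illustrative only, not recommendations.]

```
# postgresql.conf -- FSM settings are per-cluster, not per-table.
# Size max_fsm_pages to cover the pages freed between vacuums across
# all databases in the cluster; illustrative values only.
max_fsm_pages = 20000
max_fsm_relations = 1000
```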
[ADMIN] experimenting with coalesce, strange EXPLAIN results
I was experimenting with moving some of my case logic out of the
application and into sql and found a strange EXPLAIN result with one of
my COALESCE statements. Why is each subselect listed twice? This is in
7.3.4 on freebsd 4.8.

cms3=# explain analyze select coalesce
((SELECT thumb FROM content_group_descriptor
   WHERE content_group_id = 140 AND language_id = 1),
 (SELECT tc.file FROM thumbnail_content tc, ccl
   WHERE tc.parent_content_id = cid AND ccgid = 140 limit 1));

                               QUERY PLAN
 Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.98..0.98 rows=1 loops=1)
   InitPlan
     ->  Seq Scan on content_group_descriptor  (cost=0.00..2.78 rows=1 width=4) (actual time=0.10..0.10 rows=0 loops=1)
           Filter: ((content_group_id = 140) AND (language_id = 1))
     ->  Seq Scan on content_group_descriptor  (cost=0.00..2.78 rows=1 width=4) (never executed)
           Filter: ((content_group_id = 140) AND (language_id = 1))
     ->  Limit  (cost=0.00..8.43 rows=1 width=12) (actual time=0.35..0.42 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..14.94 rows=2 width=12) (actual time=0.34..0.40 rows=2 loops=1)
                 ->  Seq Scan on content_collection  (cost=0.00..5.46 rows=1 width=4) (actual time=0.24..0.26 rows=3 loops=1)
                       Filter: (content_collection_group_id = 140)
                 ->  Index Scan using parent_file_key on thumbnail_content tc  (cost=0.00..6.83 rows=1 width=8) (actual time=0.02..0.02 rows=1 loops=3)
                       Index Cond: (tc.parent_content_id = "outer".content_id)
     ->  Limit  (cost=0.00..8.43 rows=1 width=12) (actual time=0.32..0.39 rows=1 loops=1)
           ->  Nested Loop  (cost=0.00..14.94 rows=2 width=12) (actual time=0.31..0.37 rows=2 loops=1)
                 ->  Seq Scan on content_collection  (cost=0.00..5.46 rows=1 width=4) (actual time=0.24..0.26 rows=3 loops=1)
                       Filter: (content_collection_group_id = 140)
                 ->  Index Scan using parent_file_key on thumbnail_content tc  (cost=0.00..6.83 rows=1 width=8) (actual time=0.01..0.02 rows=1 loops=3)
                       Index Cond: (tc.parent_content_id = "outer".content_id)
 Total runtime: 1.14 msec
(19 rows)
Re: [ADMIN] Recovery assistence....
"Marc Mitchell" <[EMAIL PROTECTED]> writes:
> Running 7.3.2 under RedHat we suffered a disk crash. Following was
> Postgres log output from after the crash:
> 2003-09-04 09:24:51 [1873] LOG: redo starts at 0/B3129C68
> 2003-09-04 09:24:51 [1873] PANIC: Invalid page header in block 2696
> of 329110

Try updating to 7.3.4 --- it gets out of some startup problems that
earlier 7.3.* releases fail to cope with. If that doesn't help, I think
your next step is "pg_resetxlog".

			regards, tom lane
Re: [ADMIN] Seeking information about backup/recovery
Murthy Kambhampaty wrote:
> IMHO, while point-in-time recovery would be great to have, there are
> many applications that benefit from having online backup and recovery
> without needing log roll-forward. For example, Oracle contrasts "Full
> Database Point-in-time Recovery", the feature mentioned by Bruce, with
> "Tablespace Point in Time Recovery (TSPITR)" [1]. With postgresql and
> WAL, you can implement the equivalent of TSPITR by using filesystem
> snapshot capabilities provided by certain storage appliances (NetApp
> Filer?) or by an operating system service in Linux (Logical Volume
> Manager, "LVM") to "... make periodic saves of data files to another
> disk, a tape or another host and also archive the WAL log files". Once
> you have the copy, you can put it on disk, start a postgresql server
> (call it the backup server) on it, and then dump and restore the data
> to a running server ("production server") on the same or a different
> host.
>
> This procedure allows you to recover a recent copy of any database,
> schema or table, depending on the frequency with which you take
> snapshots. By using rsync to copy the snapshots over to a backup disk
> volume you can get incremental backup capability, which shrinks the
> backup window to a few minutes (and less than an hour even for
> databases with multiple gigabytes worth of INSERTs). With such a small
> backup window, one can minimize data loss in case of disk failure or
> corruption by increasing backup frequency.

I assume you are contrasting _any_ point-in-time recovery to recovery
up to the crash point, right?

Anyway, unfortunately, WAL doesn't contain enough information to
recover without having the file system files in some consistent state,
even if that state is old. In fact, the files have to be consistent as
of the last checkpoint.
Re: [ADMIN] Seeking information about backup/recovery
IMHO, while point-in-time recovery would be great to have, there are
many applications that benefit from having online backup and recovery
without needing log roll-forward. For example, Oracle contrasts "Full
Database Point-in-time Recovery", the feature mentioned by Bruce, with
"Tablespace Point in Time Recovery (TSPITR)" [1]. With postgresql and
WAL, you can implement the equivalent of TSPITR by using filesystem
snapshot capabilities provided by certain storage appliances (NetApp
Filer?) or by an operating system service in Linux (Logical Volume
Manager, "LVM") to "... make periodic saves of data files to another
disk, a tape or another host and also archive the WAL log files". Once
you have the copy, you can put it on disk, start a postgresql server
(call it the backup server) on it, and then dump and restore the data to
a running server ("production server") on the same or a different host.

This procedure allows you to recover a recent copy of any database,
schema or table, depending on the frequency with which you take
snapshots. By using rsync to copy the snapshots over to a backup disk
volume you can get incremental backup capability, which shrinks the
backup window to a few minutes (and less than an hour even for databases
with multiple gigabytes worth of INSERTs). With such a small backup
window, one can minimize data loss in case of disk failure or corruption
by increasing backup frequency.
There have been several discussions of this, including here
http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=backup+routine&q=b
here
http://marc.theaimsgroup.com/?l=postgresql-admin&w=2&r=1&s=LVM+snapshots&q=b
and here
http://marc.theaimsgroup.com/?l=postgresql-general&m=104610149723362&w=4

Cheers,
	Murthy

[1] http://storacle.princeton.edu:9001/oracle8-doc/server.805/a58396/ch13.htm

>-----Original Message-----
>From: Bruce Momjian [mailto:[EMAIL PROTECTED]]
>Sent: Thursday, September 04, 2003 14:16
>To: Mary Edie Meredith
>Cc: [EMAIL PROTECTED]; osdldbt-general
>Subject: Re: [ADMIN] Seeking information about backup/recovery
>
>Right. We need point-in-time recovery. Someone is working on it and we
>hope to have it for 7.5.
>
>---
>
>Mary Edie Meredith wrote:
>> Dear PostgreSQL admin'ers,
>>
>> Our group at OSDL has been porting our DBT test kits to PostgreSQL.
>> In getting up to speed on PostgreSQL, we have not found a way to
>> recover from a serious database failure (disk corruption, disk/volume
>> failure).
>>
>> The following scenario described in the 7.3 docs and the "PostgreSQL
>> 7.4devel Documentation" is exactly what we are looking for:
>>
>> "WAL offers the opportunity for a new method for database on-line
>> backup and restore (BAR). To use this method, one would have to make
>> periodic saves of data files to another disk, a tape or another host
>> and also archive the WAL log files. The database file copy and the
>> archived log files could be used to restore just as if one were
>> restoring after a crash. Each time a new database file copy was made
>> the old log files could be removed. Implementing this facility will
>> require the logging of data file and index creation and deletion; it
>> will also require development of a method for copying the data files
>> (operating system copy commands are not suitable)."
>>
>> Since it states that WAL "offers the opportunity" for what we need,
>> we concluded this is _not_ supported in the 7.3 and 7.4 beta releases.
>>
>> Is this conclusion correct?
>>
>> Of course we would like to do even more - point in time recovery,
>> incremental backup, but for now just the basics.
>>
>> Point in Time recovery is listed as "Urgent" on the TODO list, and
>> incremental backups are listed under "Admin", so it appears that those
>> items are recognized as important. What we cannot understand is why
>> the basic backup/restore described above is not on the TODO list.
>>
>> Can anyone enlighten us?
>>
>> --
>> Mary Edie Meredith <[EMAIL PROTECTED]>
>> Open Source Development Lab
Re: [ADMIN] Seeking information about backup/recovery
Right. We need point-in-time recovery. Someone is working on it and we
hope to have it for 7.5.

---

Mary Edie Meredith wrote:
> Dear PostgreSQL admin'ers,
>
> Our group at OSDL has been porting our DBT test kits to PostgreSQL.
> In getting up to speed on PostgreSQL, we have not found a way to
> recover from a serious database failure (disk corruption, disk/volume
> failure).
>
> The following scenario described in the 7.3 docs and the "PostgreSQL
> 7.4devel Documentation" is exactly what we are looking for:
>
> "WAL offers the opportunity for a new method for database on-line
> backup and restore (BAR). To use this method, one would have to make
> periodic saves of data files to another disk, a tape or another host
> and also archive the WAL log files. The database file copy and the
> archived log files could be used to restore just as if one were
> restoring after a crash. Each time a new database file copy was made
> the old log files could be removed. Implementing this facility will
> require the logging of data file and index creation and deletion; it
> will also require development of a method for copying the data files
> (operating system copy commands are not suitable)."
>
> Since it states that WAL "offers the opportunity" for what we need, we
> concluded this is _not_ supported in the 7.3 and 7.4 beta releases.
>
> Is this conclusion correct?
>
> Of course we would like to do even more - point in time recovery,
> incremental backup, but for now just the basics.
>
> Point in Time recovery is listed as "Urgent" on the TODO list, and
> incremental backups are listed under "Admin", so it appears that those
> items are recognized as important. What we cannot understand is why the
> basic backup/restore described above is not on the TODO list.
>
> Can anyone enlighten us?
>
> --
> Mary Edie Meredith <[EMAIL PROTECTED]>
> Open Source Development Lab
Re: [ADMIN] YOUR SITES SEARCH FEATURE DOES NOT WORK!
k, try the search engine now ... I just found a 'missing index' on one
of the critical tables, and I'm getting more realistic speeds on search
results ...

On Thu, 4 Sep 2003, Marc G. Fournier wrote:
> go for it, point it at archives.postgresql.org though, since that is
> the major part of the search engine
>
> On Thu, 4 Sep 2003, scott.marlowe wrote:
> > On Tue, 2 Sep 2003, Marc G. Fournier wrote:
> > >
> > > okay, first off ... what URL? I've been working on, and testing,
> > > archives all week, and altho getting things down below a 2min
> > > search appears to be near impossible (anyone know something
> > > *better* then mnogosearch that will handle >250k URLs and growing
> > > ... ? we tried ht/Dig before, and the resources it suck'd were
> > > outrageous) ...
> >
> > Hi Marc, I use htdig at work to index a rather large web site and it
> > doesn't use resources all that bad. My htdig db directory is only
> > 250 megs and it screams. Are you sure it was that bad?
> >
> > Would it be ok if I pointed a test instance of htdig at
> > postgresql.org at night and indexed it to get an idea of how much
> > space it'll take to index it? I'm a huge fan of htdig, warts and
> > all, so I'd kinda like to see if it can be made to work.
[ADMIN] Problem under Cygwin after Cygwin update
Hello,

today I updated my Cygwin and now PostgreSQL does not run. This is the
error:

IpcMemoryCreate: shmget(key=5432001, size=1499136, 03600) failed:
Function not implemented

The newest cygipc was installed (2.00)!
--
Thomas Wegner
[ADMIN] upgrade
Hello, All!

I'm planning to upgrade from 7.2.1 to 7.3.4. Will pg_dumpall and
pg_restore be sufficient for proper data migration?

Mark
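[Editor's note: for what it's worth, the usual dump-and-reload sequence of that era; note that pg_dumpall emits plain SQL, which is reloaded with psql rather than pg_restore (pg_restore applies only to pg_dump's custom/tar archive formats). Paths below are placeholders.]

```
pg_dumpall > /backup/db.out        # dump with the old 7.2.1 server running
# ... stop the old server, install 7.3.4, initdb a fresh data directory,
#     and start the new server ...
psql -f /backup/db.out template1   # reload everything into the new server
```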
[ADMIN] driver download problems
I have been trying to download the driver for Windows. When I finally
get onto the FTP server (or mirror) and get the zipped msi it will not
unzip. I have tried several versions of the file but they are all the
same. I am trying to use WinZip.

Extracting to "C:\"
Use Path: no
Overlay Files: yes
warning [C:\\psqlodbc-07_03_0100.zip]: extra 2029 bytes at beginning or
  within Zip file (attempting to process anyway)
Error in file #1: bad Zip file offset (Error local header signature not
  found): 0 (attempting to re-compensate)
Extracting psqlodbc.msi
Error: invalid compressed data to inflate
[ADMIN] max_fsm_pages
Hi everybody,

I'm trying to understand how vacuumdb and the free space map work
together for retrieving free space after delete or update statements.

max_fsm_pages and max_fsm_relations are set to their default values
(respectively 10000 and 100).

I do my tests on a table containing at the beginning 1931968 tuples
stored in 11944 pages:

INFO: Pages 11944: Changed 11944, Empty 0; Tup 1931968: Vac 0, Keep 0, UnUsed 0

I delete 1931936 tuples (leaving 32 tuples). I thought the number of
pages affected by this delete would overflow the FSM storage capacity,
but when I run a vacuumdb verbose analyze on this table, I get the
following log information:

INFO: Removed 1397914 tuples in 8641 pages.
CPU 0.76s/0.89u sec elapsed 25.72 sec.
INFO: Removed 534022 tuples in 3303 pages.
CPU 0.37s/0.29u sec elapsed 7.62 sec.
INFO: Pages 11944: Changed 11944, Empty 0; Tup 32: Vac 1931936, Keep 0, UnUsed 0.
Total CPU 1.57s/1.90u sec elapsed 42.01 sec.

It seems that the FSM traced all of my deletes, since it mentions
removals in 8641+3303=11944 pages (>10000, which was set for
max_fsm_pages).

Can someone explain to me where I'm wrong?

My first aim was to know if max_fsm_pages was set for each table or for
a whole database. Per table, I suppose, but is that a good supposition?
Where is this FSM stored (on disk)?

Thanks for your answers.

Luc
[ADMIN] Seeking information about backup/recovery
Dear PostgreSQL admin'ers,

Our group at OSDL has been porting our DBT test kits to PostgreSQL. In
getting up to speed on PostgreSQL, we have not found a way to recover
from a serious database failure (disk corruption, disk/volume failure).

The following scenario described in the 7.3 docs and the "PostgreSQL
7.4devel Documentation" is exactly what we are looking for:

"WAL offers the opportunity for a new method for database on-line backup
and restore (BAR). To use this method, one would have to make periodic
saves of data files to another disk, a tape or another host and also
archive the WAL log files. The database file copy and the archived log
files could be used to restore just as if one were restoring after a
crash. Each time a new database file copy was made the old log files
could be removed. Implementing this facility will require the logging of
data file and index creation and deletion; it will also require
development of a method for copying the data files (operating system
copy commands are not suitable)."

Since it states that WAL "offers the opportunity" for what we need, we
concluded this is _not_ supported in the 7.3 and 7.4 beta releases.

Is this conclusion correct?

Of course we would like to do even more - point in time recovery,
incremental backup, but for now just the basics.

Point in Time recovery is listed as "Urgent" on the TODO list, and
incremental backups are listed under "Admin", so it appears that those
items are recognized as important. What we cannot understand is why the
basic backup/restore described above is not on the TODO list.

Can anyone enlighten us?

--
Mary Edie Meredith <[EMAIL PROTECTED]>
Open Source Development Lab
Re: [ADMIN] YOUR SITES SEARCH FEATURE DOES NOT WORK!
go for it, point it at archives.postgresql.org though, since that is the
major part of the search engine

On Thu, 4 Sep 2003, scott.marlowe wrote:
> On Tue, 2 Sep 2003, Marc G. Fournier wrote:
> >
> > okay, first off ... what URL? I've been working on, and testing,
> > archives all week, and altho getting things down below a 2min search
> > appears to be near impossible (anyone know something *better* then
> > mnogosearch that will handle >250k URLs and growing ... ? we tried
> > ht/Dig before, and the resources it suck'd were outrageous) ...
>
> Hi Marc, I use htdig at work to index a rather large web site and it
> doesn't use resources all that bad. My htdig db directory is only 250
> megs and it screams. Are you sure it was that bad?
>
> Would it be ok if I pointed a test instance of htdig at postgresql.org
> at night and indexed it to get an idea of how much space it'll take to
> index it? I'm a huge fan of htdig, warts and all, so I'd kinda like to
> see if it can be made to work.
[ADMIN] Column Deletion
5:44p

Dear All,

Is there any way by which we can restore a deleted column? Because I
have deleted a column... and when I describe the contents I see the
column back again, but the attribute name has been replaced with a
system-generated one, and the structure still remains the same.

Have a great day.

Warm Regards,
Shÿam Peri

II Floor, Punja Building,
M.G.Road,
Ballalbagh,
Mangalore-575003
Ph : 91-824-2451001/5
Fax : 91-824-2451050

DISCLAIMER: This message contains privileged and confidential
information and is intended only for the individual named. If you are
not the intended recipient you should not disseminate, distribute,
store, print, copy or deliver this message. Please notify the sender
immediately by e-mail if you have received this e-mail by mistake and
delete this e-mail from your system.
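[Editor's note: what the poster describes matches how PostgreSQL 7.3+ implements DROP COLUMN: the column is not physically removed but marked dropped in pg_attribute, with a placeholder name. The table name 't' below is hypothetical.]

```sql
-- After ALTER TABLE ... DROP COLUMN, the column row survives in
-- pg_attribute with attisdropped = true and a placeholder name:
SELECT attname, attisdropped
FROM pg_attribute
WHERE attrelid = 't'::regclass AND attnum > 0;
-- dropped columns show names like "........pg_dropped.2........"
```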
Re: [ADMIN] Postgresql book
Giannis Vrentzos wrote:
> Hi all.
>
> I want to buy a book for postgresql administration but i do not know
> which one. Can anybody suggest me one?
>
> Thanks
> Gvre

PostgreSQL by Douglas & Douglas has been very useful to me...
--
Tom Van den Brandt
I try...
[ADMIN] Postgresql book
Hi all.

I want to buy a book for postgresql administration but i do not know
which one. Can anybody suggest one?

Thanks
Gvre
Re: [ADMIN] Log error at server startup
Bruce Momjian <[EMAIL PROTECTED]> writes:
> Gastón Simone wrote:
>> PANIC: XLogWrite: write request 0/44C4000 is past end of log 0/44C4000
>> LOG: startup process (pid 22347) was terminated by signal 6
>> LOG: aborting startup due to startup process failure

> What PostgreSQL version are you using? If it is 7.3.X, please upgrade
> to 7.3.4.

I believe the above is the symptom of 7.3.3's startup bug.

			regards, tom lane
Re: [ADMIN] Log error at server startup
Gastón Simone wrote:
> Hello!
> Please help me!!
>
> My pgsql installation is having a problem during startup and I don't
> know how to solve it. postmaster is reporting this:
>
> LOG: database system shutdown was interrupted at 2003-09-04T
> LOG: checkpoint record is at 0/44C3FC0
> LOG: redo record is at 0/44C3FC0; undo record is at 0/0; shutdown TRUE
> LOG: next transaction id: 27046; next oid: 28452
> LOG: database system was not properly shut down; automatic recovery
> in progress
> LOG: ReadRecord: unexpected pageaddr 0/24C4000 in log file 0,
> segment 4, offset 0
> LOG: redo is not required
> PANIC: XLogWrite: write request 0/44C4000 is past end of log 0/44C4000
> LOG: startup process (pid 22347) was terminated by signal 6
> LOG: aborting startup due to startup process failure

What PostgreSQL version are you using? If it is 7.3.X, please upgrade
to 7.3.4.
Re: [ADMIN] Problems Starting Postgres as a Service
On Wed, 2003-09-03 at 15:21, Ian Huynh wrote: > while postgres is pretty solid under unixes, > i am just curious about the stability of the cygwin+Postgres combo under win2k? Since Cygwin is an emulation layer running on Windows, performance of PostgreSQL + Cygwin isn't comparable to a pure *nix environment. For instance, above 50 concurrent connections, Cygwin starts getting unstable, which is why we hard-code a connection limit in our Windows version of Mammoth PostgreSQL. For this reason, PostgreSQL + Cygwin is typically used in development or low-load environments. It's still usable, just not as scalable as a pure Linux/Unix solution. -- Best, Al Hulaton | Sr. Account Engineer | Command Prompt, Inc. 503.222.2783 | [EMAIL PROTECTED] Home of Mammoth PostgreSQL and 'Practical PostgreSQL' Managed PostgreSQL, Linux services and consulting Read and Search O'Reilly's 'Practical PostgreSQL' at http://www.commandprompt.com
Re: [ADMIN] Row locking during UPDATE
On Thu, 4 Sep 2003, Sam Barnett-Cormack wrote: > It might be worth racking your brains to think of other ways. Query > timeouts? Either way, if the back-end is waiting on a semaphore, will it time out the query and terminate? The problem is lots of waiting back-end processes. > why not make it store a new record for each instance, and have a cronjob > each day update the statistics from that. It will be more efficient, > overall. It can be done hourly, even. I posted a compromise solution: A "low-contention" table that reduces the likelihood of contention. Adding a new record could result in pretty large tables that need to be summarised. I'll play around with the low-contention table first. Regards, David.
Re: [ADMIN] LOCK has IGNORED on trigger.
On Thu, 4 Sep 2003, Yudha Setiawan wrote:
> Dear Expert,
> I got a trouble with LOCK on my table (Inquery table).
> I've put the LOCK (using select bla.bla for update)
> on trigger (insert trigger) of Table_A. It works properly
> when I tried "insert into Table_A bla.bla.bla" with more
> than 5 users at the same time:
>
>   Code   fm_In   fm_Out   fm_Balance
>   ----------------------------------
>   * 009      3        0            3
>   # 001     10        0           10
>   * 009     10        0           13
>   # 001     10        0           20
>   * 009     15        0           28
>   # 001      0        3           17
>   # 001      0        1           16
>   # 001      0        4           12
>
> But when I put the "Insert into Table_A" command on
> the other table's trigger, that LOCK is ignored.
>
> Somebody help me please...

You're probably going to need to provide a standalone example, since at least I'm not entirely sure what you're trying to do.
Re: [ADMIN] Row locking during UPDATE
On Thu, 4 Sep 2003, David F. Skoll wrote: > > Zapping clients that are in the middle of database operations is bad > > design IMHO. > > It's required. The clients are e-mail filters and they must reply > quickly, before the end of the SMTP transaction. If they take too long, > they must be killed so the SMTP transaction can be tempfailed. If they > are not killed, the SMTP sessions pile up and eventually kill the machine. It might be worth racking your brains to think of other ways. Query timeouts? > > That's correct, a backend will generally not notice client disconnect > > until it next waits for a client command. It's not totally clear why > > you've got so many processes waiting to update the same row, though. > > It's on a high-volume mail server that receives around 500K > messages/day. About 180,000 of those are viruses, so we often have > multiple processes trying to update the virus statistics row. > > > Which process does have the row lock, and why isn't it completing its > > transaction? > > I don't know the details of PostgreSQL's implementation, but it seems > that when lots of processes are waiting to update the same row, it > gets incredibly slow. All trying to access the same row seems a bad idea generally. Instead, why not make it store a new record for each instance, and have a cronjob each day update the statistics from that. It will be more efficient, overall. It can be done hourly, even. -- Sam Barnett-Cormack Software Developer | Student of Physics & Maths UK Mirror Service (http://www.mirror.ac.uk) | Lancaster University
Re: [ADMIN] Row locking during UPDATE
"David F. Skoll" <[EMAIL PROTECTED]> writes: >> Which process does have the row lock, and why isn't it completing its >> transaction? > I don't know the details of PostgreSQL's implementation, but it seems > that when lots of processes are waiting to update the same row, it > gets incredibly slow. Hmm. That might represent a fixable bug. How many is "lots"? Can you give a self-contained test case? regards, tom lane
[ADMIN] Recovery assistance....
Running 7.3.2 under RedHat we suffered a disk crash. Following was Postgres log output from after crash: 2003-09-04 08:46:12 [29550] LOG: connection authorized: user=postgres database=gts Found a pre-existing shared memory block (key 5434001, id 65538) still in use. If you're sure there are no old backends still running, remove the shared memory block with ipcrm(1), or just delete "/u3/pgsql/7.3.2/data/postmaster.pid". 2003-09-04 09:24:51 [1873] LOG: database system was interrupted at 2003-09-04 08:45:59 CDT 2003-09-04 09:24:51 [1873] LOG: checkpoint record is at 0/B31378B4 2003-09-04 09:24:51 [1873] LOG: redo record is at 0/B3129C68; undo record is at 0/0; shutdown FALSE 2003-09-04 09:24:51 [1873] LOG: next transaction id: 2675704; next oid: 3607454 2003-09-04 09:24:51 [1873] LOG: database system was not properly shut down; automatic recovery in progress 2003-09-04 09:24:51 [1873] LOG: redo starts at 0/B3129C68 2003-09-04 09:24:51 [1873] PANIC: Invalid page header in block 2696 of 329110 2003-09-04 09:24:51 [1870] LOG: startup process (pid 1873) was terminated by signal 6 2003-09-04 09:24:51 [1870] LOG: aborting startup due to startup process failure 2003-09-04 09:25:50 [1934] LOG: database system was interrupted being in recovery at 2003-09-04 09:24:51 CDT This probably means that some data blocks are corrupted and you will have to use the last backup for recovery. Question: is there any way to force consistency? This is a development database so the main things we need are the schemas which were not changing at time of the crash. The data can be recovered from System Test scripts. We're looking to do something quicker than a recovery from a backup. Any input would be appreciated. Marc Mitchell - Senior Application Architect Enterprise Information Solutions, Inc. 
Downers Grove, IL 60515 [EMAIL PROTECTED]
Re: [ADMIN] Row locking during UPDATE
On Thu, 4 Sep 2003, Tom Lane wrote: > Any process that arrives at the row and finds it already modified by > some concurrent transaction will wait for that concurrent transaction > to complete. Right. And it waits on a semaphore, right? So there's no way to use select() to wait for EITHER the semaphore OR the loss of the connection? I hate SysV IPC. :-( > Zapping clients that are in the middle of database operations is bad > design IMHO. It's required. The clients are e-mail filters and they must reply quickly, before the end of the SMTP transaction. If they take too long, they must be killed so the SMTP transaction can be tempfailed. If they are not killed, the SMTP sessions pile up and eventually kill the machine. > That's correct, a backend will generally not notice client disconnect > until it next waits for a client command. It's not totally clear why > you've got so many processes waiting to update the same row, though. It's on a high-volume mail server that receives around 500K messages/day. About 180,000 of those are viruses, so we often have multiple processes trying to update the virus statistics row. > Which process does have the row lock, and why isn't it completing its > transaction? I don't know the details of PostgreSQL's implementation, but it seems that when lots of processes are waiting to update the same row, it gets incredibly slow. -- David.
Re: [ADMIN] YOUR SITES SEARCH FEATURE DOES NOT WORK!
On Tue, 2 Sep 2003, Marc G. Fournier wrote: > > okay, first off ... what URL? I've been working on, and testing, archives > all week, and although getting things down below a 2min search appears to be > near impossible (anyone know something *better* than mnogosearch that will > handle >250k URLs and growing ... ? we tried ht/Dig before, and the > resources it sucked were outrageous) ... Hi Marc, I use htdig at work to index a rather large web site and it doesn't use resources all that badly. My htdig db directory is only 250 megs and it screams. Are you sure it was that bad? Would it be ok if I pointed a test instance of htdig at postgresql.org at night and indexed it to get an idea of how much space it'll take to index it? I'm a huge fan of htdig, warts and all, so I'd kinda like to see if it can be made to work.
[ADMIN] Log error at server startup
Hello! Please help me!! My pgsql installation is having a problem during startup and I don't know how to solve it. postmaster is reporting this: LOG: database system shutdown was interrupted at 2003-09-04T LOG: checkpoint record is at 0/44C3FC0 LOG: redo record is at 0/44C3FC0; undo record is at 0/0; shutdown TRUE LOG: next transaction id: 27046; next oid: 28452 LOG: database system was not properly shut down; automatic recovery in progress LOG: ReadRecord: unexpected pageaddr 0/24C4000 in log file 0, segment 4, offset 0 LOG: redo is not required PANIC: XLogWrite: write request 0/44C4000 is past end of log 0/44C4000 LOG: startup process (pid 22347) was terminated by signal 6 LOG: aborting startup due to startup process failure How can I solve it? Thanks a lot! Gastón.
Re: [ADMIN] Row locking during UPDATE
"David F. Skoll" <[EMAIL PROTECTED]> writes: > What appears to be happening is this: > 1) Sometimes, a whole bunch of clients try updating the same row. I > see several postgresql processes marked "UPDATE waiting" Any process that arrives at the row and finds it already modified by some concurrent transaction will wait for that concurrent transaction to complete. > 2) Because the client processes must complete rather quickly, a supervisor > process kills them if they don't complete soon. Zapping clients that are in the middle of database operations is bad design IMHO. > 3) The postgresql processes in an "UPDATE waiting" state seem to be > blocked waiting for a semaphore, and they do not notice the closing of > the connection when the client is killed. That's correct, a backend will generally not notice client disconnect until it next waits for a client command. It's not totally clear why you've got so many processes waiting to update the same row, though. Which process does have the row lock, and why isn't it completing its transaction? regards, tom lane
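The queueing behavior Tom describes can be illustrated without PostgreSQL at all. The sketch below is a toy Python model, not PostgreSQL's lock manager: threads stand in for backends and a lock per "row" stands in for the row lock. With one shared counter every writer queues behind a single lock; spreading the counter over several shards (the "randomizer" idea proposed elsewhere in this thread) spreads the queueing.

```python
# Toy model of hot-row contention: threads queueing on per-"row" locks.
# This is an illustration, not PostgreSQL internals.
import threading
import random

def run(n_workers, n_shards, increments=100):
    counters = [0] * n_shards
    locks = [threading.Lock() for _ in range(n_shards)]

    def worker():
        for _ in range(increments):
            shard = random.randrange(n_shards)  # analogue of the randomizer column
            with locks[shard]:                  # analogue of the row lock
                counters[shard] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The nightly summary step: fold the shards back into one statistic.
    return sum(counters)

print(run(8, 1))   # every worker fights over one "row"
print(run(8, 16))  # the same work spread over 16 "rows"
```

Both calls return the same total (8 workers × 100 increments = 800); only the amount of waiting differs.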
[ADMIN] Advice on multi-machine high-reliability setup?
Hi, I've done some work with databases, but nothing extreme. I just got assigned the setting up of what I'd call a "high-reliability" site, as opposed to "high-availability" -- which I'd like too, of course :-) I've got some ideas on how to achieve my goal, but I fear I'm not quite up-to-date on the possibilities of replication and such, and I haven't found much in-depth documentation on the kind of setup I'd like (pointers anyone?), so I'd appreciate it if you could give my ideas the once-over and tell me if I'm missing something.

Requirements:

Basic model is that of a reference database; I am a repository for data which is constantly consulted and regularly updated. OK, that pretty much describes any database :-) Anyway, a lot more queries than updates. I'll have one or more webservers using PHP to insert into the database. I want them to be limited to queries and inserts so that bugs in the PHP scripts can only insert bad data and not remove good data. I prefer to have programs that modify the central data to be server-side logic, and not farm out "UPDATE"s to PHP scripts. I want to keep a record of all updates (made through client scripts) for some time, and I do want to be able to query the updates, which I suppose eliminates simple logging. I need a ticket number for each update (I'm avoiding the term "transaction"), and the capability to see when and how the update was fulfilled, which updates were made in a given time-frame, etc. Consulting has to be immediate, while updates don't have to go through instantaneously, but I want it rather quick. One or two minutes is OK, five minutes starts being a long time. If the client scripts say "there was an error, try again" it's not a severe problem. If they say "OK" and the data doesn't make it into the nightly backup, even once, that's a showstopper, and nobody will use my system.
Least acceptable reason for losing data once acknowledged is the near-simultaneous catastrophic loss of hard disks on two separate machines. (Yes, software RAID -- or maybe even hardware if I get my hands on enough controllers -- will make that four disks lost simultaneously :-)). I think a good way would be for updating clients to write to one machine, and delay acknowledgement until the data is read from the second "slave" machine, possibly saying after some time "there seems to be a delay, please recheck in a few minutes to see if your update went through". Does that sound feasible? I need to be able to exchange machines. I fully expect my crummy hardware to break, I know that sooner or later I'll want to upgrade the OS in ways that require a reboot, or upgrade crummy hardware, and I don't want the downtime. I'm working on the premise that having multiple cheap Linux boxes (I've got lots of layoffs) is better than one or two really big expensive servers (no money for that anyway). I want to be able to take a new machine, install it to specifications (or restore one from backup !), switch it on, bring it up to date with current database, and let it take over as a hot backup and/or primary server. Or as a web server, of course. I don't have much of an idea on data size or update frequency, which is one of the reasons I want to be able to plug in new machines seamlessly; if load gets high, it means I'm popular, and I'll be able to get better hardware :-)

My ideas:

I'm thinking that the PHP scripts could write updates to a "write-only" table/database/server. I suppose there is a simple way to make an insert into an auto-increment table and get in return the unique number to use as a ticket. I'd use the newly released replication to have the updates and the data on a query-only server, an update only being acknowledged to the user when it's registered on the query-only server.
Once the update is registered, I use some kind of script to convert "ticket=a, time=xx:xx who=Alice name=XXX newvalue=YYY, done=NULL" into "UPDATE data SET value=YYY, lastchange=a where name=XXX; UPDATE tickets set done=time where ticket=a;". Stored Procedure triggered on ticket creation? I've never used them . . . do they work across machines? That is, what would be the best way to have update tables on one machine and data tables on another? If I had that, changing the data master would be transparent to users, who'd just notice a five-minute delay before the update went through instead of 30 (?) seconds. It would be cool to use transactions, but I don't think one transaction can refer to two databases on two machines (yet)? This should also enable me to restore a backup of the primary data (in case of loss of the primary data), apply the updates since the backup was made, and end up with an up-to-date system. Hmm. Is this necessary if I have replication . . . is replication necessary if I have this?

My doubts:

If I do manage to put updates and data on two different servers, would it be possible to make a transaction on the dat
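The ticket-to-UPDATE conversion described above amounts to a small applier loop. The sketch below uses Python's sqlite3 purely as a stand-in database; the table and column names (tickets, data, done, lastchange) follow the email, and applying each ticket in its own transaction is my assumption about how to keep a crash from half-applying work.

```python
# Applier loop: turn open tickets into UPDATEs on the data table.
# sqlite3 is only a stand-in for the real (PostgreSQL) servers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE data (name TEXT PRIMARY KEY, value TEXT, lastchange INTEGER);
    CREATE TABLE tickets (ticket INTEGER PRIMARY KEY AUTOINCREMENT,
                          who TEXT, name TEXT, newvalue TEXT, done TEXT);
    INSERT INTO data VALUES ('XXX', 'old', NULL);
    INSERT INTO tickets (who, name, newvalue) VALUES ('Alice', 'XXX', 'YYY');
""")

def apply_pending(conn):
    # Fetch the open tickets first, then apply each in its own transaction,
    # stamping data.lastchange with the ticket number as the email suggests.
    pending = conn.execute(
        "SELECT ticket, name, newvalue FROM tickets WHERE done IS NULL").fetchall()
    for ticket, name, newvalue in pending:
        with conn:  # one transaction per ticket
            conn.execute(
                "UPDATE data SET value = ?, lastchange = ? WHERE name = ?",
                (newvalue, ticket, name))
            conn.execute(
                "UPDATE tickets SET done = datetime('now') WHERE ticket = ?",
                (ticket,))

apply_pending(conn)
print(conn.execute("SELECT value FROM data WHERE name = 'XXX'").fetchone()[0])  # YYY
```

Because each ticket records when and how it was fulfilled, the same loop doubles as the "restore backup, then replay updates" recovery path.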
[ADMIN] Row locking during UPDATE
Hi, I have a weird problem and want to know if I understand what's happening. I have a table like this:

create table statistics (
    date   DATE DEFAULT current_date,
    key    TEXT,
    value  INTEGER DEFAULT 0,
    UNIQUE(date, key)
);

and I have a bunch of clients running. Often, a client does something like this: UPDATE statistics SET value = value + 1 WHERE key = 'KEY' AND date = current_date; What appears to be happening is this: 1) Sometimes, a whole bunch of clients try updating the same row. I see several postgresql processes marked "UPDATE waiting" 2) Because the client processes must complete rather quickly, a supervisor process kills them if they don't complete soon. 3) The postgresql processes in an "UPDATE waiting" state seem to be blocked waiting for a semaphore, and they do not notice the closing of the connection when the client is killed. 4) New client processes are spawned and eventually try to update the row. As a result, we end up with more and more postgresql processes until the connection limit is reached, because processes in the "UPDATE waiting" state don't notice the connection has been broken. Questions: 1) Am I on the right track? Is this how PostgreSQL works? 2) I plan on solving it by making a low-contention table like this:

create table low_contention_stats (
    date        DATE DEFAULT current_date,
    key         TEXT,
    randomizer  INTEGER,
    value       INTEGER DEFAULT 0,
    summarized  BOOLEAN DEFAULT FALSE,
    UNIQUE(date, key, randomizer, summarized)
);

Each client will pick a random "randomizer" value, so the odds of two clients trying to update the same row at the same time are low. Once a day, I summarize the low_contention_stats table and put the summary in statistics.
A statistics update looks like this: UPDATE low_contention_stats SET value = value + 1 WHERE date = current_date AND key = 'KEY' AND randomizer = my_random_val AND NOT summarized; (If the update updates 0 rows, we do an INSERT) The summary process when we copy to statistics looks like this: # "Freeze" existing rows -- any updates during the summary process # will have to insert new rows where summarized is false UPDATE low_contention_stats SET summarized = 't'; # Summarize stats SELECT date, key, sum(value) FROM low_contention_stats WHERE summarized GROUP BY date, key; # Then we iterate over the results, updating "statistics" DELETE FROM low_contention_stats WHERE summarized; Will this help? I can't easily test it, because I only see the problem under high load, and the only high-load environment I have access to is a production one. :-( Regards, David.
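The update-or-insert plus freeze/aggregate/delete cycle above can be exercised end to end with a short script. sqlite3 is again only a stand-in (booleans become integers, current_date becomes date('now')); the point is the "0 rows updated means INSERT" branch and the summary pass, not the exact SQL dialect.

```python
# Runnable sketch (sqlite3 stand-in) of the low-contention stats scheme.
import sqlite3
import random

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE low_contention_stats (
    date TEXT DEFAULT (date('now')), key TEXT, randomizer INTEGER,
    value INTEGER DEFAULT 0, summarized INTEGER DEFAULT 0,
    UNIQUE(date, key, randomizer, summarized))""")

def bump(key, n_shards=8):
    r = random.randrange(n_shards)          # the client's random value
    cur = conn.execute(
        """UPDATE low_contention_stats SET value = value + 1
           WHERE date = date('now') AND key = ? AND randomizer = ?
             AND NOT summarized""", (key, r))
    if cur.rowcount == 0:                   # the "if 0 rows, INSERT" branch
        conn.execute(
            "INSERT INTO low_contention_stats (key, randomizer, value) "
            "VALUES (?, ?, 1)", (key, r))

for _ in range(100):
    bump('viruses')

# Nightly summary: freeze, aggregate, delete.
conn.execute("UPDATE low_contention_stats SET summarized = 1")
totals = conn.execute(
    """SELECT date, key, sum(value) FROM low_contention_stats
       WHERE summarized GROUP BY date, key""").fetchall()
conn.execute("DELETE FROM low_contention_stats WHERE summarized")
print(totals[0][1], totals[0][2])           # viruses 100
```

Any bump that arrives mid-summary inserts a fresh unsummarized row (the UNIQUE constraint includes the summarized flag), so no increments are lost across the freeze.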
[ADMIN] Upgrade
Hello, All! I'm planning to upgrade from 7.2.1 to 7.3.4. Will pg_dumpall and pg_restore be sufficient for proper data migration? Mark
[ADMIN] LOCK has IGNORED on trigger.
Dear Expert, I got a trouble with LOCK on my table (Inquery table). I've put the LOCK (using select bla.bla for update) on trigger (insert trigger) of Table_A. It works properly when I tried "insert into Table_A bla.bla.bla" with more than 5 users at the same time:

Code   fm_In   fm_Out   fm_Balance
----------------------------------
* 009      3        0            3
# 001     10        0           10
* 009     10        0           13
# 001     10        0           20
* 009     15        0           28
# 001      0        3           17
# 001      0        1           16
# 001      0        4           12

But when I put the "Insert into Table_A" command on the other table's trigger, that LOCK is ignored. Somebody help me please... GOD Bless You All and Bravo Postgre --- Yudha Setiawan.