Re: [ADMIN] create view with check option
Karthikeyan Sundaram wrote: Hi Everybody, I have 2 versions of postgres: 8.1.0 is my production version and 8.2.1 is my development version. I am trying to create a view in my development version (8.2.3): create view chnl_vw as select * from channel with check option; I am getting an error message: [Error] Script lines: 1-1 -- ERROR: WITH CHECK OPTION is not implemented Line: 1 What does this mean? I looked at the 8.2.1 manual and found that create view has a check option. But it says before 8.2 those options are unsupported.

Why are you trying to declare a view with check option using an 8.2 engine? Can you show us the part of the manual that says you can use that syntax? I see:

CREATE [ OR REPLACE ] [ TEMP | TEMPORARY ] VIEW name [ ( column_name [, ...] ) ] AS query

Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [ADMIN] create table like syntax
David Durham wrote: is there a syntax that would look something like: create table newTable like oldTable without indexes || records || constraints with indexes || records || constraints built into postgres? Try this: create table newTable as select * from oldTable limit 0; Regards Gaetano Mendola ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] archive_command
Jeff Frost wrote: On Sun, 2 Oct 2005, Simon Riggs wrote: Probably the best idea is to backup the last WAL file for each timeline seen. Keep track of that, so when the current file changes you'll know which timeline you're on and stick to that from then on. Or more simply, put some notes with your program saying if you ever use a complex recovery situation, make sure to clear all archive_status files for higher timeline ids before using this program. Tell me if you think this is the most reasonable way to determine the in-use WAL file: ls -rt $PGDATA/pg_xlog/ | grep -v 'backup\|archive\|RECOVERY' | tail -1

Look at this post I did last year: http://archives.postgresql.org/pgsql-admin/2005-06/msg00013.php In that message there are two scripts that deliver the archived WALs remotely, and they also store the last WAL in order not to lose the current WAL in case of a crash. This was the function I used to find the WAL in use:

function copy_last_wal {
    FILE=$( ls -t1p $PGXLOG | grep -v / | head -1 )
    echo "Last Wal $FILE"
    cp ${PGXLOG}/${FILE} ${PARTIAL}/${FILE}.tmp
    mv ${PARTIAL}/${FILE}.tmp ${PARTIAL}/${FILE}.partial
    find ${PARTIAL} -name "*.partial" | grep -v ${FILE} | xargs -i rm -fr {}
}

At that time Tom Lane agreed to provide some functions to ask the engine for the name of the WAL currently in use; I don't know whether anything was done in the new 8.1 to help this process. Regards Gaetano Mendola ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
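The core of the function above — pick the most recently modified regular file in pg_xlog — can be isolated as a small helper. This is only a sketch; the function name `current_wal` is mine, not part of the original scripts:

```shell
# Sketch: return the most recently modified regular file in a pg_xlog
# directory. "current_wal" is a hypothetical helper name.
current_wal() {
    # ls -t : newest modification time first
    #    -1 : one entry per line
    #    -p : append / to directories, so grep -v / drops
    #         subdirectories such as archive_status
    ls -t1p "$1" | grep -v / | head -1
}
```

Selecting by modification time matters because segment files are created or recycled in advance, so the highest-numbered file is usually not the one being written.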
Re: [ADMIN] Altering WAL Segment File Size
[EMAIL PROTECTED] wrote: Dear Sir, How can I alter the WAL segment file size when building the server? The default size was 16 MB. I want only 1 MB size...

What are you trying to achieve? Maybe you are looking in the wrong direction. Regards Gaetano Mendola ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Disk Access Question
Robert Treat wrote: On Tuesday 20 September 2005 13:55, Chris Hoover wrote: I have a question on disk access. How often is PostgreSQL accessing/touching the files that are on the hard drive? What I'm trying to ask is, is PostgreSQL constantly updating access timestamps or something like that on the database files even if they are not being queried? This question comes from an Oracle background where Oracle was updating the individual files every 3 seconds if I am remembering correctly. I am trying to figure out, besides the queries, what sort of accessing PostgreSQL is doing to my drives so we can try and resolve some i/o issues. AFAIK unless you're actually doing something, it won't access your data files at all. Note that "something" is pretty wide here: insert/update/vacuum/analyze and even select will cause file access, but otherwise it won't.

You can mount your partition (on Linux) specifying the noatime option; this prevents the inode access time from being updated when files are only read. Postgres does not use this information, so you are safe. Regards Gaetano Mendola ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
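For example, a hypothetical /etc/fstab entry for the partition holding the data directory (the device, mount point and filesystem type here are made up, not taken from the thread):

```
# /etc/fstab -- illustrative entry only
/dev/sda3   /var/lib/pgsql   ext3   defaults,noatime   1 2
```

After editing fstab, `mount -o remount,noatime /var/lib/pgsql` applies the option without a reboot.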
[ADMIN] RPM 8.0.3 for RH7.3, RH7.2 and AS2.1
Hi all, I'm trying to create the RPM for these distributions, starting from the one available for RH9.0. On RH7.3, RH7.2 and AS2.1 I get:

# rpmbuild --rebuild postgresql-8.0.3-1PGDG.src.rpm [...] checking for perl... /usr/bin/perl checking for Perl archlibexp... /usr/lib/perl5/5.6.1/i386-linux checking for Perl privlibexp... /usr/lib/perl5/5.6.1 checking for Perl useshrplib... false checking for flags to link embedded Perl... -L/usr/local/lib /usr/lib/perl5/5.6.1/i386-linux/auto/DynaLoader/DynaLoader.a -L/usr/lib/perl5/5.6.1/i386-linux/CORE -lperl -lnsl -ldl -lm -lc -lcrypt -lutil checking for python... /usr/bin/python checking for Python distutils module... yes checking Python configuration directory... File "<string>", line 1 from distutils.sysconfig import get_python_lib as f; import os; print os.path.join(f(plat_specific=1,standard_lib=1),'config') ^ SyntaxError: invalid syntax checking how to link an embedded Python application... -L -lpython1.5 -lieee -ldl -lpthread -lm checking for main in -lbsd... yes checking for setproctitle in -lutil... no checking for main in -lm... yes checking for main in -ldl... yes checking for main in -lnsl... yes checking for main in -lsocket... no checking for main in -lipc... no checking for main in -lIPC... no checking for main in -llc... no checking for main in -ldld... no checking for main in -lld... no checking for main in -lcompat... no checking for main in -lBSD... no checking for main in -lgen... no checking for main in -lPW... no checking for main in -lresolv... yes checking for library containing getopt_long... none required checking for main in -lunix... no checking for library containing crypt... -lcrypt checking for library containing fdatasync... none required checking for shmget in -lcygipc... no checking for readline... yes (-lreadline -ltermcap) checking for inflate in -lz... yes checking for library containing com_err... 
-lcom_err checking for library containing krb5_encrypt... no configure: error: could not find function 'krb5_encrypt' required for Kerberos 5 error: Bad exit status from /var/tmp/rpm-tmp.4601 (%build) RPM build errors: Bad exit status from /var/tmp/rpm-tmp.4601 (%build)

Does anyone know how to fix these errors, or where I can find the RPM for these 3 platforms?

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Creating tables...not a usual question (I think)
Rodrigo Sakai wrote: Hi all, I have a question! Suppose that you have a 'virtualstore' database that is owned by Peter, the enterprise DBA. So why don't the tables that we create inside 'virtualstore' have peter as owner automatically??? I mean, why don't the tables inherit the owner of the database???

Why should they? If a user has permission to create a table, why should the tables created be owned by another user? It would be like creating a file in your filesystem and having someone else be the owner. Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[ADMIN] Vacuum full on a big table
Hi all, is there a way to vacuum full a table while working only on a part of the table at a time? I have a table with 6 million rows, and a vacuum full on it will take my server offline for hours, so I'd like to vacuum that table multiple times in order not to block it for a long period. Would decreasing the FSM settings make this work? Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] Vacuum full on a big table
Scott Marlowe wrote: Vacuum full doesn't use fsm, lazy vacuum does (i.e. plain vacuum).

Are you sure? Why then, after a vacuum full verbose, are the FSM settings displayed?

Is there a reason you're doing a full vacuum?

Because I've only been running pg_autovacuum for one month now, and I see that for some tables it is a disaster not to vacuum full once a day.

and no, you can't vacuum parts at a time. it's all or nothing. (*Unless that changes in 8.0...*)

I wish that would happen. Regards Gaetano Mendola ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Partitioning Option?
Tomeh, Husam wrote: Thanks Josh, I'll check out the namespace concept. (I was referring to object partitioning. For instance, if I have a huge table with US counties as my partition key, I could create partitions within the same table based on the partition key (a US county for example). When querying, the engine will access the partition instead of the whole table to get the result set. This is provided in Oracle DB EE. So, I was wondering whether I can do a similar thing in PostgreSQL since we're exploring PostgreSQL.)

Look at my post on the performance list: "horizontal partition". Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] Partitioning Option?
Joshua D. Drake wrote: Tomeh, Husam wrote: Does PostgreSQL support/allow partitioning of objects like tables and indexes, like Oracle does? We support tablespaces but not table partitioning. You can get around this by using namespaces and unions however.

I demonstrated that, at least with 7.4.x, horizontal partitioning is not usable; see my post on performance ("horizontal partition"). As soon as you join the view with all the UNIONs to another table, you lose the index usage on that view :-( Regards Gaetano Mendola ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] Partitioning Option?
Tom Lane wrote: Tomeh, Husam [EMAIL PROTECTED] writes: (I was referring to object partitioning. For instance, if I have a huge table with US counties as my partition key, I could create partitions within the same table based on the partition key (a US county for example). When querying, the engine will access the partition instead of the whole table to get the result set. This is provided in Oracle DB EE. So, I was wondering whether I can do similar thing in PostgreSQL since we're exploring PostgreSQL) You can build it out of spare parts: either a view over a UNION ALL of component tables, or a parent table with a bunch of inheritance children, either way with rules to redirect insertions into the right subtable. (With the inheritance way you could instead use a trigger for that, which'd likely be more flexible.)

Tom, I did a post on performance about my attempt to do a horizontal partition on a 7.4.x engine, but it seems the planner refuses to optimize it; look at this for example:

CREATE TABLE user_logs_2003_h () inherits (user_logs);
CREATE TABLE user_logs_2002_h () inherits (user_logs);

I defined on these tables the indexes already defined on user_logs. 
And this is the result:

empdb=# explain analyze select * from user_logs where id_user = sp_id_user('kalman');
                                  QUERY PLAN
-----------------------------------------------------------------------------
 Result  (cost=0.00..426.33 rows=335 width=67) (actual time=20.891..129.218 rows=98 loops=1)
   ->  Append  (cost=0.00..426.33 rows=335 width=67) (actual time=20.871..128.643 rows=98 loops=1)
         ->  Index Scan using idx_user_user_logs on user_logs  (cost=0.00..133.11 rows=66 width=67) (actual time=20.864..44.594 rows=3 loops=1)
               Index Cond: (id_user = 4185)
         ->  Index Scan using idx_user_user_logs_2003_h on user_logs_2003_h user_logs  (cost=0.00..204.39 rows=189 width=67) (actual time=1.507..83.662 rows=95 loops=1)
               Index Cond: (id_user = 4185)
         ->  Index Scan using idx_user_user_logs_2002_h on user_logs_2002_h user_logs  (cost=0.00..88.83 rows=80 width=67) (actual time=0.206..0.206 rows=0 loops=1)
               Index Cond: (id_user = 4185)
 Total runtime: 129.500 ms
(9 rows)

that is good, but now look what happens with a view like this one (where I join the table above):

create view to_delete AS SELECT v.login, u.* from user_login v, user_logs u where v.id_user = u.id_user;

empdb=# explain analyze select * from to_delete where login = 'kalman';
                                  QUERY PLAN
-----------------------------------------------------------------------------
 Hash Join  (cost=4.01..65421.05 rows=143 width=79) (actual time=1479.738..37121.511 rows=98 loops=1)
   Hash Cond: ("outer".id_user = "inner".id_user)
   ->  Append  (cost=0.00..50793.17 rows=2924633 width=67) (actual time=21.391..33987.363 rows=2927428 loops=1)
         ->  Seq Scan on user_logs u  (cost=0.00..7195.22 rows=411244 width=67) (actual time=21.385..5641.307 rows=414039 loops=1)
         ->  Seq Scan on user_logs_2003_h u  (cost=0.00..34833.95 rows=2008190 width=67) (actual time=0.024..18031.218 rows=2008190 loops=1)
         ->  Seq Scan on user_logs_2002_h u  (cost=0.00..8764.00 rows=505199 width=67) (actual time=0.005..5733.554 rows=505199 loops=1)
   ->  Hash  (cost=4.00..4.00 rows=2 width=16) (actual time=0.195..0.195 rows=0 loops=1)
         ->  Index Scan using user_login_login_key on user_login v  (cost=0.00..4.00 rows=2 width=16) (actual time=0.155..0.161 rows=1 loops=1)
               Index Cond: ((login)::text = 'kalman'::text)
 Total runtime: 37122.069 ms
(10 rows)

I did a similar attempt with UNION ALL but the result is the same. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Determining current WAL
Tom Lane wrote: [EMAIL PROTECTED] writes: I'm currently looking at using WAL / PITR backups for a database, and I need to keep the backups as up to date as possible. As such, keeping a copy of the current WAL file, as suggested in the manual, would seem to be a very good idea. I'm slightly confused, though, about which file is the current. I had assumed that it would always be the highest numbered. No. Go with the most-recently-modified. Segment files are normally created (or renamed into place) in advance of being needed. There should probably be a cleaner/more reliable way of identifying the active file ...

Yes, it was discussed during the attempt to create log delivery to another server in replay state: if the master was not responding, it was useful to have a way to identify the active WAL segment. At the time I was identifying the current WAL as the last created one. Regards Gaetano Mendola ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
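Tom's point — the highest-numbered segment name and the most recently modified segment can differ, because future segments are created or recycled into place ahead of use — can be sketched like this (helper names and file names are illustrative only):

```shell
# Two candidate ways of picking the "current" WAL segment.
# Per Tom's advice, only the mtime-based one is correct, because
# segments are created/renamed into place in advance of being needed.
newest_by_mtime() { ls -t "$1" | head -1; }      # most recently written
highest_by_name() { ls "$1" | sort | tail -1; }  # highest segment name
```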
Re: [ADMIN] max connections from one ip
wrote: Hi all! Can I restrict max number of connections from one IP address? -- Thanks

Not within postgres; maybe you can play with iptables or something like that. Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
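For example, a hypothetical iptables rule of the kind hinted at above, assuming the connlimit match is available in your kernel/iptables build (the limit of 5 is arbitrary):

```shell
# Reject new TCP connections to port 5432 once a single source IP
# already holds more than 5 of them (illustrative only; needs root
# and the connlimit match compiled in).
iptables -A INPUT -p tcp --syn --dport 5432 \
         -m connlimit --connlimit-above 5 -j REJECT
```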
Re: [ADMIN] Index relation size
Rigmor Ukuhe wrote: Hi, I have a table with size ~35 MB; it has several indexes and a couple of them are well over 500 MB in size (they are indexes with 3-4 columns involved, with datatypes like varchar, int4, timestamp, boolean). The database is VACUUMed nightly; the version is 7.2.4 (can't upgrade to the newest version at the moment). Are these index sizes indicating some problems with our Postgres server?

If you do not vacuum full, this is normal. With version 7.4, a plain vacuum would be enough. Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [ADMIN] About System Catalogs
Thomas Swan wrote: Tom Lane wrote: [EMAIL PROTECTED] writes: 2) As I had a very large pg_largeobject, I deleted rows and now I have a clean, small table. The table is empty but its index pg_largeobject_loid_pn_index continues to retain a lot of bytes. REINDEX should fix this. Is REINDEX still going to be a necessity in the 8.0 release? I remember there was a discussion on the mailing list about a fix, or the need to fix VACUUM, so that manually reindexing would not be necessary...

This is already true for 7.4; I don't remember vacuum being improved in that direction in 8.0. Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] replication using WAL archives
Simon Riggs wrote: |Gaetano Mendola wrote |Postgres can help this process, as suggested by Tom creating a | | pg_current_wal() | |or even better having two new GUC parameters: archive_current_wal_command | | and | |archive_current_wal_delay. | | | OK, we can modify the archiver to do this as well as the archive-when-full | functionality. I'd already agreed to do something similar for 8.1 | | PROPOSAL: | By default, archive_max_delay would be 10 seconds. | By default, archive_current_wal_command is not set. | If archive_current_wal_command is not set, the archiver will archive a file | using archive_command only when the file is full. | If archive_current_wal_command is set, the archiver would archive a file | whichever of these occurs first... | - it is full | - the archive_max_delay timeout occurs (default: disabled) | ...as you can see I've renamed archive_current_wal_delay to reflect the fact | that there is an interaction between the current mechanism (only when full) | and this additional mechanism (no longer than X secs between log files). | With that design, if the logs are being created quickly enough, then a | partial log file is never created, only full ones. | | When an xlog file is archived because it is full, then it is sent to both | archive_current_wal_command and archive_command (in that order). When the | timeout occurs and we have a partial xlog file, it would only be sent to | archive_current_wal_command. It may also be desirable to not use | archive_command at all, only to use archive_current_wal_command. That's not | currently possible because archive_command is the switch by which all of the | archive functionality is enabled, so you can't actually turn this off. 
It seems good to me; the script behind archive_command can be a no-op if someone wants to use only the archive_current_wal_command.

| = - = - =
|
| Gaetano - skim-reading your script, how do you handle the situation when a
| new xlog file has been written within 10 seconds? That way the current file
| number will have jumped by 2, so when your script looks for the last WAL
| using head -1 it will find N+2 and the intermediate file will never be
| copied. Looks like a problem to me...

Yes, that is the only failure window I have seen (but I don't know if it's possible). Master:

~ log N created
~ log N filled
~ archive log N
~ log N+1 created
~ log N+1 filled
~ log N+2 created
~ the master dies here, before archiving log N+1
~ archive log N+1

In this case, as you underline, the last log archived is N, and the N+2 partial WAL is added to the archived WAL collection; in the recovery phase the recovery stops after processing log N. Is it possible that the postmaster creates the N+2 file without having finished archiving N+1? (I suspect yes :-( ) The only cure I see here is to look for not-yet-archived WALs (if possible).

| A problem I discovered during the tests is that if you shut down the spare
| node while the restore_command is still waiting for a file, then the
| postmaster will never exit :-(
|
| Hmm... Are you reporting this as a bug for 8.0? It's not on the bug list...

For me it is a behavior to avoid. Regards Gaetano Mendola ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] replication using WAL archives
Simon Riggs wrote: Situation I thought I saw was: - copy away current partially filled xlog N - xlog N fills, N+1 starts - xlog N+1 fills, N+2 starts - copy away current partially filled xlog: N+2 (+10 secs later) i.e. if the time to fill an xlog is (ever) less than the time to copy away the current xlog, then you miss one. So problem: you can miss one and never know you've missed one until the recovery can't find it, which it never returns from... so it just hangs.

No. The restore.sh is not smart enough to know the last WAL that must be replayed; the only smart thing it does is copy the supposed current WAL into the archive directory. The script hangs (and that is a feature, not a bug) if and only if the master is alive (at least I'm not seeing any other hang). In your example, the files up to log N will be present in the archived directory, and log N+2 (the current WAL) is in the partial directory; if the master dies, restore.sh will copy log N+2 into the archived directory, the spare node will execute restore.sh with file log N+1 as argument, and since it is not found restore.sh will exit. Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] NIC to NIC connection
Bruno Wolff III wrote: Also I believe that if a switch doesn't remember where a particular mac address is it will send the packet to all of the attached ports.

I don't think so; I guess the switch performs a sort of arpping in order to detect which port has a MAC address assigned. Even multicast is not sent to all ports, but only to those ports where someone sent an ARP packet saying that it was registered to a multicast address. However, I don't think a standard exists. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] replication using WAL archives
Robert Treat wrote: On Thu, 2004-10-21 at 02:44, Iain wrote: Hi, I thought I read something about this in relation to v8, but I can't find any reference to it now... is it (or will it be) possible to do master-slave style database replication by transmitting log files to the standby server and having it process them? I'm not certain if this is 8.0, but some folks have created a working version against the 8.0 code that will do something like this. Search the pgsql-hackers mail list archives for more information on it.

I sent a post on hackers; I put it here:

===
Hi all, I have seen that Eric Kerin did the work suggested by Tom about how to use PITR in order to have a hot spare postgres, writing a C program. I did the same writing 2 shell scripts: one of them performs the restore, the other one delivers the partially filled WAL and checks whether the postmaster is alive (checking that the pid still exists). With these two scripts I'm able to have a hot spare installation, and the spare one goes live when the first postmaster dies.

How to test it:

1) Master node: modify postgresql.conf using:

~ archive_command = 'cp %p /mnt/server/archivedir/%f'

~ launch postgres and perform a backup as the documentation
~ http://developer.postgresql.org/docs/postgres/backup-online.html suggests, then launch the script:

~ partial_wal_deliver.sh PID /mnt/server/partialdir <pg_xlog path>

~ this script will deliver the current WAL file every 10 seconds,
~ and touch the "alive" file in order to notify the spare node that
~ the master node is up and running.

2) Spare node: create a recovery.conf with the line:

~ restore_command = 'restore.sh /mnt/server/archivedir/%f %p /mnt/server/partialdir'

~ replace the content of the data directory with the backup performed at point 1,
~ remove any file present in the pg_xlog directory (leaving the archive_status
~ directory there) and remove the postmaster.pid file (this is necessary if you are running
~ the spare postgres on the same hw). 
~ launch the postmaster; the restore will continue until the "alive" file present in the
~ /mnt/server/partialdir directory has not been updated for 60 seconds (you can modify this
~ value inside the restore.sh script).

Be sure that restore.sh and all directories involved are accessible. Let me know. This is a first step; of course, as Eric Kerin did, it would be better to port these scripts to C and make them more robust. Postgres can help this process, as suggested by Tom, by creating a pg_current_wal() or even better by having two new GUC parameters: archive_current_wal_command and archive_current_wal_delay. A problem I discovered during the tests is that if you shut down the spare node while the restore_command is still waiting for a file, then the postmaster will never exit :-(
==
I hope that is clear. Regards Gaetano Mendola

restore.sh:

#!/bin/bash
SOURCE=$1
TARGET=$2
PARTIAL=$3
SIZE_EXPECTED=16777216   # bytes, 16 MB
DIED_TIME=60             # seconds

function test_existence {
   if [ -f ${SOURCE} ]
   then
      COUNTER=0
      # I have to check if the file is being copied;
      # I assume that it will reach the right size in a few seconds
      while [ $(stat -c '%s' ${SOURCE}) -lt $SIZE_EXPECTED ]
      do
         sleep 1
         let COUNTER+=1
         if [ 20 -lt $COUNTER ]
         then
            exit 1   # BAILING OUT
         fi
      done
      cp $SOURCE $TARGET
      exit 0
   fi
   echo "${SOURCE} not found"
   # if it is looking for a history file and the file does not exist
   # it has to exit immediately
   echo $SOURCE | grep history > /dev/null 2>&1 && exit 1
}

while [ 1 ]
do
   test_existence
   # CHECK IF THE MASTER IS ALIVE
   DELTA_TIME=$(( $( date +'%s' ) - $( stat -c '%Z' ${PARTIAL}/alive ) ))
   if [ $DIED_TIME -lt $DELTA_TIME ]
   then
      echo "Master is dead..."
      # Master is dead: hand over the last partial WAL
      CURRENT_WAL=$( basename $SOURCE )
      echo "Partial: ${PARTIAL}"
      echo "Current wal: ${CURRENT_WAL}"
      echo "Target: ${TARGET}"
      cp ${PARTIAL}/${CURRENT_WAL}.partial ${TARGET} > /dev/null 2>&1 && exit 0
      exit 1
   fi
   sleep 1
done

partial_wal_deliver.sh:

#!/bin/bash
PID=$1
PARTIAL=$2
PGXLOG=$3

function copy_last_wal {
   FILE=$( ls -t1p $PGXLOG | grep -v / | head -1 )
   echo "Last Wal $FILE"
   cp ${PGXLOG}/${FILE} ${PARTIAL}/${FILE}.tmp
   mv ${PARTIAL}/${FILE}.tmp ${PARTIAL}/${FILE}.partial
   # remove older .partial files, keeping only the current one
   find ${PARTIAL} -name "*.partial" | grep -v ${FILE} | xargs -i rm -fr {}
}

while [ 1 ]
do
   ps --pid $PID > /dev/null 2>&1
   ALIVE=$?
   if [ ${ALIVE} == 1 ]
   then
      # The process is dead
      echo "Process dead"
      copy_last_wal
      exit 1
   fi
   # The process is still alive
   touch ${PARTIAL}/alive
   copy_last_wal
   sleep 10
done

---(end of broadcast)---
Re: [ADMIN] NIC to NIC connection
Jay A. Kreibich wrote: On Thu, Oct 21, 2004 at 10:07:33AM +0200, Gaetano Mendola scratched on the wall: Bruno Wolff III wrote: Also I believe that if a switch doesn't remember where a particular mac address is it will send the packet to all of the attached ports. I don't think so, I guess the switch perform a sort of arpping in order to detect who have a macaddress assigned, No, he's right. If the MAC to port mapping has not been learned by the switch, the packet is flooded to all ports or (for really bad switches) dropped. A switch is a pure layer-two device and ARP involves layer-three addresses and concepts.

We have some switches that are able to do ip routing too... :-( I have to admit that I'm not a switch specialist, but given the ability to do routing I was imagining the arpping trick... Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] NIC to NIC connection
Jay A. Kreibich wrote: On Thu, Oct 21, 2004 at 07:05:40PM +0200, Gaetano Mendola scratched on the wall: We have some switches that are able to do ip routing too... :-( So called layer-three switches are a whole different game.

Ok, that explains everything. I was able to create two different VLANs with a cable between two ports (in order to simulate two different switches), each port belonging to a different VLAN, and I saw ARP REQUESTs passing over the cable in order to detect the addresses on the other side; just for fun I replaced the cable with a traffic shaper and everything kept working fine... However I wasn't curious enough to see what the ARP request source was; maybe it was not the router/switch but a client that was doing it. Regards Gaetano Mendola ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[ADMIN] Single table vacuum full different when vacuum full the whole database
reusable. CPU 0.11s/0.08u sec elapsed 1.41 sec. INFO: index exp_email now contains 34438 row versions in 2320 pages DETAIL: 27488 index row versions were removed. 1263 index pages have been deleted, 1263 are currently reusable. CPU 0.11s/0.06u sec elapsed 0.90 sec. INFO: index exp_ci_email now contains 34438 row versions in 2357 pages DETAIL: 27488 index row versions were removed. 1290 index pages have been deleted, 1290 are currently reusable. CPU 0.07s/0.08u sec elapsed 0.91 sec. INFO: index exp_lastname now contains 34438 row versions in 1448 pages DETAIL: 27488 index row versions were removed. 507 index pages have been deleted, 507 are currently reusable. CPU 0.04s/0.07u sec elapsed 0.40 sec. INFO: index exp_ci_lastname now contains 34438 row versions in 1444 pages DETAIL: 27488 index row versions were removed. 512 index pages have been deleted, 512 are currently reusable. CPU 0.06s/0.03u sec elapsed 0.47 sec. INFO: index exp_orbital_ptns now contains 34438 row versions in 3001 pages DETAIL: 27488 index row versions were removed. 2766 index pages have been deleted, 2766 are currently reusable. CPU 0.12s/0.07u sec elapsed 1.58 sec. INFO: index exp_stickers now contains 34438 row versions in 2980 pages DETAIL: 27488 index row versions were removed. 2683 index pages have been deleted, 2683 are currently reusable. CPU 0.17s/0.07u sec elapsed 6.25 sec. INFO: index exp_pid now contains 34438 row versions in 2169 pages DETAIL: 27488 index row versions were removed. 1989 index pages have been deleted, 1989 are currently reusable. CPU 0.13s/0.06u sec elapsed 4.19 sec. INFO: index exp_mac_address now contains 34438 row versions in 2012 pages DETAIL: 27488 index row versions were removed. 413 index pages have been deleted, 413 are currently reusable. CPU 0.10s/0.13u sec elapsed 1.08 sec. INFO: index exp_mac_address_normal now contains 34438 row versions in 2014 pages DETAIL: 27488 index row versions were removed. 
416 index pages have been deleted, 416 are currently reusable. CPU 0.06s/0.04u sec elapsed 1.20 sec. INFO: index ua_user_data_exp_id_user_key now contains 34438 row versions in 886 pages DETAIL: 27488 index row versions were removed. 3 index pages have been deleted, 3 are currently reusable. CPU 0.03s/0.06u sec elapsed 0.22 sec. INFO: index ua_user_data_exp_login_key now contains 34438 row versions in 1432 pages DETAIL: 27488 index row versions were removed. 122 index pages have been deleted, 122 are currently reusable. CPU 0.05s/0.05u sec elapsed 0.43 sec. INFO: vacuuming pg_toast.pg_toast_18410 INFO: pg_toast_18410: found 0 removable, 0 nonremovable row versions in 0 pages DETAIL: 0 dead row versions cannot be removed yet. Nonremovable row versions range from 0 to 0 bytes long. There were 0 unused item pointers. Total free space (including removable row versions) is 0 bytes. 0 pages are or will become empty, including 0 at the end of the table. 0 pages containing 0 free bytes are potential move destinations. CPU 0.00s/0.00u sec elapsed 0.00 sec. INFO: index pg_toast_18410_index now contains 0 row versions in 1 pages DETAIL: 0 index pages have been deleted, 0 are currently reusable. CPU 0.00s/0.00u sec elapsed 0.00 sec. INFO: vacuuming public.current_connection

It seems that a vacuum full on the whole DB is more aggressive. My FSM settings are enough: INFO: free space map: 281 relations, 33804 pages stored; 27344 total pages needed DETAIL: Allocated FSM size: 1000 relations + 200 pages = 11780 kB shared memory. Regards Gaetano Mendola ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Single table vacuum full different when vacuum full the whole
Tom Lane wrote: Gaetano Mendola [EMAIL PROTECTED] writes: it seems that a vacuum full on the whole DB is more aggressive. It is not. A much more plausible theory is that this is the result of concurrent changes to the table. It is clear from the dead row version stats that there were concurrent transactions ...

That is the most updated/inserted table, and yes, there were some concurrent transactions. But is it plausible that 82 dead rows were responsible for grabbing 26000 index rows:

INFO: index ua_user_data_exp_id_user_key now contains 34438 row versions in 886 pages
DETAIL: 27488 index row versions were removed.

instead of:

INFO: index ua_user_data_exp_id_user_key now contains 34519 row versions in 886 pages
DETAIL: 1362 index row versions were removed.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Postmaster terminated without any error message
roger wrote: Thanks for your information. I couldn't find the ulimit setting in postgresql.conf. Where can I find it? I use the default settings in pgsql.

Emh, ulimit is an OS command, not a postgresql.conf setting. Do an:

ulimit -a

and let us know...

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
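A quick illustration of the check being asked for above (ulimit is a shell builtin, so this runs anywhere a POSIX shell does):

```shell
# Show every per-process resource limit for the current shell/user;
# this is what "ulimit -a" was asked for above
ulimit -a
# The open-files limit is one that often bites a busy postmaster;
# prints either a number or "unlimited"
ulimit -n
```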
Re: [ADMIN] service not starting
Gary Stainburn wrote: Hi folks. Got a wee small problem: [EMAIL PROTECTED] data]# service postgresql start Starting postgresql service: [FAILED] [EMAIL PROTECTED] data]# Can anyone point me to things I need to look at to sort this? Thank god it's Sunday.

If you are sure that the postmaster is down, try to start it manually and see what it says:

your_bin_path/postmaster -D <path to your data directory>

If you are on a RedHat or Fedora system:

/usr/bin/postmaster -D /var/lib/pgsql/data

The command above assumes you already did an initdb...

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Multiple WALs for PITR ?
Michael Kleiser wrote: I want to use Point In Time Recovery. I already read http://developer.postgresql.org/docs/postgres/backup-online.html#BACKUP-ARCHIVING-WAL but I still wonder: is it possible for PostgreSQL 8.0 to write multiple, redundant WAL files like Oracle does with its redo log groups?

Could you explain what that feature does that the PITR implementation doesn't support?

Is there an equivalent to Oracle's ALTER SYSTEM SWITCH LOGFILE?

Given that this is a Postgresql list, could you kindly explain what that command does, and also why you think Postgresql must support it?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Failover Solution for Postgres
John Allgood wrote: Hello All. I am looking for information on postgres failover solutions. I have researched Dbmirror, DBcluster, and GFS along with others. This server is running rh-postgresql-7.3.6 and Redhat ES 3.0 and I am trying to get an idea of what is stable and reliable. What are people on this list running for your failover solutions and what recommendations can you give me? The server will be running about 9 databases anywhere from 350MB to 3GB. Looking to go live on the system 1st quarter 2005. I would love to wait until postgres 8.0 is released and get the point-in-time recovery along with the other features that will be in that version. But that will have to wait.

We use the RH cluster solution.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] PLEASE GOD HELP US!
Scott Marlowe wrote: On Sat, 2004-10-02 at 09:14, Stephan Szabo wrote: On Fri, 1 Oct 2004, Scott Marlowe wrote: On Fri, 2004-10-01 at 14:26, Shane | SkinnyCorp wrote: Okay, just so no one posts about this again... the 'ORDER BY t.status=5,lastreply' clause is meant to float the threads with a status of '5' to the top of the list... it is NOT meant to only grab threads where the status = 5. Oh, and believe me, when I take this out of the query, it CERTAINLY doesn't add any more than possibly 1/4 of a millisecond to the speed of the SELECT statement. Wouldn't this work just as well? SELECT * FROM thread_listing AS t ORDER BY t.status DESC, t.lastreply DESC LIMIT 25 OFFSET 0 Probably not, because I don't think he wants the other statuses to have special ranking over each other, so a status=4 and a status=1 row should effectively be sorted by lastreply only. This is the problem of combining separate status flags into a single field if you want to be doing these sorts of queries. So would a union give good performance? Just union the first 25 or fewer with status=5 with the rest, using a 1 and 0 in each union to order by first? Hopefully the indexes would then be used.

Has anyone seen that the OP is running the server with sequential scans disabled?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] logging
Theo Galanakis wrote: I'm currently logging postgres debugging to syslog and wanted to know if it is possible to log which database each log entry is referring to. At the moment it looks something like this:

Sep 30 02:49:59 tickle postgres[31285]: [31-1] LOG: statement: select * from content_objects^M
Sep 30 02:49:59 tickle postgres[31285]: [31-2] ^I^I^I^Iwhere content_object_id = ^M
Sep 30 02:49:59 tickle postgres[31285]: [31-3] ^I^I^I^I^I'680'
Sep 30 02:49:59 tickle postgres[31285]: [32-1] LOG: duration: 1.147 ms

I was thinking of perhaps looking at the table in the sql and trying to match it up with information_schema.columns and obtain the database from that! However I thought that if there was a debugging option available it would save a heap of time.

Unfortunately I have to say that it is impossible to do for versions before 8.0 (which, BTW, is still in beta phase).

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [ADMIN] PLEASE GOD HELP US!
Shane | SkinnyCorp wrote: # PGSQL Version 7.4.2

Upgrade to 7.4.5.

#---
# RESOURCE USAGE (except WAL)
#---
# - Memory -
shared_buffers = 8192
sort_mem = 8192
vacuum_mem = 127072

Here vacuum_mem is too high, memory is wasted; if you do regular vacuums or use the autovacuum daemon then usually 32MB are enough.

# - Free Space Map -
max_fsm_pages = 5       # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 1000    # min 100, ~50 bytes each

You didn't show us the result of vacuum full verbose, so I can not tell you if these are enough or not.

enable_seqscan = false

enable_seqscan = true

Do you think that an index scan is always worse than a seqscan?

random_page_cost = 4    # units are one sequential page fetch cost

Decrease this value to 2.5.

# - Query/Index Statistics Collector -
stats_start_collector = false
stats_command_string = false
stats_block_level = false
stats_row_level = false
stats_reset_on_server_start = false

I suggest you start the collectors:

stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
stats_reset_on_server_start = false

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] moving pg_xlog
Anjan Dave wrote: All: What's the best way to separate pg_xlog onto another drive? I am using a soft link currently in the data directory (per docs), but is there another way of doing this?

Nope, not currently.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
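For the record, the soft-link move itself is simple. Here is a hedged sketch using scratch directories in place of a real $PGDATA and second drive (with a live cluster the postmaster must be stopped before touching pg_xlog):

```shell
# Scratch directories stand in for the data directory and the new drive
PGDATA=$(mktemp -d)
NEWDISK=$(mktemp -d)
mkdir -p "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000042"   # pretend WAL segment

mv "$PGDATA/pg_xlog" "$NEWDISK/pg_xlog"            # move WAL to the new drive
ln -s "$NEWDISK/pg_xlog" "$PGDATA/pg_xlog"         # leave a symlink behind

ls -l "$PGDATA/pg_xlog/"   # the server still sees its WAL through the link
```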
Re: [ADMIN] stopping pg_autovacuum
Cris Carampa wrote: I put pg_autovacuum -D in the start section of my rc postgresql script, just after the pg_ctl command. What should I put in the stop section? I understand the pg_autovacuum script should be stopped before the postmaster stops. Is it safe to kill -9 it?

Why be so brutal? Do you know what that -9 means? Usually it is not a good idea to stop *nix processes with the -9 signal. Be nicer with a kill -SIGTERM.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
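A sketch of what the rc script's stop section could look like, assuming the daemon can be found by name with pgrep (the pattern is an assumption; adjust it to how you start pg_autovacuum):

```shell
# Stop pg_autovacuum gracefully: find its PID and send SIGTERM
# so it can exit cleanly, instead of SIGKILL (-9)
stop_pg_autovacuum() {
    pid=$(pgrep -f pg_autovacuum | head -n 1)
    if [ -n "$pid" ]; then
        kill -TERM "$pid"
    fi
}

stop_pg_autovacuum
```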
Re: [ADMIN] Getting the OID of inserted row in a rule
Bradley Kieser wrote: Hi Michael, But what if another insert happens in the meantime? Then the sequence would have advanced and the data will be scrambled. Given that this could be a very active dataset, that situation is almost certain to occur.

I think you are wrong. Sequences are not affected by transactions (on rollback the serial is not rolled back), but the values are nonetheless isolated between sessions. I don't see the problem:

#session1: begin; insert into test (b) values ( 1 );
#session2: begin; insert into test (b) values ( 1 );
#session1: select currval('test_a_seq');   gives 1
#session2: select currval('test_a_seq');   gives 2

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Important Question.
[EMAIL PROTECTED] wrote: Hi, I have MySQL installed on my box and many users using it and I would like to start an irc server, but I need to use PostgreSQL for the services (only Postgres is supported). Can I use MySQL and PostgreSQL on the same box? Will they interfere with each other?

<kidding> You have to install both in different jails, otherwise the code of one will try to overwrite the code of the other. Do you know corewars? :-) </kidding>

I'm not aware of any problem.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [ADMIN] VACUUM FULL achieves nothing / Postgres 7.3.2 + RedHat 7.3
R. Willmington wrote: Gaetano Mendola [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]... Are you sure you don't have connections in status idle in transaction? Check with: ps -eafwww | grep post Gaetano Hello Gaetano, thank you for the reply. I have checked the connections with the command you suggested; they are all in status idle. Update on the problem described in my initial posting: I have deleted some 4000 records from the table on the live system in the meantime; it took about 36 minutes to complete.

Do you have cascade deletes? 4000 records in 36 minutes is way too slow; on my system I delete millions of records in a few seconds. Are you sure you are presenting the entire scenario?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] VACUUM FULL achieves nothing / Postgres 7.3.2 + RedHat 7.3
R. Willmington wrote: Additional information: The applications using the database are java-based and create persistent connections (from a connection pool) to the postgres database. These connections are on autocommit, thus there should not be any opened transactions preventing the vacuum full from write-locking the tables. Any idea what I am doing wrong? Any help appreciated.

Are you sure you don't have connections in status idle in transaction? Check with:

ps -eafwww | grep post

Gaetano

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [ADMIN] Dump/Restore vs Vacuum Full
Adi Alurkar wrote: Any comments? knock knock!! is this thing ON? On Sep 1, 2004, at 5:59 PM, Adi Alurkar wrote: Greetings, Are there any differences or advantages of running a dump/restore vs running the following: 1) REINDEX all the tables in all the databases, 2) VACUUM FULL ANALYZE on all the databases. Assuming the following: 1) the system can be offline, 2) PG version 7.4.x, 3) pg_autovacuum running as a daemon with 30 minutes sleep, but have had a few occasions where pg_autovacuum had to be killed and restarted, 4) FSM settings are set higher than required by a VACUUM ANALYZE VERBOSE test.

Go for a vacuum full and reindex of all tables. The advantage is that a dump/restore takes longer, provided you vacuum the database often enough. BTW, with 7.4.x there is no need to run vacuum full so often.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] backups and WAL
Guido Barosio wrote:
| On this issue,
| Is it a good idea to ln -s the pg_xlog directory onto another place while the postmaster is running?

I don't think so.

| I agree with pablo, documentation at this point is not easy to find.

I disagree instead:

1) Go to the first doc page: http://www.postgresql.org/docs/7.4/interactive/index.html
2) Search for backups
3) The first 4 entries are what you have to know

Please do not take this as RTFM.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] add serial no
Keith wrote: Dear All, can someone help me to solve the below problem? 1. I created a table; over a period of time a huge number of records has been posted to it. I would like to alter the table and add a serial primary key on that table. It's impossible to add the serial numbers by hand. Please advise how I can add the serial number automatically.

Just to inform you that with the future 8.0 postgresql version you can do this task easily:

kalman=# select * from test;
 field_1
---------
       3
       5
       7
       6
       8
(5 rows)

kalman=# alter table test add column pk serial primary key;
NOTICE: ALTER TABLE will create implicit sequence test_pk_seq for serial column test.pk
NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index test_pkey for table test
ALTER TABLE
kalman=# select * from test;
 field_1 | pk
---------+----
       3 |  1
       5 |  2
       7 |  3
       6 |  4
       8 |  5
(5 rows)

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] PostgreSQL as a DDBMS
Rodrigo Bonfa wrote: Friends, I would like to know if PostgreSQL can run as a Distributed Data Base Management System. Is it possible, through PostgreSQL, to implement horizontal partitioning, that is, to fragment tables horizontally? For example, can I have a Data Base with 1 table, where it is horizontally fragmented and each fragment is located in a different place (postgresql servers)? Note that the Data Base is unique, but the data are distributed through horizontal partitioning. How could PostgreSQL help me to solve this problem?

In one word: no.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] What's the best way to use a Solid State HDD?
Chris Gamache wrote: I'm using a TiGiJet 2GB Solid State drive.

How much does it cost?

My thought was to place the WAL on it to speed up writes. I don't know of any tools that exist to determine the effectiveness of this except for anecdotal analysis ("This type of query took 3 minutes yesterday, and 1 minute today")... Sar gives me machine-wide stats, but we do more than PostgreSQL here. I can enable stats, but will that tell me what I'm looking for? Vmstat reports no swapping of pages of ram to disk, so I don't think it would make sense to make it a Linux swap drive. I suppose I could move some actual tables and indexes to the device. Tablespaces might make this easy. I'm still using 7.4.3. If I had to pick between tables and indexes, which would make more sense to put on the solid-state device? What would you do if you had a solid state hdd to use and you wanted to speed up PostgreSQL?

It depends on what you want to obtain. Do you want to speed up writes? Are you annoyed by a full table scan that you can not avoid? Or what?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
[ADMIN] Documents storage
Hi all, where I work we sometimes need to find our documents by searching for a couple of words. I was thinking of using tsearch2. Shall I use OpenFTS? Do you have any suggestions?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Documents storage
Oleg Bartunov wrote: On Tue, 17 Aug 2004, Gaetano Mendola wrote: Hi all, where I work we sometimes need to find our documents by searching for a couple of words. I was thinking of using tsearch2. Shall I use OpenFTS? Do you have any suggestions? tsearch2 is ok if you have docs and db on the same server; OpenFTS is a middleware on top of tsearch2.

What advantage do I have in using OpenFTS instead of tsearch2 directly? I read the FAQ but I'm still missing what I can do with OpenFTS that I can not with tsearch2.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] Killing Long running query or user session
David Ecker wrote: How can I kill/stop a long running query or a complete user session using a superuser account? Thx, David Ecker

To my knowledge you have to kill -SIGTERM the backend that is managing the connection. As superuser do a select * from pg_stat_activity, look at the procpid that has to be killed, and kill it.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
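A hedged sketch of that procedure from the shell. The query is the one above; the output here is canned sample data standing in for `psql -At -c "SELECT procpid, usename, current_query FROM pg_stat_activity;"` so the PID extraction can be shown without a live server:

```shell
# Canned pg_stat_activity output: procpid|usename|current_query
sample='4711|appuser|SELECT * FROM huge_table ...
4712|appuser|<IDLE>'

# Pick the procpid of the first (offending) backend
pid=$(printf '%s\n' "$sample" | awk -F'|' 'NR==1 {print $1}')
echo "$pid"   # prints 4711

# With a real cluster, as the postgres OS user, you would then:
#   kill -TERM "$pid"
```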
Re: [ADMIN] Killing Long running query or user session
David Ecker wrote: Is there a possibility to kick the session through a sql statement?

NOPE.

Another possibility would be to block access to a database for a set of users/groups using sql only. Is there such a command?

You can allow a user access to a certain DB only from certain IPs; I don't know if this can help you and/or solve your problem.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
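That per-DB/per-IP restriction lives in pg_hba.conf; a hypothetical fragment (user, database, and network are made up, and the address/netmask column layout is the 7.x-era format):

```
# allow user appuser to reach db appdb only from the 192.168.1.0/24 net
host  appdb  appuser  192.168.1.0  255.255.255.0  md5
# reject everybody else on that db
host  appdb  all      0.0.0.0      0.0.0.0        reject
```

Remember the postmaster only re-reads pg_hba.conf on SIGHUP.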
Re: [ADMIN] Win32 and Tablespaces
Thomas Wegner wrote: Why are tablespaces not supported under Win32?

Who said that? In the last days Andreas Pflug committed a patch that will permit the Win32 version to have tablespaces.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[ADMIN] running only pg_autovacuum for one week
Hi all,
I did the experiment of running only the autovacuum for one week, without running my daily autovacuum full and reindex on heavily updated/inserted tables. Yesterday I re-enabled the vacuum full and reindex and, as you can see from the attachment, I recovered 600MB of wasted space.

Regards
Gaetano Mendola

inline: space_usage_1.png

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] running only pg_autovacuum for one week
Scott Marlowe wrote: On Mon, 2004-08-02 at 07:08, Gaetano Mendola wrote: Hi all, I did the experiment of running only the autovacuum for one week, without running my daily autovacuum full and reindex on heavily updated/inserted tables. Yesterday I re-enabled the vacuum full and reindex and, as you can see from the attachment, I recovered 600MB of wasted space.

autovacuum is a daemon. You don't run it once a week, you set it loose and forget about it. Unless you're running it then shutting it down, running once a week is unnecessary.

Sorry, I meant that I ran only pg_autovacuum for the entire week, without also running my vacuum full + reindex once a day. I was only stopping and restarting it for log rotation purposes. Analyzing the graph: on Wednesday I had a few processes in "idle in transaction" state and, as you can see, the graph had a big ramp, and that space was not reclaimed till this morning :-(

Also, what are your fsm settings in the postgresql.conf file?

At the end of my vacuum full I have:

INFO: free space map: 603 relations, 38202 pages stored; 40592 total pages needed
DETAIL: Allocated FSM size: 1000 relations + 200 pages = 11780 kB shared memory.

So I think that I'm not reaching the limits. As explained in another thread, until I can set the thresholds per table, the autovacuum is useless for me (million-row tables with hundreds of inserts per day).

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] replication and linux-ha?
Negyesi Karoly wrote: Welcome, I am planning to build a linux-ha cluster. Is replication applicable in a solution like this? The main questions are, after the main node fails and the slave node becomes the master: -- will a postmaster, started as slave, be able to work standalone? -- after repairing the main node, it will become the slave node, so the aforementioned postmaster shall become master. Is this possible? Thanks, Karoly Negyesi

Yes you can. You have to define a service with an IP address, so when the postgres service is relocated to the passive node the IP is migrated too. And you also have to develop a script that the linux-ha cluster will use in order to know if the service is working. We adopted this solution with the help of a SAN attached via Fibre Channel (this assures us that only one node at a time can be attached to the common storage) and everything is working fine.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
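As a hedged illustration: with heartbeat-style linux-ha, the "service with an IP address" boils down to an haresources line like the one below (node name, address, and the use of the stock postgresql init script are hypothetical; adapt to your cluster software):

```
# /etc/ha.d/haresources — preferred node, floating service IP, resource script
db-node1  192.168.1.50  postgresql
```

On failover, heartbeat moves the 192.168.1.50 address and starts the postgresql resource on the surviving node, so clients keep connecting to the same IP.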
Re: [ADMIN] Utilizing multiple disks
Steve wrote: Gaetano, thanks for your reply. Gaetano Mendola wrote: Are you performing queries using the like operator? If yes, did you define an index on it using the right operator class? Yes, I do use the like operator to perform queries on a few 'varchar' and 'text' fields. I have indexed almost every column in the table (because the client application allows complex searches on multiple columns and each column has an equal chance of being used in the 'WHERE' condition).

And did you use the right operator class? See http://www.postgresql.org/docs/7.4/interactive/indexes-opclass.html for details.

Are you performing queries on a calculated field? If yes then you need to construct a sort of materialized view. Hmm, not sure what you mean by a calculated field. Since I'm not sure what it means, I'm probably not using it. I'm definitely not running any functions on columns to get an accumulated result back etc.

If you have for example a view like:

CREATE VIEW v_test AS
   SELECT foo(a)   AS field_1,
          bar(a+b) AS field_2
   FROM test;

and you do:

SELECT * FROM v_test WHERE field_2 = 123;

then you are in trouble!

If you are on Linux, did you mount your data partition with the option noatime? Yes, I'm on linux and am using the ext3 filesystem. I am just about to mount the data partition with the noatime option. Docos do say that the speed increases considerably. I'll give it a shot but I'm still very interested in utilizing the extra three SCSI disks I have at my disposal.

Mount those disks in a RAID configuration then; you can at least try to move the WAL to a separate disk.

Database integrity is of utmost importance and so is speed. I know there are tradeoffs but I really do think that moving the data to a separate disk will help. Can you please tell me how to set this up? Thanks a lot!
Steve

Version 7.5 will give you the tablespaces feature; in the meantime you can move your tables and/or indexes across your disks using symlinks: move the file associated with your table or index to another disk and replace it with a link.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Can't increase max connections
Roberto De Shong wrote:
| I have installed PostgreSQL 7.3 on FreeBSD 4.10 Stable, and have been
| trying to increase the max connections. The most I'm able to get is 40; if I
| increase it any higher I'm unable to start the database. Is there any
| tip, or any guide where I can get an idea of the other values to set
| in postgresql.conf for increasing max connections? I have also recompiled
| the kernel and changed maxusers from 0 to 64. I'm not sure if increasing
| it even more would allow me to increase connections. Would appreciate some
| help, thanks.

Most likely you are out of the shared memory segments that your OS allows. If you are on Linux try to pump up shmmax and shmall.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
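A sketch of inspecting those limits before pumping them up. The paths are Linux; on FreeBSD (where the original poster is) the analogous sysctls are kern.ipc.shmmax and kern.ipc.shmall, and raising values needs root:

```shell
# Show the current shared-memory ceilings (Linux path first,
# FreeBSD sysctl as fallback)
cat /proc/sys/kernel/shmmax 2>/dev/null || sysctl -n kern.ipc.shmmax
cat /proc/sys/kernel/shmall 2>/dev/null || sysctl -n kern.ipc.shmall

# To raise them at runtime (as root; 134217728 bytes = 128 MB is only
# an example, size it to your shared_buffers):
#   sysctl -w kernel.shmmax=134217728
```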
Re: [ADMIN] could not fork new process for connection: Cannot allocate memory
Pavel Veretennikov wrote:
| What could be the problem? About 3G of shared memory is allocated to PG,
| 85 shared_buffers.

I don't know what your problem can be, but are you sure you need that *very huge* amount of shared memory? Let us know the typical usage of your system. I have seen many Postgres installations with bunches of SHM wasted!!!

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Utilizing multiple disks
Steve wrote: Hi, I've been running postgres on my server for over a year now and the tables have become huge. I have 3 tables that have over 10GB of data each and these tables are read very very frequently. In fact, heavy searches on these tables are expected every 2 to 3 minutes. This unfortunately gives a very poor response time to the end user and so I'm looking at other alternatives now.

This depends on the queries that you are running on them:

Are you performing queries using the like operator? If yes, did you define an index on it using the right operator class?

Are you performing queries on a calculated field? If yes then you need to construct a sort of materialized view.

If you are on Linux, did you mount your data partition with the option noatime?

Please provide us more information on your queries, on your data, your configuration... Usually splitting your tables over multiple disks is the last optimization step; are you sure you have already reached the bottleneck of your system?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
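As a hedged example, the noatime option mentioned above goes into /etc/fstab so it survives a reboot (device, mount point, and filesystem here are hypothetical):

```
# /etc/fstab — data partition without access-time updates on every read
/dev/sdb1  /var/lib/pgsql  ext3  defaults,noatime  1 2
```

After editing fstab you can apply it without rebooting via `mount -o remount /var/lib/pgsql`.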
Re: [ADMIN] Profiling
mike g wrote: There are, but it depends on whether you are running under Solaris, Linux, etc. Most of the below can be checked with applications included with your OS. Mike. On Fri, 2004-07-23 at 06:23, Werner vd Merwe wrote:

...and produce some graphs using Cricket (google for details).

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Replication in main PostgreSQL codebase
Christopher Browne wrote: After a long battle with technology, [EMAIL PROTECTED] (Matt Browne), an earthling, wrote: I apologise in advance if any of my questions are in a FAQ somewhere - I haven't seen them... Does anyone know when (if ever) replication support will be added to the main PostgreSQL codebase? Is there any chance it'll be included in the upcoming 7.5 release? Not much of a chance. I've taken a look at the replication solutions already available, including the sterling effort made on Slony-I, but we're really looking for integrated, base-level support rather than an add-in. The problem is that replication doesn't tend to mean one thing; people rather have different ideas of what it means. Jan Wieck puts it fairly nicely: The number of different replication solutions available supports the theory that "one size fits all" is not true when it comes to database replication.

We all agree on that; postgres is hiding behind this concept just as MySQL did till now with stored procedures, referential integrity, transactions, bla bla bla. Most of the time the first question that I receive is: is it possible to replicate my data to another database in order to perform heavy queries on it without affecting the master's performance? You have to see the faces when the answer is: with an add-in. I believe that with PITR + NT, one or more integrated solutions in postgres are not too far away from being developed.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Logging PostgreSQL output
Joost Kraaijeveld wrote: Hi all, I want to log all PostgreSQL's output in a separate file. Is it possible to use the syslog system to do that or is the only way to use the stdout method? If it is possible to use the syslog system, how should I do this (I have tried several things but nothing I tried worked)?

Put these lines in your postgresql.conf:

syslog = 2
syslog_facility = 'LOCAL0'
syslog_ident = 'postgres'

After this you have to configure your syslogd; add these lines to your syslog.conf:

# Postgres
LOCAL0.*        -/var/log/postgresql.log

Don't forget to add LOCAL0.none to your /var/log/messages line, and to add that - char in front of your log file location. Now you have to SIGHUP both postgres and syslogd.

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] [HACKERS] Point in Time Recovery
Simon Riggs wrote: On Wed, 2004-07-14 at 03:31, Christopher Kings-Lynne wrote: Can you give us some suggestions of what kind of stuff to test? Is there a way we can artificially kill the backend in all sorts of nasty spots to see if recovery works? Does kill -9 simulate a 'power off'? I was hoping some fiendish plans would be presented to me... But please start with "this feels like typical usage" and we'll go from there... the important thing is to try the first one. I've not done power off tests yet. They need to be done just to check... actually you don't need to do this to test PITR... We need exhaustive tests of:
- power off
- scp and cross network copies
- all the permuted recovery options
- archive_mode = off (i.e. current behaviour)
- deliberately incorrectly set options (idiot-proof testing)

If you also write how to perform these tests, that would be good in order to show which problems PITR is addressing; I mean, I know that it addresses a power off, but how will I recover from it?

Regards
Gaetano Mendola

---(end of broadcast)---
TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] users connected
Cardoso Patrick wrote: Hi, Is it possible to know the number of users connected to a database ? select * from pg_stat_activity. I'm not sure which permissions you need for it; try it. Regards Gaetano Mendola ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Slony NG
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Robert Treat wrote: | On Tue, 2004-07-13 at 19:56, Gaetano Mendola wrote: | |-BEGIN PGP SIGNED MESSAGE- |Hash: SHA1 | |Bruno Wolff III wrote: | || On Tue, Jul 13, 2004 at 15:32:49 +0200, || Gaetano Mendola [EMAIL PROTECTED] wrote: || ||-BEGIN PGP SIGNED MESSAGE- ||Hash: SHA1 || ||Bruno Wolff III wrote: || ||| On Tue, Jul 13, 2004 at 09:54:29 +0200, ||| Gaetano Mendola [EMAIL PROTECTED] wrote: ||| |||Hi all, |||why don't you create comp.database.postgresql.slony NG ? ||| ||| ||| It isn't that simple. For big 8 newsgroups you have to make a proposal ||| and people vote on it. You need 100 more Yes's than No's. || ||Let start then... || || || Go ahead. You don't need anyone here to OK putting forward a new group || creation proposal. Googling for newsgroup creation should point you to || documentation on the process. The only thing you want to get from here || is some idea of whether or not people are likely to vote for creation || of the newsgroup. || Personally I will probably abstain. || | |I was believing that the NGs news.XX.postgresql.org were managed by postgresql |developer group and the vote could be easily done trough a poll in the postgresql |web site. | | | | You are aware that there is a slony mailing list at | http://gborg.postgresql.org/mailman/listinfo/slony1-general right? Does | that not serve your purposes? I knew about it; sometimes I'm on IRC too, but I feel better using a NG instead of a mailing list, just because I can consult it even when my laptop is not with me. Regards Gaetano Mendola -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFA9pje7UpzwH2SGd4RAl34AJ935ugfJUT4DiBvvokbsm6RMi9T9ACbBoB9 azZrJgHsV76sSoAcJn5gz3k= =VIpd -END PGP SIGNATURE- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Slony NG
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Bruno Wolff III wrote: | On Tue, Jul 13, 2004 at 09:54:29 +0200, | Gaetano Mendola [EMAIL PROTECTED] wrote: | |Hi all, |why don't you create comp.database.postgresql.slony NG ? | | | It isn't that simple. For big 8 newsgroups you have to make a proposal | and people vote on it. You need 100 more Yes's than No's. Let's start then... Regards Gaetano Mendola -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFA8+SA7UpzwH2SGd4RAuDFAKCAe+oGGwQyudBAwwy29cywt7JYTQCfVslX lJ6bMHgSp30D4NduVzuCP8g= =KDf2 -END PGP SIGNATURE- ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [ADMIN] Slony NG
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Bruno Wolff III wrote: | On Tue, Jul 13, 2004 at 15:32:49 +0200, | Gaetano Mendola [EMAIL PROTECTED] wrote: | |-BEGIN PGP SIGNED MESSAGE- |Hash: SHA1 | |Bruno Wolff III wrote: | || On Tue, Jul 13, 2004 at 09:54:29 +0200, || Gaetano Mendola [EMAIL PROTECTED] wrote: || ||Hi all, ||why don't you create comp.database.postgresql.slony NG ? || || || It isn't that simple. For big 8 newsgroups you have to make a proposal || and people vote on it. You need 100 more Yes's than No's. | |Let start then... | | | Go ahead. You don't need anyone here to OK putting forward a new group | creation proposal. Googling for newsgroup creation should point you to | documentation on the process. The only thing you want to get from here | is some idea of whether or not people are likely to vote for creation | of the newsgroup. | Personally I will probably abstain. | I believed that the news.XX.postgresql.org NGs were managed by the PostgreSQL developer group and that the vote could easily be done through a poll on the PostgreSQL web site. Regards Gaetano Mendola -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFA9HbH7UpzwH2SGd4RAg2SAJ46FFZ1/iTnYTIhXc4XkHlqp1b9ZQCgla/m q4X99g8dla89ZuOgUZ2E5xE= =Xo8b -END PGP SIGNATURE- ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] How to list what queries are running in postgres?
Tom Lane wrote: Gaetano Mendola [EMAIL PROTECTED] writes: P.A.M. van Dam wrote: I'm looking for a command or method to find out what queries are currently being serviced by the database. select * from pg_stat_activity; ... having first remembered to turn on stats_command_string; and thou shalt also be superuser. [ anybody else remember Monty Python's scriptures concerning the Holy Hand Grenade? ] Do you mean this ? Then did he raise on high the Holy Hand Grenade of Antioch, saying, Bless this, O Lord, that with it thou mayst blow thine enemies to tiny bits, in thy mercy. And the people did rejoice and did feast upon the lambs and toads and tree-sloths and fruit-bats and orangutans and breakfast cereals ... Now did the Lord say, First thou pullest the Holy Pin. Then thou must count to three. Three shall be the number of the counting and the number of the counting shall be three. Four shalt thou not count, neither shalt thou count two, excepting that thou then proceedeth to three. Five is right out. Once the number three, being the number of the counting, be reached, then lobbest thou the Holy Hand Grenade in the direction of thine foe, who, being naughty in my sight, shall snuff it. :-) Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Capacity Planning
Jeff Keller wrote: I had a typo in the first post. The Record Reads per day should be 50,000,000, not 500 Million. My mistake. One decimal place makes a huge difference. Our current app is Progress based, running on an IBM p650 with 4 processors, and I suspect a similar load if we were to change apps and databases. That's different, and is completely affordable by an IBM p650. How much RAM? What about your disks? I forgot to say in my previous post that with my numbers: 100 concurrent users 2,000,000 reads per 12 h 1,000,000 updates per 12 h 50,000 new records each day the load average (the unix one) is under 2. Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] How to list what queries are running in postgres?
P.A.M. van Dam wrote: Hi! I'm looking for a command or method to find out what queries are currently being serviced by the database. select * from pg_stat_activity; Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Capacity Planning
Jeff Keller wrote: Hi All - We are reviewing possible database and operating solutions for our company and we are looking at running PostgreSQL on Linux. Does PostgreSQL have the capability to handle the following requirements? Is anyone successfully running an application with similar characteristics? 100 Gig Database with 600 concurrent users. 500,000,000 Record Reads per 12 Hour Business Day 200,000 Record Creates per 12 Hour Business Day 1,500,000 Record Updates per 12 Hour Business Day Well, those are big numbers. What you need is, for sure, big iron. Tell us what you are planning to buy in order to support that load. My actual experience is (roughly): 100 concurrent users 2,000,000 reads per 12 h 1,000,000 updates per 12 h 50,000 new records each day As you can see this scenario is far away from your needs, but we are using only a two-processor Intel Xeon 2.8 GHz in hyperthreading mode with a not so tuned RAID system and only 1 GB of RAM. I think that with 8 processors, good fiber channel access to your RAID, and a good amount of memory you can easily reach those numbers. This is a challenging task to accomplish; do you need any help out there ;-) ? Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
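For a sense of scale, the figures quoted above work out to these sustained per-second rates (simple integer arithmetic over a 12-hour business day):

```shell
# Sustained rates implied by the stated workload, over a 12-hour business day.
DAY=$((12 * 3600))                       # 43200 seconds
echo "reads:   $((500000000 / DAY))/s"   # record reads per second
echo "creates: $((200000 / DAY))/s"      # record creates per second
echo "updates: $((1500000 / DAY))/s"     # record updates per second
```

Roughly 11,500 reads per second sustained, which is why the replies focus on big iron rather than on any PostgreSQL limitation.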
Re: [ADMIN] PostgreSQL Performance
Milosz Miecznik wrote: Hi! I have a very important question about the performance of PostgreSQL. How will it work with: - about 300 insert operations per day, - about 100 selects per day, - about 100 permanently connected users? What hardware platform shall I use for such a big database (RAM, no. of CPUs, hard disc capacity...)? What operating system shall I use? (I think about RedHat 8.x) I don't have 300 inserts per day, but 300 updates per day; for the rest I have more than your numbers, and in 3 years I have not had a single data loss or performance problem. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] ALTER table taking ages...
Rajesh Kumar Mallah wrote: Greetings! It takes ages to drop a constraint from one of my tables [ table details at the end ]. I cannot insert into it either. I know pg_dump is not running and no other query is accessing the table. Can anyone help me debug this problem? Can anyone explain the following entries in pg_locks where relation and database are both null?

tradein_clients=# SELECT * from pg_locks where mode='ExclusiveLock' and granted is true;
+----------+----------+-------------+-------+---------------+---------+
| relation | database | transaction |  pid  |     mode      | granted |
+----------+----------+-------------+-------+---------------+---------+
| NULL     | NULL     |   116230313 | 19898 | ExclusiveLock | t       |
| NULL     | NULL     |   116230309 | 24779 | ExclusiveLock | t       |
| NULL     | NULL     |   116230267 | 24780 | ExclusiveLock | t       |
| NULL     | NULL     |   116230303 | 24764 | ExclusiveLock | t       |
| NULL     | NULL     |   116230302 | 24751 | ExclusiveLock | t       |
| NULL     | NULL     |   116230308 | 24767 | ExclusiveLock | t       |
| NULL     | NULL     |   116230274 | 24761 | ExclusiveLock | t       |
| NULL     | NULL     |   116230306 | 24752 | ExclusiveLock | t       |
| NULL     | NULL     |   116230312 | 23222 | ExclusiveLock | t       |
| NULL     | NULL     |   116230290 | 24768 | ExclusiveLock | t       |
| NULL     | NULL     |   116230292 | 24776 | ExclusiveLock | t       |
| NULL     | NULL     |   116230297 | 24753 | ExclusiveLock | t       |
| NULL     | NULL     |   116230295 | 24765 | ExclusiveLock | t       |
| NULL     | NULL     |   116230152 | 24096 | ExclusiveLock | t       |
| NULL     | NULL     |   116230311 | 24769 | ExclusiveLock | t       |
| NULL     | NULL     |   116194826 | 23048 | ExclusiveLock | t       |
| NULL     | NULL     |   116230307 | 24758 | ExclusiveLock | t       |
+----------+----------+-------------+-------+---------------+---------+
(17 rows)

May I know how you obtain this kind of output (the big square around the results)? Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] determining when a database was created
Somasekhar Bangalore wrote: Hi All, Is there a tool with which I can convert data from postgres 7.3.2 to oracle 9i? http://www.oracle.com/support/index.html?contact.html thanks in advance som You're welcome. Regards Gaetano Mendola ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [ADMIN] postmaster terminated abnormally..
[EMAIL PROTECTED] wrote: Hi, Under what circumstances would the postmaster receive signal 11? My postgres server crashed today and analysis of the log yielded sig 11, which is SIGSEGV. Any idea on how to prevent it? Are you running your own functions written in C ? Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [ADMIN] Can't restore from pg_dump. Authentication failed
Greg wrote: Hi, Frankly I don't know what more I should do. The problem is: I've made a pg_dump from the shell like that: pg_dump -Upostgres -O -D -Fc --file=baza.sql Copied the file to the new server and from the shell executed: linux:~ # pg_restore -d ppr -U postgres -i baza.sql The result is pg_restore: [archiver (db)] connection to database ppr failed: FATAL: IDENT authentication failed for user postgres I don't know what is going on since I can login to the ppr database like that (in shell): su - postgres psql -dppr Try it this way: su - postgres psql -f baza.sql -dppr Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] Table maintenance: order of operations important?
Tom Lane wrote: Jeff Boes [EMAIL PROTECTED] writes: DELETE FROM foo WHERE date_expires < now(); VACUUM ANALYZE foo; CLUSTER foo; REINDEX TABLE foo; How would you choose to order these (under 7.4.x) for fastest turn-around? Does it matter? If you are going to CLUSTER then the VACUUM and the REINDEX are both utterly redundant. Without CLUSTER, on 7.4.2, is a REINDEX redundant after a VACUUM FULL? Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
[ADMIN] HEC Montreal use Postgres
Interesting article in the May Linux Journal (page 44): [...] we installed Spamity, which parses mail logs from the four Postfix servers and updates a PostgreSQL database running on the test server [...] Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Log msg
Jie Liang wrote: Hi, All, How do I configure the log file to log statements without the CONTEXT stuff? I want to log the query, but not the details unless there is a problem, e.g. Delete aaa from mytable where mysqlfunction(bbb); I want to log this statement, but not any CONTEXT inside mysqlfunction, because it will easily fill up my disk if mytable has millions of entries; meanwhile, the log file becomes useless. But if I set log_statement = true, the log file will log millions of statements in mysqlfunction, and if I set log_statement = false, the log file won't log the query at all. Simply put, I want to log what I execute (my query), but not how it executes (the process). Decrease the verbosity of your logs using: log_error_verbosity = terse Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[ADMIN] cast not IMMUTABLE?
Hi all, I have a table with ~ 3e+6 rows in it. I select from this table in this way: (1) select * from user_logs where login_time::date = now()::date; consider that login_time is a TIMESTAMPTZ with an index on it. If I use the select in this way: select * from user_logs where login_time = now(); then the index is used. I'm trying to define an index in order to help query (1): test# create index idx on user_logs ( (login_time::date) ); ERROR: functions in index expression must be marked IMMUTABLE Why is that cast not considered IMMUTABLE? How can I define an index for query (1)? Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Limiting user access to a view only
Richard Colman wrote: I am a novice in this area. How would I set up a new user on PostgreSQL so that the new user is limited to reading a particular view, and can do nothing else? Hi, first of all create the user as a non-superuser, then use the GRANT command ( http://www.postgresql.org/docs/7.4/interactive/sql-grant.html ); in your case: GRANT SELECT ON view_name TO user_name Regards Gaetano Mendola ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] Postmaster hogs CPU
Chris Gamache wrote: What can be done to allow for smarter preemption? Could I do anything at the OS level to throttle that particular postmaster's process? I'm running (IMO) a balanced config, but there's always room for improvement. It's that oddball query that comes around every so often that causes the problem. You can basically renice the process that is performing the query. See 'man nice' for details. Regards Gaetano Mendola ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
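As a sketch of the renice approach, assuming you have already identified the offending backend's PID (e.g. from pg_stat_activity or `ps -ef | grep postgres:`); the PID below is a stand-in, demonstrated on the current shell:

```shell
# Lower the scheduling priority of one backend process.
# PID is hypothetical here -- in practice take it from pg_stat_activity
# or from the ps output.
PID=$$                                # stand-in: the current shell
renice -n 10 -p "$PID"                # raise the nice value by 10 (lower priority)
awk '{print $19}' "/proc/$PID/stat"   # show the new nice value (Linux: field 19)
```

Raising the nice value needs no root; lowering it back does. As Tom Lane notes in the follow-up, this can cause priority inversion if the backend holds locks others want.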
Re: [ADMIN] Postmaster hogs CPU
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Tom Lane wrote: | Gaetano Mendola [EMAIL PROTECTED] writes: | |You can basically renice the process that is performing the query. | | | However, that's unlikely to do anything very pleasant, since you'll have | priority-inversion problems. nice has no idea when the process is | holding a lock that someone else wants ... That can be true; however, in order to have a priority-inversion problem I think three different priority levels are necessary, and you would have to carefully choose the backend and a good nice value in order to make it happen. I was wondering about doing the same thing vacuum does (the sleep trick every n records) in order to slow down some expensive but not crucial queries: test=# set query_delay = 10; -- 10 ms test=# select * from very expensive query ; Regards Gaetano Mendola -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFAmZuP7UpzwH2SGd4RAvVxAKCfvQDk2CkdcC2dCFtgg7nLzf7qTwCgt8/w F0zVE0HYoI9lb9l7u9qwZIo= =/mFq -END PGP SIGNATURE- ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[ADMIN] index not used for boolean
Hi all, is it normal that in a select like: select * from foo where expired; the index on expired is not used? If I rewrite that query as: select * from foo where expired = true; then the index is used! I'm using postgres 7.4.2 Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Real time replication of PG DBs accross two servers - any experiences?
Bradley Kieser wrote: Hi All, I desperately need to set up real time replication of several databases (for failover) between two servers. Last time I looked at PG replication it wasn't yet at production level. I wonder if anyone on this list is doing this successfully; if you don't mind sharing your experience and giving me any tips that you think would be handy from a real world perspective, I would very much appreciate it. I am 100% Linux-based, in case that makes a difference! I have also considered using the CODA file system in case the replication still isn't quite up to production level. We are using a SAN server and 2 nodes running Red Hat HA. Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[ADMIN] CONTEXT: in log file
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Hi all, using Postgres 7.4.2 I'm seeing in my log file thousands of lines with CONTEXT: I need to have the log level debug2, and I'm seeing that the CONTEXT disappears only with: log_min_messages = fatal I think the fatal level is too much just to remove the CONTEXT information from the log. Can I remove the CONTEXT information? Or at least have the same information on one line? Regards Gaetano Mendola -BEGIN PGP SIGNATURE- Version: GnuPG v1.2.4 (MingW32) Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org iD8DBQFAdTi87UpzwH2SGd4RAvm7AJ4wyu6RzpjFECclhfE7e59Ej+PvtwCg7Rf5 FaU09BBcDEI3IeQGgVEgznM= =Itgv -END PGP SIGNATURE- ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] CONTEXT: in log file
Tom Lane wrote: Gaetano Mendola [EMAIL PROTECTED] writes: using Postgres 7.4.2 I'm seeing in my log file thousands of lines with CONTEXT: Any particular context? No, just my own: LOG: statement: select sp_id_admin(); LOG: statement: SELECT id_admin FROM administrators WHERE user_name = current_user CONTEXT: PL/pgSQL function sp_id_admin line 4 at select into variables LOG: statement: SELECT NOT $1 CONTEXT: PL/pgSQL function sp_id_admin line 8 at if LOG: statement: SELECT $1 So for each SELECT INTO inside my functions I have a CONTEXT line. and I'm seeing that the CONTEXT disappear only with: log_min_messages = fatal Perhaps log_error_verbosity is what you want to play with. Yes, you're right (as always): with log_error_verbosity = terse the CONTEXT lines disappeared. Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[ADMIN] release note missing
Hi all, I want to advise you that the interactive as well as the static documentation of postgres 7.4 is missing the 7.4.1 and 7.3.5 release notes, and the documentation of 7.3 is missing the release notes for 7.3.1 - 5. Regards Gaetano Mendola ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [ADMIN] commit after dead lock
Tom Lane wrote: Stephan Szabo [EMAIL PROTECTED] writes: It's not really any different than other errors. The commit doesn't complain (although it also doesn't actually commit anything). People have occasionally suggested that the command tag from a COMMIT should read ABORT or ROLLBACK if the transaction was previously aborted. I don't have a strong feeling about it one way or the other. It'd clearly be helpful for human users, but I could see it confusing programs that expect the existing behavior of command tag always matching command. Well, I agree, but I think it would be better to at least raise a warning. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
[ADMIN] commit after dead lock
Hi all, is it normal that postgres doesn't complain when doing a commit after a deadlock? kalman=# select * from test where a = 5 for update; ERROR: deadlock detected DETAIL: Process 4144 waits for ShareLock on transaction 40180; blocked by process 4141. Process 4141 waits for ShareLock on transaction 40181; blocked by process 4144. kalman=# commit; COMMIT kalman=# Regards Gaetano Mendola ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [ADMIN] Error seen when vacuuming pg_largeobject table
Chris White (cjwhite) wrote: Sorry, no core files. The system is running with cores turned off. Next time I will turn on cores prior to trying to debug this. This is not the first time a useful core has gone to waste. It happened to me twice before I realized I had to modify the /etc/init.d/postgres script in order to get the core file. Maybe it would be a good idea to ship this script with core files turned on? Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
[ADMIN] Postgres 7.3.5
Hi all, I cannot find version 7.3.5 on any ftp mirror. I need to build the RPMs; has someone made the SRPMS? Example: ftp://ftp3.it.postgresql.org/postgresql/src no mention of 7.3.5 Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] vacuum slowed by syslogd
Michael Adler wrote: On many occasions, I've noticed that some PostgreSQL activity takes far longer than it previously did and that disabling syslogd addresses the symptoms. Most recently, it took 20-60 seconds to VACUUM a small, heavily updated table, while it used to take a fraction of a second. I found syslog entries like these: 13:19:53 --Relation sometable-- 13:20:03 Removed 2 tuples in 1 pages. 13:20:23 ^ICPU 0.00s/0.00u sec elapsed 0.00 sec. 13:20:54 Pages 1: Changed 1, Empty 0; Tup 4: Vac 2, Keep 0, UnUsed 13. 13:20:54 ^ITotal CPU 0.00s/0.00u sec elapsed 60.12 sec. It took almost exactly 60 seconds, but virtually no CPU time was used (and no disk IO). Many similar examples have real times that are near perfect multiples of 10 seconds (e.g. 50.09, 40.07). This is not every single VACUUM, but it is frequent. The problem disappears when syslogd is stopped or when PostgreSQL disables syslog usage. This is very consistent and I can reproduce the problem in some installations by toggling these factors on and off. Did you try putting a '-' in front of the log file name? Example: LOCAL0.* -/var/log/postgresql.log Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
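The suggested fix is a one-character edit to syslog.conf; a sketch (the LOCAL0 facility and file path are assumptions, matching a typical PostgreSQL syslog setup):

```conf
# /etc/syslog.conf
# Synchronous: syslogd fsyncs after every line, which can throttle a
# chatty VACUUM to syslogd's disk-write pace:
#LOCAL0.*        /var/log/postgresql.log
# Asynchronous: the leading '-' drops the per-line fsync:
LOCAL0.*        -/var/log/postgresql.log
```

Send syslogd a SIGHUP after the change so it re-reads the file.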
Re: [ADMIN] what is the cause that scan type is showing as 'seq scan' after
Joseph Lemm wrote: IN RELATION TO THIS POST: On Sun, Jan 26, 2003 at 06:31:50PM +0530, shreedhar wrote: Hi All, Before indexing the query plan was showing cost as 40.00; after indexing the query plan again shows 'seq scan' and cost as 3060.55. The field which I indexed is the primary key of this table. May I know 1) what is the cause that the scan type still shows as 'seq scan' after indexing, and 2) why it shows a higher cost than before? TO WHICH ROSS REPLIED: You trimmed out the other parts of the EXPLAIN, so I'm just guessing, but that cost seems suspiciously round: I'm guessing that you haven't run VACUUM ANALYZE at all. One thing indexing does is update the 'number of tuples' statistic. See the archives for why sequential scans still show up (short answer: index scans aren't free, so at some point, it's cheaper to scan the entire table than to scan both the index and the subset of the table returned) OK, so then what is the explanation for this:

 Table "public.post"
 Column |            Type             | Modifiers
--------+-----------------------------+-----------
 id     | integer                     |
 author | character varying(80)       |
 text   | text                        |
 hidden | boolean                     |
 date   | timestamp without time zone |
 host   | character varying(80)       |
Indexes: idx_post_id unique btree (id),
         post_author_index btree (author)

VACUUM ANALYZE;
VACUUM

EXPLAIN ANALYZE select host from post where author='George';
                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Seq Scan on post  (cost=0.00..2869.30 rows=1768 width=27) (actual time=0.23..520.65 rows=1774 loops=1)
   Filter: (author = 'George'::character varying)
 Total runtime: 525.77 msec
(3 rows)

So the optimizer decided it's less costly to do a sequential scan here than to use the index, right? Now:

SET ENABLE_SEQSCAN=OFF;
EXPLAIN ANALYZE select host from post where author='George';
                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------
 Index Scan using post_author_index on post  (cost=0.00..5253.63 rows=1768 width=27) (actual time=28.92..210.25 rows=1774 loops=1)
   Index Cond: (author = 'George'::character varying)
 Total runtime: 215.00 msec
(3 rows)

So if I force an index scan, I get much better performance (215 vs 525 msec). Does this mean that the optimizer screwed up when it recommended a sequential scan? No, this means that you are instructing your optimizer in the wrong way. Show us your configuration file, and in particular these parameters: effective_cache_size random_page_cost cpu_tuple_cost cpu_index_tuple_cost cpu_operator_cost I use these values, which are good enough for medium HW: effective_cache_size = 2 random_page_cost = 2.5 cpu_tuple_cost = 0.005 cpu_index_tuple_cost = 0.0005 cpu_operator_cost = 0.0025 Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] Dropping a DB while it is in use?
Chad N. Tindel wrote: Is there a way to kill all the connections to a DB so that it can safely be dropped? Given that connections to a DB can come from any number of clients, the only way I've found to do this is to shut down postgres, which is clearly an unacceptable solution because it stops access to the other 50 or so databases that are being used on the system. You can do a kill -INT on the backends that are managing the connections. Regards Gaetano Mendola ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
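A sketch of that approach, assuming process titles of the usual `postgres: user database host state` form shown by ps; the database name `mydb` is a stand-in:

```shell
# Send SIGINT to every backend serving the hypothetical database "mydb".
# Process titles are assumed to look like "postgres: user db host state".
# Signal only the per-connection backends -- never 'kill -9' the postmaster.
for pid in $(ps -eaf | grep 'postgres: .* mydb ' | grep -v grep | awk '{print $2}'); do
    kill -INT "$pid"
done
```

The grep pattern is deliberately loose; if a database name can appear as a user name too, tighten it before trusting the PID list.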
Re: [ADMIN] what is the cause that scan type is showing as 'seq scan' after
Joseph Lemm wrote: Gaetano, thanks. My db has only one table (about 29,000 records), so I thought leaving postgresql.conf at its defaults would be OK: the params you mention are commented out, so they must be at their defaults, though I can't tell what the defaults are. The values on the commented lines are the default values. Are there any docs that talk specifically about how to set these params and what the defaults are (the official docs don't say much)? Take a look at the performance NG; at least that is what I usually do. Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] How can I set the cross database reference?
Olivier Hubaut wrote: Hi, I need to use cross database references for some select clauses. I've installed PG 7.3.4 using the 'make all' option, but it seems that the functionality isn't enabled anyway. Is there another package I need to install or some commands I need to run? You can do cross-database selects using dblink, but I don't think you can enforce cross-database references. You can use schemas instead. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
[ADMIN] Oracle - Postgres migration stories
Hi all, I must convince the management to convert all Oracle databases to postgres; do you have some stories like the Red Sheriff one? http://www.redsheriff.com/us/news/news_4_201.html Regards Gaetano Mendola ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [ADMIN] Debugging PL/PGSQL
CLIFFORD ILKAY wrote: Hi, What do you use for debugging PL/PGSQL code? RAISE NOTICE Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [ADMIN] vacuum full problem
pginfo wrote: Hi, I cannot be sure whether that is the case or not. But we are using this system on a number of servers and it happens only on one. Can pg_locks help me detect the query that is running? No, pg_locks only shows which process holds a lock on a database object. With pg_stat_activity you see the queries running. Also the command ps -eafwww | grep post shows you the connections and their state; I bet you have some connection IDLE in TRANSACTION. Regards Gaetano Mendola ---(end of broadcast)--- TIP 8: explain analyze is your friend
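That ps check can be turned into a one-liner; a sketch assuming the usual process-title format, where PostgreSQL itself puts the "idle in transaction" state into the backend's title:

```shell
# Count client backends sitting "idle in transaction" -- the usual culprit
# when a VACUUM FULL waits forever on a lock. ('|| true' keeps the pipeline
# from failing when the count is zero, since grep then exits nonzero.)
n=$(ps -eafwww | grep 'postgres:' | grep -c 'idle in transaction' || true)
echo "idle-in-transaction backends: $n"
```

Any nonzero count means some client opened a transaction and walked away; track down that application before blaming VACUUM.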
Re: [ADMIN] Upgrading to Solaris 9
Danielle Cossette wrote: Good morning, Could you please let me know if Postgres 7.1.3 will run on Solaris 9. If it does, are you aware of any issues? It seems that Solaris is the worst choice for running Postgres. Am I completely wrong? Regards Gaetano Mendola ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling at 100%
Shane Wright wrote: Gaetano, er, shedloads of this read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 read(0, "", 4096) = 0 methinks that's not a good sign? It makes no sense; I don't have a clue why it is reading 4K blocks from file descriptor 0 and getting nothing back! Anyone have any idea? Regards Gaetano Mendola ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [ADMIN] pg_shadow dump annoying problem
Tom Lane wrote: Gaetano Mendola [EMAIL PROTECTED] writes: [ pg_dumpall emits this: ] DELETE FROM pg_shadow WHERE usesysid <> (SELECT datdba FROM pg_database WHERE datname = 'template0'); Now if the file pg_user.sql is accidentally used by a user that is not postgres, the result is that all users are deleted (including the user that is doing the restore) and no other user is created. How so? If that deletes all users then you have already removed the user who owns template0, ie, the user who did initdb. That is a really bad idea. I didn't say that; I just wrote that if the generated file is used by a user other than postgres it will result in a disaster, and maybe it would be better not to delete all users except the one that owns template0, but rather to keep both the user that owns template0 and the current_user. Regards Gaetano Mendola ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
Re: [ADMIN] Trying to pg_restore a 7.1.3 db into 7.3.4 - stalling at 100%
Shane Wright wrote: Hi, I'm trying to upgrade our 25Gb database from 7.1.3 to 7.3.4 - pg_dump worked fine, although piped through split to get a set of 1Gb files. But, after a few attempts at using pg_restore to get the data into the new installation I'm having a few problems; basically it restores the first few tables fine (big tables too), but now it's just hanging on one table; using 100% CPU but hardly touching the disk at all (vmstat reports about 50kb every few minutes). What does an strace on that process show? Regards Gaetano Mendola ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]