[ADMIN] tsearch2 migration 8.2 -- 8.3
Hi,

I am attempting to migrate an 8.2 database to 8.3; the database uses tsearch2. The contrib module tsearch2, which is supposed to provide backward compatibility, is also installed. It has been observed that, despite installing the contrib module, some functions referenced in the 8.2 dump are provided neither by core nor by contrib. Eg:

CREATE FUNCTION gtsq_in(cstring) RETURNS gtsq AS '$libdir/tsearch2', 'gtsq_in' LANGUAGE c STRICT;
ERROR: could not find function "gtsq_in" in file "/usr/local/pgsql/lib/tsearch2.so"
ALTER FUNCTION public.gtsq_in(cstring) OWNER TO postgres;
ERROR: function public.gtsq_in(cstring) does not exist
CREATE FUNCTION gtsq_out(gtsq) RETURNS cstring AS '$libdir/tsearch2', 'gtsq_out' LANGUAGE c STRICT;
ERROR: could not find function "gtsq_out" in file "/usr/local/pgsql/lib/tsearch2.so"
ALTER FUNCTION public.gtsq_out(gtsq) OWNER TO postgres;
ERROR: function public.gtsq_out(gtsq) does not exist

On attempting to drop the type that uses these in/out functions, it is observed that the cascades do not touch any table columns or indexes. Eg:

bric=# begin work; drop type gtsq cascade; rollback;
BEGIN
NOTICE: drop cascades to function gtsq_in(cstring)
NOTICE: drop cascades to function gtsq_out(gtsq)
NOTICE: drop cascades to function gtsq_consistent(gtsq,internal,integer)
NOTICE: drop cascades to operator class gist_tp_tsquery_ops for access method gist
NOTICE: drop cascades to function gtsq_same(gtsq,gtsq,internal)
DROP TYPE
ROLLBACK

My question is: is it safe to drop that type and migrate?

Regds
mallah

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
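The transactional check described above can be run as a self-contained sketch before committing to anything. This is only a sketch of the procedure implied by the mail, not an official migration recipe: the ROLLBACK guarantees nothing is changed while you inspect the NOTICE list for unexpected dependents.

```sql
-- Sketch: verify (inside a transaction) that dropping the leftover
-- gtsq type cascades only to its own support functions and operator
-- class, and not to any table column or index, before doing it for real.
BEGIN;
DROP TYPE gtsq CASCADE;   -- watch the NOTICEs for unexpected objects
ROLLBACK;                 -- nothing has been changed yet

-- If the cascade list looks safe, drop it for real in the 8.2 database
-- before taking the dump (or edit the dump to skip these statements):
-- BEGIN; DROP TYPE gtsq CASCADE; COMMIT;
```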
Re: [ADMIN] controlling autovacuum during the day.
On Wed, Dec 17, 2008 at 7:17 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> John Lister <john.lister...@kickstone.com> writes:
>> I'd like to use autovacuum to clean up the tables rather than schedule a full vacuum with cron, as it will be more selective/intelligent about what gets cleaned. But is it possible to stop it running during peak/office hours?
>
> No. Instead, set the vacuum cost parameters to make it run slow enough to not interfere too much with your work.
>
> regards, tom lane

Is it a bad idea to have such a feature? I was going through src/backend/postmaster/autovacuum.c, and it looks like the function autovac_start could be modified to provide the said feature. In case it does not have negative implications, can I submit a patch?

regds
--
mallah.
[ADMIN] question about measuring shared_buffers usage
Hi,

I have the following setting in my postgresql.conf file:

shared_buffers = 1 (8k blocks) ; which is apprx 80MB

My question is: how can I know how much of shared_buffers is in use, so that it can be increased or decreased? Below is the output of ipcs. I am wondering, is postgres using 420MB of shared memory (but shared_buffers is only 80MB)?

$ ipcs -m
------ Shared Memory Segments --------
key        shmid    owner     perms  bytes      nattch  status
0x00000000 3637248  postgres  600    52228      0
0x0052e6a9 7798785  postgres  600    40034304   42
0x00000000 4489218  postgres  600    52228      0
0x00000000 4521987  postgres  600    52228      0
0x00000000 4554756  postgres  600    52228      0
0x00000000 4587525  postgres  600    52228      0
0x00000000 4620294  postgres  600    52228      0
0x00000000 2883591  postgres  600    52228      0
0x00000000 3670024  postgres  600    74240      0
0x00000000 3702793  postgres  600    7680       0
0x00000000 3735562  postgres  600    5240       0
0x00000000 3768331  postgres  600    4          0
0x0052e2c1 7831564  postgres  600    420610048  121

regds
mallah.
Re: [ADMIN] question about measuring shared_buffers usage
Further: can the pg_buffercache view serve as a guide to setting an appropriate value of shared_buffers? When I do

SELECT count(*) FROM pg_buffercache WHERE relfilenode IS NULL;

to check unused buffers, I find that it has been 0 most of the time. Can shared_buffers be raised to the extent that some buffers remain mostly unused? I read in one of the articles (http://www.varlena.com/GeneralBits/Tidbits/perf.html#shbuf) that shared_buffers can be set to 1.2 to 2 times the peak shared memory usage. My question is how to roughly determine the appropriate value for shared_buffers.

Also, what if most of the memory is left for the kernel disk buffer cache? Is fetching from the kernel disk buffer cache significantly more expensive than getting data from the shared buffer cache?

regds
mallah.

On Thu, Feb 5, 2009 at 6:48 PM, Rajesh Kumar Mallah <mallah.raj...@gmail.com> wrote:
> Hi,
> I have the following setting in my postgresql.conf file:
> shared_buffers = 1 (8k blocks) ; which is apprx 80MB
> my question is how can i know how much is the usage of the shared_buffers
> so that it can be increased or decreased.
> [...]
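The back-of-envelope arithmetic behind the question is simple: each shared buffer is one 8 KB page, so the buffer pool size is blocks × 8192 bytes. A sketch (the block count 10000 is an assumption, since the figure in the original mail is garbled; a 10000-block setting matches the "apprx 80MB" remark):

```shell
# Each shared buffer is one 8 KB page, so pool size = blocks * 8192.
blocks=10000                      # assumed value; original figure garbled
bytes=$(( blocks * 8192 ))
mb=$(( bytes / 1024 / 1024 ))
echo "shared_buffers = ${blocks} blocks = ${bytes} bytes (~${mb} MB)"

# The main shared memory segment reported by ipcs also holds WAL buffers,
# lock tables and other shared structures, so it is normally larger than
# shared_buffers alone:
segment=420610048                 # the large segment from the ipcs output
echo "ipcs segment = $(( segment / 1024 / 1024 )) MB"
```

This at least shows that the ~401 MB segment cannot be explained by shared_buffers alone, which is the gap the mail is asking about.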
[ADMIN] conditional logging based on client
Hi,

We want to log all SQL that has been executed using the psql client. We do not want to use .psql_history, as it is distributed and may be deleted by users. The original objective is that we should be able to know what hand-made SQL has been executed in the past (which can date back as long as 1 year).

regds
Mallah.
Re: [ADMIN] conditional logging based on client
On Tue, Jul 22, 2008 at 5:54 PM, Achilleas Mantzios <[EMAIL PROTECTED]> wrote:
> On Tuesday 22 July 2008 15:17:42, Rajesh Kumar Mallah wrote:
>> Hi,
>> We want to log all SQL that has been executed using the psql client. We do not want to use .psql_history, as it is distributed and may be deleted by users. The original objective is that we should be able to know what hand-made SQL has been executed in the past (which can date back as long as 1 year).
>
> You have to adjust log_statement in your postgresql.conf; however you will have to bear in mind the performance implications of this. Normal production sites don't do this. You can enable/disable this kind of logging any time by kill -HUP.

Thanks for the reply. Any hit on performance is not desired. Is it possible to alter the logging behaviour just for psql sessions?

regds
Mallah.

--
Achilleas Mantzios
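There is no built-in per-client-application switch in that era of PostgreSQL, but log_statement can be set per role rather than server-wide, which is a commonly used middle ground. A sketch, assuming interactive psql users connect under a dedicated role (the role name "dba_user" is hypothetical):

```sql
-- Sketch: enable full statement logging only for the roles people use
-- interactively with psql, leaving application roles unlogged.
-- Setting log_statement per role requires superuser privileges.
ALTER ROLE dba_user SET log_statement = 'all';

-- A session-level override is also possible (again superuser only):
-- SET log_statement = 'all';
```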
[ADMIN] not able to setup tsearch2 synonym dictionary
Hi,

I am trying to set up a synonym dictionary using the instructions in http://people.planetpostgresql.org/mha/index.php?/archives/118-Custom-synonym-dictionaries-in-tsearch2.html. After following all the steps, lexize is not doing the expected. Eg:

tradein_clients=# SELECT * from public.pg_ts_dict where dict_name='pg_dict';
-[ RECORD 1 ]---+---
dict_name       | pg_dict
dict_initoption | /opt/usr/local/pgsql/share/contrib/pg_dict
dict_comment    | pg-specific dictionary
dict_init       | syn_init(internal)
dict_lexize     | syn_lexize(internal,internal,integer)

tradein_clients=# \!cat /opt/usr/local/pgsql/share/contrib/pg_dict
postgresql postgres
pgsql postgres
postgre postgres

tradein_clients=# \pset null NULL
Null display is "NULL".
tradein_clients=# select lexize('pg_dict','postgresql'), lexize('pg_dict','pgsql');
-[ RECORD 1 ]
lexize | NULL
lexize | NULL

The expected result is:

lexize | {postgres}
lexize | {postgres}

Can anyone pls tell where things might have gone wrong?

regds
mallah

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [ADMIN] question about installing perl module
On 4/25/07, Karthikeyan Sundaram <[EMAIL PROTECTED]> wrote:
> Hi,
> We are using postgres 8.1.0. At installation time we can say ./configure --with-perl. In our case, we didn't do that; we just did a plain ./configure. Now we want to add the perl, python and tcl modules. How will I add them? Please advise.
> Regards
> skarthi

You can do ./configure --with-perl again and do make install. Just make sure postgres is shut down before the make install. You need not do initdb; it's just like what you would have done in the case of a minor upgrade.

hope it helps.
regds
mallah
[ADMIN] xlog related error
Hi,

A friend of mine is getting the following error:

ERROR: xlog flush request 0/797FED50 is not satisfied --- flushed only to 0/797FC1D8
CONTEXT: writing block 681 of relation 1663/10819/16520
DETAIL: Multiple failures --- write error may be permanent

Can anyone pls suggest whether it is indicative of some particular situation?

regds
mallah.
Re: [ADMIN] xlog related error
On 4/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> A friend of mine is getting the following error:
>> ERROR: xlog flush request 0/797FED50 is not satisfied --- flushed only to 0/797FC1D8
>> CONTEXT: writing block 681 of relation 1663/10819/16520
>
> Which PG version exactly? Has he had any crashes lately? The message means that that block contains an LSN that's past the current end of WAL, which is a pretty suspicious situation. The message will go away pretty soon (as soon as another 10K or so of WAL is generated) but this looks like a symptom of a bug to me. Details would be good.
>
> regards, tom lane

The person (Ashwin) might contact the list directly. If he doesn't, I shall try my best to get the details.

regds
mallah.
Re: [ADMIN] Pls exclude me from the mailing list
Dear Subash,

Follow the link given below; spend 1 minute reading the instructions and another 15 secs to act:

http://mail.postgresql.org/mj/mj_wwwusr?domain=postgresql.org&func=lists-long-full&extra=pgsql-admin

Cheers
mallah.
Re: [ADMIN] pg_dump and pg_dumpall is hanging
Have you used the verbose option by adding a couple of -v flags with pg_dump or pg_dumpall? Try it; it may give a hint.

regds
mallah

On 3/7/07, Karthikeyan Sundaram <[EMAIL PROTECTED]> wrote:
> Hi,
> I am using 8.1.0 for my production. pg_dumpall was working fine all these days. Suddenly, from yesterday, the pg_dumpall command fails. Fails means it hangs (no dump is taking place). What may be the problem?
> Regards
> skarthi
[ADMIN] ERROR: missing chunk number 0 for toast value
Hi,

I get the above error while selecting data from a table. I have reindexed the table (and possibly the toast table), but it does not seem to help. The result of VACUUM FULL is also attached. Can anyone please help? Even pg_dump is not able to dump the table:

pg_dump: ERROR: missing chunk number 0 for toast value 697890392
pg_dump: SQL command to dump the contents of table "bad_emails" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: missing chunk number 0 for toast value 697890392

regds
mallah.

tradein_clients=# VACUUM FULL VERBOSE ANALYZE email_handling.bad_emails;
INFO: vacuuming "email_handling.bad_emails"
INFO: "bad_emails": found 0 removable, 253598 nonremovable row versions in 15920 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 91 to 2036 bytes long.
There were 16060 unused item pointers.
Total free space (including removable row versions) is 5180436 bytes.
0 pages are or will become empty, including 0 at the end of the table.
7059 pages containing 4848956 free bytes are potential move destinations.
CPU 0.23s/0.08u sec elapsed 0.93 sec.
INFO: index "bad_emails_code" now contains 253598 row versions in 558 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.06u sec elapsed 0.08 sec.
INFO: index "bad_emails_code_pending" now contains 177762 row versions in 588 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.05u sec elapsed 0.06 sec.
INFO: index "bad_emails_date_id" now contains 253598 row versions in 558 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.06u sec elapsed 0.08 sec.
INFO: index "bad_emails_email_key" now contains 253598 row versions in 558 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.09u sec elapsed 0.11 sec.
INFO: index "bad_emails_pkey" now contains 253598 row versions in 558 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.06u sec elapsed 0.10 sec.
INFO: "bad_emails": moved 24 row versions, truncated 15920 to 15915 pages
DETAIL: CPU 0.48s/0.31u sec elapsed 22.59 sec.
INFO: index "bad_emails_code" now contains 253598 row versions in 558 pages
DETAIL: 24 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.09 sec.
INFO: index "bad_emails_code_pending" now contains 177762 row versions in 588 pages
DETAIL: 10 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.02 sec.
INFO: index "bad_emails_date_id" now contains 253598 row versions in 558 pages
DETAIL: 24 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.01u sec elapsed 0.07 sec.
INFO: index "bad_emails_email_key" now contains 253598 row versions in 558 pages
DETAIL: 24 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.01u sec elapsed 0.04 sec.
INFO: index "bad_emails_pkey" now contains 253598 row versions in 558 pages
DETAIL: 24 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.01s/0.00u sec elapsed 0.11 sec.
INFO: vacuuming "pg_toast.pg_toast_555824297"
INFO: "pg_toast_555824297": found 0 removable, 901381 nonremovable row versions in 213818 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 41 to 2034 bytes long.
There were 4574 unused item pointers.
Total free space (including removable row versions) is 67493772 bytes.
0 pages are or will become empty, including 0 at the end of the table.
71284 pages containing 65771412 free bytes are potential move destinations.
CPU 4.49s/1.28u sec elapsed 172.29 sec.
INFO: index "pg_toast_555824297_index" now contains 901381 row versions in 2474 pages
DETAIL: 0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU 0.06s/0.20u sec elapsed 1.53 sec.
INFO: "pg_toast_555824297": moved 0 row versions, truncated 213818 to 213818 pages
DETAIL: CPU 0.44s/0.24u sec elapsed 11.25 sec.
INFO: analyzing "email_handling.bad_emails"
INFO: "bad_emails": scanned 3000 of 15915 pages, containing 47417 live rows and 1 dead rows; 3000 rows in sample, 251547 estimated total rows
VACUUM
[ADMIN] mingw compilation problem- warning: implicit declaration of function `bzero'
Hi,

This is not really a postgres issue. I am trying to compile a trigger function under MinGW32, and it's giving the said warning. The call of bzero is:

bzero(data, sizeof(AuditData));

$ gcc --version
gcc.exe (GCC) 3.4.2 (mingw-special)
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

The code compiles fine under linux gcc (version not sure). Can anyone please suggest how to prevent this warning?

regds
mallah.
Re: [ADMIN] RESTORE SCHEMA
On 1/23/07, Kranti <[EMAIL PROTECTED]> wrote:
> Hey Guys,
> This is my first email to the list. Hope I will get a solution quickly.

Hi, please use the pg_dump -n option to select a schema. Read the manual pages of pg_dump. I am not sure how to do it in pgAdmin III; all said, it is doable.

regds
mallah.

> I am working on postgresql 8.1. I have a DB with more than 4 schemas. I want to take a backup of one of the schemas and restore that same schema into another DB (of course the schema names are the same). I am able to take backup and restore at the DB level, but I want the same at the schema level. Using pgAdmin III I am able to take a backup of the schema, but not able to see how to restore that schema. I am in very urgent need. Please do respond to this and guide me.
> Thanks in advance
> Kranti
Re: [ADMIN] Install XML on Postgres 8.1
On 1/22/07, Alexander B. <[EMAIL PROTECTED]> wrote:
> Hi, I need to install XML on Postgres 8.1. What are the simplest steps that I can follow? I found information on the internet, but there are many versions, packages and ways to install. What's the right way? Can it compare to the powerful XML support Oracle has?

Dear Alexander,

After reading the thread and related resources, it looks like xml support in pgsql may undergo major changes. Hence I suggest that you abstract your usage of pgsql for xml, so that it is easy to change your app when the xml APIs to pgsql change. This is easily accomplished by using your own functions that wrap the calls to the contrib/xml2 functions. Hope you get the point.

regds
mallah.

> If you could help me!! Thanks in advance.
Re: [ADMIN] Getting previous statements executed by a backend currently in an "idle in transaction" state.
> AFAICS, log_statement and friends are much cheaper than any other alternative could be.
>
> regards, tom lane

Dear Sir,

Thanks for the reply and for clarifying.

regds
mallah.
Re: [ADMIN] Install XML on Postgres 8.1
On 1/22/07, Alexander B. <[EMAIL PROTECTED]> wrote:
> Hi, I need to install XML on Postgres 8.1.

XML support is provided in PostgreSQL via the contrib module xml2. Look for the xml-related folders in the contrib folder after untarring the pgsql source. Go through the README files of the contrib modules to know the features.

> What are the simplest steps that I can follow?

How have you installed postgresql?

> I found information on the internet, but there are many versions, packages and ways to install. What's the right way?

Have you installed from source or from RPM?

> Can it compare to the powerful XML support Oracle has?

What is your requirement? The postgresql xml2 module, I think, has xslt and xpath support. Check the README file in the xml2 folder.

> If you could help me!! Thanks in advance.
[ADMIN] Getting previous statements executed by a backend currently in an "idle in transaction" state.
Dear Friends,

Due to some bug(s) in our apps, in a mod_perl environment many backends remain in the "idle in transaction" state. These backends sometimes block other statements in batch processes, which in turn blocks access to some production tables. This issue has been posted before, but this time I wanted to know whether it is possible to see what statements have been executed by a backend in the same transaction which is currently idle. This could probably help us identify the faulty application code. In the past we had set a logging prefix in a GUC and used grep to dig out the statements, but that logged all statements, which was probably not a good idea in a production environment. Can someone suggest whether anything else is readily available?

regds
mallah.
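The offending backends themselves can at least be located from the statistics views. A sketch for an 8.x-era server (column names are version-dependent: in 8.x the marker text appears in pg_stat_activity.current_query and the pid column is procpid; from 9.2 on there is a separate state column and the pid/query columns are renamed):

```sql
-- Sketch: list backends sitting "idle in transaction", oldest first,
-- so they can be matched against blocked batch jobs (8.x column names).
SELECT procpid,
       usename,
       now() - query_start AS idle_for
FROM   pg_stat_activity
WHERE  current_query = '<IDLE> in transaction'
ORDER  BY query_start;
```

This shows who is idle and for how long, though not the earlier statements of the transaction, which is exactly the gap the mail asks about.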
Re: [ADMIN] Getting previous statements executed by a backend currently in an "idle in transaction" state.
On 1/22/07, Umesh Shastry <[EMAIL PROTECTED]> wrote:
> This may help you...
> http://archives.postgresql.org/pgsql-announce/2006-01/msg7.php

Thanks for the reply, but that's for slow queries. Blocked queries are different.

regds
mallah.
Re: [ADMIN] psql does not save contents into file at end of command.
On 12/27/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> Rajesh Kumar Mallah <[EMAIL PROTECTED]> writes:
>> tradein_clients=# \o db.lst
>> tradein_clients=# \l
>> (till this point, there is nothing in file db.lst)
>> tradein_clients=# \q
>> (now it saves) (also saves if another \o is issued)
>> Dunno to what extent this behavior is justified.
>
> I wasn't too excited about this, but on investigation, psql *does* fflush() the \o file after each ordinary SQL command;

I see. I did not investigate to that extent; I thought it was the case for everything.

> it's only backslash command output that doesn't necessarily reach the file immediately. This is inconsistent ... if we're going to do it for regular commands seems like we should do it for backslash commands too.
>
> regards, tom lane

Thanks for the notice, and belated Merry Christmas.

Regds
mallah.
Re: [ADMIN] DB problem
On 12/26/06, [EMAIL PROTECTED] wrote:
> Hi,
> We are using PostgreSQL version 7.3, installed on a linux system. The DB is getting shut down 3-4 times a day.

Vinay,

Although it is not an answer to your question, you should upgrade to version 8.2.0 unless you have a specific reason not to.

Regds
mallah.

> From my application end I get a "Sorry, too many clients" error. Referring to the FAQ, I raised the DB session limit to 200. I think the logs are not enabled in my DB, so I set the following options in the postgresql.conf file:
>
> log_timestamp = true
> log_directory = pg_log
> log_filename = %Y-%m-%d_%H%M%S.log
> log_disconnections = true
> log_hostname = true
> log_statement = true
>
> After restarting the system I got the following errors:
>
> [EMAIL PROTECTED] root]# su postgres
> bash-2.05b$ postmaster -i -D /var/lib/pgsql/data
> FATAL: postgresql.conf:131: syntax error, token=%
> bash-2.05b$ postmaster -i -D /var/lib/pgsql/data
> FATAL: 'log_directory' is not a valid option name
> bash-2.05b$ postmaster -i -D /var/lib/pgsql/data
> FATAL: 'log_filename' is not a valid option name
> bash-2.05b$ postmaster -i -D /var/lib/pgsql/data
> FATAL: 'log_disconnections' is not a valid option name
>
> When you install the DB, are the logs enabled by default? Would you pls tell me what settings still need to be made?
> Thanks,
> Vinayak V. Raikar
> Extn : 143
Re: [ADMIN] About Monitoring Software
On 12/27/06, Iannsp <[EMAIL PROTECTED]> wrote:
> Hi,
> I'd like to know whether someone here uses something like nagios to monitor postgresql.

Dear Iannsp,

Nagios is a general-purpose monitoring and alerting tool. Any service can have one of three states: NORMAL, WARNING and CRITICAL. There are plugins (programs), both built-in and custom-made, that can probe a parameter and update the nagios database. Thresholds for the warning and critical states are defined for each service monitored. We use nagios for monitoring port 5432. It is also possible to make nagios run "SELECT 1;" so that you really know the database is answering queries.

> I ask because I'm working on software to do this and I have never found a tool to monitor postgresql with administrative data like the currently running processes and queries,

Possible.

> number of connections,

Possible.

> users logged in, the most common values listed by field in a table, use of disk, indexes and other things.

Diskspace monitoring is possible too.

> Does anyone know of a tool to monitor this information using a web browser, with permission to change some data in postgresql.conf and pg_hba.conf?

Nagios is not suitable for the above; it is mainly a monitoring and alerting tool. You may consider writing custom software using rrdtools for archiving historical data.

Regds
mallah.

> Thank you.
> --
> Ivo Nascimento
> Iann tech - Developing solutions with performance and security
> http://www.ianntech.com.br
[ADMIN] psql does not save contents into file at end of command.
Hi,

This is psql 8.2, client and server.

tradein_clients=# \o db.lst
tradein_clients=# \l
(till this point, there is nothing in file db.lst)
tradein_clients=# \q
(now it saves; it also saves if another \o is issued)

Dunno to what extent this behavior is justified.

regds
mallah.
Re: [ADMIN] psql does not save contents into file at end of command.
>> Dunno to what extent this behavior is justified.
>
> Maybe it is writing, just not fflushing?

Alvaro, I am reporting as an end user. I feel many would consider it buggy.

regds
mallah.

> --
> Alvaro Herrera   http://www.CommandPrompt.com/
> The PostgreSQL Company - Command Prompt, Inc.
Re: [ADMIN] Trigger trouble
On 12/22/06, Sandy Spence <[EMAIL PROTECTED]> wrote:
> Hi Rajesh,
> I ran the select query
> SELECT typowner from pg_type where typname='trigger';
> with the following results:
>
> select typowner from pg_type where typname='trigger';
> typowner
> ----------
> (0 rows)

In my installation \dT reports the type "trigger":

psql> \dT *.trigger
 List of data types
+------------+---------+-------------+
| Schema     | Name    | Description |
+------------+---------+-------------+
| pg_catalog | trigger |             |
+------------+---------+-------------+
(1 row)

Does it do so in yours? If it does not, I am afraid someone else will be able to help you better.

regds
mallah.

> I then ran the second select query SELECT * from pg_user; (I also changed it slightly: select usename, usesysid from pg_user where usename='ltt';)
>
> usename | usesysid
> --------+----------
> ltt     | 3517
> (1 row)
>
> That user/database is where the warning message for the trigger originates. Is there something significant in the first select statement?
>
> Regards,
> Sandy
> Computer Officer, RA Certification Manager
> Department of Computer Science - UWA
>
> -----Original Message-----
> From: Rajesh Kumar Mallah
> Sent: 21 December 2006 20:04
> Subject: Re: [ADMIN] Trigger trouble
>
> Sandy,
> SELECT typowner from pg_type where typname='trigger';
> check if typowner exists in the column usesysid of pg_user by doing
> SELECT * from pg_user;
> if typowner does not exist there is a problem.
> regds
>
> On 12/21/06, Sandy Spence <[EMAIL PROTECTED]> wrote:
>> Hi All,
>> I get the following error when performing a pg_dump:
>> dumping database "ltt"...
>> pg_dump: WARNING: owner of data type "trigger" appears to be invalid
>> The owner and database exist. I have run the following select statement:
>>
>> select pg_class.oid, pg_trigger.tgrelid from pg_trigger left join pg_class on pg_trigger.tgrelid=pg_class.oid;
>>   oid    | tgrelid
>> ---------+---------
>>    1260  |    1260
>> 6081779  | 6081779
>> 6081981  | 6081981
>> 6081779  | 6081779
>> 6081779  | 6081779
>> 6081981  | 6081981
>> 6081975  | 6081975
>> 6081975  | 6081975
>> 6082027  | 6082027
>> 6081779  | 6081779
>> 6081779  | 6081779
>> 6082027  | 6082027
>> 6121607  | 6121607
>> 6121268  | 6121268
>> 6121268  | 6121268
>>
>> so it appears that it is not a dropped table causing the warning. Is there any other way I can try to determine how this warning is being produced? Is there a way of viewing the contents of the trigger?
>> Thanks in advance,
>> Sandy
Re: [ADMIN] Trigger trouble
Sandy,

SELECT typowner from pg_type where typname='trigger';

Check whether that typowner exists in the usesysid column of pg_user by doing:

SELECT * from pg_user;

If the typowner does not exist there, that is the problem.

regds

On 12/21/06, Sandy Spence <[EMAIL PROTECTED]> wrote:
> Hi All,
> I get the following error when performing a pg_dump:
> dumping database "ltt"...
> pg_dump: WARNING: owner of data type "trigger" appears to be invalid
> The owner and database exist. I have run the following select statement:
>
> select pg_class.oid, pg_trigger.tgrelid from pg_trigger left join pg_class on pg_trigger.tgrelid=pg_class.oid;
>   oid    | tgrelid
> ---------+---------
>    1260  |    1260
> 6081779  | 6081779
> 6081981  | 6081981
> 6081779  | 6081779
> 6081779  | 6081779
> 6081981  | 6081981
> 6081975  | 6081975
> 6081975  | 6081975
> 6082027  | 6082027
> 6081779  | 6081779
> 6081779  | 6081779
> 6082027  | 6082027
> 6121607  | 6121607
> 6121268  | 6121268
> 6121268  | 6121268
>
> so it appears that it is not a dropped table causing the warning. Is there any other way I can try to determine how this warning is being produced? Is there a way of viewing the contents of the trigger?
> Thanks in advance,
> Sandy
> Computer Officer, RA Certification Manager
> Department of Computer Science - UWA
Re: [ADMIN] Upgrading from 7.4 to 8.2
On 12/19/06, Rickard Sjöström <[EMAIL PROTECTED]> wrote:
> Hi! Thanks for your replies! My problem is that I want to move the DB to a different machine, so now, when there doesn't seem to be any easy/trivial way to migrate from 7.4 to 8.2, I might consider continuing with 7.4.

It was not my intention to scare you; please do not misread. It is always recommended to run the latest version of pgsql unless you really have to stick to an older version. The release notes will tell you about the new features you get in the various versions. Newer versions almost always have a better planner and numerous performance improvements.

> If I decide to continue with 7.4, I guess the procedure of moving the DATADIR to the new machine and then starting the postgresql server there with the old DATADIR should work without problems?

Yes, that is right.

> Thanks again!
> /Rickard
Re: [ADMIN] Suggestions needed about how to dump/restore a database
On 12/19/06, Arnau [EMAIL PROTECTED] wrote: Hi Jeff, On Tue, 19 Dec 2006, Arnau wrote: I've got a DB in production that is bigger than 2GB and dumping it takes more than 12 hours.

That's strange; we dump 20+ GB of data in about 2 hours.

I have a new server to replace this old one where I have to restore the DB's dump. The problem is I can't afford to have the server out of business for so long.

If the two biggest tables are *not critical* for application availability you can dump their data out separately into two SQL files and restore them later. Once you have dumped their data out you can drop the tables from the production DB before the main dump and see how long it takes. pg_dump -t schema.table -h hostname -U user dbname can dump out a specific schema.table (the exact options are version dependent; which version are you using, by the way?). It is always desirable to find the root cause of why pg_dump is taking so long on your machine, but in the worst case you could take the approach you suggested.

so I need your advice about how you'd do this dump/restore. The big amount of data is placed in two tables (statistics data), so I was thinking of dumping/restoring all except these two tables and, once the server is running again, dumping/restoring this data. The problem is I don't know exactly how to do this. Arnau,
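The split dump described in the reply can be sketched as below. Host, user, and table names are placeholders, and the -t/-T selection syntax shown is the 8.2-era form, which, as noted, is version dependent:

```shell
# Dump everything except the two big statistics tables (placeholder names)
pg_dump -h oldhost -U user -T public.stats_a -T public.stats_b dbname > base.sql
# Dump the two big tables on their own
pg_dump -h oldhost -U user -t public.stats_a -t public.stats_b dbname > stats.sql
# Restore the base and bring the application up, then load the stats afterwards
psql -h newhost -U user -d dbname -f base.sql
psql -h newhost -U user -d dbname -f stats.sql
```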
Re: [ADMIN] multiple postgres instances from RPM install
On 12/19/06, Tom Kalafut [EMAIL PROTECTED] wrote: I am getting ready to set up 2 postgres instances, but I'm somewhat new to postgres. I've used RPM to list out all the files that are installed, and I think I've isolated all the executables of which I need to make separate copies, but I was hoping for some confirmation.

Please read about initdb. You can initdb two different empty directories as two different unix users. When you initdb a directory you will get a postgresql.conf file in it; you can change the port number in that file to run the different instances separately. But do you really need two different instances, which would compete with each other for resources?

I heard the exact files that need copying are listed in a thread somewhere in the archives, but I can't seem to find it. Could someone please point me to it? The motivation behind this is having 2 distinct SELinux policies for 2 separate users with their own separate databases. Thanks, Tom Tom Kalafut Interchange/CG Crew Member Trident Systems Inc. [EMAIL PROTECTED] (919)388-1264 - The opinions expressed in this email may not necessarily reflect the opinions of Trident Systems Inc.
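The initdb-per-instance approach in the reply, sketched with placeholder paths (each cluster can also be created by its own unix user):

```shell
initdb -D /srv/pg/inst1                               # first cluster
initdb -D /srv/pg/inst2                               # second cluster
echo "port = 5433" >> /srv/pg/inst2/postgresql.conf   # give it a non-default port
pg_ctl -D /srv/pg/inst1 -l inst1.log start
pg_ctl -D /srv/pg/inst2 -l inst2.log start
```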
Re: [ADMIN] multiple postgres instances from RPM install
On 12/20/06, Tom Kalafut [EMAIL PROTECTED] wrote: After consulting with a teammate, I've found out that they'd have no problem simply doing 2 separate makes. In fact, I'm glad. It makes much better sense.

I don't really understand what you have decided to do. Have you considered creating two separate databases in the same postgresql installation?

Thanks, Tom Tom Kalafut Interchange/CG Crew Member Trident Systems Inc. [EMAIL PROTECTED] (919)388-1264 - The opinions expressed in this email may not necessarily reflect the opinions of Trident Systems Inc. -Original Message- From: Rajesh Kumar Mallah [mailto:[EMAIL PROTECTED] Sent: Tuesday, December 19, 2006 1:26 PM To: Tom Kalafut Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] multiple postgres instances from RPM install On 12/19/06, Tom Kalafut [EMAIL PROTECTED] wrote: I am getting ready to set up 2 postgres instances, but I'm somewhat new to postgres. I've used RPM to list out all the files that are installed, and I think I've isolated all the executables of which I need to make separate copies, but I was hoping for some confirmation. please read about initdb you can initdb two different empty directories as two different unix users. when you initdb a directory you will get a postgresql.conf file in it. you can change the port number in that file to run different instances separately. but do you really need two different instances that would possibly compete with each other for resources? I heard the exact files that need copying are listed in a thread somewhere in the archives, but I can't seem to find it. Could someone please point me to it? The motivation behind this is having 2 distinct SELinux policies for 2 separate users with their own separate databases. Thanks, Tom Tom Kalafut Interchange/CG Crew Member Trident Systems Inc. [EMAIL PROTECTED] (919)388-1264 - The opinions expressed in this email may not necessarily reflect the opinions of Trident Systems Inc.
[ADMIN] Adding a column with default value possibly corrupting a functional index.
Hi, I have an index on upper(general.cat_url(category_id)) on a table. When I add a column *with* a default value, a query that previously used to give a result does not give results anymore; REINDEX'ing the table produces the correct result. If no default value is given while adding the column, the query continues to give the proper result.

Regds mallah.

tradein_clients= SELECT category_id from general.web_category_master where upper( general.cat_url(category_id)::text) = 'AGRICULTURE';
+-------------+
| category_id |
+-------------+
|           1 |
+-------------+
(1 row)

tradein_clients= explain SELECT category_id from general.web_category_master where upper(general.cat_url(category_id)::text) = 'AGRICULTURE';
+-------------------------------------------------------------------------------------------------------------+
|                                                 QUERY PLAN                                                   |
+-------------------------------------------------------------------------------------------------------------+
| Index Scan using web_category_master_upper_cat_url on web_category_master (cost=0.00..8.02 rows=1 width=4)   |
| Index Cond: (upper((general.cat_url(category_id))::text) = 'AGRICULTURE'::text)                              |
+-------------------------------------------------------------------------------------------------------------+
(2 rows)

tradein_clients= ALTER TABLE general.web_category_master add test_id int default 0;
ALTER TABLE
tradein_clients= SELECT category_id from general.web_category_master where upper( general.cat_url(category_id)::text) = 'AGRICULTURE';
+-------------+
| category_id |
+-------------+
+-------------+
(0 rows)

tradein_clients= REINDEX TABLE general.web_category_master;
REINDEX
tradein_clients= SELECT category_id from general.web_category_master where upper( general.cat_url(category_id)::text) = 'AGRICULTURE';
+-------------+
| category_id |
+-------------+
|           1 |
+-------------+
(1 row)
Re: [ADMIN] Adding a column with default value possibly corrupting a functional index.
On 12/16/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: I have an index on upper(general.cat_url(category_id)) on a table. when i add a column *with* default value , a query that previously used to give result does not give results anymore. REINDEX'ing the table produces correct result. Can you provide a self-contained example of this?

Hi, thanks for the reply. That was the first thing I was trying to do before the post; so far I have not been able to.

What PG version are you using?

8.2.0

What is that nonstandard function you're using in the index?

It is declared immutable; it queries the same table, it is recursive, and it queries another custom function too. Dumping the function definition below; let me know if there is anything obvious.

Warm Regds mallah.

CREATE OR REPLACE FUNCTION general.cat_url (integer) RETURNS varchar AS '
DECLARE v_category_id ALIAS FOR $1;
DECLARE tmp_record RECORD;
DECLARE tmp_name VARCHAR;
DECLARE tmp_code VARCHAR;
BEGIN
    tmp_code := '''';
    IF v_category_id = -1 THEN
        RETURN ''NO SUCH CATEGORY'';
    END IF;
    SELECT INTO tmp_record name, category_id, parent_category_id
    from general.web_category_master join general.category_tree using(category_id)
    where category_id=v_category_id and link is false;
    IF NOT FOUND THEN
        RETURN '''';
    END IF;
    tmp_name := general.cat_url(tmp_record.parent_category_id);
    IF tmp_record.category_id > 0 THEN
        tmp_code := tmp_name || ''/'' || general.dir_name(tmp_record.name);
    END IF;
    tmp_code := ltrim(tmp_code, ''/'');
    RETURN tmp_code;
END;
' LANGUAGE 'plpgsql' IMMUTABLE;

regards, tom lane
Re: [ADMIN] Adding a column with default value possibly corrupting a functional index.
It is declared immutable; it queries the same table, it is recursive, and it queries another custom function too.

FYI, dir_name is a simple plperlu function:

my ($dir) = @_;
$dir =~ s/&amp;/and/g;
$dir =~ s/&/and/g;
$dir =~ s/[^a-zA-Z0-9]+/_/g;
return $dir;
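For readers without plperlu handy, here is a Python rendering of that helper. It assumes the two substitution patterns mangled by HTML escaping in the mail were &amp;amp; and a bare &amp;:

```python
import re

def dir_name(name):
    # Turn '&' (and its HTML-escaped form '&amp;') into 'and', then collapse
    # every remaining run of non-alphanumerics into a single underscore,
    # mirroring the plperlu substitutions (assumed reconstruction).
    name = name.replace("&amp;", "and").replace("&", "and")
    return re.sub(r"[^a-zA-Z0-9]+", "_", name)

print(dir_name("Sports & Entertainment"))  # Sports_and_Entertainment
```

Applying upper() to this output yields path segments like SPORTS_AND_ENTERTAINMENT seen in the cat_url examples later in the thread.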
Re: [ADMIN] Adding a column with default value possibly corrupting a functional index.
On 12/16/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: On 12/16/06, Tom Lane [EMAIL PROTECTED] wrote: What is that nonstandard function you're using in the index? Its declared immutable , it queries the same table , You just lost. If it's querying the table it's not immutable, almost by definition --- certainly not if the table is not static, as yours seemingly is not. This one is cheating even more by trying to read another table too :-( I think the proximate cause of the problem is that the function's SELECT is trying to use an index on the category_id column, and the REINDEX done by ALTER TABLE happens to rebuild the two indexes in the other order, such that the one on category_id isn't valid yet when the functional index is rebuilt.

Why does ALTER TABLE ADD new_col int default 0 rebuild existing indexes?

I wonder whether we need to do something to actively prevent functions used in an index from querying the database? It's not too hard to imagine crashing the backend by playing this sort of game.

The game was seemingly fulfilling a requirement; I don't know what I should be doing now.

regds mallah.

This particular case is probably not doing anything worse than following index entries pointing at no-longer-existent tuple IDs, which I think we have adequate defenses against now. But in general an index function has got to be capable of operating even when the underlying table is not in a logically consistent state, because the function itself is used in creating/maintaining that consistency. What you've got here definitely fails that test. regards, tom lane
Re: [ADMIN] Adding a column with default value possibly corrupting a functional index.
Why does ALTER TABLE ADD new_col int default 0 rebuild existing indexes?

Because it has to rewrite the whole table to insert the default value in every row. A REINDEX is way more efficient for recovering from that than any row-by-row update would be.

Thanks for explaining.

I wonder whether we need to do something to actively prevent functions used in an index from querying the database? It's not too hard to imagine crashing the backend by playing this sort of game. The game was seemingly fulfilling a requirement; I don't know what I should be doing now. It sorta looks to me like you're trying to get the effect of a materialized view --- have you looked at the techdocs pages about how to do those in Postgres?

We map the URL to category_id. If someone requests URL SPORTS_AND_ENTERTAINMENT/SPORT_PRODUCTS/CAR_RACING we serve the data in category_id 641; that is why the functional index is required. Are you suggesting to create a mat view for this lookup? Actually there is no *real* issue; we seldom add columns to that table. I might as well leave the system like this and REINDEX the table after I do anything that could possibly corrupt the functional index.

SELECT category_id, upper(general.cat_url(category_id)) from general.web_category_master limit 10;

 category_id | upper
-------------+-------------------------------------------------------
         641 | SPORTS_AND_ENTERTAINMENT/SPORT_PRODUCTS/CAR_RACING
        1407 | SECURITY_AND_PROTECTION/SECURITY_EQUIPMENT
        1065 | MINERAL_AND_METALS/MINERALS_AND_REFRACTORIES
         474 | HEALTH_AND_BEAUTY/PERSONAL_CARE/OTHERS
         561 | OFFICE_SUPPLIES/OTHERS
         277 | CONSTRUCTION_AND_REAL_ESTATE/REAL_ESTATE/SHOPS
        1017 | INDUSTRIAL_SUPPLIES/INDUSTRIAL_BRAKES_AND_CLUTCHES
         580 | OFFICE_SUPPLIES/PHOTOGRAPHY_AND_OPTICS/TIME_RECORDING
         836 | CHEMICALS/FINE_CHEMICALS_ALL

I think I should also change the function type to STABLE instead of IMMUTABLE (it does not have an impact on this issue though).

Regds mallah.

regards, tom lane
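Tom's materialized-view suggestion could look roughly like this. A sketch with an assumed table name; it trades the IMMUTABLE-function index for a plain table that is refreshed whenever the category tree changes, and assumes the generated urls are unique:

```sql
CREATE TABLE general.cat_url_map (
    url         text PRIMARY KEY,
    category_id integer NOT NULL
);

-- Refresh after any change to the category tree:
TRUNCATE general.cat_url_map;
INSERT INTO general.cat_url_map (url, category_id)
SELECT upper(general.cat_url(category_id)), category_id
FROM general.web_category_master;

-- The URL lookup then uses an ordinary index, nothing functional:
SELECT category_id FROM general.cat_url_map WHERE url = 'AGRICULTURE';
```

With this in place cat_url can honestly be declared STABLE, since it is no longer used inside an index definition.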
Re: [ADMIN] storage
On 12/13/06, Antonios Katsikadamos [EMAIL PROTECTED] wrote: Hi all, Sorry to bother. I would like to know where postgres stores the created database and the tables. Can anyone help,

Which OS or distribution is it? Look into the startup script that starts the postgresql service; the path is configurable and depends on the package maintainer. On Red Hat it used to be /var/lib/pgsql/data. If you can connect to the database as a superuser you can, in recent versions of postgresql, give the command:

psql# SHOW data_directory;

thnx for any help, Antonios __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com
Re: [ADMIN] Slowly Queries
On 12/14/06, Anton P. Linevich [EMAIL PROTECTED] wrote: Hi guys. Can you give me a description of what I should do when I get a lot of slow queries? Last week i saw such INFO messages in postgres log:

We too faced the same problem, wherein a table was updated very frequently and with time it used to lose performance; vacuuming the table was not very effective. We changed the application to insert into a separate table whenever a banner was viewed, instead of updating the view count in the banner master table. This was very fast, but it adds complexity. Some expert will hopefully tell the proper solution to it.

regds mallah.

LOG: duration: 8144.361 ms statement: update videos set watched=watched+1, last_view_date='1166057431' where id='106980'
LOG: duration: 7825.637 ms statement: update videos set watched=watched+1, last_view_date='1166057431' where id='99337'
LOG: duration: 7023.592 ms statement: insert into statistics_search(date,search,type,id_member) VALUES ('1166057432','sexy','V','0')
LOG: duration: 6603.307 ms statement: insert into statistics_search(date,search,type,id_member) VALUES ('1166057432','Fun Day','V','0')
LOG: duration: 6093.724 ms statement: insert into statistics_search(date,search,type,id_member) VALUES ('1166057433','party','V','0')
LOG: duration: 5855.535 ms statement: update videos set watched=watched+1, last_view_date='1166057433' where id='102462'
LOG: duration: 5464.827 ms statement: update videos set watched=watched+1, last_view_date='1166057433' where id='103901'
LOG: duration: 8917.981 ms statement: update users_auth set date_last='1166057430' where id='384891'
LOG: duration: 7584.912 ms statement: update videos set watched=watched+1, last_view_date='1166057431' where id='103164'
LOG: duration: 5181.864 ms statement: update videos set watched=watched+1, last_view_date='1166057434' where id='112060'
LOG: duration: 38394.879 ms statement: update upload_synchronizer set lock_status='N' where sess_id='103503'

How can I tune the database for quick updates? I have SATA RAID1. Thank you all for your answers. -- Anton P. Linevich
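The insert-instead-of-update pattern described above can be sketched like this, with illustrative table names. A production rollup needs to be more careful: rows inserted by concurrent sessions between the UPDATE's snapshot and the TRUNCATE could be lost, so a DELETE bounded by a cutoff timestamp is safer:

```sql
-- Hot path: append a row instead of updating the contended counter row
CREATE TABLE video_views (id integer NOT NULL, viewed_at timestamp DEFAULT now());
INSERT INTO video_views (id) VALUES (106980);

-- Periodic rollup into the master table, then trim the log
BEGIN;
UPDATE videos v
SET watched = watched + s.cnt
FROM (SELECT id, count(*) AS cnt FROM video_views GROUP BY id) s
WHERE v.id = s.id;
TRUNCATE video_views;
COMMIT;
```

Inserts into the narrow log table are cheap and never contend on a single hot row, which is why the application change above was "very fast".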
Re: [ADMIN] Slowly Queries
On 12/14/06, Anton P. Linevich [EMAIL PROTECTED] wrote: Hi guys. Can you give me a description of what I should do when I get a lot of slow queries?

BTW, such questions should go to the performance list.
Re: [ADMIN] Should duplicate indexes on same column and same table be allowed?
The cost of preventing every stupid database use is too high.

Thanks, that answers my concern. Many times we face the same situation with the marketing people of our company; the complexity of stopping stupid usage can be quite non-trivial at times.

regds mallah.
Re: [ADMIN] 8.2.0 bug :)
On 12/12/06, Gábriel Ákos [EMAIL PROTECTED] wrote: go to psql, issue \di+ , server closes connection :)

Akos, can you please paste the exact message with which the server closes the connection? Also start psql with the -E option.

Regds mallah.

Regards, Akos
[ADMIN] query crashes 8.2.0 but not 8.1.5
Hi, below query crashes 8.2.0 tradein_clients= SELECT size from general.eyp_listing where size in (select distinct size from general.eyp_listing ) order by size limit 1; server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Succeeded. tradein_clients= \q Regds mallah. backtrace is below: (gdb) bt #0 0x40098d71 in kill () from /lib/i686/libc.so.6 #1 0x40098af5 in raise () from /lib/i686/libc.so.6 #2 0x4009a1e0 in abort () from /lib/i686/libc.so.6 #3 0x08259ce5 in ExceptionalCondition (conditionName=0x8303c00 !(var-vartypmod == att_tup-atttypmod), errorType=0x8282d14 FailedAssertion, fileName=0x6 Address 0x6 out of bounds, lineNumber=219) at assert.c:51 #4 0x0815d95f in tlist_matches_tupdesc (ps=0x8431f04, tlist=0x4019c9a0, varno=2, tupdesc=0x842ec3c) at execScan.c:219 #5 0x0815d830 in ExecAssignScanProjectionInfo (node=0x8431f04) at execScan.c:184 #6 0x0816c659 in ExecInitSubqueryScan (node=0x8423ed8, estate=0x842ce1c, eflags=0) at nodeSubqueryscan.c:213 #7 0x08157a47 in ExecInitNode (node=0x8423ed8, estate=0x842ce1c, eflags=0) at execProcnode.c:189 #8 0x08169b81 in ExecInitSort (node=0x8424188, estate=0x842ce1c, eflags=0) at nodeSort.c:197 #9 0x081579be in ExecInitNode (node=0x8424188, estate=0x842ce1c, eflags=8) at execProcnode.c:230 #10 0x08168280 in ExecInitMergeJoin (node=0x8424318, estate=0x842ce1c, eflags=8) at nodeMergejoin.c:1539 #11 0x081579f7 in ExecInitNode (node=0x8424318, estate=0x842ce1c, eflags=0) at execProcnode.c:212 #12 0x0816a710 in ExecInitLimit (node=0x84243a4, estate=0x842ce1c, eflags=0) at nodeLimit.c:327 #13 0x08157958 in ExecInitNode (node=0x84243a4, estate=0x842ce1c, eflags=0) at execProcnode.c:260 #14 0x0815557c in InitPlan (queryDesc=0x842ca40, eflags=0) at execMain.c:628 #15 0x08154dd9 in ExecutorStart (queryDesc=0x842ca40, eflags=0) at execMain.c:171 #16 0x081e5632 in PortalStart (portal=0x841c4b4, 
params=0x0, snapshot=0x0) at pquery.c:426 #17 0x081e1314 in exec_simple_query ( query_string=0x83e905c SELECT size from general.eyp_listing where size in (select distinct size from general.eyp_listing ) order by size limit 1;) at postgres.c:902 #18 0x081e4932 in PostgresMain (argc=4, argv=0x8396f2c, username=0x8396efc tradein) at postgres.c:3419 #19 0x081bb396 in BackendRun (port=0x839e9a8) at postmaster.c:2926 #20 0x081babde in BackendStartup (port=0x839e9a8) at postmaster.c:2553 #21 0x081b8db7 in ServerLoop () at postmaster.c:1206 #22 0x081b822d in PostmasterMain (argc=1, argv=0x83937d8) at postmaster.c:958 #23 0x08177117 in main (argc=1, argv=0x1) at main.c:188 #24 0x40085c57 in __libc_start_main () from /lib/i686/libc.so.6 (gdb) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [ADMIN] query crashes 8.2.0 but not 8.1.5
On 12/12/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: below query crashes 8.2.0 tradein_clients= SELECT size from general.eyp_listing where size in (select distinct size from general.eyp_listing ) order by size limit 1; Already fixed.

Hi, can you *please* tell how to get the fix? I have installed 8.2 on a development server and don't know when 8.2.1 will come. If I check out from CVS I get 8.3devel, which refuses to start with an 8.2 initdb'ed directory. I tried patching subselect.c but failed. I am badly stuck.

regds mallah.

regards, tom lane
Re: [ADMIN] query crashes 8.2.0 but not 8.1.5
On 12/12/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: Can you *please* tell how to get the fix. Checkout REL8_2_STABLE branch, not HEAD ...

Thanks!

regards, tom lane
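For completeness, the checkout Tom describes looked something like this with the anonymous CVS server of the era (the server path here is from memory, so verify it against the developer documentation before use):

```shell
cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login
cvs -d :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot \
    checkout -r REL8_2_STABLE pgsql
```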
Re: [ADMIN] how to recover database back from /data folder [ Possibly Recovered ]
Hi List, I restored the filesystem backup tar and ran postgres 7.3.8 on the data folder; there was no problem at all.

DEBUG: FindExec: found /opt/usr/local/pgsql7.3/bin/postgres using argv[0]
DEBUG: invoking IpcMemoryCreate(size=1466368)
DEBUG: FindExec: found /opt/usr/local/pgsql7.3/bin/postmaster using argv[0]
LOG: database system was shut down at 2006-12-04 12:30:21 IST
LOG: checkpoint record is at 0/39DF3C2C
LOG: redo record is at 0/39DF3C2C; undo record is at 0/0; shutdown TRUE
LOG: next transaction id: 2006905; next oid: 15212626
LOG: database system is ready
DEBUG: proc_exit(0)
DEBUG: shmem_exit(0)
DEBUG: exit(0)
DEBUG: reaping dead processes

I think the above messages support the fact that the database was shut down properly before the filesystem level backup. Can anyone kindly confirm it? I listed the tables and did some selects; the data seems to be there. I think postgres was not starting for some peripheral issues, not because the data folder was corrupted. BTW, when I first started the postmaster it gave an error related to overly liberal permissions; probably the original poster was making the same mistake.

Regds mallah.
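The clean-shutdown state the logs confirm ("database system was shut down at ...") is exactly what a filesystem-level backup of a running-free data directory needs. In sketch form:

```shell
pg_ctl -D "$PGDATA" stop -m fast        # ensure a clean shutdown checkpoint
tar czf pgdata-backup.tar.gz "$PGDATA"  # copy the whole data directory
pg_ctl -D "$PGDATA" start               # bring the server back up
```

Copying the directory while the postmaster is running, without any WAL-based procedure, would not give a consistent backup.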
Re: [ADMIN] how to recover database back from /data folder [ Recovered ]
On 12/11/06, Shoaib Mir [EMAIL PROTECTED] wrote: I meant all looks *good* according to the logs :) thanks everyone. regds mallah.
[ADMIN] reloading config files pg_ctl reload
Hi, would it be a good feature to have pg_ctl check the syntax of the config files before sending a SIGHUP to the postmaster? Suppose a DBA makes some typos in the files; he will be confused not to see the changes on the server even after pg_ctl reload.

Regds mallah.
Re: [ADMIN] Should duplicate indexes on same column and same table be allowed?
On 12/9/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: Suppose an index gets corrupted and you need to create a new index with the exact specs and then drop the old index. Is it better to have a performing corrupted index or not have it at all and temporarily suffer some performance degradation ? The case that was being discussed just a day or two ago was where you wanted to do the equivalent of REINDEX because of index bloat, not any functional corruption. In that case it's perfectly clear that temporarily not having the index isn't acceptable ... especially if it's enforcing a unique constraint.

Sorry, I guess I digressed. Let me put the question once again:

psql CREATE INDEX x on test (col1);
psql CREATE INDEX y on test (col1);

What are the downsides of disallowing the second index, which is *exactly* the same as the previous one?

Regds mallah.

regards, tom lane
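Allowing the duplicate is precisely what makes an online rebuild of a bloated index possible, as in this sketch:

```sql
CREATE INDEX y ON test (col1);  -- identical twin of the bloated index x
DROP INDEX x;                   -- there is never a moment without an index on col1
ALTER INDEX y RENAME TO x;      -- optionally keep the old name
```

If the second CREATE INDEX were rejected as a duplicate, the only path would be DROP then CREATE, leaving queries (and any unique constraint) unprotected in between.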
Re: [ADMIN] problem in logging into database
On 12/8/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: my postmaster is on the same machine .. i am pasting the command and its output here for u :- % psql -d template1 -U postgres psql: FATAL: Ident authentication failed for user postgres

You will need to modify the pg_hba.conf file in the directory which you initdb'ed and restart the database. This file is well documented and reading it is a good investment. There are several options for allowing connections; the lines below, roughly in order of security, are what you need to add/modify/uncomment near the end of this file.

local all all trust  (very trivial: allows any user on the same machine to connect to any database without any password)
local all all md5  (same as above but will ask for a password)
host all all 192.168.0.100/32 md5  (same as above but allows connection from the said address)

You can choose any config based on your requirement, but make sure it is secure enough eventually.

Regds mallah.
[ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
Hi , We recently upgraded from PostgreSQL 8.1.5 to PostgreSQL 8.2.0. looks like rank_cd function is giving problem . tradein_clients= CREATE TABLE test (name text , name_vec tsvector); CREATE TABLE tradein_clients= INSERT INTO test (name ,name_vec) values ('hello world' , to_tsvector('hello world')); INSERT 0 1 tradein_clients= SELECT name from test where name_vec @@ to_tsquery('hello') ; +-+ |name | +-+ | hello world | +-+ (1 row) tradein_clients= SELECT name, rank_cd(1,name_vec, to_tsquery('hello') ) as rank from test where name_vec @@ to_tsquery('hello') ; server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. The connection to the server was lost. Attempting reset: Succeeded. tradein_clients= Analysis of core dump: (not sure though if its the right way of doing it) $ gdb /opt/usr/local/pgsql/bin/postgres core.2807 GNU gdb 5.3-25mdk (Mandrake Linux) Copyright 2002 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type show copying to see the conditions. There is absolutely no warranty for GDB. Type show warranty for details. This GDB was configured as i586-mandrake-linux-gnu... Core was generated by `postgres: tradein tradein_clients 192.168.0.11(52876'. Program terminated with signal 11, Segmentation fault. Reading symbols from /lib/libcrypt.so.1...done. Loaded symbols for /lib/libcrypt.so.1 Reading symbols from /lib/libdl.so.2...done. Loaded symbols for /lib/libdl.so.2 Reading symbols from /lib/i686/libm.so.6...done. Loaded symbols for /lib/i686/libm.so.6 Reading symbols from /lib/i686/libc.so.6...done. Loaded symbols for /lib/i686/libc.so.6 Reading symbols from /lib/ld-linux.so.2...done. Loaded symbols for /lib/ld-linux.so.2 Reading symbols from /lib/libnss_files.so.2...done. 
Loaded symbols for /lib/libnss_files.so.2 Reading symbols from /mnt/disk3/opt/usr/local/postgresql820/lib/tsearch2.so...done. Loaded symbols for /mnt/disk3/opt/usr/local/postgresql820/lib/tsearch2.so #0 pg_detoast_datum (datum=0x1) at fmgr.c:1964 1964if (VARATT_IS_EXTENDED(datum)) (gdb) bt #0 pg_detoast_datum (datum=0x1) at fmgr.c:1964 #1 0x40c2a961 in rank_cd (fcinfo=0xbfffeda0) at rank.c:731 #2 0x0815948c in ExecMakeFunctionResult (fcache=0x8423c40, econtext=0x84239a8, isNull=0x8424c85 [EMAIL PROTECTED], isDone=0x8424c9c) at execQual.c:1147 #3 0x0815d373 in ExecTargetList (targetlist=0x8423c08, econtext=0x84239a8, values=0x8424c70, isnull=0x8424c84 , itemIsDone=0x8424c98, isDone=0xb068) at execQual.c:3981 #4 0x0815d672 in ExecProject (projInfo=0x8424bac, isDone=0xb068) at execQual.c:4182 #5 0x0815d785 in ExecScan (node=0x8423b00, accessMtd=0x8169290 SeqNext) at execScan.c:143 #6 0x08169364 in ExecSeqScan (node=0x8423b00) at nodeSeqscan.c:130 #7 0x08157cb1 in ExecProcNode (node=0x8423b00) at execProcnode.c:349 #8 0x08155d5c in ExecutePlan (estate=0x842391c, planstate=0x8423b00, operation=CMD_SELECT, numberTuples=0, direction=ForwardScanDirection, dest=0x83edfbc) at execMain.c:1081 #9 0x08154fbe in ExecutorRun (queryDesc=0x8423b00, direction=ForwardScanDirection, count=0) at execMain.c:241 #10 0x081e5ee1 in PortalRunSelect (portal=0x840f96c, forward=1 '\001', count=0, dest=0x83edfbc) at pquery.c:831 #11 0x081e5a91 in PortalRun (portal=0x840f96c, count=2147483647, dest=0x83edfbc, altdest=0x83edfbc, completionTag=0xb320 ) at pquery.c:684 #12 0x081e1368 in exec_simple_query ( query_string=0x83ed064 SELECT name, rank_cd(1,name_vec, to_tsquery('hello') ) as rank from test where name_vec @@ to_tsquery('hello') ;) at postgres.c:939 #13 0x081e4932 in PostgresMain (argc=4, argv=0x83b09f4, username=0x83b09c4 tradein) at postgres.c:3419 #14 0x081bb396 in BackendRun (port=0x839e9a8) at postmaster.c:2926 #15 0x081babde in BackendStartup (port=0x839e9a8) at postmaster.c:2553 #16 
0x081b8db7 in ServerLoop () at postmaster.c:1206 #17 0x081b822d in PostmasterMain (argc=1, argv=0x83937d8) at postmaster.c:958 #18 0x08177117 in main (argc=1, argv=0x1) at main.c:188 #19 0x40085c57 in __libc_start_main () from /lib/i686/libc.so.6 (gdb) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: You need to read documentation ! rank_cd accepts the same args as rank() function.

Dear Oleg, could you please elaborate a bit more if time permits? Our application is old and it was working fine in 8.1.5. Do I need to change the SQL to use a different function?

Oleg On Fri, 8 Dec 2006, Rajesh Kumar Mallah wrote:
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: On Fri, 8 Dec 2006, Rajesh Kumar Mallah wrote: On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: You need to read documentation ! rank_cd accepts the same args as rank() function. Dear Oleg, Could you please elaborate a bit more if time permits. our application is old and it was working fine in 8.1.5. do i need to change the sql to use a different function ? from reference manual: CREATE FUNCTION rank_cd( [ weights float4[], ] vector TSVECTOR, query TSQUERY, [ normalization int4 ] ) RETURNS float4 Dear Oleg, thanks for the prompt help. looks like we have to modify our application code. i would like to point out : In our Production Database \df public.rank_cd ++-+--+-+ | Schema | Name | Result data type | Argument data types | ++-+--+-+ | public | rank_cd | real | integer, tsvector, tsquery | | public | rank_cd | real | integer, tsvector, tsquery, integer | | public | rank_cd | real | tsvector, tsquery | | public | rank_cd | real | tsvector, tsquery, integer | ++-+--+-+ (4 rows) In tsearch2.sql (with pgsql 8.2.0) $ grep CREATE FUNCTION rank_cd tsearch2.sql CREATE FUNCTION rank_cd(float4[], tsvector, tsquery) CREATE FUNCTION rank_cd(float4[], tsvector, tsquery, int4) CREATE FUNCTION rank_cd(tsvector, tsquery) CREATE FUNCTION rank_cd(tsvector, tsquery, int4) This means first arguments have changed from integer to float4[] This means all the application code needs to be changed now :-/ postgres=# SELECT name, rank_cd(name_vec,to_tsquery('hello') ) as rank from test where name_vec @@ to_tsquery('hello') ; name | rank -+-- hello world | 0.1 (1 row) or postgres=# SELECT name, rank_cd('{1,1,1,1}',name_vec,to_tsquery('hello') ) as rank from test where name_vec @@ to_tsquery('hello') ; name | rank -+-- hello world |1 (1 row) BTW: above did not work for me i had to explicitly cast '{1,1,1,1}' to '{1,1,1,1}'::float4[] , is anything fishy with my database ? 
SELECT name, rank_cd('{1,1,1,1}'::float4[] ,name_vec,to_tsquery('hello') ) as rank from test where name_vec @@ to_tsquery('hello') ;

(PS: thanks for the nice tsearch software; we have been using it since the pre-tsearch era (openfts).)

Regds Mallah.

Oleg On Fri, 8 Dec 2006, Rajesh Kumar Mallah wrote: Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres [Update]
Dear Oleg,

On the local development server, the old functions still exist in the PostgreSQL catalogs, but they are probably missing in the newer version of tsearch2.so that came with 8.2.0, and hence postgres is crashing. I feel this should have been mentioned in the release notes.

tradein_clients=# \df *.rank_cd
                           List of functions
 Schema |  Name   | Result data type |         Argument data types
--------+---------+------------------+-------------------------------------
(older ones)
 public | rank_cd | real             | integer, tsvector, tsquery
 public | rank_cd | real             | integer, tsvector, tsquery, integer
(new ones)
 public | rank_cd | real             | real[], tsvector, tsquery
 public | rank_cd | real             | real[], tsvector, tsquery, integer
 public | rank_cd | real             | tsvector, tsquery
 public | rank_cd | real             | tsvector, tsquery, integer
(6 rows)

On 12/8/06, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote: [...]
Re: [ADMIN] Should clients abort in case of server version mismatch? [subject modified: they already warn]
On 12/8/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes:

IMHO for a major version mismatch psql should not present the user with a prompt at all, as certain commands are most likely not to work.

The analogy you're drawing with pg_dump is faulty. There are at least three good reasons for psql to be more forgiving of version mismatches than pg_dump is:

Overall there was no damage at all.

1. pg_dump is commonly run noninteractively (eg, from a cron job) where any mere warning will likely go unnoticed. So it has to raise a hard error to get the DBA's attention. psql's backslash commands are far less likely to be used noninteractively, and a failure is usually pretty obvious to a human user.

Yep — if the scope of the problem is limited to the \d commands *only*, it's a non-issue (I did not know that). In most automations I think psql would only be acting as a conduit for SQL commands, hence the concerns were not well founded.

Warm Regds
Mallah.

2. pg_dump is critical: if it dumps an unusable backup due to not understanding the system catalogs of a newer server, the DBA who needs that backup later will be badly screwed. psql's backslash commands, again, are not so critical.

3. psql offers a pretty decent amount of functionality even if some of its backslash commands don't work, whereas a dump that is wrong is worse than useless. So the use-case for operating with a version mismatch is much wider for psql.

So I think we have the right tradeoffs in this regard now.

regards, tom lane
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: On Fri, 8 Dec 2006, Rajesh Kumar Mallah wrote:

Dear Oleg, on the local development server the old functions still exist in the PostgreSQL catalogs, but they are probably missing in the newer version of tsearch2.so that came with 8.2.0, and hence postgres is crashing. I feel this should have been mentioned in the release notes.

Yes, you're right. Someone had offered to help write this but then silently disappeared, so we are left without release notes. Could you summarise your experience and write them, and we will add them for 8.2.1?

Sir, don't you feel the incremental features should be available as upgrade patches? E.g., at one point of time I found that the =(tsvector, tsvector) operator was missing in my database; later I had to manually create the operator by looking at the tsearch2.sql file of later releases. This is because tsearch2.sql (I think) was not runnable on a database having older versions of tsearch2. Yes, I will summarise my experience, but let me know if the above is an issue. Also, do I need to change my application code (because the older function is not available)?

Warm Regds
Mallah.

Oleg
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: On Fri, 8 Dec 2006, Tom Lane wrote: Oleg Bartunov oleg@sai.msu.su writes:

You need to read the documentation! rank_cd accepts the same args as the rank() function.

Nonetheless, dumping core on bad input is not acceptable behavior ...

We already resolved the situation. This is mostly a problem of missing release notes.

Sir, if old functions which are present in the system catalogs are missing in the new tsearch2.so file and are not dropped from the database, then the database seems to crash when they are invoked. Shouldn't upgrade scripts drop the old functions? Just a thought; this situation could arise in general, not just with tsearch2.so.

Regds
mallah.

Regards, Oleg
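[Editor's note: the stale catalog entries described in this thread can be audited with a catalog query. The following is a sketch against the standard catalogs — pg_proc.probin records which shared object a C-language function is loaded from — and would list every function that will fail if its symbol is missing from the new tsearch2.so:]

```sql
-- List C-language functions whose implementation is expected to come
-- from the tsearch2 shared object, so they can be reviewed (and, if
-- stale, dropped) after an upgrade.
SELECT p.oid::regprocedure AS function, p.probin
FROM pg_catalog.pg_proc p
WHERE p.probin LIKE '%tsearch2%';
```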
Re: [ADMIN] tsearch2 problem rank_cd() (possibly) crashing postgres
On 12/8/06, Oleg Bartunov oleg@sai.msu.su wrote: On Fri, 8 Dec 2006, Rajesh Kumar Mallah wrote: [...]

Some people use a separate schema "contrib" to load all contrib stuff, so upgrading is much easier.

Sir, even with a separate schema "contrib" one cannot drop the stuff in contrib and reload the new version, because the DROPs have to be cascaded to dependencies which contain real data, e.g. indexes and table columns that depend on the existence of a TYPE.

Regds
Mallah.
Re: [ADMIN] How to get to command line prompt
Hi,

Go to Start → Run, type cmd, and press Enter. Say you get the C:\ prompt; then type:

C:\blah\blah> psql -U postgres template1

The above assumes psql is in PATH. If it succeeds you will get a psql prompt. Once you get the prompt, follow the instructions in your textbook to create new databases, users etc.

regds
mallah.

On 12/8/06, Philippe Salama [EMAIL PROTECTED] wrote: I just installed PostgreSQL for Windows. There is an elephant-headed icon, which seems to be the way to get to the command line prompt. I have a textbook which assumes that you can communicate with Postgres through the command line and run scripts. When I click the icon, it asks for a password, and then does nothing but disappear. If there is no command line available with the Windows install, then is there a 3rd party product? Thanks
[ADMIN] Should duplicate indexes on same column and same table be allowed?
Hi,

Some of our tables have duplicate indexes on the same column under different index names. Should the database server check for the existence of an (effectively) identical index on a table before creating a new one?

Regds
Mallah.
Re: [ADMIN] Should duplicate indexes on same column and same table be allowed?
On 12/9/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: Some of our tables have duplicate indexes on the same column under different index names. Should the database server check for the existence of an (effectively) identical index on a table before creating a new one?

I'd vote not; I think this would get in the way of people who do know what they're doing, as much as it would hold the hands of those who don't. (Build a database that even a fool can use, and only a fool would want to use it.) An example: suppose you mistakenly created a plain index on foo.bar, when you meant it to be a unique index. You don't want to just drop the plain index before creating a unique index, because you have live clients querying the table and their performance would tank with no index at all. But surely a plain index and a unique index on the same column are redundant, so a nannyish database should prevent you from creating the desired index before dropping the unwanted one.

I meant *exactly* the same index (please ignore the word "effectively" in the previous post), even the same tablespace.

Regds
mallah.

PS: (forgive me for my meager knowledge of internals)

Other scenarios: is an index on X redundant with one on X,Y? Is a hash index on X redundant if there's also a btree index on X? How about partial or functional indexes with slightly varying definitions? There's been some discussion lately about an index advisor, which might reasonably provide some advice if it thinks you have redundant indexes. But I'm not eager to put any sort of enforcement of the point into the core database.

regards, tom lane
Re: [ADMIN] Should duplicate indexes on same column and same table be allowed?
Sir,

Suppose an index gets corrupted, and you need to create a new index with the exact same specs and then drop the old one. Is it better to have a corrupted but still-performing index, or not have it at all and temporarily suffer some performance degradation? That is one scenario for having duplicate indexes which comes to my mind.

Regds
mallah.

On 12/9/06, Rajesh Kumar Mallah [EMAIL PROTECTED] wrote: [...]
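[Editor's note: short of server-side enforcement, exact-duplicate definitions can at least be detected with a catalog query. The sketch below compares only the indexed column lists and operator classes against the standard pg_index catalog, deliberately ignoring the partial/functional-index subtleties raised above:]

```sql
-- Pairs of indexes on the same table with identical column lists and
-- operator classes; expression and partial indexes are not compared here,
-- so treat hits only as candidates for review.
SELECT a.indexrelid::regclass AS index_a,
       b.indexrelid::regclass AS index_b
FROM pg_catalog.pg_index a
JOIN pg_catalog.pg_index b
  ON a.indrelid = b.indrelid
 AND a.indexrelid > b.indexrelid
 AND a.indkey::text = b.indkey::text
 AND a.indclass::text = b.indclass::text;
```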
Re: [ADMIN] problem in logging into database
On 12/7/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: I am new to using Postgres. I am working on version 8.0. I have created a number of users in the database but I am not able to log into it using any of them,

Hi, could you please paste the ERROR message you get?

regds
mallah.

whereas logging in using the user postgres is successful. The commands that I am using are:

createuser -a -d -P username

For logging in, I am using:

psql -U username

In all the references online, it says that this process should work. The postmaster service and databases are running.
Re: [ADMIN] Problem starting up PostgreSQL 8.0 database service in Windows 2003 server
On 12/7/06, ron [EMAIL PROTECTED] wrote: Please kindly help me with this problem I am experiencing with starting up a PostgreSQL 8.0 db service on a Windows 2003 server. Whenever I attempt to restart the service I always get the message: "The PostgreSQL Database Server 8.0 on Local Computer started and then stopped. Some services stop automatically if they have no work to do, for example, the Performance Logs and Alerts service." I tried running postgres on the command line with: c:\...\PostgreSQL\8.0\bin\postgres - D:\Progra...\PostgreSQL\8.0\data template1 which gives me: "Execution of PostgreSQL by a user with administrative permissions is not permitted. The server must be started under an unprivileged user ID to prevent possible system security compromises." ...

Go to Control Panel → Users → locate the user you use to log in, and convert the account to a limited account. OR create a new user "postgres" and do not give it administrative rights while creating it. Run postgres using that user. I haven't run PostgreSQL on Windows yet, so I cannot give very specific instructions, but I hope you got the point.

Regds
mallah

I badly need your help in dealing with this one. Many thanks in advance.
-- R. Agustin
E-mail: [EMAIL PROTECTED]
ICQ #: 146807041
"It is a capital mistake to theorize before one has data." - Sherlock Holmes
Re: [ADMIN] pgdump
On 12/8/06, anuradha devi [EMAIL PROTECTED] wrote: Hi, I have a pgdump.sql file and I want to restore it. The PostgreSQL database server is running and I issue the command "psql -U postgres pgdump.sql" in the command prompt. I get the following error:

C:\Program Files\PostgreSQL\8.2\bin> psql -U postgres pgdump.sql
psql: could not connect to server: Invalid argument (0x2726/10022)
    Is the server running on host ??? and accepting TCP/IP connections on port 5432?

Hi,

This error means the psql client is not able to connect to the server that may (or may not) be running on the machine. Make sure the server is running: see if the postmaster process is running. Once the server is running, the command to run a SQL script file is:

psql -U postgres -f pgdump.sql

(you missed the -f)

Regds
mallah.

Thanks, Anuradha
[ADMIN] Should clients warn in case of server version mismatch ?
Hi,

We connected psql 8.2.0 accidentally to a postmaster of version 8.1.5, and \d tablename does not work. Should psql at least warn, as pg_dump does, in case of such client/server version mismatches?

regds
mallah.
Re: [ADMIN] Should clients warn in case of server version mismatch ?
On 12/8/06, Tom Lane [EMAIL PROTECTED] wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: We connected psql 8.2.0 accidentally to a postmaster of version 8.1.5, and \d tablename does not work. Should psql at least warn, as pg_dump does, in case of such client/server version mismatches?

Um, did you read the banner?

$ ~/version82/bin/psql -p 5581 regression
Welcome to psql 8.2.0 (server 8.1.5), the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
       \h for help with SQL commands
       \? for help with psql commands
       \g or terminate with semicolon to execute query
       \q to quit

WARNING: You are connected to a server with major version 8.1, but your psql client is major version 8.2. Some backslash commands, such as \d, might not work properly.

Dear Sir,

I am extremely sorry; I was just about to apologize for the mistake. But I like the way pg_dump behaves: it refuses to work unless -i is specified. Actually, my colleagues complained that \d was not working. I think they got the database prompt as they do every day and did not notice the warning (even I missed seeing the WARNING).

Warm Regds
mallah.

regression=#

regards, tom lane
Re: [ADMIN] Should clients warn in case of server version mismatch ?
On 12/8/06, Adam Radlowski [EMAIL PROTECTED] wrote: I had a problem like this: I had an 8.1.3 client and an 8.0.6 server, but psql 8.1.3 warned me that some \? functions might not work properly. I resolved this problem in a very simple way.

Thanks. My problem is resolved, as I maintain symbolic links to different pgsql versions in /opt/usr/local; I just gave the full path to the psql binary of the old version instead of psql. My concern was that it is quite likely that developers will FAIL to see the warning if they are presented with the psql prompt despite a mismatch of major version numbers. IMHO for a major version mismatch psql should not present the user with a prompt at all, as certain commands are most likely not to work.

Regds
Mallah.

I got the psql binary from 8.0.x and ran it. The psql program is a client program that uses the libpq library, so it makes (I think) no difference as long as the libpq includes correct and sufficient routines for psql to call. I have tested, for example, my own application compiled with the 8.1.3 libpq on the 7.4.8 libpq, or compiled with 8.0.7 on 8.1.3 — but those do not use functions like \?. No problems. But the \? functions in psql are implemented (I think) in another way and are version specific, because the construction of the information part of the database is not compatible between versions 8.0.x, 8.1.x and 8.2.x (the first two numbers of a version are significant in this case).

Brgds
Adam

Rajesh Kumar Mallah wrote: [...]
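[Editor's note: for automations where a silently missed warning would matter, the major-version check can be made explicit in the wrapper script rather than relying on the banner. A minimal sketch — the helper name `same_major` is ours, and it assumes only that versions come in the usual `major.minor.patch` form:]

```shell
# Compare the "major" part (first two components) of two version strings.
same_major() {
    [ "$(echo "$1" | cut -d. -f1,2)" = "$(echo "$2" | cut -d. -f1,2)" ]
}

# In a real script the two values would typically come from:
#   client=$(psql --version | awk '{print $NF}')
#   server=$(psql -tAc "SHOW server_version")
client="8.2.0"
server="8.1.5"

if same_major "$client" "$server"; then
    echo "versions compatible"
else
    echo "major version mismatch: client $client, server $server"
fi
```

A script could `exit 1` in the mismatch branch instead of printing, which gives the pg_dump-style hard failure discussed in this thread.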
Re: [ADMIN] Postgres 8.1.4 is not starting in RHEL 4
On 12/1/06, Manish Pillai [EMAIL PROTECTED] wrote: Hi, I installed the 8.1.4 PostgreSQL RPMs on my system. When I tried to run /etc/init.d/postgresql start I got this error:

[EMAIL PROTECTED] init.d]# ./postgresql start
Initializing database: mkdir: cannot create directory `/var/lib/pgsql/data/pg_log': File exists [FAILED]
Starting postgresql service: [FAILED]

Then I removed pg_log and ran:

[EMAIL PROTECTED] data]# pg_ctl --log /tmp/pg.log start
pg_ctl: cannot be run as root
Please log in (using, e.g., su) as the (unprivileged) user that will own the server process.

Then I changed the user to postgres:

[EMAIL PROTECTED] data]# su postgres
bash-3.00$ pg_ctl --log /tmp/pg.log start
pg_ctl: no database directory specified and environment variable PGDATA unset
Try "pg_ctl --help" for more information.
bash-3.00$

Please help me.

You should 'su - postgres' instead of 'su postgres' and try again; probably PGDATA is not getting set because of that.

regds
mallah.

From the man page of su: su is used to become another user during a login session. Invoked without a username, su defaults to becoming the super user. The optional argument - may be used to provide an environment similar to what the user would expect had the user logged in directly.

Thanks, Manish
Re: [ADMIN] PostgreSQL and Performance of Functions
On 12/1/06, Aaron Bono [EMAIL PROTECTED] wrote: Recently I have been put on a project that uses DB2 (not my choice). I am using functions for reports and was told this may not be allowed. They said: "One gap is the ability to tune the code in the function (not really static) or to perform real-time analysis of performance problems."

Tuning the code in a function requires changing the file containing the function definition and reloading the function, hence the cycle of performance tuning is longer. Regarding real-time analysis, my understanding is that EXPLAIN ANALYZE (of the main SQL using the function) in PostgreSQL cannot provide real-time analysis of the performance of the queries inside the function; probably such a limitation exists in DB2 also. I think the gap is genuine.

regds
mallah.

I normally use PostgreSQL for all my applications (this being an exception). Does anyone know what this statement is about and, more importantly, does PostgreSQL have the same limitations?

Thanks, Aaron
--
Aaron Bono
Aranya Software Technologies, Inc.
http://www.aranya.com
http://codeelixir.com
Re: [ADMIN]
Dear Subash,

There are many approaches:

1. dump the data to a SQL script file (you may use the -D option with pg_dump for maximum portability), or
2. use DBI.

The exact process depends on the complexity of the data, the number of tables, the type of data, etc. I think you should migrate to a newer version of PostgreSQL, like 8.1.5 or the upcoming 8.2 version, instead of Oracle.

regds
mallah.

On 10/31/06, SUBASH CHANDRA MOHAPATRA [EMAIL PROTECTED] wrote: Hi All, please help me with how to migrate from PostgreSQL 7.4 to Oracle 9i. Please let me know the process and the script also. Regards, Subash
Re: FW: [ADMIN] Setting up of PITR system.
Hi List, please find below a script that takes remote base backups of a PostgreSQL database. It assumes password-less communication between the backup server and the database server. After taking a base backup it removes the unnecessary WAL log files from the archive folder. The script has been running on my server for the past few weeks without any problems; it takes approx 40 mins to back up around 40 GB of data.

Sample execution on the backup server:

$ ./pg_rsync.sh 216.247.238.130 /mnt/disk2/base_backups /mnt/disk2/wal_archive

or in crontab as:

30 20 * * * /usr/bin/time /home/postgres/pg_rsync.sh 216.247.238.130 /mnt/disk2/base_backups /mnt/disk2/wal_archive

Note: the script also assumes that you have set up remote WAL archiving using a suitable archive_command in postgresql.conf; those scripts are *not* posted here. Any suggestions welcome.

script begins (pg_rsync.sh)

#!/bin/bash
##
# It does the following:
# 1. checks existence and permissions of important folders.
# 2. takes a base backup to a destination folder via rsync.
# 3. removes unwanted archived log files.
# Contributions: Grega Bremec ([EMAIL PROTECTED]),
#                Alvaro Herrera ([EMAIL PROTECTED])
# Preliminary version: Mallah ([EMAIL PROTECTED])
##

if [ $# -ne 3 ]
then
    echo "Usage: $0 HOSTNAME BACKUP_DIRECTORY WAL_ARCHIVE_DIRECTORY"
    exit 1
fi

HOSTNAME=$1
BACKUPFOLDER=$2
WAL_ARCHIVE=$3

echo "HOSTNAME=$HOSTNAME"
echo "BACKUPFOLDER=$BACKUPFOLDER"
echo "WAL_ARCHIVE=$WAL_ARCHIVE"

if [ -z "$BACKUPFOLDER" ] || [ ! -d "$BACKUPFOLDER" ] || [ ! -w "$BACKUPFOLDER" ]
then
    echo "Sorry, base backup folder $BACKUPFOLDER does not exist or is not writable or is not specified!"
    exit 1
fi

if [ -z "$WAL_ARCHIVE" ] || [ ! -d "$WAL_ARCHIVE" ] || [ ! -w "$WAL_ARCHIVE" ]
then
    echo "Sorry, WAL archive folder $WAL_ARCHIVE does not exist or is not writable or is not specified!"
    exit 1
fi

PSQL_BIN=`which psql || echo /usr/local/pgsql/bin/psql`
RSYNC_BIN=`which rsync || echo /usr/bin/rsync`
SSH_BIN=`which ssh || echo /usr/bin/ssh`

for PROG in $PSQL_BIN $RSYNC_BIN $SSH_BIN ; do
    if [ ! -f $PROG ] || [ ! -x $PROG ]
    then
        echo "Sorry, $PROG does not exist or is not executable by you"
        echo "Please set env variable PATH to include psql and rsync"
        exit 1
    else
        echo "Using $PROG"
    fi
done

RSYNC_OPTS="--delete-after -a -e $SSH_BIN --exclude pg_xlog"
RSYNC="$RSYNC_BIN $RSYNC_OPTS"
PSQL=$PSQL_BIN

today=`date +%d-%m-%Y-%H-%M-%S`
label=base_backup_${today}

echo "Querying data_directory and tablespace folders from server $HOSTNAME"
DATA_DIR=`$PSQL -q -Upostgres -d template1 -h $HOSTNAME -c "show data_directory;" -P tuples_only -P format=unaligned 2>&1`
RVAL=$?
if [ $RVAL -ne 0 ]
then
    echo "Some error in getting data_directory: $DATA_DIR"
    exit 1
fi
echo "DATA_DIR: $DATA_DIR"

TBL_SPCS=(`$PSQL -q -Upostgres -d template1 -h $HOSTNAME -c "SELECT spclocation from pg_catalog.pg_tablespace where length(spclocation) > 0;" -P tuples_only -P format=unaligned`)
RVAL=$?
if [ $RVAL -ne 0 ]
then
    echo "There is some problem in getting the tablespaces"
    exit 1
fi

DIRS=("${TBL_SPCS[@]}" $DATA_DIR)

echo "Folders for Backup:"
CTR=0
while [ -n "${DIRS[${CTR}]}" ]; do
    echo -n "${DIRS[${CTR}]} -- "
    DIRS[${CTR}]=`ssh $HOSTNAME readlink -f ${DIRS[${CTR}]}`
    echo "${DIRS[${CTR}]} (after symlink resolution)"
    CTR=$((CTR + 1))
done
unset CTR

CP=`$PSQL -q -Upostgres -d template1 -h$HOSTNAME -c "SELECT pg_start_backup('$label');" -P tuples_only -P format=unaligned 2>&1`
RVAL=$?
if [ $RVAL -ne 0 ]
then
    echo "PSQL pg_start_backup failed: $CP"
    exit 1
fi
echo "pg_start_backup executed successfully: $CP"

echo "RSYNC begins.."
# rsync each of the folders to the backup folder.
CTR=0
while [ -n "${DIRS[${CTR}]}" ]; do
    echo "Syncing ${DIRS[${CTR}]}..."
    time ${RSYNC} $HOSTNAME:${DIRS[${CTR}]} ${BACKUPFOLDER}
    RVAL=$?
    echo "Sync finished with exit status ${RVAL}"
    if [[ ${RVAL} -eq 0 || ${RVAL} -eq 23 ]]; then
        echo "Rsync success"
    else
        echo "Rsync failed"
        $PSQL -Upostgres -h$HOSTNAME template1 -c "SELECT pg_stop_backup();"
        exit 1
    fi
    CTR=$((CTR + 1))
done
unset CTR

echo "Executing pg_stop_backup in server ..."
$PSQL -Upostgres -h$HOSTNAME template1 -c "SELECT pg_stop_backup();"
if [ $? -ne 0 ]
then
    echo "PSQL pg_stop_backup failed"
    exit 1
fi
echo "pg_stop_backup done successfully"

# Read the backup_label file in pgdatadir and get the name of the start WAL file.
# Below is example content:
# START WAL LOCATION: E/A9145E4 (file 0001000E000A)
# CHECKPOINT LOCATION: E/A92939C
# START TIME: 2006-04-01 14:36:48 IST
# LABEL: base_backup_01-04-2006-14-36-45

DATA_DIR_NAME=`basename $DATA_DIR`
BACKUP_LABEL=$BACKUPFOLDER/$DATA_DIR_NAME/backup_label
echo "BACKUP_LABEL: $BACKUP_LABEL"
START_LINE=`grep -i "START WAL LOCATION" $BACKUP_LABEL`   # get the line containing START WAL LOCATION
START_LINE=${START_LINE/#START*file /}   # strip something like 'START WAL LOCATION: E/A9145E4 (file ' from the beginning.
START_LINE=${START_LINE/%)/} #
Re: FW: [ADMIN] Setting up of PITR system.
On 4/13/06, Grega Bremec [EMAIL PROTECTED] wrote: -BEGIN PGP SIGNED MESSAGE- Hash: RIPEMD160 Rajesh Kumar Mallah wrote: Is cluttering of the wal archive area in cases where that backup had to be re-started for whatever reasons is the *only* concern ? Well, yes, to be honest. But it may in consequence cause problems of another kind, which I don't feel qualified to reassure you on: I am not positive about what happens if you try to replay an old WAL on a current database backup. If nothing else, it is going to make the person restoring the backup rather unnerved about success of the operation they are currently performing, which is not a good thing, IMHO. They are restoring a backup afterall, which means they'd already undergone a fair amount of stress as it is. :) End all, it is your choice to decide which is more trouble and which is worth more: fixing the script to produce clean backups or informing your backup operators about the extra care they need to take when restoring backups. Dear Grega, gald to see the clarification. The concern you have is valid only if something goes wrongafter pg_start_backup() . In such case the backup admin can get notified in advance by using the MAILTO env variable in crontab. so its not that we are stressing him in already stressed situation.the reason of my aversion in incorporating you suggestion is however different. There does not seem to be any realiable way of getting the name of .backup file from the contents in backup_label. consider the method you gave. REF_FILE=`grep 'START WAL LOCATION' ${BACKUP_LABEL} | \ awk '{ sub(/)/, , $6); sub(/[0-9A-F]\//, , $4); printf(%s.%08s.backup, $6, $4); }'` firstly i feel there should be a + after [0-9A-F] , when i run the command on my data following is the outcome. 
-bash-2.05b$ cat /mnt/disk2/base_backups/pgdatadir/backup_label
START WAL LOCATION: 1C/4C7E5E90 (file 0001001C004C)
CHECKPOINT LOCATION: 1C/4C824A44
START TIME: 2006-04-12 21:32:43 IST
LABEL: base_backup_12-04-2006-21-32-52

-bash-2.05b$ grep 'START WAL' /mnt/disk2/base_backups/pgdatadir/backup_label | \
    awk '{ sub(/\)/, "", $6); sub(/[0-9A-F]+\//, "", $4); printf("%s.%08s.backup\n", $6, $4); }'
0001001C004C.4C7E5E90.backup

-bash-2.05b$ ls -l /mnt/disk2/wal_archive/*.backup
-rw------- 1 postgres postgres 270 Apr 12 21:53 /mnt/disk2/wal_archive/0001001C004C.007E5E90.backup

As you can see, the predicted name 0001001C004C.4C7E5E90.backup is not the same as 0001001C004C.007E5E90.backup. One may argue that replacing the first two characters of 4C7E5E90 with '0's may yield the correct result, but it is not so; I have observed a case that runs against this hypothesis. Therefore, unless someone tells me the correct method of deriving the filename, the current approach is the best that can be done. I really hope some guru throws some light here; I tried digging into xlog.c and xlogutils.c without any success. Please do not put too much effort into it, as the drives in my other server have got installed and I am adapting the script for doing remote backup (which is a more common scenario).

Very nice! How is it going? And how are you copying the WALs? scp? rsync?

I finished it long back, but I was waiting for comments from the list regarding the concern to which you replied. I think we can discuss the new version once this issue is resolved.

Regds
Rajesh Kumar Mallah.

Kind regards,
Grega Bremec
gregab at p0f dot net
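The parameter-expansion approach the thread settles on (taking the WAL segment name straight out of backup_label, rather than predicting the .backup filename) can be shown as a self-contained sketch. The sample backup_label content below is copied from the thread; only the variable names are mine.

```shell
#!/bin/bash
# Sketch of the backup_label parsing used in the thread's script: pull the
# WAL segment name out of the "START WAL LOCATION" line with bash parameter
# expansion, avoiding the unreliable .backup-filename prediction.

BACKUP_LABEL_CONTENT='START WAL LOCATION: 1C/4C7E5E90 (file 0001001C004C)
CHECKPOINT LOCATION: 1C/4C824A44
START TIME: 2006-04-12 21:32:43 IST
LABEL: base_backup_12-04-2006-21-32-52'

# isolate the START WAL LOCATION line
START_LINE=$(printf '%s\n' "$BACKUP_LABEL_CONTENT" | grep -i 'START WAL LOCATION')

# strip everything up to and including "file " from the beginning
WAL_NAME=${START_LINE/#START*file /}
# strip the trailing ")"
WAL_NAME=${WAL_NAME/%)/}

echo "$WAL_NAME"    # prints 0001001C004C
```

This yields exactly the WAL segment name that the cleanup step later compares archived files against.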
Re: [ADMIN] Storage and Backup
On 4/7/06, Sidar López Cruz [EMAIL PROTECTED] wrote:

Is it hard for PostgreSQL to administer and serve a database with over 150GB of information? Is it complicated to backup and restore a database of this size? Is there a program or a procedure to backup and restore this kind of database?

PITR is recommended for backing up such databases, as it does not involve dumping the entire data every time. Rsyncing the pgdatadir while refreshing the base backup can minimise the time for syncing, as rsync has an intelligent delta-transfer algorithm.

Regds
Rajesh Kumar Mallah.

What operating system do you recommend to use (Windows 2003 or Linux Ubuntu)? Thanks for your very important help!!!

Sídar LC.

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: FW: [ADMIN] Setting up of PITR system.
| Do you see any problem in the current approach?
| I have seen it working fine till now.

I do, to be honest. The WAL location counter accounts for 4294967295 positions, and while I'm certain that's WAY more than the average number of transactions that go into a WAL, quite a number of small ones can certainly happen before a WAL is rolled over, and until then you're dealing with the same log file. If two backups happen in that period of time for whatever reason, you're going to get a false positive by looking into ${WAL_ARCHIVE} and searching just for the WAL name, so including the location in the search for a WAL fragment is certainly necessary. In fact, going purely by chance, the probability of hitting the same location in two different log files in two subsequent backups is much lower than hitting the same WAL twice.

Dear Grega,

Sincere thanks for your time. The current WAL log is not being removed from the wal archive area in any case; only the files less than the current ones are being rm'ed. I am sorry I am not able to get your apprehension, but I shall surely try harder to understand your point. Anyway, have a look at the current script with the following improvements:

1. does some sanity checks about folder existence and permissions
2. accepts 3 mandatory args now: PGDATADIR, BACKUP DUMP FOLDER and WAL ARCHIVE AREA
3. uses readlink -f to probe all the directories to be included in the base backup
4. attempts to probe for psql and rsync in the system and bails out if not found

Regarding:
| 2. Frees disk space by removing unwanted LOG files in WAL_ARCHIVE_DIR

Perhaps moving the old log files into a separate backup directory and having them stick around for a period of time before removing them isn't a bad idea either, just in case something goes wrong with your latest backup. You could go about that using find as well; see the -ctime predicate in find(1).

The old log files without the base backup are not useful.
Since rsync is being used to optimise the copying by overwriting the base backup every time, I don't think preserving the old files makes sense. Had it been a non-overwriting backup, the old files would have made sense.

BEGIN ----------------------------------------

#!/bin/bash
##############################################
# it does the following
# 1. checks existence and permissions of important folders.
# 2. takes a base backup to a destined folder by rsync
# 3. removes unwanted archived log files.
##############################################

if [ $# -ne 3 ]
then
    echo "Usage: $0 <DATADIR> <BACKUP DIRECTORY> <WAL ARCHIVE DIRECTORY>"
    exit 1
fi

DATADIR_IN=$1
BACKUPFOLDER=$2
WAL_ARCHIVE=$3

if [ -z "$BACKUPFOLDER" ] || [ ! -d "$BACKUPFOLDER" ] || [ ! -w "$BACKUPFOLDER" ]
then
    echo "Sorry, base backup folder $BACKUPFOLDER does not exist, is not writable, or is not specified!"
    exit 1
fi

if [ -z "$WAL_ARCHIVE" ] || [ ! -d "$WAL_ARCHIVE" ] || [ ! -w "$WAL_ARCHIVE" ]
then
    echo "Sorry, WAL archive folder $WAL_ARCHIVE does not exist, is not writable, or is not specified!"
    exit 1
fi

if [ -L "$DATADIR_IN" ]
then
    DATADIR=`readlink -f $DATADIR_IN`
    echo "Using $DATADIR instead of $DATADIR_IN as $DATADIR_IN is a link"
else
    DATADIR=$DATADIR_IN
fi

# get all tablespaces from $DATADIR/pg_tblspc
DIRS=(`find $DATADIR/pg_tblspc -type l -exec readlink -f {} \;`)
# append DATADIR to it
DIRS=("${DIRS[@]}" "$DATADIR")

CTR=0
echo "Script shall backup following folders"
while [ -n "${DIRS[${CTR}]}" ]; do
    echo "${DIRS[${CTR}]}"
    CTR=$((CTR + 1))
done
unset CTR

PSQL_BIN=`which psql`   || PSQL_BIN=/usr/local/pgsql/bin/psql
RSYNC_BIN=`which rsync` || RSYNC_BIN=/usr/bin/rsync

for PROG in $PSQL_BIN $RSYNC_BIN ; do
    if [ ! -f "$PROG" ] || [ ! -x "$PROG" ]
    then
        echo "Sorry, $PROG does not exist or is not executable by you"
        echo "Please set env variable PATH to include psql and rsync"
        exit 1
    else
        echo "Using $PROG"
    fi
done

RSYNC_OPTS="--delete-after -a --exclude pg_xlog"
RSYNC="$RSYNC_BIN $RSYNC_OPTS"
PSQL=$PSQL_BIN

today=`date +%d-%m-%Y-%H-%M-%S`
label=base_backup_${today}

echo "Executing pg_start_backup with label $label in server ..."

# get the checkpoint at which the backup starts
# the .backup file seems to be bearing this string in it.
CP=`$PSQL -q -Upostgres -d template1 -c "SELECT pg_start_backup('$label');" -P tuples_only -P format=unaligned`
RVAL=$?
if [ $RVAL -ne 0 ]
then
    echo "PSQL pg_start_backup failed: $CP"
    exit 1;
fi
echo "pg_start_backup executed successfully"

# read the backup_label file in pgdatadir and get the name of the start wal file
# below is example content.
#START WAL LOCATION: E/A9145E4 (file 0001000E000A)
#CHECKPOINT LOCATION: E/A92939C
#START TIME: 2006-04-01 14:36:48 IST
#LABEL: base_backup_01-04-2006-14-36-45

BACKUP_LABEL=$DATADIR/backup_label
# assuming pg_start_backup immediately puts backup_label in
# pgdatadir on finish.
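The cleanup step the script performs (removing archived segments older than the one named in backup_label) can be exercised in isolation. The snippet below is a hypothetical, self-contained demonstration: it uses a temporary directory and made-up WAL segment names, not a real archive, and mirrors the thread's remove-until-reference loop.

```shell
#!/bin/bash
# Demonstration of the WAL-archive cleanup logic on fake data: walk the
# archive in C-locale sorted order and remove every segment that comes
# before the reference segment; keep the reference and anything newer.

WAL_ARCHIVE=$(mktemp -d)        # stand-in for the real archive directory
REF_FILE_NUM=0001000A0068       # stand-in for the name parsed from backup_label

for f in 0001000A0066 0001000A0067 0001000A0068 0001000A0069; do
    touch "$WAL_ARCHIVE/$f"
done

# everything listed before the reference segment is older and removable
for archive in $(ls "$WAL_ARCHIVE" | LC_ALL=C sort); do
    if [ "$archive" != "$REF_FILE_NUM" ]; then
        rm -f "$WAL_ARCHIVE/$archive"
    else
        break
    fi
done

REMAINING=$(ls "$WAL_ARCHIVE" | LC_ALL=C sort | xargs)
echo "$REMAINING"   # prints: 0001000A0068 0001000A0069
rm -rf "$WAL_ARCHIVE"
```

Breaking out of the loop at the reference file is what keeps the current segment (and any later ones) in place, matching the thread's statement that the current WAL log is never removed.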
Re: [ADMIN] Show tables query
On 4/2/06, Christopher Browne [EMAIL PROTECTED] wrote:

After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Andy Shellam) belched out:

Is there an SQL command supported by Postgres to return a list of tables in a database?

Yes, it's called SELECT. There is a standard schema called INFORMATION_SCHEMA, which contains a variety of relevant views. Notably, you could request:

SELECT * FROM INFORMATION_SCHEMA.TABLES;

Add where table_type='BASE TABLE' if you need tables only; otherwise it returns the views also.

That has the merit of actually conforming to SQL standards...
--
Christopher Browne
http://cbbrowne.com/info/
Re: [ADMIN] Show tables query
On 4/2/06, Andy Shellam [EMAIL PROTECTED] wrote:

After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Andy Shellam) belched out:

Is there an SQL command supported by Postgres to return a list of tables in a database?

Sorry, did I say something wrong? I thought it was a perfectly valid question, actually. The application in mind is going to be run exclusively on Postgres, so I'm not overly fussed over standards - I just wanted a quick win, which Grega's SQL gave me perfectly - tables only, nothing else included.

The information_schema approach is still better than querying the system catalogs. The system catalogs are internal to PostgreSQL; what if future versions of PostgreSQL change the system catalogs dramatically? (Your app breaks!) information_schema is the standard, and it is more likely to behave the same in all versions of pgsql because its relations are (currently) views on the system catalogs.

Regds
Rajesh Kumar Mallah.
Re: FW: [ADMIN] Setting up of PITR system.
On 4/2/06, Grega Bremec [EMAIL PROTECTED] wrote:

Rajesh Kumar Mallah wrote:
|
| Instead of taking a roundabout method I am using the content of the
| file. (This was also suggested by Andy at some point.)

After reading the docs again, that's what I would do as well, indeed. :)

| lately i feel that we should not be discussing the topic over here
| becoz it has less to do with postgresql and more of bash.

I've been considering that seriously since the very first post I wrote, but since there seem to be a lot of people interested in a working, flexible WAL archiving script, I decided to keep it on the list. It is an administration issue, after all. I will stand corrected if someone feels we're clogging their mailboxes.

| ##
| #START WAL LOCATION: E/A9145E4 (file 0001000E000A)
| #CHECKPOINT LOCATION: E/A92939C
| #START TIME: 2006-04-01 14:36:48 IST
| #LABEL: base_backup_01-04-2006-14-36-45
| ###
|
| BACKUP_LABEL=$DATADIR/backup_label
| # get the line containing START WAL LOCATION
| START_LINE=`grep -i "START WAL LOCATION" $BACKUP_LABEL`
| # strip something like 'START WAL LOCATION: E/A9145E4 (file ' from begin.
| START_LINE=${START_LINE/#START*file /}
| # strip ')' from end.
| START_LINE=${START_LINE/%)/}
| # REF_FILE_NUM is something like 0001000A0068
| REF_FILE_NUM=$START_LINE

Why not go for the entire filename?

It takes a while (I don't know how long) for the .backup file to get archived and appear in the wal archive area; that's why I prefer to use the wal log filename (0001000A0068) instead of something like 0001000A0068.0A348A45.backup. Do you see any problem in the current approach? I have seen it working fine till now.

Another area where I was thinking to improve this script was to make it dig out all the tablespace folders to be archived by looking into PGDATADIR/pg_tblspc. This shall make the script more generic. pg_tblspc has contents like below:

$ ls -l
total 0
lrwxrwxrwx 1 postgres postgres 19 Mar 27 21:45 16391 -> /mnt/indexspace_new
lrwxrwxrwx 1 postgres postgres 18 Mar 27 21:45 16392 -> /mnt/bigtables_new

Can you suggest the sane/recommended way to get the destination folders? (I was thinking find $PGDATADIR/pg_tblspc -type l -printf %??? ) Or do I parse the output of ls!

If the above is done, I see the script INPUT/OUTPUT/ACTIVITY as below:

INPUTS: PGDATADIR, WAL_ARCHIVE_DIR, LOCAL DUMP DIRECTORY
OUTPUT: n/a
ACTIVITY:
1. it shall take a base backup of PGDATADIR (minus pg_xlog) and all tablespaces into the *LOCAL* DUMP directory
2. frees disk space by removing unwanted LOG files in WAL_ARCHIVE_DIR

I am waiting for my other machine to get fitted with 2 more drives so that I can test/develop scripts for the restoration part and modify the script for remote base backups and archiving. That shall happen in a week or so. Thanks for your support till now.

Regds
Rajesh Kumar Mallah.

Kind regards,
Grega Bremec
gregab at p0f dot net
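The answer the thread converges on for probing pg_tblspc is find plus readlink -f (it appears in the final script). A hypothetical demonstration, using temporary directories in place of a real data directory and tablespace, looks like this:

```shell
#!/bin/bash
# Demonstration (temp dirs only, no real PostgreSQL data directory) of
# resolving tablespace symlinks under pg_tblspc to their target folders.

PGDATADIR=$(mktemp -d)      # stand-in for the real data directory
TS_TARGET=$(mktemp -d)      # stand-in for a tablespace location
mkdir "$PGDATADIR/pg_tblspc"
ln -s "$TS_TARGET" "$PGDATADIR/pg_tblspc/16391"

# resolve every symlink under pg_tblspc to its canonical target directory
DIRS=$(find "$PGDATADIR/pg_tblspc" -type l -exec readlink -f {} \;)

EXPECTED=$(readlink -f "$TS_TARGET")
echo "$DIRS"

rm -rf "$PGDATADIR" "$TS_TARGET"
```

Each resolved path can then be appended to the list of folders handed to rsync, which is exactly what the finished script does with its DIRS array.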
Re: [ADMIN] Setting up of PITR system.
On 4/3/06, Brendan Duddridge [EMAIL PROTECTED] wrote:

Whenever you get a finalized script including all the enhancements you've been talking about, I would really love to have a copy, as I'm sure lots of people here would. I think this is something that should be included in the standard distribution. Can I make a couple of suggestions?

1. Include a mail option to send the admin an email when the backup succeeds or fails.

Considering the fact that the script is hugely unix-oriented currently, such a facility is anyway available in crontab using the MAILTO variable.

2. Call pg_stop_backup() if the script fails for whatever reason. You can't run the script twice unless you stop the backup. I ran the script a few times to see if I could get a PITR backup system going, and due to directory permissions and such during basic testing, I had to manually call pg_stop_backup() after the script exited abnormally.

Yes, this is a problem. I do not know if calling pg_stop_backup() is the end of the damage-control steps, but I shall surely add it for the time being.

Thanks very much for building this script and sharing it with us. Many of us don't have the skills to write such a script. I'm a Java programmer, but I have little bash scripting skill, so this is very much appreciated.

Thanks to the prying eyes of the bash gurus who are supervising it :) I only have the machines and the situation to run it and seek suggestions.

Regds
Rajesh Kumar Mallah.

Brendan Duddridge | CTO | 403-277-5591 x24 | [EMAIL PROTECTED]
ClickSpace Interactive Inc.
Suite L100, 239 - 10th Ave. SE
Calgary, AB T2G 0V9
http://www.clickspace.com

On Apr 2, 2006, at 10:14 AM, Rajesh Kumar Mallah wrote:
Re: FW: [ADMIN] Setting up of PITR system.
On 4/1/06, Grega Bremec [EMAIL PROTECTED] wrote:

Rajesh Kumar Mallah wrote:
| Dear Grega,
|
| Thanks for the useful tips and error spotting. I am incorporating some of
| them and testing the script in my server. I have concerns regarding some
| of your optimisations that make the script less generic; below find my
| comments.

Hello, Rajesh,

I'm glad you found some use in the comments :).

Hmm, I am glad I have found one person whom I can ask any bash doubt ;-) I am discussing only the differentials and not posting the flab.

As described in the docs, pg_start_backup() puts the file backup_label in the datadir, which has the info about the start log file. Instead of taking a roundabout method I am using the content of the file (this was also suggested by Andy at some point); below is the code.

# read the backup_label file in pgdatadir and get the name of the start wal file
# below is example content.
##
#START WAL LOCATION: E/A9145E4 (file 0001000E000A)
#CHECKPOINT LOCATION: E/A92939C
#START TIME: 2006-04-01 14:36:48 IST
#LABEL: base_backup_01-04-2006-14-36-45
###

BACKUP_LABEL=$DATADIR/backup_label
# get the line containing START WAL LOCATION
START_LINE=`grep -i "START WAL LOCATION" $BACKUP_LABEL`
# strip something like 'START WAL LOCATION: E/A9145E4 (file ' from begin.
START_LINE=${START_LINE/#START*file /}
# strip ')' from end.
START_LINE=${START_LINE/%)/}
# REF_FILE_NUM is something like 0001000A0068
REF_FILE_NUM=$START_LINE

-- End of relevant portion ---

Please optimize the above regex-capturing process if possible. Lately I feel that we should not be discussing the topic over here because it has less to do with postgresql and more with bash.

~ RM_LIST=""
~ find ${WAL_ARCHIVE} -type f | sort -g | while read archive; do
~     if [ ! ${archive} = ${REF_FILE} ]; then

i think you meant '!=' instead of '=' in the above line.

~         RM_LIST="${RM_LIST:+${RM_LIST} }${archive}"
~     else
~         break
~     fi
~ done
~ rm -f ${RM_LIST}

Last doubt: regarding

$ env LC_ALL=C backup_script.sh

can I do export LC_ALL=C or LC_ALL=C inside the script itself, to be sure?

Thanks for your help once again.

Regds
Rajesh Kumar Mallah.
Re: FW: [ADMIN] Setting up of PITR system.
On 3/30/06, Grega Bremec [EMAIL PROTECTED] wrote:

Rajesh Kumar Mallah wrote:
|
| OK i am posting my full script [ its not heavy programming i guess :) ]
| shall be grateful if you/someone could review it. (its well commented i think)
| script also carries sample data.
|
| it does following
| 1. takes base backup to a destined folder by rsync
| 2. waits for .backup file to arrive in archive folder
|    after pg_stop_backup()
| 3. searches and removes unwanted archived log files.
|
| I have run it many times in my server and it seems to
| be working fine.

Hello, Rajesh.

Just a couple of comments on the script itself, not on what it actually does - I never tried WAL archiving before, so I can't comment on that. I inserted the comments at relevant points in the script. I'm sorting them into three categories: one is just improvements in style, the other is optimization, and the third is correction of an error.

Dear Grega,

Thanks for the useful tips and error spotting. I am incorporating some of them and testing the script in my server. I have concerns regarding some of your optimisations that make the script less generic; below find my comments.

| BEGIN ------------------
|
| #!/bin/bash
|
| # folder where base_backup is put
| BACKUPFOLDER=/mnt/disk3/base_backups
| today=`date +%d-%m-%Y-%H-%M-%S`
| PSQL=/opt/usr/local/pgsql/bin/psql
| RSYNC="/usr/bin/rsync -a"
| PGDATADIR=/mnt/disk5/pgdatadir
|
| # two table spaces.
|
| TS1=/mnt/disk4/bigtables
| TS2=/mnt/disk3/indexspace

(optimization) Since you're using bash, you can use arrays. This could be better written as

~ TS[0]=/mnt/disk5/pgdatadir
~ TS[1]=/mnt/disk4/bigtables
~ TS[2]=/mnt/disk3/indexspace

or even

~ TS=(/mnt/disk5/pgdatadir \
~     /mnt/disk4/bigtables \
~     /mnt/disk3/indexspace)

agreed, already incorporated.

That way, you can add tablespaces at will and just use a while loop to back them up, which greatly simplifies adding new tablespaces or moving the script somewhere else. See below for how to implement that.

| # folder where *archived* logs are put.
| WAL_ARCHIVE=/mnt/wal_archive
|
| label=base_backup_${today}
|
| echo "Executing pg_start_backup with label $label in server ..."
|
| # get the checkpoint at which backup starts
| # the .backup files seem to be bearing this string in them.
|
| CP=`$PSQL -q -Upostgres -d template1 -c "SELECT
| pg_start_backup('$label');" -P tuples_only -P format=unaligned`
|
| echo "Begin CheckPoint is $CP" # this contains a string like A/681D1214
|
| if [ $? -ne 0 ]
| then
|     echo "PSQL pg_start_backup failed"
|     exit 1;
| fi
| echo "pg_start_backup executed successfully"

(style) If you want to capture any error messages pg_start_backup may have caused and store them into ${CP}, you should add 2>&1 at the end of the psql invocation; see the snippet below.

Incorporated it.

(error) Checking the exit status of pg_start_backup using $? at this point will never report an error, as you've used echo prior to checking what pg_start_backup returned. You should either move the echo below the if statement (by adding an else clause) or store the exit status of pg_start_backup into RVAL like this:

~ CP=`$PSQL ... 2>&1`
~ RVAL=$?
~ echo "Begin CheckPoint says: ${CP}"
~ if [ ${RVAL} -ne 0 ]; then
~     ...
~ fi

| echo "RSYNC begins.."
|
| # rsync each of the folders to the backup folder.
| for i in $TS1 $TS2 $PGDATADIR ;
| do
|     echo "Syncing $i .."
|     time $RSYNC $i $BACKUPFOLDER
|     echo "Done"
| done

(optimization) If you store locations into an array, you could rewrite this as follows:

~ CTR=0
~ while [ -n "${TS[${CTR}]}" ]; do
~     echo "Syncing ${TS[${CTR}]}..."
~     time ${RSYNC} ${TS[${CTR}]} ${BACKUPFOLDER}
~     RVAL=$?
~     echo "Sync finished with exit status ${RVAL}"
~     if [ ${RVAL} -ne 0 ]; then
~         handle errors
~     fi
~     CTR=$((CTR + 1))
~ done
~ unset CTR

| # fortunately rsync does *not* seem to be exiting with a non-zero exit code
| # for expected file disappearances and modifications.
| if [ $? -ne 0 ]
| then
|     echo "RSYNC failed"
|     exit 1;
| fi

(error) Same error as above - what you're checking here is whether the last command in the last for-loop run was successful, and this is always going to be true, as echoing to stdout will never fail until stdout is closed for some reason.

Yes, it was an error. I am doing repeat runs of the script to find the non-zero exit codes which should be treated as normal in the context of taking base backups.

| echo "RSYNC Done successfully"
|
| echo "Executing pg_stop_backup in server ..."
| $PSQL -Upostgres template1 -c "SELECT pg_stop_backup();"
| if [ $? -ne 0 ]
| then
|     echo "PSQL pg_stop_backup failed"
|     exit 1;
| fi
| echo
Re: FW: [ADMIN] Setting up of PITR system.
On 3/29/06, Andy Shellam [EMAIL PROTECTED] wrote:

If pg_start_backup is used correctly, no data pages will be written to disk until pg_stop_backup is called, so this shouldn't be an issue - HOWEVER - check that you're excluding your pg_xlog directory (within your database directory) from the tar backup, as these files will be changing. Whatever archive utility you're using, if a file is being changed at the time of reading, it won't be archived correctly.

?? What you are saying is not in sync with the docs:

23.3.2. Making a Base Backup

The procedure for making a base backup is relatively simple:
3. Perform the backup, using any convenient file-system-backup tool such as tar or cpio. It is neither necessary nor desirable to stop normal operation of the database while you do this.

I used rsync like many others; it seems to work fine for me.

Regds
mallah.

Andy

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Rajesh Kumar Mallah
Sent: Tuesday, 28 March, 2006 5:27 PM
To: Tom Lane
Cc: [EMAIL PROTECTED]; pgsql-admin@postgresql.org
Subject: Re: FW: [ADMIN] Setting up of PITR system.

---
Secondly, I was asking about the tool for copying the database directory, not the one for archiving the logs. When I use tar, it gives a warning that the file changed while it was reading it; I need a tool that does not give a warning for this kind of activity.
---
Re: [ADMIN] postgres and persistant connections (using Apache::DBI)
On 3/29/06, Salem Berhanu [EMAIL PROTECTED] wrote:

I am trying to set up a persistent connection to a handful of postgres dbs at startup using Apache::DBI (also using mod_perl). Here is what I am doing, but I am not sure how to check that it's working right. In httpd.conf I have

PerlRequire startup.pl

In startup.pl I load all the necessary perl modules and also include the lines

use Apache::DBI;
Apache::DBI->connect_on_init( host, login, password);

This doesn't seem to make any difference. I was expecting to see database connections when I restarted apache, and that all queries would get handled by these connections. Is this a wrong assumption? What is the expectation, and what would be a good test when setting up persistent connections to a postgres db?

We use DBI connection pooling with postgres in a mod_perl environment and it works fine. We however do not use connect_on_init, as the Apache::DBI docs say that all DBI->connect() calls anyway get intercepted transparently. Please take care that:

1. Apache::DBI is loaded before any module that issues a DB connect. You can even load Apache::DBI from httpd.conf by PerlRequire Apache::DBI.

2. You can know whether Apache::DBI is effective or not by installing Apache::Status; load Apache::Status before Apache::DBI in that case, and add a Location section in httpd.conf like:

--- httpd.conf ---
PerlModule Apache::Status
PerlModule Apache::DBI
<Location /perl-status>
    SetHandler perl-script
    PerlHandler Apache::Status
</Location>
---------------

Point your browser to http://your_mod_perl_server/perl-status and check the section about database connections. I am not sure if this info answers your question, but I am describing something that works for me.

regds
Rajesh Kumar Mallah.

Thanks much
Salem
Re: [ADMIN] postgres and persistant connections (using Apache::DBI)
Since it is not a PostgreSQL issue, I think we should not discuss it over here; I may pursue it on private email.

regds
mallah.

On 3/30/06, Salem Berhanu [EMAIL PROTECTED] wrote:

Rajesh,

Thanks for the suggestion. I tried what you told me. In my startup script I did

use Apache2::Status;
use Apache::DBI;

before any modules that use a DB connect were loaded. I also included the handler for perl-status in the conf file as you mentioned. However, as I access different pages, the database connection doesn't persist. I checked netstat on the db server and I see connections from the web server that appear and disappear. I checked http://my_mod_perl_server/perl-status and it has a link that says DBI connections, but with no data in it. Also, I wasn't clear on why you are not using connect_on_init.

Thanks
Salem

From: Rajesh Kumar Mallah [EMAIL PROTECTED]
To: Salem Berhanu [EMAIL PROTECTED]
CC: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] postgres and persistant connections (using Apache::DBI)
Date: Wed, 29 Mar 2006 21:32:47 +0530
Re: [ADMIN] Remote On-line Backup
On 3/29/06, Thomas F. O'Connell [EMAIL PROTECTED] wrote: I just want to make sure that I've got a good enough understanding of the built-in on-line backup facility to be able to minimize data loss and unavailability of the database during a remote recovery from on- line backup. Here are the steps I'm proposing: 1. Set up archive_command in postgresql.conf on oldhost to archive to remote repository on newhost. 2. Perform base backup on oldhost. (I'll probably just use rsync to backup directly to newhost.) 3. On newhost, remove postmaster.pid from $PGDATA, disable archive_command in postgresql.conf, and create clean pg_xlog tree. 4. Stop the postmaster on oldhost. 5. If the WAL file referenced by the backup file in my archive directory on newhost is not archived when the postmaster is stopped, copy it from oldhost to pg_xlog on newhost. even if it(STOP WAL) was archived in the new machine , you should also copy the last partially filled WAL log from pg_xlog that was created just after the file refrenced in backup file was archived. I think the overall process is fine , you may consider executing a test run skipping 4 Regds Rajesh Kumar Mallah. 6. Create recovery.conf on newhost. 7. Start the postmaster on newhost. 8. Rejoice when recovery.done appears. The part I most want to make sure I understand well enough is step 5, which I'm understanding to be a modification of steps 2 and 6 from section 23.3.3 in the docs. As I understand it, there's a pretty good possibility that the WAL file referenced by stop_backup() will not be archived by the time I stop the postmaster on oldhost. In which case, I should be in good shape to recover if I have a base backup, the archived WAL files up to that final file referenced by stop_backup(), and the partial segment file referenced by stop_backup(), which should be the only unarchived WAL segment file and just needs to exist in pg_xlog on newhost for things to run smoothly. Does this seem right? 
Or will I rather want to copy all the contents of pg_xlog from oldhost as they represent current (as of stopping the postmaster) unarchived WAL activity? -- Thomas F. O'Connell Database Architecture and Programming Co-Founder Sitening, LLC http://www.sitening.com/ 3004 B Poston Avenue Nashville, TN 37203-1314 615-260-0005 (cell) 615-469-5150 (office) 615-469-5151 (fax) ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: FW: [ADMIN] Setting up of PITR system.
Hi, thanks everyone for the reply. Is it reasonable/advisable to use the output of pg_stop_backup() in a shell script to find the WAL file being discussed, i.e. the FILE which can be used as a reference for removing the older files? Secondly, I was asking about the tool for copying the database directory, not the one for archiving the logs. When I use tar, it warns that a file changed while it was being read; I need a tool that does not give a warning for this kind of activity. Regds Rajesh Kumar Mallah. On 3/28/06, Tom Lane [EMAIL PROTECTED] wrote: Andy Shellam [EMAIL PROTECTED] writes: The docs say that filenames numerically less than the WAL record that pg_stop_backup() suggests can be removed. Will an alphabetical sorting be different from numerically sorted filenames? If you're worried about that, try LANG=C ls ... to make sure the sorting is done in the C locale. I don't know of any locales that would sort hex numbers differently from C, but maybe there are some. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
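A minimal sketch of the script idea asked about above, under the assumption of standard archive layout: the segment referenced by pg_stop_backup() is the prefix of the `.backup` history file left in the archive, and any segment sorting strictly before it in the C locale (per Tom's advice) is removable. The directory path and segment names below are invented for the demo; nothing is actually deleted, removable files are only echoed.

```shell
# Demo archive directory (hypothetical; stands in for the real WAL archive).
ARCHIVE=$(mktemp -d)
# Simulate archived segments plus the .backup history file
# that pg_stop_backup() leaves behind.
touch "$ARCHIVE"/000000010000000000000009 \
      "$ARCHIVE"/00000001000000000000000A \
      "$ARCHIVE"/00000001000000000000000B \
      "$ARCHIVE"/00000001000000000000000A.00000020.backup
# The reference segment is the .backup file's name up to its first dot.
BACKUPFILE=$(cd "$ARCHIVE" && ls *.backup)
REF=${BACKUPFILE%%.*}
# Segments sorting strictly before $REF in the C locale are safe to remove.
for f in $(cd "$ARCHIVE" && LANG=C ls | grep -v '\.backup$'); do
  if [ "$f" \< "$REF" ]; then
    echo "removable: $f"       # in real use: rm "$ARCHIVE/$f"
  fi
done
```

In practice one would take `REF` from the file named in pg_stop_backup()'s `.backup` history file rather than globbing, but the comparison logic is the same.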
[ADMIN] killing a query safely
Hi, many a time the backends are stubborn. A backend is currently executing a query that usually finishes quickly (within 5 secs), but this query has been running for many minutes. I want to debug why it is hung, if it is hung. I have tried to kill this query using the following methods:

SQL> SELECT pg_cancel_backend(27649); # does not kill
# kill 27649 # does not kill
# kill -TERM 27649 # does not kill
# ps auxwww | grep 27649
postgres 27649 0.0 1.9 421060 81576 ? S 10:41 0:00 postgres: tradein tradein_clients 216.247.238.131(59261) SELECT

This query does not block any other:

tradein_clients=# SELECT * from blockers;
+---------+---------+
| blocker | blockee |
+---------+---------+
+---------+---------+
(0 rows)

\d blockers
View "public.blockers"
+---------+---------+-----------+
| Column  |  Type   | Modifiers |
+---------+---------+-----------+
| blocker | integer |           |
| blockee | integer |           |
+---------+---------+-----------+

View definition (courtesy: Tom Lane):
SELECT h.pid AS blocker, w.pid AS blockee FROM ONLY pg_locks h, ONLY pg_locks w WHERE h."granted" AND NOT w."granted" AND (h.relation = w.relation AND h."database" = w."database" OR h."transaction" = w."transaction");

Can anyone please guide what should be done in such situations? Regds mallah. -- Best Regards, regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,97,300) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+
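For reference, a hedged SQL sketch of the in-database options (the PID 27649 is taken from the ps output above; column names procpid/current_query are the pre-9.2 ones matching the servers discussed in this thread). Note that pg_terminate_backend() only exists from 8.4 onward; on older servers pg_cancel_backend() (query cancel) is the only built-in call, which is why kill from the shell comes up at all.

```sql
-- See what the backend is doing before touching it:
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE procpid = 27649;

SELECT pg_cancel_backend(27649);     -- sends SIGINT: cancels the current query
SELECT pg_terminate_backend(27649);  -- 8.4+ only: sends SIGTERM, ends the session
```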
Re: [ADMIN] query for view code
Yes. Connect using psql -E database, then \d viewname and observe the queries that are used to produce the view definition. Regds Rajesh Kumar mallah. On 10/13/05, Colton A Smith [EMAIL PROTECTED] wrote: Hi: Can I query the database for the code for a particular view? Thanks for your help! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
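The same information can also be pulled from the system catalogs directly, without echoing psql's internal queries; a sketch (the view name is a placeholder):

```sql
-- pg_views wraps pg_get_viewdef() and shows the SELECT behind each view:
SELECT definition FROM pg_views WHERE viewname = 'your_view';

-- Or call pg_get_viewdef() directly; the second argument pretty-prints:
SELECT pg_get_viewdef('your_view'::regclass, true);
```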
Re: [ADMIN] Question: 2 GB file limit on linux
Hi Will, there is not much reason for concern. Firstly, the limit is not on the size of database that postgres can handle; it is on the size of a single file that can be created on the filesystem, and on the total size of the filesystem. We have a database of 18 GB and compressed dump files are between 2-3 GB. On a modern Linux system the maximum size of a single file, and of the filesystem, is quite large; it depends on the filesystem type, architecture (32-bit or 64-bit), glibc version and (maybe) other things. Refer below for some info: http://linuxreviews.org/sysadmin/filesystems/ It may be a good idea to VERIFY the largest size of file that you can create, because that is going to limit the size of the database dump file. To verify, you may use the dd command to create a file of, say, 5 GB:

$ dd if=/dev/zero of=test.dat bs=1024 count=5242880
$ ls -lh test.dat

Regds Rajesh Kumar Mallah. On 10/11/05, Will Lewis [EMAIL PROTECTED] wrote: Hi, I sent this request recently but have heard nothing. I'm new to the whole procedure and may be doing this incorrectly. Please advise. Thanks Will Lewis Database Administrator (DBA) Central IT Romney House Bristol City Council (0117 9222736) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
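If the dd verification shows the filesystem cannot hold a file as large as the dump, a common workaround is to pipe the dump through split and reassemble with cat at restore time. A self-contained sketch, with dd standing in for pg_dump so it runs anywhere (sizes are scaled down for the demo; in practice you would use something like split -b 1024m):

```shell
WORK=$(mktemp -d) && cd "$WORK"
# Stand-in for "pg_dump mydb": 64 KB of data instead of a real dump.
dd if=/dev/urandom of=fake_dump.sql bs=1024 count=64 2>/dev/null
# In practice: pg_dump mydb | split -b 1024m - dump_part_
split -b 16k fake_dump.sql dump_part_
# Restore side would be: cat dump_part_* | psql mydb
# (split's suffixes keep the pieces in order). Verify byte-for-byte:
cat dump_part_* > reassembled.sql
cmp -s fake_dump.sql reassembled.sql && echo "reassembly OK"
```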
[ADMIN] possibly outdated info in pg_stat_activity
Hi, PID 3533 does not exist in the system but it is still in pg_stat_activity. This is PostgreSQL 8.0.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-49). Not sure if this is an already-known issue; advance apologies if so.

tradein_clients=# SELECT procpid, query_start, substring(btrim(current_query),0,50) from pg_stat_activity order by query_start limit 20;
+---------+----------------------------------+---------------------------------------------------+
| procpid | query_start                      | substring                                         |
+---------+----------------------------------+---------------------------------------------------+
|    3533 | 2005-10-16 22:26:21.981728+05:30 | select prod_serv,memb_affil,eyp,pacode,estd,entry |
|    3649 | 2005-10-16 22:26:33.168702+05:30 | <IDLE> in transaction                             |
|   31579 | 2005-10-17 10:19:21.185764+05:30 | <IDLE>                                            |
|   32142 | 2005-10-17 10:21:46.696032+05:30 | <IDLE>                                            |
|   31715 | 2005-10-17 10:21:57.676293+05:30 | <IDLE>                                            |
|    3508 | 2005-10-17 10:22:12.500686+05:30 | <IDLE>                                            |
|     830 | 2005-10-17 10:23:04.34449+05:30  | <IDLE>                                            |
|   32197 | 2005-10-17 10:23:05.458473+05:30 | <IDLE>                                            |
|   31210 | 2005-10-17 10:23:20.55451+05:30  | <IDLE>                                            |
|     751 | 2005-10-17 10:24:33.519166+05:30 | <IDLE>                                            |
|   31908 | 2005-10-17 10:24:51.839088+05:30 | <IDLE>                                            |
|   31651 | 2005-10-17 10:24:53.896042+05:30 | <IDLE>                                            |
|    8777 | 2005-10-17 10:24:54.074668+05:30 | <IDLE>                                            |
|   31338 | 2005-10-17 10:25:28.248292+05:30 | <IDLE>                                            |
|    7547 | 2005-10-17 10:25:36.473533+05:30 | <IDLE>                                            |
|   10387 | 2005-10-17 10:26:00.250297+05:30 | <IDLE>                                            |
|    1399 | 2005-10-17 10:26:02.775205+05:30 | <IDLE>                                            |
|     867 | 2005-10-17 10:26:13.393084+05:30 | <IDLE>                                            |
|   10543 | 2005-10-17 10:26:14.258694+05:30 | <IDLE>                                            |
|    8680 | 2005-10-17 10:26:19.020959+05:30 | <IDLE>                                            |
+---------+----------------------------------+---------------------------------------------------+
(20 rows)

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[ADMIN] Is pgdump_all == pg_dumpall -g + pg_dump of individual databases ?
Hi, I suppose pg_dumpall creates an SQL script that is capable of recreating the entire database cluster except for large objects. However, pg_dumpall produces one single large SQL file and is not useful (easy) if one has to restore a particular table. For this reason we back up our databases individually using the custom format and use pg_restore to restore individual objects as and when required. My doubt is: are the individual database backup files created through the -Fc option, along with an SQL file created using pg_dumpall -g, sufficient to recreate the entire cluster? Regds Mallah. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [ADMIN] Backing up several tables using pg_dump -t
man pg_dump seems to say it is not possible:

-t table, --table=table
Dump data for table only. It is possible for there to be multiple tables with the same name in different schemas; if that is the case, all matching tables will be dumped. Specify both --schema and --table to select just one table. Note: In this mode, pg_dump makes no attempt to dump any other database objects that the selected table may depend upon. Therefore, there is no guarantee that the results of a single-table dump can be successfully restored by themselves into a clean database.

On 9/23/05, Tomeh, Husam [EMAIL PROTECTED] wrote: I'm having problems backing up several tables using pg_dump. When selecting one table only with the '-t' option, it works just fine. When I tried to include several tables, pg_dump fails with an error that the tables specified don't exist. Can you show me an example of backing up several tables using pg_dump. Thanks in advance, -- Husam ---(end of broadcast)--- TIP 6: explain analyze is your friend ** This message contains confidential information intended only for the use of the addressee(s) named above and may contain information that is legally privileged. If you are not the addressee, or the person responsible for delivering it to the addressee, you are hereby notified that reading, disseminating, distributing or copying this message is strictly prohibited. If you have received this message by mistake, please immediately notify us by replying to the message and delete the original message immediately thereafter. Thank you. FADLD Tag ** ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[ADMIN] Limiting postmaster to listen in a particular ip address only
Hi, I have two NICs in my machine and I want the postmaster to listen only on the internal NIC. I cannot limit it to Unix-domain sockets only because it needs to be accessed from other machines inside the LAN. Is there any way to accomplish this, or is a restrictive pg_hba.conf the only solution? Most servers (like bind, apache) provide the facility of accepting connections only on particular IP addresses/interfaces; should postmaster also provide the same (in case it does not already)? Regds mallah. -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+ ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
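For the record, postmaster does provide this. A postgresql.conf sketch (the address shown is an assumption for illustration):

```
# 8.0 and later: bind only to the internal interface
listen_addresses = '192.168.1.10'    # default is 'localhost'

# 7.x equivalent of the same setting:
tcpip_socket = true
virtual_host = '192.168.1.10'
```

Restricting the listen address and a restrictive pg_hba.conf are complementary: the former controls which interfaces accept connections at all, the latter who may authenticate once connected.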
Re: [ADMIN] Upgrading to 7.4.5
Vishal Kashyap @ [Sai Hertz And Control Systems] wrote: Dear Rajesh, [quoting Rajesh:] The release notes of upgrades usually indicate whether initdb is required or not. Generally when the last digit of a version changes it does not require a dump and reload. In this case the release notes say: http://www.postgresql.org/docs/7.4/static/release.html#RELEASE-7-4-5 E.1.1. Migration to version 7.4.5: A dump/restore is not required for those running 7.4.X. [Vishal:] But migration from 7.4.2 requires an initdb. Check the release notes of 7.4.3.

The release notes do not seem to say anything like that; can you tell me where you read it? http://www.postgresql.org/docs/7.4/static/release-7-4-3.html E.3.1. Migration to version 7.4.3: A dump/restore is not required for those running 7.4.X. However, http://www.postgresql.org/docs/7.4/static/release-7-4-2.html does say that dump/initdb/reload is an option to fix 2 errors. These two errors can also be fixed manually without dump/reload, which is described in the same URL. Since Chris is already using 7.4.2 he may have already fixed it if the data were imported from 7.4.2. If the original database was created using 7.4.2, dump/reload should not be required for upgrading to 7.4.5. It is a great timesaver to skip dump/initdb/reload, especially when GBs of data are involved: not just in terms of time but also the minor manual tweaks that sometimes need to be done to dump files. Regds Mallah.

Will a DB created under 7.4.2 work with 7.4.5, or do I need to back it up and restore it to a newly created 7.4.5 DB? Chris White -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] db rename
Dear Jodi, in psql: \h CREATE DATABASE

Command: CREATE DATABASE
Description: create a new database
Syntax:
CREATE DATABASE name [ [ WITH ] [ OWNER [=] dbowner ] [ LOCATION [=] 'dbpath' ] [ TEMPLATE [=] template ] [ ENCODING [=] encoding ] ]

You might consider using the TEMPLATE=template option; by default it is template1. If you put an existing database name there, a replica of the specified database is made. Hope it helps. Regds Mallah. Jodi Kanter wrote: I thought there was an option to rename a database? I cannot seem to find the syntax in any of my books or on the web site. Am I incorrect? We are running 7.3.4. We have to bring our system down to upgrade the release this afternoon. There are a number of db changes taking place, so I was thinking that instead of taking the system down for an extended amount of time I could build another database with all our new items and then just rename it when we are ready. Is there a rename option out there for our version? thanks Jodi -- Jodi L Kanter BioInformatics Database Administrator University of Virginia (434) 924-2846 [EMAIL PROTECTED] -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+
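The template-copy approach described above can be sketched as follows (database names are illustrative; the source database must have no active connections while it is being copied):

```sql
-- Make a replica of the old database under the new name:
CREATE DATABASE newdb TEMPLATE olddb;
-- After verifying newdb is complete:
DROP DATABASE olddb;
```

An ALTER DATABASE ... RENAME TO command was added in a later release, but on 7.3 the copy-and-drop route above is the available option.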
[ADMIN] 'IDLE in transaction' problem in mod_perl/DBI/DBD-Pg environment.
Hi, we run our webserver using mod_perl and perl/DBI/DBD-Pg. I have observed that most of the pg backends that block other queries are in the 'IDLE in transaction' state. Usually sending a -INT or -TERM to such blocking backends solves the problem. My questions are: 1. Are there some precautions to be taken in perl/DBI programming in a mod_perl environment to avoid backends getting into 'IDLE in transaction' mode? 2. Is it advisable/safe to run a daemon that TERMs such blocking backends at regular intervals? 3. Most importantly, if I do not have access to the host, is it possible to TERMinate such backends from psql? Regds Mallah. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
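On question 3, the offenders can at least be located from psql; a sketch using the pre-9.2 column names (procpid, current_query) that match the servers in this thread:

```sql
-- Find sessions sitting idle inside an open transaction:
SELECT procpid, usename, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY query_start;

-- A superuser can cancel a backend's running query from psql:
SELECT pg_cancel_backend(procpid) FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';  -- illustrative only
```

Note that pg_cancel_backend() cancels only an in-progress query; an idle-in-transaction session has none, so actually ending the session needs pg_terminate_backend() (8.4+) or shell access for kill -TERM. On the DBI side, the usual precaution is to commit or roll back promptly (or run with AutoCommit on) so handles cached by Apache::DBI are not parked mid-transaction.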
Re: [ADMIN] Upgrading to 7.4.5
The release notes of upgrades usually indicate whether initdb is required or not. Generally when the last digit of a version change it does not require a dump and reload. In this case the release notes say: http://www.postgresql.org/docs/7.4/static/release.html#RELEASE-7-4-5 E.1.1. Migration to version 7.4.5 A dump/restore is not required for those running 7.4.X Regds mallah. Chris White (cjwhite) wrote: Will a DB created under 7.4.2 work with 7.4.5, or do I need to back it up and restore it to a newly created 7.5.4 DB? Chris White -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [ADMIN] data not getting inserted into table
Are all the insert operations inside a transaction? i.e., do you notice a begin; or begin work; line in the initial part of the .sql file? Then there should be a commit; line at the end of the SQL file, otherwise the transaction will roll back. In case you are inserting from a program, you must send a commit instruction to the server; in DBI/perl, e.g., it's $dbh->commit(). regds mallah. akanksha kulkarni wrote: Hi, I am trying a simple insert command with a select statement (Insert into table (column list) select column list from table). The data to be inserted is around 1 GB. The problem is that at the end of the insert operation, the data is NOT inserted. I was monitoring the target table file size and it was increasing during the operation. However, at the end of the insert operation, the size has returned to the old size. I changed some parameters in the postgres configuration file, which are as follows: checkpoint_segments=20 sort_mem = 16384 checkpoint_timeout=500 This I did because during earlier operations I was getting the message "too many checkpoints. Consider increasing checkpoint_segments". So please can someone tell me why data is not getting inserted into the table? Thanks, Akanksha Kulkarni __ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com ---(end of broadcast)--- TIP 8: explain analyze is your friend -- regds Mallah. Rajesh Kumar Mallah +---+ | Tradeindia.com (3,11,246) Registered Users | | Indias' Leading B2B eMarketPlace | | http://www.tradeindia.com/ | +---+ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
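The pattern being described can be sketched as follows (table and column names are illustrative): without the final COMMIT, everything since BEGIN is discarded when the session ends, which matches the "file grew, then shrank back" symptom.

```sql
BEGIN;
INSERT INTO target_table (col1, col2)
SELECT col1, col2 FROM source_table;
COMMIT;   -- omit this and the insert silently rolls back at disconnect
```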
Re: [ADMIN] weird behavior after pg_resetxlog -> dump -> initdb -> reload.
Tom Lane wrote: Rajesh Kumar Mallah [EMAIL PROTECTED] writes: Yep, the problem of the original posting could be replicated on disabling hash aggregates. On disabling I could get the repeated rows. Okay. What I suspect is happening is that there are entries in the column that are equal according to the datatype's comparison function, but are not bitwise equal and therefore yield different hash codes. This makes it a crapshoot whether they are seen to be equal or not when hash aggregation is used. We identified a similar bug in the inet/cidr datatypes just a few weeks ago. What exactly is the datatype of the "name" column?

name | character varying(120) | not null

If it's a text type, what database encoding and locale settings (LC_COLLATE/LC_CTYPE) are you using?

List of databases (relevant row):
+------+----------+----------+
| Name | Owner    | Encoding |
+------+----------+----------+
| bric | postgres | UNICODE  |
+------+----------+----------+

+-------------+-----------------+
| name        | setting         |
+-------------+-----------------+
| lc_collate  | en_US.UTF-8     |
| lc_ctype    | en_US.UTF-8     |
| lc_messages | en_US.iso885915 |
| lc_monetary | en_US.iso885915 |
| lc_numeric  | en_US.iso885915 |
| lc_time     | en_US.iso885915 |
+-------------+-----------------+

Can you investigate exactly what's stored within each of these groups of matching names? Can you tell me how to do it please? regards, tom lane Regds Mallah.
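One way to carry out the investigation Tom asks for is to compare lengths and hashes of the supposedly-equal values; a hedged sketch ('163.com' is one of the duplicated names reported in this thread):

```sql
-- Values that compare equal per locale but are not bitwise identical
-- will show differing octet_length or md5 here:
SELECT name, length(name) AS chars, octet_length(name) AS bytes,
       md5(name) AS raw_hash
FROM users
WHERE name = '163.com';
```

If the rows in one "equal" group come back with different byte lengths or hashes, that confirms the not-bitwise-equal hypothesis above.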
[ADMIN] weird behavior after pg_resetxlog -> dump -> initdb -> reload.
I recovered my database by pg_resetxlog and then did a dump, initdb, reload. One of the tables exhibited this phenomenon. I thought it was worth mentioning to the developers. Regds mallah.

rt2=# SELECT name from users group by name having count(*) > 1 limit 20;
+-------------------+
| name              |
+-------------------+
|                   |
| 163.com           |
| [EMAIL PROTECTED] |
| p                 |
+-------------------+
(4 rows)
Time: 1961.199 ms

rt2=# SELECT name from users group by name having count(*) > 1;
+-------------------+
| name              |
+-------------------+
+-------------------+
(0 rows)
Time: 789.184 ms

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings