Re: [ADMIN] Any way to see what queries are currently running?

2005-08-15 Thread adey
You could try the following query after enabling the
stats_command_string parameter in the postgresql.conf file:-

-- displays current activity in the database
-- postgresql.conf parameter stats_command_string populates the
-- current_query column

select  datid,
        datname as "DB Name",
        substr(procpid,1,6) as "Procpid",
        substr(usesysid,1,5),
        usename,
        current_query as SQL,
        query_start
from    pg_stat_activity
order by procpid

On 8/16/05, Chris Hoover <[EMAIL PROTECTED]> wrote:
> I'm running 7.3.4.  Is there any way to see the actual sql being run by
> the database via a query?  I am having some problems with long running,
> load-causing queries from our reporting system.  I have sql logging
> turned on to log the sql statements to my postgres.log file.  However,
> these reporting queries do not appear to be logging the entire query.
> I am only getting the select clause (and I'm not sure if it's all of
> it) but no from or where clause.
> 
> Can I pull these queries out from postgres?
> 
> Thanks,
> 
> Chris
> 
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
>

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[ADMIN] A good reference book for Postgresql administration

2005-09-05 Thread adey
Has anyone found the book "PostgreSQL Essential Reference" by Barry Stinson useful in administering Postgresql please? I'm looking for a comprehensive reference rather than a history or introduction to databases and SQL, and would like to know how suitable this book would be.

 
Your opinion would be appreciated.
Adrian


[ADMIN] How to determine date / time of last postmaster restart

2005-09-06 Thread adey
Please let me know if there is a way to determine when the Postmaster was last restarted?
This is for tuning. I have stats_reset_on_server_start turned on, but I don't know when last the Postmaster was started, so I can't determine the period my current system stats cover.
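For what it's worth, PostgreSQL 8.1 added a function for exactly this; on 7.x there is no SQL-level equivalent, so the timestamp on the postmaster's log or pid file is the usual fallback. A sketch:

```sql
-- Available in PostgreSQL 8.1 and later only:
SELECT pg_postmaster_start_time();
```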


[ADMIN] Stats turned on but not being updated

2005-09-08 Thread adey
Please tell me what I'm missing?
My stats Postgres 7.4 parameters are as follows:-
 
stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
stats_reset_on_server_start = true
After turning all of these on, I restarted Postgres, then ran a table query and saw statistics appear in the following view:-
 
pg_statio_all_tables
 
However, since that first test query yesterday (and I've run several since on different tables today - some of them repeated to ensure the buffer should be populated), the statistics haven't changed. Is there another parameter, or more space, or something, that I need to allocate please? Even the stats on the system tables don't appear to change with several users on the system.

 
Adrian


[ADMIN] How much shared memory am I using please?

2005-09-12 Thread adey
A "free" command on my linux server returns the following:-
 

             total       used       free     shared    buffers     cached
Mem:       4077400    3958256     119144          0     104208    1551116
-/+ buffers/cache:    2302932    1774468
Swap:      2000084       4744    1995340

Postgres shared_buffers is set to 16384
SHMMAX is set to 256554432 bytes (245mb)
Why does free show "shared" as 0 consistently, please, and am I really not sharing any memory?
I'm trying to tune my SHMMAX parameter to provide an optimal shared memory segment size in which Postgres shared_buffers can be optimised.
How do I do this please?
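For background, Linux free's "shared" column is essentially obsolete and reports 0 on 2.6-era kernels regardless of actual usage, so it says nothing about Postgres. The SysV segments Postgres actually allocates (bounded by SHMMAX) can be inspected directly; a sketch:

```
# List SysV shared memory segments with owner, size in bytes and attach count;
# the large segment owned by the postgres user is the shared_buffers pool.
ipcs -m
```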


Re: [ADMIN] vacuum message

2005-09-26 Thread adey
That's only 1.8mb of RAM, so I can't see it being a problem if you have no RAM shortage on your server.
On 9/27/05, Gourish Singbal <[EMAIL PROTECTED]> wrote:

 
That's a good idea.

My output after full vacuum is:-

INFO:  free space map: 9 relations, 114736 pages stored; 76592 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 30 pages = 1856 kB shared memory.
VACUUM

I just have a question:
Is the allocated pages value of 30 okay, or do I need to decrease it to 10?
 
regards
Gourish Singbal
On 9/23/05, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
On 23.09.2005 08:01, Adrian Engelbrecht wrote:
> You need to increase your max_fsm_pages to at least 29584 pages to
> accommodate / monitor all free pages.

Better set it to much more to have some room for future growth, 5 maybe.

--
Regards,
Hannes Dorbath

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

--
Best,
Gourish Singbal



Re: [ADMIN] How to determine database activity???

2005-10-30 Thread adey
Try this query that displays current activity:-
 
select  datid,
        datname as "DB Name",
        substr(procpid,1,6) as "Procpid",
        substr(usesysid,1,5) AS "UseSysid",
        usename,
        current_query as SQL,
        query_start
from    pg_stat_activity
order by procpid
 
You may need to turn on some of the parameters in postgresql.conf to populate some of the returned columns in the table (view?). 
On 10/30/05, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
Hi,

Is there a way to actually determine if the database is doing anything
after a query is sent to the backend during *long* lasting queries,
besides waiting and hoping?

--
Groeten,

Joost Kraaijeveld
Askesis B.V.
Molukkenstraat 14
6524NB Nijmegen
tel: 024-3888063 / 06-51855277
fax: 024-3608416
e-mail: [EMAIL PROTECTED]
web: www.askesis.nl

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



[ADMIN] Latest GA (Generally Available) version

2006-01-12 Thread adey
Please tell me where I can regularly find what the current generally available (not beta) version of Postgres is, and what the latest development version is?


[ADMIN] Delete performance

2006-02-20 Thread adey
Please give me some guidance?
We are attempting many deletes in our production database for the first time, and we're getting nowhere fast.
The SQL runs for more than 12 hours to delete 2 million rows, and hasn't finished each time we've tried it, as we've had to cancel it.
I have tried running queries for locks, current activity, and buffer hits. I can see row locks on the affected tables for the delete PID, but no significant buffer hits or changes in row numbers while it is running. We have fsync set to default (true) with the default 8 WAL buffers. Postgres 7.4.2 is running on Debian on a 4 processor server with 4gb RAM. TOP shows cache increasing slowly, and postmaster using at least 1 CPU at 100%. pg_clog files swap about every 4 hours. We VACUUM (no parms) and ANALYZE daily, but no VACUUM FULL for months. The delete is being performed on a parent table of 11 million rows, related to 5 child tables by foreign keys with ON DELETE CASCADE. We have followed previous advice in this forum and tweaked / increased the "famous" performance parameters in v7 such as effective_cache_size, vacuum_mem and buffer size with associated SHMMAX increase.

 
Where to next please?
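One classic cause of multi-hour cascaded deletes, worth ruling out here, is missing indexes on the referencing columns of the child tables: Postgres does not create these automatically for foreign keys, so each deleted parent row can trigger a sequential scan of every child table. A sketch with hypothetical table and column names:

```sql
-- Hypothetical names: child1/child2 reference parent(id) ON DELETE CASCADE.
-- Without such indexes, every parent delete scans each child table in full.
CREATE INDEX child1_parent_id_idx ON child1 (parent_id);
CREATE INDEX child2_parent_id_idx ON child2 (parent_id);
```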


Re: [ADMIN] Bloated pg_shdepend_depender_index

2006-03-24 Thread adey
Two questions in this regard please?
1) Is tuple theory not the root of this problem?
2) Vacuum does much the same as a traditional database reorg, and online reorgs are a reality now.
 
1) If I understand tuple theory correctly, copies of rows are created through normal Postgres processing that expire after a period, leaving multiple copies of redundant data to be marked dead by vacuum, then removed (or both). Most databases have been built using one copy of a row with sophisticated locking control mechanisms, which Postgres has some of anyway, and the industry has developed methods and designs to exploit locking to best advantage. Even with tuples, locking is still evident in Postgres.

OR
2) Can vacuum full not be redesigned to run online without locking tables and users, like a conventional online reorg? Eg: work on 1 data page at a time instead of locking the whole table, with a shorter period at the end to lock the table, "compress" the remaining populated data pages and release disk space back to the OS; or work on one data file at a time; and have vacuum full per table reduce / tidy up the wraparound value, thereby avoiding a full DB vacuum for longer periods. In this way vacuum could be performed regularly and be less intrusive. Nowadays 24x7 is more of a reality for systems and we can't afford to take systems down for many hours to perform regular maintenance.

 
(It would be extremely helpful to DBAs with little OS experience or access to have more automation in PGAdmin, especially task scheduling and alerting, so SQL can be scheduled in PGAdmin instead of crontab, which is usually a sysadmin function).

 
 
On 3/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
Peter Eisentraut <[EMAIL PROTECTED]> writes:
> Am Freitag, 24. März 2006 05:48 schrieb Tom Lane:
>> Well, the VACUUM FULL algorithm is incapable of shrinking indexes ---
>> the only way is REINDEX, or something else that reconstructs indexes
>> from scratch, such as CLUSTER.  One of the things we need to look into
>> is putting more smarts into VACUUM so that it automatically does
>> something reasonable when faced with extreme cases like these.
> If the user is running VACUUM FULL, he has presumably determined that the
> table is too bloated to be recovered in a graceful way, and quite likely the
> indexes are going to be bloated similarly.  So seemingly one might as well
> launch a reindexing on the table after VACUUM FULL has done its thing.
> Whether that should be automatic is another question but perhaps the advice
> should be documented somewhere?

Actually, I wonder whether VACUUM FULL shouldn't be thrown away and
replaced by something else entirely.  That algorithm only really works
nicely when just a small percentage of the rows need to be moved to
re-compact the table --- if you're moving lots of rows, it makes the
index bloat situation *worse* not better because of the transient need
for index entries pointing to both copies of moved rows.  Lazy VACUUM
has become the de-facto standard for situations where there's not a huge
amount of empty space, and so it's not clear where the sweet spot is for
VACUUM FULL anymore.  If you've got enough disk space, a rewrite (like
CLUSTER or ALTER TABLE) is going to blow the doors off VACUUM FULL,
let alone VACUUM FULL plus REINDEX.  Not to mention that for
sufficiently huge tables, VACUUM FULL fails outright because it runs out
of RAM.

We need to fix CLUSTER to make it MVCC-safe (ie, not discard
recently-dead rows), and it'd be nice to have something like it that
didn't worry about ordering but just did a seqscan of the source table.
Then I'd be inclined to recommend that instead of VACUUM FULL for most
cases of severe bloat.

Unfortunately this all breaks down for shared system catalogs and the
core (nailed-in) catalogs, because we can't change their relfilenodes
and so the crash-safe CLUSTER/REINDEX approach doesn't work.  We still
need a new idea or two there.

   regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [ADMIN] Bloated pg_shdepend_depender_index

2006-03-26 Thread adey
In vacuum full output, indexes are listed as having been vacuumed along with their table.
Have I misinterpreted this message as saying that vacuum is incapable of vacuuming indexes (and reindex is the only option to do so), please?
On 3/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
> [Tom Lane's reply of 3/25/06 quoted in full; see the same text in the
> 2006-03-24 "Bloated pg_shdepend_depender_index" message above]


[ADMIN] Fwd: Notice Processor hook

2006-03-26 Thread adey
Below is an old message from Tom Lane regarding "Notice Processor hook". I have not been able to contact him to explain it, nor does it appear in the Postgresql.org archives. Would one of you be able to help me to implement this please? We are trying to obtain the output from a vacuum run by automated script.
-- Forwarded message --From: adey <[EMAIL PROTECTED]>Date: Mar 14, 2006 8:51 AMSubject: Notice Processor hook
To: Tom Lane <[EMAIL PROTECTED]>

Hi Tom,
A colleague found this entry from yourself in thescripts.com:-
 
"Adam H.Pendleton" <[EMAIL PROTECTED]> writes:[color=blue]> When running a "VACUUM VERBOSE" query using PQsendQuery,
> PQconsumeInput, and PQisBusy, all the "INFO:" output from the "VACUUM > VERBOSE" command is printed on the terminal, and is not available> through the PGresult structure.[/color]
You could capture that output by setting up a notice processor hookroutine.regards, tom lane 
 
We are having similar problems - please could you tell me how to implement a "Notice Processor hook"?
 
Adrian
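For reference, the hook Tom describes is libpq's PQsetNoticeProcessor; a minimal sketch (connection string, file name and error handling are illustrative only):

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Called once per NOTICE/INFO line instead of libpq's default print to stderr */
static void capture_notice(void *arg, const char *message)
{
    fputs(message, (FILE *) arg);
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=mydb");      /* hypothetical DSN */
    FILE   *log  = fopen("vacuum.log", "w");

    PQsetNoticeProcessor(conn, capture_notice, log);
    PQclear(PQexec(conn, "VACUUM VERBOSE"));        /* INFO lines land in vacuum.log */

    PQfinish(conn);
    fclose(log);
    return 0;
}
```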


Re: [ADMIN] Bloated pg_shdepend_depender_index

2006-04-02 Thread adey
Please could someone help me with my questions below?
On 3/25/06, adey <[EMAIL PROTECTED]> wrote:


> [adey's questions of 3/24 and Tom Lane's reply of 3/25 quoted in full;
> see the earlier "Bloated pg_shdepend_depender_index" messages above]


Re: [ADMIN] Autovacuum ideas

2006-04-15 Thread adey
If autovacuum is to be provided with facilities for individual table and time window controls, that can be done currently using VACUUM FULL on individual tables. Currently this must be scheduled in crontab - a task scheduler in PGAdmin would be very helpful to do this instead of crontab, which usually requires sysadmin access, or is heavily controlled.
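Until such a scheduler exists, the usual workaround is a crontab entry per problem table; a sketch with hypothetical table and database names:

```
# Hypothetical: VACUUM FULL ANALYZE one busy table nightly at 01:00
0 1 * * *  vacuumdb --full --analyze --table mytable mydb
```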

On 4/13/06, Chris Hoover <[EMAIL PROTECTED]> wrote:

This is what we are doing today (except we are on Linux :) ).  However, since there is talk of making autovacuum on by default and the advantages of having it run, it would be nice to be able to block out hours that it can not run.  It would also be very nice to have the database take care of all of this via the config file and/or tables.  That way, no external programs (cron or what ever) would be needed. 
Even nicer would be to have the time exclusion granular enough to be able to exclude only certain tables. That way autovacuum can do its work as needed on the majority of the tables, but tables that the DBA knows will cause problems during business hours could be excluded until the slow time.
Any comments on this? 
Chris

On 4/11/06, Anthony Ransley <[EMAIL PROTECTED]> wrote:
Hi Chris Hoover,

There is already a method for this! Can I suggest that you set an O/S
schedule to simply run vacuumdb.exe at the desired time, preferably late
evening, as the vacuum of large databases can take all night, based on
my DBA experience. Don't forget to set up the use of password files, as
vacuumdb.exe will require a password and user account each and every
time it runs.

Anthony.

Chris Hoover wrote:
> I have an idea/request for the autovacuum daemon.
>
> As far as I can tell, there is no way to tell the daemon to only do
> its work between certain hours. Could this be added as an option? In
> my situation, I have tables that if vacuumed during the day would
> cause a noticeable slowdown to my users and has therefore been deemed
> unacceptable by management. This is one thing that is preventing me
> from turning on autovacuum. However, due to the activity on them,
> they really need to be vacuumed daily. It would be nice to be able to
> tell autovacuum to run, but have a way to tell it to always sleep
> during these hours (i.e. 7am to 5pm daily). Has there been any
> thought into a feature like this?
>
> Thanks,
> Chris


[ADMIN] Where do I execute the DISCONNECT command?

2006-06-27 Thread adey
Please could you tell me where I must execute the following to disconnect idle transactions in Postgres:-
 
EXEC SQL DISCONNECT [connection-name];
 
I've tried variations of this syntax in PGAdmin, psql, and the Linux command line in both v8.1.4 and v7.4.2 - all of which fail.
Could you also tell me what "connection-name" is? I've assumed it's the PID as listed in pg_stat_activity, but haven't been able to test this as I can't get the syntax and environment correct.
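For what it's worth, EXEC SQL DISCONNECT is embedded-SQL (ECPG) syntax: it only works inside a C program run through the ecpg preprocessor, which is why it fails in psql and PGAdmin, and connection-name there is the name given in EXEC SQL CONNECT, not a PID. The closest server-side tool in the 8.x series is query cancellation by PID (the procpid from pg_stat_activity; 12345 below is a hypothetical value):

```sql
-- Cancels that backend's current query; it does not close the connection.
SELECT pg_cancel_backend(12345);
```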


[ADMIN] pgpool

2006-07-05 Thread adey
Please could someone give me an idea of what pgpool is, and where I can research it?
I have run a search on postgresql.org and found many references, but they don't explain what it is, and it doesn't appear in the documentation index of the v8.1 manuals.


[ADMIN] PGAdmin "Jobs"

2006-07-06 Thread adey
Please assist me in implementing this for the first time?
I have installed Postgres 8.1.4 on my laptop, including PGAdmin, then accessed the database "postgres" from PGAdmin 1.4.2 and run the "pgagent.sql" script to populate the database with the following result:-

 

NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobagent_pkey" for table "pga_jobagent"
NOTICE:  CREATE TABLE will create implicit sequence "pga_jobclass_jclid_seq" for serial column "pga_jobclass.jclid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobclass_pkey" for table "pga_jobclass"
NOTICE:  CREATE TABLE will create implicit sequence "pga_job_jobid_seq" for serial column "pga_job.jobid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_job_pkey" for table "pga_job"
NOTICE:  CREATE TABLE will create implicit sequence "pga_jobstep_jstid_seq" for serial column "pga_jobstep.jstid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobstep_pkey" for table "pga_jobstep"
NOTICE:  CREATE TABLE will create implicit sequence "pga_schedule_jscid_seq" for serial column "pga_schedule.jscid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_schedule_pkey" for table "pga_schedule"
NOTICE:  CREATE TABLE will create implicit sequence "pga_exception_jexid_seq" for serial column "pga_exception.jexid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_exception_pkey" for table "pga_exception"
NOTICE:  CREATE TABLE will create implicit sequence "pga_joblog_jlgid_seq" for serial column "pga_joblog.jlgid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_joblog_pkey" for table "pga_joblog"
NOTICE:  CREATE TABLE will create implicit sequence "pga_jobsteplog_jslid_seq" for serial column "pga_jobsteplog.jslid"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "pga_jobsteplog_pkey" for table "pga_jobsteplog"

Query returned successfully with no result in 1407 ms.
...but these objects don't appear in PGAdmin under the postgres database. If I rerun all or part of this script, it says the object already exists, and I can see the objects through psql from the command line, eg: select * from pgagent.pga_jobstep.
How do I get these objects to appear in PGAdmin please, and how do I get the schema "Information Schema" to appear too?
 
 
 


Re: [ADMIN] lock a database from new connections, without modifying pg_hba.conf

2006-07-10 Thread adey
Can pg_hba.conf be manipulated in such a way as to make Postgres "read only" to obtain a complete and full backup please, and if so, how?
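Worth noting: a read-only mode is usually unnecessary for a consistent backup, since pg_dump and pg_dumpall run inside a single serializable transaction and therefore see one consistent MVCC snapshot even while other sessions keep writing. A sketch (user and file name illustrative):

```
pg_dumpall -U postgres > full_backup.sql
```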
On 5/19/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Marc G. Fournier" <[EMAIL PROTECTED]> writes:
> Is that possible?  I've checked the docs for 8.1, and am not finding> anything, nor anything in contrib ...> the best I've been able to think of so far is to modify pg_hba.conf to not> allow new connections for the duration of the operations I need to perform
> (drop and create a database) ...Not sure I understand what you need.  DROP DATABASE already locks outnew connections.   regards, tom lane---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate  subscribe-nomail command to [EMAIL PROTECTED] so that your  message can get through to the mailing list cleanly



Re: [ADMIN] Performance Slowly Decreasing As Database Grows

2006-07-10 Thread adey
Check the stats at the end of your vacuum to ensure your max_fsm_pages (free space map) is large enough. Also check work_mem and maintenance_work_mem are not running at defaults that may be too small. If you have many updates, increase the number of wal_buffers and checkpoint_segments.
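As a hedged illustration only (the right numbers depend on RAM and workload), the parameters mentioned above live in postgresql.conf and might look like:

```
# Illustrative values, not recommendations
max_fsm_pages = 200000          # size from the "pages needed" line of VACUUM VERBOSE
work_mem = 16384                # kB per sort/hash (called sort_mem before 8.0)
maintenance_work_mem = 131072   # kB, used by VACUUM and CREATE INDEX
wal_buffers = 64
checkpoint_segments = 16
```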

On 7/11/06, Lane Van Ingen <[EMAIL PROTECTED]> wrote:
I am using Postgresql 8.1.4 on Windows 2003; platform is Dell Precision 330,
1.8 Ghz CPU, 1 gByte of RAM. This database is subject to 'vacuum full analyze'
once / day.

I am watching a recently created database grow; as it grows, I am finding
that some of the performance statistics appear to be falling out of bed. The
functions I have used to capture this information are the standard functions
that come with PostgreSQL:

   date  commits  rollbks  dsk_reads  mem_reads  pct_mem_hits  db_size
2006-06-19  9411510264  553053905  13126498559  95.9600  "1674MB"
(server restarted)
2006-06-20  438360026  24129603  1162150532  97.9700  "1471MB"
2006-06-21  917979927  56084142  2456950412  97.7700  "1587MB"
2006-06-22  1444711128  89452397  3875993962  97.7400  "1710MB"
2006-06-23  2023394647  128309666  5553425498  97.7400  "1858MB"
2006-06-26  34252036  4765  237496776  9024547005  97.4400  "2218MB"
2006-06-27  40290065403368  273062334  9156477077  97.1000  "2401MB"
2006-06-28  46436389870211  307983449  9260208418  96.7800  "2497MB"
2006-06-29  52251198  1352370  348552701  9367093206  96.4100  "2575MB"
2006-06-30  58585373  1839034  392822069  9477661079  96.0200  "2647MB"
2006-07-03  75747589  3318388  551767504  981632  94.6800  "2815MB"
2006-07-05  87768328  4524047  678763032  10076029919  93.6900  "2973MB"
(server restarted - shared_buffers changed)
2006-07-06  4841006326389  50641814121507743  70.5800  "3031MB"
2006-07-07  10711794732981  113816094274683161  70.7000  "3076MB"
2006-07-10  19428743  1300797  287848710517770353  64.2700  "3452MB"

My questions concerning these statistics:

(1) This application monitors networks. On 6/26, it began monitoring a number
    of new network nodes (traffic increase of 40 - 50%); but the application
    itself has not been changed, either in terms of number of users, program or
    database changes, or other changes. Yet, the rollback column has increased
    substantially.
    QUESTION: is there a database setting that can affect this statistic?

(2) I noticed that pct_mem_hits was dropping in early July at about the same
    time that the platform started monitoring additional network nodes. On late
    July 5th, increased shared_buffers from default value of 1000 to 1500, but
    pct_mem_hits continues to decline.
    QUESTION: I evidently touched the wrong thing. What should I be adjusting
    to help keep the database in memory?

All other statistics appear to be normal.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster



Re: [ADMIN] PG slowdown

2006-07-12 Thread adey
If you turn fsync off in the .conf file, it is my understanding that WAL is still written but no longer flushed to disk at commit time, so you may lose recently committed transactions in the event of a failure (which seems contrary to safe database practice).

Gut feel to me says your checkpoint_segments and timeout are too high.
On 7/12/06, Abu Mushayeed <[EMAIL PROTECTED]> wrote:
Hello,

My hardware:
CPU: Intel 3 GHZ X 4
Memory: 9 GB
LINUX

My conf file settings are as follows:

shared_buffers = 1
wal_buffers = 128
vacuum_mem = 256000
sort_mem = 32768
max_fsm_pages = 300
max_fsm_relations = 32768
checkpoint_segments = 2048
checkpoint_timeout = 1800
checkpoint_warning = 300
effective_cache_size = 262144
random_page_cost = 2
default_statistics_target = 250

Some of my queries that used to run within a minute, before I changed the
"default_statistics_target" from 10 to 250, now run for 20-30 minutes. Some
queries did see improvement in their run time.

I have two questions:
1. Why did the other queries slow down? I tried to tune them; as there is a
join between two tables and it reads all the rows, I forced it to do a seq
scan rather than index scan but it quadrupled its run time. Why?
2. How can I disable the logging of updates? What are the pitfalls? When an
update or insert is happening there is a lot of logging going on and the
system writes messages like this: "2006-07-11 10:08:54 [2330] LOG:
checkpoints are occurring too frequently (182 seconds apart) HINT:  Consider
increasing the configuration parameter "checkpoint_segments"."

Thank you for the help.
Abu

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[ADMIN] Where are the temporary work / sort files please

2006-07-12 Thread adey
The postgresql parameter explanation for work_mem say :-
 
"Another way to set this value is to monitor the Postgres temp files(in PGDATA/base/DB_OID/pgsql_tmp) and adjust sort_memupward if you see a lot of queries swapping from these temp files."
 
...but I can't find pgsql_tmp in my v8.1.4 installation.
Where can I find these temp files please?
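One likely explanation: pgsql_tmp is created on demand, appearing only once a query has actually spilled a sort or hash to disk, so on a lightly loaded installation there may be nothing to find yet. A sketch for locating any that exist (assuming $PGDATA points at the data directory):

```
find "$PGDATA"/base -type d -name pgsql_tmp
```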
 


[ADMIN] Recycle error logs

2006-07-15 Thread adey
Please advise if there is a way to start a fresh Postgresql error / output log at regular intervals?
We are tuning a "new toproduction" system, so I have all the SQL logged, but the file gets too big and cumbersome, so I'd like to start a new one each morning without restarting Postgres.


[ADMIN] Tables not restored from V7.4 to v8.1.4

2006-07-17 Thread adey
We recently backed up our production database (dumpall) on Postgres v7.4 and restored on v8.1.4. All of the tables restored except three. The tables were created, but their data wasn't applied. I can see the tables in the dump file, and no errors were reported in the restore. What am I missing please, and what should I check next?


Re: [ADMIN] Recycle error logs

2006-07-17 Thread adey
Many thanks to all of you that helped with this.
On 7/18/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Sat, 2006-07-15 at 02:37, adey wrote:
> Please advise if there is a way to start a fresh Postgresql error /
> output log at regular intervals?
> We are tuning a "new to production" system, so I have all the SQL
> logged, but the file gets too big and cumbersome, so I'd like to start
> a new one each morning without restarting Postgres.

As mentioned previously, it's built into 8.0 and up.  If you're stuck on
a version before that, you can use apache's log rotator to do the job.
Something like this:

pg_ctl start | /usr/sbin/rotatelogs /logs/pgsql 86400
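For completeness, the built-in rotation available from 8.0 is configured in postgresql.conf; in the 8.0/8.1 series it might look like this (values illustrative):

```
redirect_stderr = on                      # renamed logging_collector in 8.3
log_directory = 'pg_log'
log_filename = 'postgresql-%Y-%m-%d.log'
log_rotation_age = 1440                   # minutes; start a new file daily
```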



Re: [ADMIN] Tables not restored from V7.4 to v8.1.4

2006-07-19 Thread adey
We found corrupt data today in production that cannot be converted to UTF8. We are in the process of correcting the data values. Thanks for your response.
On 7/19/06, Aaron Bono <[EMAIL PROTECTED]> wrote:

On 7/17/06, adey <[EMAIL PROTECTED]> wrote:

We recently backed up our production database (dumpall) on Postgres v7.4 and restored on v8.1.4. All of the tables restored except three. The tables were created, but their data wasn't applied. I can see the tables in the dump file, and no errors were reported in the restore. What am I missing please, and what should I check next?

You say you see the tables in the restore.  Do you see the data in the restore file as well?

==
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
==


Re: [ADMIN] UTF8 characters

2006-07-25 Thread adey
I had the same problem when upgrading v7.4 database to v8, SQL-ASCII to UTF8 - we had to replace single backslashes with double backslashes in the v7 database to get the data to display correctly, then dump and restore in v8 / UTF8. Have we done something wrong please?

On 7/22/06, Peter Eisentraut <[EMAIL PROTECTED]> wrote:
PostgreSQL Admin wrote:
> I had my database set to SQL_ASCII and switched to UTF8, but now I
> notice that I must add a slash for periods/dots ( \. vs . ) to insert
> into varchar.   Is this normal?

No.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
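As a side note on the backslash doubling mentioned above: in releases before 8.2 (which introduced standard_conforming_strings), a backslash inside a string literal is an escape character, so a literal backslash must be written doubled. A minimal sketch, with a hypothetical value:

```sql
-- With escape-style literals (the only kind before 8.2), '\\' stores a
-- single backslash; a lone '\' would instead escape the next character.
SELECT 'C:\\reports\\daily' AS path;   -- stored value: C:\reports\daily
```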


Re: [ADMIN] Task/Jobs issue

2006-07-25 Thread adey
If you manage to get "Jobs" working, please could you help me get mine going?
On 7/21/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:

Hi Michael,

You can try using an addon like pgjobs
(http://gborg.postgresql.org/project/pgjobs/projdisplay.php)

Thanks,
Shoaib Mir
EnterpriseDB

On 7/21/06, Michael Grossman <
[EMAIL PROTECTED]> wrote: 




 
Hi!
 
I am using PostgreSQL 8.1.3 Windows version and pgAdmin 1.4.3
 
I cannot find Jobs section for schedule tasks?
 
Please assist
 
Best regards,Michael Grossman

 
New Concept Travel Technologies Ltd
Mobile: (972) 54 555 7383

Mobile: (972) 50 4284 555
Tel  :  (972)  8 971 6890
Fax  :  (972)  8 926 8049
Skype: Michael.grossman
Email: [EMAIL PROTECTED]
URL  :
http://www.newconcept.co.il
 


Re: [ADMIN] VACUUM ANALYZE suddenly taking forever

2006-07-27 Thread adey
Try running pgstattuple on some of the tables that have not had vacuum full, and some of those that have. It will tell you how many dead tuples there are in the table, which is an indicator of how seriously the table needs a full vacuum.
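pgstattuple ships as a contrib module; once its functions are installed in the database, the per-table dead-tuple picture looks like this (the table name is a placeholder):

```sql
-- Reports tuple_count, dead_tuple_count, dead_tuple_percent and
-- free_space for the named table:
SELECT * FROM pgstattuple('public.mytable');
```

A high dead_tuple_percent on a table is the sign that it needs a more aggressive vacuum.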

 
Run ipcs on the Unix command line to see how much of your SHMMAX is being used. You may have reached a threshold in one of the 6 Postgres parameters that use SHMMAX, causing a slowdown; for example max_fsm_pages (see the report at the end of a database-wide vacuum full output).
 
On 7/28/06, Nolan Cafferky <[EMAIL PROTECTED]> wrote:
Synopsis: VACUUM ANALYZE on full database used to take just a few
minutes, now it takes several hours, with no apparent improvement in
successive runs.

Details:
I have a production database server hosting two heavily used databases
and not much else.  We're currently running postgres 8.0.8.  Normally we
have a VACUUM ANALYZE run nightly on both databases, which only takes a
couple of minutes each to complete.  We also have a report that runs
hourly on one of the databases and dumps a large amount of data into a
materialized view.  It normally takes 10-20 minutes (we could probably
optimize it, but it's never made it up the priority list).

Anyway, about two nights ago, the hourly report started running
indefinitely, and we've had to turn it off, after having 16 copies of it
waiting in line for the first to finish.  Since then, VACUUM ANALYZE has
been taking several hours instead of several minutes on both databases.
Yesterday I ran the VACUUM ANALYZE manually on both databases, hoping
that there was just some transient cleanup problem, but we've had the
same results today.

What would cause this, and what can I do to fix it?  For the moment, I'm
going to claim the "we didn't change anything!" mantra - no development
we've done in the past few days seems like it would significantly
influence both databases.  The so far untried ideas I've had are:

* Try out the autovacuum service
* Re-index tables (this hasn't been done for at least months, maybe never)
* Do some selective VACUUM FULL on high-use tables (materialized view
for the report seems like a likely culprit, but also seems like it
wouldn't influence both databases)
* Restart postgres, restart the machine itself, and other useless hand waving

\begin{more-background-information}
* The database server is a quad Opteron, about 2GHz each.  8 GB of RAM,
and a several hard disk RAID.  It's burly.  I believe we're running on a
Gentoo linux installation, although postgres was installed from source.
Again, we're running postgres 8.0.8.  Here's some sample output from a
"vmstat 1 5" that I just ran:

procs ---memory-- ---swap-- -io --system-- --cpu--
r  b   swpd   free   buff  cache   si   so   bi   bo   in   cs us sy id wa
0  0   1208 5658464  0 225638400   554   3441 1 102 83  6
1  0   1208 5640272  0 22739280024   476 1405  1885 123 83  2
1  0   1208 5652368  0 225862800 0   560 1194   663  6 1 91  2
0  0   1208 5653392  0 22591040016   750 1979  4362 154 78  2
1  0   1208 5649744  0 22597160024   661 1651  3114 214 73  2

* Yes, so far we've been doing a direct VACUUM ANALYZE on everything,
plus VACUUM FULL ANALYZE on a few tables, instead of using the
autovacuum service like we should.  It seems like there wouldn't be such
an abrupt change in performance because of that.
* Shortly after killing the 16 or so backed-up reports, the partition
postgres had the data/subtrans directory in filled up, and we had a
bunch of "No space left on device" errors for a minute or two.  The
partitions do deserve some rearranging, but for now we've made some
adjustments and postgres is wallowing in free disk space.
\end{more-background-information}

Suggestions?

--
Nolan Cafferky
Software Developer
IT Department
RBS Interactive
[EMAIL PROTECTED]


Fwd: [ADMIN] vacuuming template0 gave ERROR

2006-07-30 Thread adey

I have the discussion below in which it is advised that template0 should not be touched; however, the XID "age" of my production v8.1.4 template0 and template1 databases, determined with the following query, changes constantly:-

 
select datname, age(datfrozenxid), datfrozenxid, current_date, current_time FROM pg_database
 
If template0 shouldn't be touched (and I don't know of any deliberate access to it), why is this changing please? template0 currently returns -1797761636 (a negative number), and template1 returns 1436634067 and is growing constantly. Should I be doing a vacuum full on them both, and is this activity normal please? I'm concerned the XID / wraparound age will expire eventually.

 
Adrian

-- Forwarded message --
From: Tom Lane <[EMAIL PROTECTED]>
Date: Dec 27, 2005 5:02 AM
Subject: Re: [ADMIN] vacuuming template0 gave ERROR
To: Gourish Singbal <[EMAIL PROTECTED]>
Cc: "pgsql-admin@postgresql.org" <pgsql-admin@postgresql.org>

Gourish Singbal <[EMAIL PROTECTED]> writes:
> Got the following ERROR when i was vacuuming the template0 database.

Why were you doing that in the first place?  template0 shouldn't ever
be touched.

> postgresql server version is 7.4.5

The underlying cause is likely related to this 7.4.6 bug fix:

2004-10-13 18:22  tgl

   * contrib/pgstattuple/pgstattuple.c,
   src/backend/access/heap/heapam.c,
   src/backend/utils/adt/ri_triggers.c (REL7_4_STABLE): Repair
   possible failure to update hint bits back to disk, per
   http://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php.
   I plan a more permanent fix in HEAD, but for the back branches it
   seems best to just touch the places that actually have a problem.

> INFO:  vacuuming "pg_catalog.pg_statistic"
> ERROR:  could not access status of transaction 1107341112
> DETAIL:  could not open file "/home/postgres/data/pg_clog/0420": No such
> file or directory

Fortunately for you, pg_statistic doesn't contain any irreplaceable
data.  So you could get out of this via

   TRUNCATE pg_statistic;
   VACUUM ANALYZE;  -- rebuild contents of pg_statistic
   VACUUM FREEZE;   -- make sure template0 needs no further vacuuming

Then reset template0's datallowconn to false, and get rid of that code
to override it.  And then update to a more recent release ;-)
(I don't recall exactly what rules 7.4 uses, but likely you'll find that
you need to run a standalone backend with -O switch to perform
TRUNCATE on a system catalog.)

   regards, tom lane



Re: Fwd: [ADMIN] vacuuming template0 gave ERROR

2006-07-30 Thread adey
Thanks Tom - datfrozenxid has been the same for at least the last month on template0 and 1
On 7/31/06, Tom Lane <[EMAIL PROTECTED]> wrote:
adey <[EMAIL PROTECTED]> writes:
> select datname, age(datfrozenxid), datfrozenxid, current_date, current_time
> FROM pg_database

> If template0 shouldn't be touched (and I don't know of deliberate access to
> it), why is this changing please?

Because current xid is changing and template0's datfrozenxid is not.
Nonetheless, if template0 is properly frozen then you don't need to
vacuum it, regardless of what age(datfrozenxid) says.

   regards, tom lane


[ADMIN] What's using all my RAM?

2006-08-01 Thread adey
Please could someone tell me how to discover what is using all of my RAM?
I am trying to run a vacuum against Postgres, but it fails immediately with:-
 
"ERROR:  out of memory
DETAIL:  Failed on request of size 1073741820."
 
TOP shows the following:-
Mem:   4077544k total,  3897868k used,   179676k free,   146960k buffers
Swap:  284k total,  440k used,  1999644k free,  3352892k cached
None of the listed processes appear to be using more than 1 or 2% MEM.
 
Postgres v7.4.2 (upgrade underway)
 


Re: [ADMIN] What's using all my RAM?

2006-08-02 Thread adey
Thanks for this.
I couldn't get vacuum to run on even the smallest table, so I re-investigated RAM. We changed our SHMMAX to 1.3gb on this 4gb RAM system, in line with the systems we have already converted to Postgres 8.1.4. But v8 handles RAM differently (vacuum_mem was replaced with maintenance_work_mem) and its SHMMAX usage has changed (v8 uses less SHMMAX - about 300mb - where v7 grabs all 1.3gb and holds it), so when vacuum tried to run and get 1gb of RAM on v7, it failed. We reduced vacuum_mem to 8192kb on v7 and the vacuum runs happily with SHMMAX at 1.3gb.
On 8/3/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Tue, 2006-08-01 at 23:09, adey wrote:
> Please could someone tell me how to discover what is using all of my
> RAM?
> I am trying to run a vacuum against Postgres, but it fails immediately
> with:-
>
> "ERROR:  out of memory
> DETAIL:  Failed on request of size 1073741820."
>
> TOP shows the following:-
> Mem:   4077544k total,  3897868k used,   179676k free,   146960k buffers
> Swap:  284k total,  440k used,  1999644k free,  3352892k cached
> None of the listed processes appear to be using more than 1 or 2% MEM.

Nothing is using your ram in the traditional sense.  Your kernel is
caching files in the space that's not currently being used by
applications.  Note that you show 3897868k used, and 3352892k cached.
So, the actual memory being used by applications is the in use minus the
cached, or about 544,000k.  The rest is actually available, and the
kernel will gladly give it back if a program needs it.

Your actual problem here is that your machine is trying to allocate
approx 1 Gig at once.  Large allocations tend to point towards some kind
of data corruption in your database, but not always.

Try vacuuming each table one at a time to see which one is causing
this error (if it's not already in the error message and got edited
away.)  See if a select of all rows on that table gets the same error
message.

> Postgres v7.4.2 (upgrade underway)

If you can get a clean backup, look into at least 8.0.  There were huge
improvements from 7.4 to 8.0.  8.1 is even more impressive.  (says the
DBA who's still running 7.4.13 on all his boxes... :)
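As a sanity check, the used-minus-cached arithmetic in the reply can be reproduced from the top figures quoted in this thread:

```python
# Figures from the "top" output quoted above (in kB).
used_kb = 3897868     # Mem: ... used
cached_kb = 3352892   # Mem: ... cached

# Memory genuinely held by applications is roughly "used" minus what the
# kernel is merely caching, since cache is surrendered on demand.
app_kb = used_kb - cached_kb
print(app_kb)  # 544976, i.e. "about 544,000k" as the reply says
```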


Re: [ADMIN] how to increase performance of Postgre

2006-08-02 Thread adey
How much RAM do you have, and what is your SHMMAX set to?
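Both figures can be read straight from procfs on a Linux box (this assumes Linux, as in the original poster's setup; the paths are standard kernel interfaces, not Postgres-specific):

```shell
# Total physical RAM, in kB:
grep MemTotal /proc/meminfo

# Kernel limit on a single shared memory segment, in bytes
# (what PostgreSQL's shared_buffers allocation must fit under):
cat /proc/sys/kernel/shmmax
```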
On 8/1/06, Javanesevn <[EMAIL PROTECTED]> wrote:
Hi all,

My project (web application) is using Postgre 8.1 and PHP 4.x (ADO
library to access database).  And I would like to increase performance,
so I changed the configuration as follows:

shared_buffers = 16000  (Default = 1000)
wal_buffers = 64        (Default = 8)

But this setting has no strong effect.  Please show me another way or
your idea to resolve my problem.  Thanks a lot.

Best regards,
-Javanesevn.


Re: [ADMIN] Performance tuning question

2006-08-09 Thread adey
A vacuum full analyze might help.
On 8/8/06, Benjamin Krajmalnik <[EMAIL PROTECTED]> wrote:
Chris,

I just finished running some benchmarks on an underpowered server
compared to the one I am running in production.
My initial tests were run on an empty database, pg_xlog on the same
spindle.  Stored procedure execution speed was ~15 ms.
I then restored the production database so I would have a ~3GB database.
Execution time for the stored procedure went up to about 40 ms average
(with a much higher variance), but with a processing speed of 18 stored
procedure calls per second.
I then moved pg_xlog to a separate spindle.  Execution time went down to
about 17 ms.
Beyond moving pg_xlog to a separate spindle, are there any other things
you can think of which may improve the performance?

> -Original Message-
> From: Chris Mair [mailto:[EMAIL PROTECTED]]
> Sent: Monday, August 07, 2006 4:38 PM
> To: Benjamin Krajmalnik
> Cc: pgsql-admin@postgresql.org
> Subject: RE: [ADMIN] Performance tuning question
>
> > isweb01# vmstat 10
> >  procs  memory  page  disks  faults  cpu
> >  r b w  avm  fre  flt  re  pi  po  fr  sr ad4 ad6  in  sy  cs  us sy id
> >  1 0 0  648368  47052 10322   0   0   0 7505 136   0   0  839 6241 2114  18 10 71
> >  1 0 0  651392  42464 9823   0   0   0 6624   0   0   0  667 5374 1703  16 10 73
> >  0 0 0  648368  42316 9672   0   0   0 6677   0   0   0  652 5290 1674  16 10 74
> >  1 0 0  650300  39840 6843   0   0   0 4695   0   0   0  866 6123 2217  15 10 76
> >  0 0 0  648388  39540 6913   0   0   0 4808   0   0   0  1279 9694 3367  18 10 72
> >  1 0 0  649764  36780 10528   0   0   0 7337   0   0   0  1182 9207 3127  23 11 66
> >  1 0 0  651372  33180 13763   0   0   0 9392   0   0   0  1129 9458 2950  26 13 61
> >  1 0 0  651452  57444 14711   0   0   0 10087 666   0   0  889 8044 2315  23 13 63
> >  1 0 0  650664  55956 12388   0   0   0 8479   0   0   0  773 6791 2006  20 11 68
> >  2 0 0  649632  55152 10621   0   0   0 7256   0   0   0  805 5811 1985  18 11 71
> >
> > I have increased the shared memory by 50%, and temp_buffers to 5000,
> > but no noticeable difference in speed.
> > As I mentioned, the system has 2 drives in RAID-1, so pg_xlog is on
> > the same disk.
> > Would moving pg_xlog to a different disk increase the performance?
> > The server I am currently running this on is a temporary server while
> > I rebuild our main data server which is SCSI.
> > Right now I am going to test a few things on a secondary dev server I
> > set up (old server with IDE).  This one has 2 drives, so I will run
> > some tests with pg_xlog on the same drive and on a separate drive.
>
> Having pg_xlog on another disk than the data itself helps a
> lot for frequent writes/updates.
>
> Still it's not so clear to me *where* exactly your
> performance problem is.  Is it that 40 msec time you
> mentioned?  On *average* your machine doesn't seem to be
> overloaded at all from reading vmstat's output.  Do you need
> to have this call terminate in less than 40 msec, even though
> average load is no problem?  Then you have a responsiveness
> problem, and not an easy one, I'm afraid :/  FreeBSD (or
> Linux) are not real time systems that can guarantee they will
> complete something within msecs.
>
> If this is the case (and I'm a bit guessing here), I'm afraid
> you need to buffer data in the client.
>
> > Also, I will load the data on an empty database as well as a
> > restored database.
> >
> > I really need to find a way to make this faster :(  The monitoring
> > agent which we use has a single logging thread, and if the database
> > does not keep up with it it will stall.
>
> Does it buffer at all?
>
> > Worst case, I will virtualize the monitoring agent, but that will
> > require quite a bit of work on our side.
>
> Bye, Chris.
>
> --
> Chris Mair
> http://www.1006.org
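The pg_xlog move described above is conventionally done with the server stopped: move the WAL directory to the new disk and leave a symlink behind. A sketch of the mechanics on throwaway directories (on a real system the paths would be your actual $PGDATA and the new spindle's mount point, and `pg_ctl stop` must come first):

```shell
# Stand-in directories for demonstration only; substitute real paths
# and STOP the server before doing this for real.
DATADIR=$(mktemp -d)/data          # stands in for $PGDATA
NEWDISK=$(mktemp -d)/fastdisk      # stands in for the second spindle
mkdir -p "$DATADIR/pg_xlog" "$NEWDISK"
echo dummy > "$DATADIR/pg_xlog/000000010000000000000001"   # fake WAL segment

# 1. move the WAL directory onto the new disk
mv "$DATADIR/pg_xlog" "$NEWDISK/pg_xlog"
# 2. leave a symlink behind so postgres still finds pg_xlog where it expects
ln -s "$NEWDISK/pg_xlog" "$DATADIR/pg_xlog"

ls -l "$DATADIR/pg_xlog/"          # segment is reachable through the link
```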



Re: [ADMIN] How to know the sizes of all tables & indexes in a database

2006-08-09 Thread adey
Try:-
 
SELECT relname, relpages, CAST(reltuples AS numeric),
       (relpages::bigint * 8192) / (1024 * 1024) AS mb
FROM pg_class
 
...which will give you space used on disk for all data pages currently in use per table (assuming you have 8k data pages - the default). 
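As an aside for 8.1 and later (the poster's 7.3 predates these functions), the built-in size functions avoid the caveat that relpages is only as fresh as the last VACUUM or ANALYZE:

```sql
-- Per-table on-disk sizes, heap only vs. including indexes and toast:
SELECT relname,
       pg_relation_size(oid)       AS table_bytes,
       pg_total_relation_size(oid) AS total_bytes
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC;
```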
On 8/9/06, Sharma;G.S. <[EMAIL PROTECTED]> wrote:

Hi,

Postgres 7.3
Is there any way to know the size taken up by each table and the indexes in postgres?
I am having a 53 GB database which contains tables, indexes, functions etc; how can I distribute these 53 GB's among the database objects?

Thanks in advance



Re: [ADMIN] vacuumdb exclude tables option?

2006-08-09 Thread adey
Does autovacuum replace the need for a FULL vacuum please (to recover free space, etc)?
On 8/10/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Wed, 2006-08-09 at 13:24, Joel Stevenson wrote:
> Hi,
>
> I have a database that includes both highly transactional tables and
> archive tables - OLTP and OLAP mixed together.  Some of the archival
> tables, which only experience inserts and reads, not updates or
> deletes, contain many millions of rows and so they take a *long* time
> to vacuum.  Is there currently any means to exclude only these tables
> from vacuumdb operations other than explicitly vacuuming all the
> other tables and explicitly skipping the archival tables?
>
> If not, are there any plans to add a command line switch to vacuumdb
> to exclude tables from its operation, ala 'vacuumdb -d mytest -x
> archive_tab1 -x archive_tab2 -x archive_tab3'?  I searched the list
> archives but didn't see anything about being able to do this.
> Something like this would save many hours of useless vacuuming in
> this particular setup.

Does autovacuum fail you in this quest?  I've found it's pretty good
about leaving alone tables that don't need vacuuming.  It's usually the
other way around that people have problems with, autovacuum NOT
vacuuming tables that need vacuuming often enough.


Re: [ADMIN] WAL configuration and REINDEX

2006-09-24 Thread adey
I apologise if I am uninformed, but it is my understanding that REINDEX is still required even with the most regular of vacuums?
On 9/22/06, Jim Nasby <[EMAIL PROTECTED]> wrote:
On Sep 21, 2006, at 6:40 AM, Jose Manuel Garcia Valladolid wrote:
> I have a PostgreSQL 8.1.4 server under Linux Red Hat with several
> databases. The server is configured with WAL archiving turned on.
> As a maintenance process, every night one cron job launches a
> backup process on the server with pg_dump, then the server shuts
> down and starts up and does a REINDEX DATABASE on all databases to
> keep all tables reindexed.
>
> After 30 days of no activity on the server (this is a DBA test
> server) the amount of WAL segments is increasing at very high
> speed. Every REINDEX process generates between 25 and 30 WAL
> segments. With this behavior (and no activity!!) I can not maintain
> this amount of data to perform future WAL backups and recovers.
>
> Any idea to decrease the number of WAL segments generated?
> Is it possible to know when a table would be reindexed and to avoid
> the daily REINDEX process?

I think you're running off some pretty old information.

First, you should only enable WAL archiving if you intend to use
PITR. pg_dump and PITR are completely un-related. In fact, if you've
never taken a base filesystem backup, all those old WAL files are
completely useless.

Second, as long as you are vacuuming frequently enough, there's no
reason you should ever have to REINDEX. You certainly shouldn't be
doing it on a regular basis.
--
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


Re: [ADMIN] WAL configuration and REINDEX

2006-09-24 Thread adey
Thanks Tom.
How does Postgres handle page splits and redirections in indexes then please?
I had an unsuccessful look at :-
http://www.postgresql.org/docs/8.1/interactive/index-functions.html
 
and
 
http://www.postgresql.org/docs/8.1/interactive/indexes-types.html 
On 9/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
adey <[EMAIL PROTECTED]> writes:
> I apologise if I am uninformed, but it is my understanding that REINDEX is
> still required even with the most regular of vacuums?

REINDEX shouldn't be required at all under normal circumstances.

   regards, tom lane


Re: [ADMIN] best way to choose index to cluster on?

2006-09-25 Thread adey
A good guide for a clustered index is:-
Will the index be unique, static and narrow?
If not, avoid clustered indexing, as clustered indexes require extra work on disk to maintain their sequence.
On 9/26/06, Chris Hoover <[EMAIL PROTECTED]> wrote:
I am looking to squeeze a bit more speed out of my database by clustering most of my tables.  However, on tables with multiple indexes, what is the best way to choose the index?  I am thinking I want to use the index with the largest pg_stat_user_indexes.idx_tup_read.  Would this be correct?  If not, what column(s) and views should I be looking at to find the most popular index?

Secondly, I have some partial indexes on a very active table, and it tends to have the highest idx_tup_read for that table.  Is it possible to cluster on a partial index?

Thanks,
Chris
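One way to rank a table's indexes by read activity, along the pg_stat_user_indexes lines mentioned in the question (the table name is a placeholder):

```sql
-- idx_tup_read counts index entries returned by scans of each index;
-- the busiest index sorts first.
SELECT indexrelname, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'mytable'
ORDER BY idx_tup_read DESC;
```

Note these counters only reflect activity since the statistics were last reset, so a short observation window can be misleading.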



Re: [ADMIN] WAL configuration and REINDEX

2006-09-25 Thread adey
Hi Tom, 
Further to my request below, the following recommendation exists in the Postgres 8.1 documentation:-

http://www.postgresql.org/docs/8.1/interactive/routine-reindex.html
 
On 9/25/06, adey <[EMAIL PROTECTED]> wrote:

Thanks Tom.
How does Postgres handle page splits and redirections in indexes then please?
I had an unsuccessful look at :-
http://www.postgresql.org/docs/8.1/interactive/index-functions.html

 
and
 
http://www.postgresql.org/docs/8.1/interactive/indexes-types.html
 

On 9/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
 
adey <[EMAIL PROTECTED]> writes:
> I apologise if I am uninformed, but it is my understanding that REINDEX is
> still required even with the most regular of vacuums?

REINDEX shouldn't be required at all under normal circumstances.

   regards, tom lane


Re: [ADMIN] pg_dump/pg_restore problem

2006-10-05 Thread adey
I had a similar experience when we upgraded from 7.4 to 8.1.4.
When we attempted the restore in 8.1.4 / UTF8, it failed and told us the offending row. We edited the original database to correct the data and retried. There were several offending rows, but fortunately all with the exact same value, so we could search the dump, find all the rows (about 6 of them), correct the original database, pg_dump, and restore successfully.

 
Our problem was that the original database was sql_ascii and didn't care what was loaded (?), whereas 8.1.4 was more strict.
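For locating the bad rows without repeated single-table dumps, a small script can report every line of a plain-text dump whose bytes are not valid UTF-8 (like the 0x80 that pg_restore complained about). This is a sketch; the file name and helper are hypothetical:

```python
import os
import tempfile

def find_bad_lines(path):
    """Return (line_number, offending_byte) pairs for lines that fail UTF-8."""
    bad = []
    with open(path, "rb") as f:
        for lineno, raw in enumerate(f, start=1):
            try:
                raw.decode("utf-8")
            except UnicodeDecodeError as e:
                bad.append((lineno, raw[e.start]))
    return bad

# Demo on a small stand-in for the dump file:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"INSERT INTO t VALUES ('ok');\n"
              b"INSERT INTO t VALUES ('bad \x80 byte');\n")
for lineno, byte in find_bad_lines(tmp.name):
    print(f"line {lineno}: invalid byte 0x{byte:02x}")   # line 2: invalid byte 0x80
os.unlink(tmp.name)
```

The reported line numbers can then be matched against the COPY block that pg_restore names in its CONTEXT message.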
On 10/6/06, Benjamin Krajmalnik <[EMAIL PROTECTED]> wrote:
I have a database which has UTF8 encoding enabled (why?  I am really not
sure why I did this other than the source of the data is windows and I
had some issues with characters > ascii 128 being sent across from some
of the Windows event logs).

The problem which I am having is as follows:
The data is passed via the ODBC driver to a stored procedure, and it
made it successfully into the tables.
I can create a pg_dump without any problem, but pg_restore is giving the
following error:

pg_restore: ERROR: invalid byte sequence for encoding "UTF8": 0x80
CONTEXT: COPY tblksalerts, line 22736

I have tried running pg_dump changing the encoding to Latin1 and Latin9.
When creating the dump, it gives an error that there is no equivalent in
the character set.
The problem is that, as it stands, pg_dump/pg_restore cannot be used to
easily backup/restore.
In the past, I performed single table dumps and ran them so I could
identify which line was the problem, went back to the database, deleted
the offending line, and so forth, but this is a very long process.
I was initially running 8.1.2.  I am now running 8.1.4.  I was hoping
that 8.1.4 would alleviate the problem (since some encoding issues were
addressed).
Any ideas how to easily identify the offending rows and remove them
easily?
I need to move the database to a new server with higher performance, and
this is currently a sticking point.


Re: [ADMIN] How do I adjust max_fsm_pages

2006-11-07 Thread adey
When last did you do a FULL vacuum and analyse the stats report at the end?
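The stats report referred to here is the summary a database-wide verbose vacuum prints at the end, showing free-space-map usage against max_fsm_pages and max_fsm_relations. A sketch (run while connected to the database in question, as a superuser):

```sql
-- The closing INFO lines report free space map page slots in use;
-- if they exceed max_fsm_pages, the server hints to raise it.
-- The FULL form also reclaims space, at the cost of exclusive locks.
VACUUM VERBOSE;
```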
On 1/2/90, amar buwa <[EMAIL PROTECTED]> wrote:
Hi,

My postgresql configuration is:
max_locks_per_transaction = 256
max_connections = 200
shared_buffers = 6000
max_fsm_pages = 20
max_fsm_relations = 1500

My server's total RAM = 32GB.

Still I am not able to insert more than 10 million records; it's taking
a lot of time.  It also gives a hint to increase max_fsm_pages.
So please suggest how I should adjust the above parameters to avoid
errors and to increase postgres speed performance?

Thanks,
Amar


Re: [ADMIN] Question

2006-11-07 Thread adey
I think the following query should list clustered indexes for you:-

select * from pg_index
where indisclustered = 't'
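Joining pg_class resolves the OIDs in that result to human-readable names:

```sql
-- indexrelid is the index's OID, indrelid the table's OID;
-- both resolve to names through pg_class.
SELECT c.relname AS index_name,
       t.relname AS table_name
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
JOIN pg_class t ON t.oid = i.indrelid
WHERE i.indisclustered;
```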
On 11/8/06, Mark Steben <[EMAIL PROTECTED]> wrote:
Thank you, Jeff.  That worked like a champ.  My initial problem
had to do with not understanding the concept of OID datatypes.
My next learning task is to read up on OID.
Thanks again, Mark

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 07, 2006 3:29 PM
To: Mark Steben
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] Question

On Tue, 7 Nov 2006, Mark Steben wrote:
> I am very new to PostgreSQL.  Is it appropriate to pose questions to this
> email list?

It might be better on pgsql-sql, but I'm not sure.

> I am trying to come up with a query that will list the names of the
> database indexes that have been chosen as clustering indexes.  I know
> I can get the INDEXRELID from PG.INDEX but have yet to figure out how
> to get the index name from there.  Any help would be appreciated.
>
> And, if this is not an appropriate forum to ask questions please tell me.

You want to join on pg_class.oid.  You can see the info on pg_index here in
the docs:
http://www.postgresql.org/docs/8.1/static/catalog-pg-index.html

select i.indexrelid, c.relname from pg_index i, pg_class c where i.indexrelid=c.oid;

will likely give you what you're after.

--
Jeff Frost, Owner   <[EMAIL PROTECTED]>
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954


Re: [ADMIN] Use Jobs

2006-11-08 Thread adey
Jim - is there a component that needs to be installed on the database server with Postgres that PGAdmin communicates with please, or is only PGAdmin required?
Adrian 
On 11/8/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Tue, Nov 07, 2006 at 03:34:36PM -0200, Alexander Burbello wrote:
> Where can I find Job contrib and how to install jobs on postgres?

Right now the job agent that comes with pgAdminIII is your only option,
though I'm hoping to get started again on the pgjobs project on
pgFoundry.
--
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


Re: [ADMIN] Use Jobs

2006-11-09 Thread adey
Thanks Jim.
I've tried to get Jobs working, but have not been successful so far. I have some instructions, but they're sketchy at best, and my Unix skills are not strong, so I don't know what is required both in PGAdmin and in Postgres to get it working, or in what order, etc.
 
On 11/9/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
On Wed, Nov 08, 2006 at 09:29:28PM +1300, adey wrote:
> Jim - is there a component that needs to be installed on the database server
> with Postgres that PGAdmin communicates with please, or is only PGAdmin
> required?

I know there's a server-side set of components for pgAdmin, but I don't
know if the job agent needs them or not.

BTW, my main motivation behind pgJob is to have something that has no
external dependencies, so that you can just install it in any database.
--
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


Re: [ADMIN] Move indexes

2006-11-13 Thread adey
Drop the index and recreate it in the new tablespace.
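If dropping is undesirable (a long rebuild, or the index backs a constraint), releases from 8.0 on can also move an index in place. A sketch with placeholder names:

```sql
-- Available since 8.0; physically relocates the index's files into the
-- new tablespace while keeping its definition:
ALTER INDEX my_index SET TABLESPACE new_tablespace;
```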
On 11/13/06, Alexander Burbello <[EMAIL PROTECTED]> wrote:
Hi,

I have one tablespace with all my objects.
I would like to separate the indexes from the others.
How can I select and move them with SQL???

Thank you for your help.


Re: [ADMIN] Issue with processes open for clients

2006-11-13 Thread adey
Is there not a parameter in Postgres to timeout idle connections?
On 11/11/06, imad <[EMAIL PROTECTED]> wrote:
I found it on the postgresql archives:
http://archives.postgresql.org/pgsql-admin/2004-09/msg00261.php

This post was by Tom Lane; I hope this is what you want:

It will time out by itself eventually (order of a couple hours probably
--- it depends on your TCP stack).  You can try a SIGINT or SIGTERM.
SIGINT is safe but may or may not cause the backend to quit.  SIGTERM
will make it quit, but I'm not prepared to guarantee that SIGTERM won't
have bad side effects.  I'd suggest waiting...

--Imad
www.EnterpriseDB.com

On 11/11/06, jose fuenmayor <[EMAIL PROTECTED]> wrote:
> Ok, but what can I do with that information?
>
> On 11/10/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:
> > Check pg_stat_activity for any queries that are in execution mode, where
> > current_query will give you the query and you will also get the process
> > ids for them in pg_stat_activity.
> >
> > Thanks,
> > ---
> > Shoaib Mir
> > EnterpriseDB (www.enterprisedb.com)
> >
> > On 11/10/06, jose fuenmayor <[EMAIL PROTECTED]> wrote:
> > > Hi, thanks for the answer, but it is not when the query is executed;
> > > it is when the user cancels the query, or kills the client he is using,
> > > before the query has finished. Then the process on the server is not
> > > killed; it is still there taking resources. It happens with some
> > > clients (psql, phpPgAdmin). How can I configure the firewall to
> > > eliminate idle connections? Will this eliminate all connections that
> > > are not doing anything? Because in my case the client is killed but
> > > the process on the server for that client is not, and continues
> > > taking resources.
> > > Thanks in advance!
> > >
> > > On 11/10/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:
> > > > If you don't need the client connection after the query has
> > > > executed then just close that client connection, or you can have
> > > > your firewall set to kill such idle connections that are not doing
> > > > anything.
> > > >
> > > > Thanks,
> > > > Shoaib Mir
> > > > EnterpriseDB (www.enterprisedb.com)
> > > >
> > > > On 11/10/06, jose fuenmayor <[EMAIL PROTECTED]> wrote:
> > > > > Hi all, I have a question. When I run psql remotely it opens a
> > > > > process on the server. I issue a query (select count(*) from
> > > > > 80millionsrowstable) and I can see that process taking CPU and
> > > > > memory on the server (using top). Then I cancel psql on the
> > > > > client. I expect to see that process gone, but that is not the
> > > > > case: it is still there, and taking resources!
> > > > >
> > > > > When I cancelled the query issued by the client, isn't that
> > > > > supposed to eliminate the process opened for it on the server?
> > > > > Why does the process stay hanging there?
> > > > > Is there any parameter in postgresql or the operating system that
> > > > > I can tune?
> > > > > I am using Mandriva Linux Free 2006, kernel 2.6.12,
> > > > > Postgresql 8.0.8, phpPgAdmin 4.0.1 (running in a web server with
> > > > > PHP module 5.0.4).
> > > > > The same thing happens using phpPgAdmin.
> > > > >
> > > > > Thanks in advance! I really appreciate your help!

---(end of broadcast)---
TIP 6: explain analyze is your friend
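
Building on the pg_stat_activity suggestion in this thread, here is a sketch of a query to list idle backends on the 8.0/8.1 catalogs. It assumes stats_command_string = on, under which an idle backend reports '<IDLE>' in current_query; the reported procpid can then be signalled at the OS level, with the caveats Tom Lane notes above:

```sql
-- Sketch: list backends that are currently idle, so their process IDs can
-- be investigated or (carefully) signalled with kill -TERM at the OS level.
-- Assumes stats_command_string = on and the 8.0/8.1 catalog columns.
SELECT procpid, usename, datname, query_start
FROM pg_stat_activity
WHERE current_query = '<IDLE>';
```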



Re: [ADMIN] Use Jobs

2006-11-13 Thread adey
Where can I find instructions to get it working please Jim?
I have Postgres and PGAdmin installed but I've not been able to get it working. 
On 11/14/06, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
Right now the only job scheduling utility for PostgreSQL is the job
agent that was written by the pgAdmin folks. Or of course you could use
your OS's built-in scheduling.

On Fri, Nov 10, 2006 at 04:57:42PM -0200, Alexander Burbello wrote:
> Sorry, but now I am in doubt!!
>
> pgAdmin isn't a tool???
> How can I install the feature Jobs and how can I schedule some procedure?
>
> Jim C. Nasby escreveu:
> >On Wed, Nov 08, 2006 at 09:29:28PM +1300, adey wrote:
> >>Jim - is there a component that needs to be installed on the database
> >>server with Postgres that PGAdmin communicates with please, or is only
> >>PGAdmin required?
> >
> >I know there's a server-side set of components for pgAdmin, but I don't
> >know if the job agent needs them or not.
> >
> >BTW, my main motivation behind pgJob is to have something that has no
> >external dependencies, so that you can just install it in any database.

--
Jim Nasby
[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)


Re: [ADMIN] Monitoring PostgreSQL Process

2006-11-22 Thread adey

The following query, with the stats_command_string parameter turned on, will
show you the SQL currently being run by each backend:-


SELECT
    datid,
    datname as "DB Name",
    substr(procpid::text, 1, 6) as "Procpid",
    substr(usesysid::text, 1, 5) as "UseSysid",
    usename,
    current_query as "SQL",
    query_start
FROM
    pg_stat_activity
ORDER BY
    procpid;


On 11/22/06, Aaron Bono <[EMAIL PROTECTED]> wrote:


I have a couple of processes/connections to one of our databases that appear
to be eating up most of the CPU and we are trying to determine what these
processes are doing that is taking so much CPU time.

Is there a way to monitor the SQL being run for a specific
connection/process?

We are using PostgreSQL 8.1.5 on CentOS Linux.

Thanks,
Aaron

--
==
   Aaron Bono
   Aranya Software Technologies, Inc.
   http://www.aranya.com
   http://codeelixir.com
==


Re: [ADMIN] PGSQL manager password lost.

2006-11-22 Thread adey

If you do a pg_dump -s, it will dump out the schema only from the Postgres
database. Much of this will be usable in SQL Server, at least to create your
basic database and objects. You will then need to interpret the remaining,
more complex structures and adapt the DDL to work in SQL Server.

If you have pgAdmin you should be able to log in to the Postgres database as
a superuser and change the admin password, but be careful to change any
applications that may be using that login to access Postgres.
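
On question 1, a commonly used recovery path is sketched below. It assumes OS-level access to the database server, the ability to edit pg_hba.conf, and that the superuser is named postgres (an assumption; substitute your own superuser name):

```sql
-- Sketch, assuming you can edit pg_hba.conf and reload the server:
-- 1. temporarily change the local connection line in pg_hba.conf to
--    the "trust" method
-- 2. reload the server (pg_ctl reload, or restart the service)
-- 3. connect with psql and reset the password:
ALTER USER postgres WITH PASSWORD 'new-password';
-- 4. restore the original pg_hba.conf line and reload again
```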


On 11/23/06, Paul Lambert, Help Desk Team Leader, BSD, Infomedia Ltd <
[EMAIL PROTECTED]> wrote:


 Hi folks, new to this mailing group and here are my first questions.

We've been running a PGSQL database for a while now on a Weenblows 2K
server, on version 8.0.something.

Our resident DBA has been shown the door and has taken with him the
admin password to get into the databases we run in PostGreSQL.

Question 1: Is there any way to recover and/or reset the administrator
password in a database or do I have to recreate the db from scratch?

Question 2: One of the DB's we run in PGSQL for reasons outside my control
needs to be moved to M$-SQL server. Is there any way that anyone is aware of
that I can export the database definition and data (table structures, views
etc) into a format usable by SQL Server - I know M$ speaks its own version
of SQL which I've run into problems with in the past. Just looking for some
suggestions on what I can do, otherwise I have to wade through a few
thousand lines of SQL to make it workable in M$. Is it possible to pipe
table structures/views through an ODBC connection? Is this something I
should be asking on M$ forums?

--
Paul Lambert
Database Administrator
Infomedia Ltd. - Business Systems Division
Email: [EMAIL PROTECTED] <[EMAIL PROTECTED]> <*
http://www.infomedia.com.au* >





Re: [ADMIN] Can't restore a pg_dump due to encoding errors

2007-04-07 Thread adey

We had a similar conversion conflict some time back when upgrading, and found
that searching for anything not in A-Z, 0-9, etc., was a useful way to find
the offending data and fix it.
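
One way to do that search on the dump file itself is sketched below, using standard grep in the C locale; /tmp/dump.sql and the sample bytes are placeholders for illustration, so point grep at the real dump instead:

```shell
# Create a small sample file containing a non-ASCII byte (octal \222 = 0x92,
# the byte from the error message), purely to illustrate the search.
printf 'good line\nbad \222 byte\n' > /tmp/dump.sql
# In the C locale, [^ -~] matches any byte outside the printable ASCII
# range, and -n reports the offending line numbers.
LC_ALL=C grep -n '[^ -~]' /tmp/dump.sql
```

Running this against the real dump reports each line containing a suspect byte, which narrows down the rows inside the large COPY section.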

On 4/8/07, Dan Harris <[EMAIL PROTECTED]> wrote:


Hello all:

I'm in a bit of a bind here.  Today is my maintenance window for upgrading
from
8.0.3 to 8.2.3.  I did a pg_dumpall overnight ( resulting in a 72GB file )
and
then a pg_dump on one of my small databases this morning to test
pg_restore.
When trying to reload, I'm getting the following error:

psql:sac.bak:2793776: ERROR:  invalid byte sequence for encoding "UTF8":
0x92

A google search came up with this link:
http://www.hostingforum.ca/280472-invalid-byte-sequence-encoding-utf8.html

Which sounds very simliar to what's going on here.

Again the dump was done on 8.0.3, which sounds like it was before the
anti-sql
injection patch that would refuse to import "invalid" characters.  The
database
( for now ) *should* only be storing English/Latin characters.  Although,
the
old server had LOCALE='C' previously ( this was unnecessary so I'm trying
to
drop that today ).  I don't know if this would matter, but that's the only
thing
that was different from a bog standard Postgres setup besides the obvious
tuning
changes.  I have never changed any of the encoding settings on client or
server
from the defaults.

So far, I tried editing the dump file and changing the client_encoding to
'UTF8'
at the top, hoping this might make a difference but did not.

The advice in that thread seems to be "fix the bad data".  But I am not
seeing
how I can tell exactly what or where this bad data is.  Luckily, this
small dump
file is manageable by opening in vim and pointing to the line 2793776 the
error
references, however the line referenced is just the "\." at the end of a
huge
COPY section.  I don't know which line inside that secion is the problem
and
there's enough of them that it'd be pretty impractical for me to visually
scan
the file looking for something ( I'm not even quite sure what to look for
at
this point ).  And, if that is the only solution and I run into a problem
restoring the 72GB dump file, I'm going to be in some serious trouble,
especially if I am not alerted to the problem until *after* it's finished
looking at all the COPY statements!

What say the sages?

Thanks

-Dan

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly



Re: [ADMIN] Checking a size of a given database

2007-04-17 Thread adey

I found the following script useful for calculating database size:-

-- Query returns the size of a database, calculated from the number of 8k
-- pages allocated to tables
SELECT
    sum(relpages * 8 / 1024) as MB
FROM
    pg_class
WHERE
--  relname != ''
--  relname = 'tablename'
    relnamespace = '2200';
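
As a side note, if the server is 8.1 or later, the built-in size functions avoid the relpages arithmetic entirely; a sketch is below. On the 8.0.x releases discussed in this thread, these functions are only available via the contrib dbsize module:

```sql
-- Sketch, 8.1+ only: report a database's size directly,
-- formatted by pg_size_pretty().
SELECT pg_size_pretty(pg_database_size('template1'));
-- or per table:
-- SELECT pg_size_pretty(pg_relation_size('tablename'));
```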


On 4/17/07, Chuck Amadi <[EMAIL PROTECTED]> wrote:


I have been given a task to try a proof of concept of migrating are active
databases from PostgreSQL 8.0.1 to 8.2.3

I will install and configPostgreSQL on my local w/s and replicate the
database users etc etc.

I have had a look inside /var/lib/pgsql/data/base and there are
numerous  files (db's) which I believe is where I can check data base size.


1  108012  114465  122407  1261267  1261963  17229  17233  17234  17235  174066 
 19247  205009  21272  2793694  2794334  333245  75

I am aware that Postgresql creates directories to keep each database in.
These directories aren't named; they are kept as the OIDs of each
database (OIDs are "object identifiers").

Previously I just ran pg_dump and then ran ls -lh on the db dump. Is there
any way of checking the size of a given database? I have tried the SQL
select pg_database_size ('dbaname'); but no joy.

Cheers
Chuck

--
Chuck Amadi
ROK Corporation Limited
Ty ROK,
Dyffryn Business Park,
Llantwit Major Road,
Llandow,
Vale Of Glamorgan.
CF71 7PY

Tel: 01446 795 839
Fax: 01446 794 994
International Tel:   +44 1446 795 839

email: [EMAIL PROTECTED]

This email is confidential to the addressee only. If you do not believe
that you are the intended recipient, do not pass it on or copy it in any
way. Please delete it immediately.



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match



Re: [ADMIN] Cript for DISK SPACE USE psql

2007-04-17 Thread adey

Try this:-

-- Query returns the size of a database, calculated from the number of 8k
-- pages allocated to tables
SELECT
--  *
    sum(relpages * 8 / 1024) as MB
FROM
    pg_class
WHERE
--  relname != ''
--  relname = 'document'
    relnamespace = '2200';


On 4/16/07, Trula Thomas <[EMAIL PROTECTED]> wrote:


Hello, would anyone know the cript for disk space use (psql)? Please help.
Thank you
Trula





Re: [ADMIN] High Avaliable in the PostgreSQL.

2007-05-30 Thread adey

HP was providing CA (Continuous Access) software that was claimed to provide
WAN SAN replication by repeating I/O on the slave in exactly the sequence it
was generated on the master. So while there was a delay, updates on the
slave would be sequentially intact, providing a good level of integrity for
database rollback on the slave / DR system in the event of recovery there.
I'm not sure this would be sufficient for 99.99% uptime though, as there
would be some startup requirements on the slave.

On 5/31/07, Fernando Ike de Oliveira <[EMAIL PROTECTED]> wrote:


Hi,


   I need a high-availability solution for PostgreSQL. Originally the
target was 99.7%, but considering current necessities, the percentage
changes to 99.99%. I am thinking pgpool-II or Heartbeat + GFS are probable
solutions. The PostgreSQL servers will be in different physical places.

   Suggestions?



Kind Regards,
--
Fernando Ike
http://www.midstorm.org/~fike/weblog

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq



Re: [ADMIN] No create table

2007-07-26 Thread adey

Did your command fail, or did it work, but the user can still create tables?

On 7/27/07, Anderson Alves de Albuquerque <[EMAIL PROTECTED]> wrote:



 I need a user that does not have permission to create tables in one
DB.

 I used REVOKE; I tried "revoke all on database XXX from
USERNAME;", but I didn't have success.

 Could someone help me?

 With REVOKE I can only limit access to tables.




Re: [ADMIN] Lock Statistics

2007-07-30 Thread adey
Try this query for a start, and add system tables to the join to find what
you want:-


-- displays locks with database name, username and relation name

-- CAUTION: this query may impact system performance as you are selecting
-- from system tables

select
    relation
    , c.relname
    , u.usename
    , pid
    , mode
    , transaction
    , granted
    , datname
    , u.usesysid
    , usesuper
--  , *
from pg_locks l, pg_stat_activity s, pg_user u, pg_class c
where l.pid = s.procpid
and l.relation = c.relfilenode
and s.usesysid = u.usesysid
order by l.pid;
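
To get closer to what Milen asks for, one starting point is pairing ungranted lock requests with the granted locks on the same relation. The sketch below runs against the pg_locks view; note that it shows who is waiting behind whom, but not wait duration, which these catalogs do not record:

```sql
-- Sketch: for each ungranted lock request, list the backend(s) holding
-- a granted lock on the same relation.
SELECT w.pid AS waiting_pid,
       h.pid AS holding_pid,
       c.relname,
       w.mode AS waited_mode,
       h.mode AS held_mode
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation AND h.granted
JOIN pg_class c ON c.oid = w.relation
WHERE NOT w.granted
  AND h.pid <> w.pid;
```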



On 7/31/07, Milen A. Radev <[EMAIL PROTECTED]> wrote:
>
> I have performance problems with a DB (slow queries) and I suspect the
> main cause is that a lot of queries wait for a lock on one small
> table. That's why I need some stats about the number and (average)
> wait-time for locks (even only for this particular table).
>
> After a bit of googling I found a project in PgFoundry with a
> promising description - http://pgfoundry.org/projects/pglockdiag/.
> Unfortunately the projects seems stillborn - no published files and
> nothing in CVS.
>
>
> --
> Milen A. Radev
>
> ---(end of broadcast)---
> TIP 5: don't forget to increase your free space map settings
>


Re: [ADMIN] Lock Statistics

2007-07-30 Thread adey
On 7/31/07, Michael Fuhr <[EMAIL PROTECTED]> wrote:
>
> On Tue, Jul 31, 2007 at 10:35:25AM +1200, adey wrote:
> > -- CAUTION: this query may impact system performance as you are
> selecting
> > from system tables
>
> What difference are you expecting that to make and why?


It was a warning I received some years ago when I originally discovered the
query. Presumably reading from system tables competes with user queries for
the system tables, slowing them down. I can't say I've ever noticed any
impact when using it, though.

--
> Michael Fuhr
>


[ADMIN] Daylight Savings Updates

2007-09-19 Thread adey
Hi,
Please advise whether it is necessary for us to apply a later version of
Postgres in preparation for the new daylight saving times coming into
effect in New Zealand at the end of September.
We are running PG 8.1.4 and 8.1.6.
The FAQs indicate that our one remaining system on 7.1.4 will use the
operating system's daylight saving settings, but do we have to apply
anything to our 8.1 systems, why, and what is involved in doing so please?
Adrian


Re: [ADMIN] how to check objects dependency??

2007-10-31 Thread adey
Doesn't ownership also apply to views and stored procs...

On 11/1/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
>
> On 10/31/07, Jessica Richard <[EMAIL PROTECTED]> wrote:
> > I am trying to drop a user but it complained with the following message:
> >
> > ERROR:  role "joe" cannot be dropped because some objects depend on it
> > DETAIL:  1 objects in database Jan_test
> > 1 objects in database Jan
> >
> > this user does not own any tables in those two databases. Is there any
> > command I can use to check all the dependencies on this user?
>
> What version of pgsql are you running?  In 8.2 I get a more useful
> error message for a table that is owned by a user, i.e.:
>
> ERROR: role ... cannot be dropped ...
> DETAIL: owner of table 
>
> Have you looked in the logs to see if something like that shows up in
> them?
>
> ---(end of broadcast)---
> TIP 2: Don't 'kill -9' the postmaster
>


Re: [ADMIN] Get a list of ALL tables

2007-11-19 Thread adey
This usually does it for me:-


select *
from
--  information_schema.tables
    pg_catalog.pg_tables
--WHERE
--  table_schema = 'public'
--ORDER BY
--  table_name
;
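For the goal Kevin describes below, that listing can be narrowed to user tables with no per-table override row in pg_autovacuum. This is a sketch assuming the 8.1 catalogs, where that table's key column is vacrelid:

```sql
-- Sketch: user tables with no pg_autovacuum entry, i.e. candidates for
-- the weekly catch-all vacuum (8.1 catalogs assumed).
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND c.oid NOT IN (SELECT vacrelid FROM pg_autovacuum);
```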


On 11/20/07, Kevin Kempter <[EMAIL PROTECTED]> wrote:
>
> Sorry, I meant to send this to the list:
>
>
> We have a handful of tables that are quite large and take several hours to
> vacuum. We're managing these large tables by using cron via a schedule
> that
> accounts for system load. I want to pull the list of all tables and
> exclude
> these large tables from the list, then run the vacuum on the remaining
> tables
> once a week to avoid transaction ID wraparound failures.
>
>
> /Kevin
>
>
>
>
> > Kevin Kempter <[EMAIL PROTECTED]> writes:
> > > I'm working with a client with several highly active 8.1.4 servers. I
> > > want to run a weekly cron that will vacuum ALL db tables except those
> > > with entries in pg_autovacuum (where we've setup manual/cron vacuums)
> in
> > > order to eliminate transaction ID wraparound failure warnings (which
> > > comes up fairly often).
> > >
> > > My question is:
> > > will a select from pg_tables for each database in the cluster provide
> me
> > > with a full table listing for the purposes of ensuring that all tables
> in
> > > the db have been vacuumed or are there others I'll need to account for
> as
> > > well ?
> >
> > Why are you insisting on inventing your own wheel for this, when
> > "vacuum" without a parameter does that just fine?
> >
> > regards, tom lane
>
>
>
>
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
>   http://www.postgresql.org/docs/faq
>