Re: [GENERAL] nagios -- number of postgres connections

2009-06-16 Thread Jeff Frost
John R Pierce wrote:
 Whit Armstrong wrote:
 anyone know a way to get nagios to monitor the number of postgres
 connections?
   


Check out the check_postgres nagios plugin:
http://bucardo.org/check_postgres/

Specifically you want the backends check:
http://bucardo.org/check_postgres/check_postgres.pl.html#backends
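
For example, the nagios command ends up being something along these lines (a
rough sketch; the thresholds and connection options here are made up, see the
backends docs above for the full option list):

check_postgres.pl --action=backends --host=dbhost --dbuser=nagios \
  --warning=150 --critical=180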


-- 
Jeff Frost, Owner   j...@frostconsultingllc.com
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032




Re: [GENERAL] Full backup - pg_dumpall sufficient?

2009-01-29 Thread Jeff Frost
Tom Lane wrote:
 Gerhard Wiesinger li...@wiesinger.com writes:
   
 Hello Ray,
 Yes, that's clear. But there is even some stuff which isn't dumped with 
 pg_dumpall (as far as I read).
 

 Perhaps you were reading some extremely obsolete information?
 It used to be that pg_dumpall couldn't dump large objects,
 but that was a long time back.

   
Tom, one thing I noticed recently is that pg_dumpall --globals doesn't
seem to pick up when you alter the GUCs at the database level and
neither does pg_dump.  How should you dump to grab that per-database
stuff?

For example on 8.3.5:

discord:~ $ psql jefftest
Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

jefftest=# show default_statistics_target ;
 default_statistics_target
---------------------------
 10
(1 row)

Time: 0.139 ms

jefftest=# ALTER DATABASE jefftest SET default_statistics_target = 100;
ALTER DATABASE
Time: 46.758 ms

jefftest=# \q
discord:~ $ psql jefftest
Welcome to psql 8.3.5, the PostgreSQL interactive terminal.

Type:  \copyright for distribution terms
   \h for help with SQL commands
   \? for help with psql commands
   \g or terminate with semicolon to execute query
   \q to quit

jefftest=# show default_statistics_target ;
 default_statistics_target
---------------------------
 100
(1 row)

Time: 0.318 ms
jefftest=# \q
discord:~ $ pg_dumpall --globals|grep default_statistics_target
discord:~ $ pg_dump jefftest | grep default_statistics_target
discord:~ $

-- 
Jeff Frost, Owner   j...@frostconsultingllc.com
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



md5 doesn't work (Was Re: [GENERAL] Pet Peeves?)

2009-01-29 Thread Jeff Frost

On Thu, 29 Jan 2009, rhubbell wrote:


Umm, because md5 doesn't work and trust does work.


Generally this is because you haven't yet set a password for the postgres 
user.  You have to set a password for at least the postgres user via ALTER 
ROLE while you've still got it set to trust or ident before changing to md5.
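
For example (a rough sketch; the role name and password are placeholders, and
the pg_hba.conf path depends on your install):

-- while pg_hba.conf still says trust (or ident) for your connection:
ALTER ROLE postgres WITH PASSWORD 'choose-something-better';

Then flip the METHOD column for that line in pg_hba.conf from trust to md5 and
reload (pg_ctl reload), and md5 logins should start working.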



On Thu, 29 Jan 2009 13:16:19 -0500
Bill Moran wmo...@potentialtech.com wrote:


In response to rhubbell rhubb...@ihubbell.com:


I'm a new user to PostgreSQL so mine's fresh from doing an install recently.

In /etc/postgresql/8.3/main/pg_hba.conf

# METHOD can be trust, reject, md5, crypt, password, gss, sspi,
# krb5, ident, pam or ldap.  Note that password sends passwords
# in clear text; md5 is preferred since it sends encrypted passwords.


So I chose md5 but it will not work; seems like a basic thing. So I am
forced to use trust.


How on earth does failure of md5 to work force you to use trust?

How about crypt or password (password is pretty darn simple to set up).



--
Jeff Frost, Owner   j...@frostconsultingllc.com
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



Re: [GENERAL] Full backup - pg_dumpall sufficient?

2009-01-29 Thread Jeff Frost

On Thu, 29 Jan 2009, Tom Lane wrote:


Jeff Frost j...@frostconsultingllc.com writes:

Tom, one thing I noticed recently is that pg_dumpall --globals doesn't
seem to pick up when you alter the GUCs at the database level and
neither does pg_dump.  How should you dump to grab that per-database
stuff?


Regular pg_dumpall will catch that.

There's been some previous discussion about redrawing the dividing lines
so that this doesn't fall between the cracks when you try to use
--globals plus per-database pg_dump, but AFAIR nothing's been done about
it yet.  It's a bit tricky since it's not entirely clear who's
responsible for creating the individual databases when you restore in
that scenario.


I guess I could pg_dumpall -s | grep ALTER DATABASE to grab that stuff.
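
For the jefftest example from my earlier mail, that should pull out something
like this (exact formatting of the line may differ):

discord:~ $ pg_dumpall -s | grep 'ALTER DATABASE'
ALTER DATABASE jefftest SET default_statistics_target TO 100;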

--
Jeff Frost, Owner   j...@frostconsultingllc.com
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



Re: [GENERAL] Full backup - pg_dumpall sufficient?

2009-01-29 Thread Jeff Frost
Jeff Frost wrote:
 On Thu, 29 Jan 2009, Tom Lane wrote:

 Jeff Frost j...@frostconsultingllc.com writes:
 Tom, one thing I noticed recently is that pg_dumpall --globals doesn't
 seem to pick up when you alter the GUCs at the database level and
 neither does pg_dump.  How should you dump to grab that per-database
 stuff?

 Regular pg_dumpall will catch that.

 There's been some previous discussion about redrawing the dividing lines
 so that this doesn't fall between the cracks when you try to use
 --globals plus per-database pg_dump, but AFAIR nothing's been done about
 it yet.  It's a bit tricky since it's not entirely clear who's
 responsible for creating the individual databases when you restore in
 that scenario.

 I guess I could pg_dumpall -s | grep ALTER DATABASE to grab that stuff.

That seems silly.  Is this the best way to find this data:

SELECT name, setting FROM pg_settings WHERE source = 'database' ORDER BY name;

?
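
With the jefftest example from before, that would presumably return something
like:

           name            | setting
---------------------------+---------
 default_statistics_target | 100
(1 row)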

-- 
Jeff Frost, Owner   j...@frostconsultingllc.com
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032




Re: [GENERAL] Fwd: Performance Tuning

2008-11-17 Thread Jeff Frost

On Mon, 17 Nov 2008, Scott Marlowe wrote:


On Mon, Nov 17, 2008 at 11:21 AM, John Zhang [EMAIL PROTECTED] wrote:



Hi the list,

I have a performance problem and would like any input on how to make it
perform as desired. In the DB, there are a few tables with over 3 million
records in postgis. When I do some operation on the tables, e.g. CREATE
INDEX, it takes hours without results. I believe there must be something
wrong within the DB. However, I have not figured it out. Could anyone point
me in the right direction to tune and configure the database efficiently?


Assuming it's creating GIN indexes, and possibly even if not, index
creation is very dependent on having a large enough work_mem for it to
fit a large portion of the data it's working on in memory.  Try
cranking up work_mem before index creation.  Note that you don't have
to do this in postgresql.conf and reload, you can just do it in the
session you're in:

set work_mem=512000;
create index blah blah blah;


Doesn't he want to change maintenance_work_mem instead of work_mem for index 
creation?
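
Something like this, I'd think (same idea as above, just the other GUC; the
value is only an example):

set maintenance_work_mem = 512000;
create index blah blah blah;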


--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



Re: [GENERAL] Restoring a database

2008-10-15 Thread Jeff Frost
Chris Henderson wrote:
 I backup all my databases by using pg_dumpall: pg_dumpall >
 /tmp/postgres.backup.`hostname`. It should back up four DBs: analyze,
 postgres, template0 and template1.
 I guess this backs up the schemas as well.

 Now I want to restore one of the databases and schema from this backup
 dump file onto a different server. The database is called analyze.
 Does anyone know how to do that with pg_restore? Thanks.
   
Chris,

pg_restore is used to restore backups that were saved in the custom
format by pg_dump (option -Fc).  You don't need pg_restore to restore a
pg_dumpall archive.  If you look at the backup file, you'll find that
it's just straight SQL.  If you want to restore a particular database
out of it and not all of them, then you will need to edit the sql file
to include only what you want to restore.  Then you simply pass it
through psql like so:

psql -f /tmp/postgres.backup.`hostname` postgres

Hint: the above will require that the CREATE DATABASE and \connect
commands for the database you want to restore are still in the file.
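
A quick way to double-check that before restoring (just a sketch):

grep -nE '^(CREATE DATABASE|\\connect)' /tmp/postgres.backup.`hostname`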

-- 
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032




Re: [GENERAL] Restoring a database

2008-10-15 Thread Jeff Frost


Jeff Frost wrote:
 Chris Henderson wrote:
   
 I backup all my databases by using pg_dumpall: pg_dumpall >
 /tmp/postgres.backup.`hostname`. It should back up four DBs: analyze,
 postgres, template0 and template1.
 I guess this backs up the schemas as well.

 Now I want to restore one of the databases and schema from this backup
 dump file onto a different server. The database is called analyze.
 Does anyone know how to do that with pg_restore? Thanks.
   
 
 Chris,

 pg_restore is used to restore backups that were saved in the custom
 format by pg_dump (option -Fc).  You don't need pg_restore to restore a
   
Whoops, that should say, in the custom or tar format (options -Fc or
-Ft).

 pg_dumpall archive.  If you look at the backup file, you'll find that
 it's just straight SQL.  If you want to restore a particular database
 out of it and not all of them, then you will need to edit the sql file
 to include only what you want to restore.  Then you simply pass it
 through psql like so:

 psql -f /tmp/postgres.backup.`hostname` postgres

 Hint: the above will require that the CREATE DATABASE and \connect
 commands for the database you want to restore are still in the file.

   

-- 
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032



Re: [GENERAL] [ADMIN] Problem with large table not using indexes (I think)

2006-12-23 Thread Jeff Frost

On Sat, 23 Dec 2006, Benjamin Arai wrote:

The largest table in my database (30GB) has mysteriously gone from taking
milliseconds to perform a query to minutes.  The disks are fine and I have
a 4GB shared_memory.  Could this slowdown have to do with max_fsm_pages
or something else like that?  I made it larger but the queries are still
taking a long time.  I do daily vacuums but I don't run them with -z or
--full.  I would like to avoid doing a --full if possible because it would
literally take over a week to complete.  Any help would be greatly
appreciated.


Benjamin,

When is the last time you ran ANALYZE?  That's what the -z option does.  If 
you're only vacuuming once daily, you should definitely analyze with the -z 
flag as well.
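
For example, a nightly cron entry along these lines would cover both (a
sketch; adjust the schedule and user for your setup):

# system crontab format: vacuum and analyze all databases at 03:30
30 3 * * * postgres vacuumdb -a -z -q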


--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954



Re: [GENERAL] [ADMIN] Problem with large table not using indexes (I think)

2006-12-23 Thread Jeff Frost

On Sat, 23 Dec 2006, Benjamin Arai wrote:

I thought that you only need to use the -z flag if the distribution of the 
data is changing.


You're absolutely correct.  Have you not been inserting, updating or deleting
data?  It sounds like you are, based on the follow-up email you just sent:


One more note about my problem, when you run a query on older data in the 
table then it work great but if you query newer data then is very slow.



Ex.



SELECT * from my_table WHERE date >= '12/1/2005' AND date <= '12/1/2006'; -- slow



SELECT * from my_table WHERE date >= '12/1/2002' AND date <= '12/1/2003'; -- fast



It just has to do with the new data for some reason.


Try running ANALYZE my_table; from psql and see if that makes things faster.
If it does, then you likely need to analyze more often than never.  I'm
guessing if you're inserting data that has a date or timestamp, then you
definitely need to be analyzing; that's even one of the examples used in the
docs:


http://www.postgresql.org/docs/8.1/interactive/maintenance.html#VACUUM-FOR-STATISTICS

For example, a timestamp column that contains the time of row update will 
have a constantly-increasing maximum value as rows are added and updated; such 
a column will probably need more frequent statistics updates than, say, a 
column containing URLs for pages accessed on a website. The URL column may 
receive changes just as often, but the statistical distribution of its values 
probably changes relatively slowly.





Jeff Frost wrote:

On Sat, 23 Dec 2006, Benjamin Arai wrote:

The largest table in my database (30GB) has mysteriously gone from taking
milliseconds to perform a query to minutes.  The disks are fine and I have
a 4GB shared_memory.  Could this slowdown have to do with max_fsm_pages
or something else like that?  I made it larger but the queries are still
taking a long time.  I do daily vacuums but I don't run them with -z or
--full.  I would like to avoid doing a --full if possible because it would
literally take over a week to complete.  Any help would be greatly
appreciated.


Benjamin,

When is the last time you ran ANALYZE?  That's what the -z option does.  If 
you're only vacuuming once daily, you should definitely analyze with the -z 
flag as well.







--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954



Re: [GENERAL] [ADMIN] OUTER JOIN IS SLOW

2006-12-23 Thread Jeff Frost

On Sat, 23 Dec 2006, Benjamin Arai wrote:

->  Index Scan using mutualfd_weekday_qbid_pkey_idx on mutualfd_weekday_qbid
    (cost=0.00..6.01 rows=1 width=19) (actual time=34.579..8510.801 rows=253 loops=1)


You're right that this is the problem, and it shows that the planner was
expecting a very low cost on the index scan, but it turned out to be much
higher.  This is because of old statistics.  VACUUM ANALYZE should remedy this.


Just run:

ANALYZE mytablename;

in psql and see if that gets you going.

Also of note, you can set work_mem per session, so it's reasonable to 
benchmark various values until you find one that helps your query.


Note that you might not want to set work_mem too high, because it's per 
operation, so you could easily run your machine out of RAM if you set this too 
high and have many concurrent queries running.
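
For instance, something along these lines (just a sketch; the work_mem value
is arbitrary and the query is the slow one quoted further down):

SET work_mem = 65536;   -- 64MB, for this session only
EXPLAIN ANALYZE
  SELECT d1 as date, d2.data as data
  FROM getDateRange('12/1/2005','12/1/2006') d1
  FULL OUTER JOIN (SELECT * FROM mutualfd_weekday_qbid
                   WHERE cusip='92193920'
                     AND pkey >= '12/15/2005' AND pkey <= '12/15/2006') d2
    ON d1=d2.pkey;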


Also, to answer your question regarding FSM settings, you should run a 
vacuumdb -av and look at the last 8 lines to see if you have your FSM 
settings high enough.




is the problem.  As I understand it, it is using the index but it is slow as
dirt.  Hopefully, the -z will fix this.  I also ran the same query but with
earlier dates in the table and the query ran much faster.


 Merge Full Join  (cost=3492.48..3505.60 rows=1034 width=19) (actual time=7.605..12.851 rows=273 loops=1)
   Merge Cond: ("outer".d1 = "inner".pkey)
   ->  Sort  (cost=62.33..64.83 rows=1000 width=4) (actual time=4.586..5.530 rows=263 loops=1)
         Sort Key: d1.d1
         ->  Function Scan on getDateRange d1  (cost=0.00..12.50 rows=1000 width=4) (actual time=2.493..3.515 rows=263 loops=1)
   ->  Sort  (cost=3430.15..3432.74 rows=1034 width=19) (actual time=2.998..3.971 rows=263 loops=1)
         Sort Key: mutualfd_weekday_qbid.pkey
         ->  Index Scan using mutualfd_weekday_qbid_pkey on mutualfd_weekday_qbid  (cost=0.00..3378.38 rows=1034 width=19) (actual time=0.075..1.843 rows=263 loops=1)
               Index Cond: ((cusip = '92193920'::text) AND (pkey >= '1999-12-15'::date) AND (pkey <= '2000-12-15'::date))
 Total runtime: 13.935 ms
(10 rows)

In this case it only took 13.935 ms as compared to 8522.894 ms for the newer
data.


Benjamin

Shoaib Mir wrote:
Adding to the last email: for now try the work_mem change, but you should be
adding ANALYZE along with the VACUUM you do regularly (with a cron job, I
guess).



Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/24/06, Shoaib Mir [EMAIL PROTECTED] wrote:


Try increasing the work_mem first to see the change; that might help.

-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/24/06, Benjamin Arai [EMAIL PROTECTED] wrote:

I have been running pieces of my PL function by hand and I have found
that the following queries work by themselves, taking less than a second
to execute.

getDateRange('12/1/2005','12/1/2006')  -- simply generates a date list.
Doesn't even access a table

SELECT * FROM mutualfd_weekday_qbid WHERE cusip='92193920'
AND pkey >= '12/15/2005' AND pkey <= '12/15/2006';

But when combined as below it takes 10 seconds to execute.

SELECT d1 as date, d2.data as data
FROM getDateRange('12/1/2005','12/1/2006') d1
FULL OUTER JOIN (SELECT * FROM mutualfd_weekday_qbid
                 WHERE cusip='92193920'
                   AND pkey >= '12/15/2005' AND pkey <= '12/15/2006') d2
  ON d1=d2.pkey;

Do I need to increase the work_mem or is this possibly still an ANALYZE
issue?

Benjamin


--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954
