Re: [GENERAL] database corruption

2009-04-08 Thread Albe Laurenz *EXTERN*
This thread is a top posting mess. I'll try to rearrange:

Jeff Brenton wrote:
> REINDEX INDEX testrun_log_pkey;
>
> ERROR:  could not write block 1832079 of temporary file: No space left
> on device
> HINT:  Perhaps out of disk space?
>
> There is currently 14GB free on the disk that postgres is installed on.
> Does anyone know what I can do to get the db up and running again?
[...]
> /dev/amrd2s1d   663G  596G   14G  98%  /db

>>>> I guess the first question is, does the db have permissions(access) to
>>>> all that space?
>>>
>>> There are no filesystem level content size restrictions that I am aware
>>> of on this system.  The user pgsql should have full access to the
>>> filesystems indicated except for the root filesystem. 
>>
>> Inodes?
>
> There are 9 million inodes free on /db.  All other partitions have at
> least 1/2 million free.  

Assuming that this is ext3 on Linux, it could be space reserved for root.

What do you get if you run the following as root:

dumpe2fs /dev/amrd2s1d | grep 'Reserved block count'

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Craig Ringer
Jeff Brenton wrote:

> I've attempted to re-index the pkey listed but after an hour it fails
> with 
> 
> REINDEX INDEX testrun_log_pkey;
> 
> ERROR:  could not write block 1832079 of temporary file: No space left
> on device
> 
> HINT:  Perhaps out of disk space?
> 
> There is currently 14GB free on the disk that postgres is installed on.
> Does anyone know what I can do to get the db up and running again?  

Is there 14GB free at the time PostgreSQL claims to run out of space? Try:

watch -n 60 "df -m /db"

and see if the free space falls close to zero during your reindex attempt.

Personally, I'd just try to give Pg some room to breathe.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] existence of column name

2009-04-08 Thread John R Pierce

Eric Smith wrote:

> All,
>
> From the C API, how do I check for the existence of a column name in a
> given table?


   select data_type from information_schema.columns where 
table_schema='public' and table_name='given_table' and 
column_name='some_column';


that will return the data_type if the column exists, or return zero rows 
if it doesn't.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] existence of column name

2009-04-08 Thread Eric Smith

All,

From the C API, how do I check for the existence of a column name in  
a given table?


Thanks,
Eric


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] LDAP TLS certificate error

2009-04-08 Thread Paul Fisher
I'm trying to configure Postgres to connect to my university's LDAP
server to authenticate database users.  In my pg_hba.conf, I have:

> hostssl all +members 129.21.0.0/16 ldap 
> "ldaps://ldap.rit.edu:636/ou=people,dc=rit,dc=edu;uid="

These are the same connection settings I'm using successfully in Apache.
 When I try to connect as an LDAP-authenticated user, I get the
following error in the logs:

> 2009-04-08 22:15:13 EDT LOG:  could not start LDAP TLS session: error code -1
> 2009-04-08 22:15:13 EDT FATAL:  LDAP authentication failed for user "pkf1214"

I'm not sure why it doesn't want to start TLS.  I've got the appropriate
CA certificates listed in my /etc/ldap/ldap.conf:

> TLS_CACERT /etc/ssl/certs/ca-certificates.crt

I'm on Ubuntu, and this file is a concatenated list of all the CA
certificates, including the LDAP server's CA.  I've confirmed this
should work under normal circumstances -- if I connect to LDAP in, say,
Python, startTLS works just fine.

Any ideas?  Is there a way I can turn on extra debugging to get out a
more detailed error message?

Thanks in advance!
Paul Fisher


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Jeff Brenton

There are 9 million inodes free on /db.  All other partitions have at
least 1/2 million free.  

-Original Message-
From: Joshua D. Drake [mailto:j...@commandprompt.com] 
Sent: Wednesday, April 08, 2009 10:26 PM
To: Jeff Brenton
Cc: Adrian Klaver; pgsql-general@postgresql.org
Subject: Re: [GENERAL] database corruption

On Wed, 2009-04-08 at 22:14 -0400, Jeff Brenton wrote:
> There are no filesystem level content size restrictions that I am aware
> of on this system.  The user pgsql should have full access to the
> filesystems indicated except for the root filesystem. 

Inodes?

> 
> Where is the temporary location?  I am searching around to see if I can
> specify it anywhere in the config files but can't seem to find anything
> which leads me to believe that it's part of the postgres data directory.
> 
> 
> -Original Message-
> From: Adrian Klaver [mailto:akla...@comcast.net] 
> Sent: Wednesday, April 08, 2009 10:10 PM
> To: pgsql-general@postgresql.org
> Cc: Jeff Brenton
> Subject: Re: [GENERAL] database corruption
> 
> On Wednesday 08 April 2009 6:32:06 pm Jeff Brenton wrote:
> > I've encountered some db corruption after restarting postgres on my
> > database server running 8.2.4.  I think that postgres did not shut down
> > cleanly.  Postgres started appropriately but crashed 45 minutes later.
> > I used pg_resetxlog after the crash to get the db to start again but it
> > appears that the database is not running properly now.  When users try
> > to access some of the tables in the db they get the error below;
> >
> > ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> > block 3155408
> >
> > HINT:  Please REINDEX it.}>  > record
> >
> > SQL connection is null
> >
> > SQL statement diagnostic: XX002 7 {Error while executing the query;
> >
> > ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> > block 3155408
> >
> > HINT:  Please REINDEX it.}
> >
> > I've attempted to re-index the pkey listed but after an hour it fails
> > with
> >
> > REINDEX INDEX testrun_log_pkey;
> >
> > ERROR:  could not write block 1832079 of temporary file: No space left
> > on device
> >
> > HINT:  Perhaps out of disk space?
> >
> > There is currently 14GB free on the disk that postgres is installed on.
> > Does anyone know what I can do to get the db up and running again?
> 
> I guess the first question is, does the db have permissions(access) to
> all that space?
> 
> > /dev/amrd0s1a   3.9G  2.7G  898M  75%  /
> >
> > /dev/amrd0s1e   115G   43G   63G  40%  /backup
> >
> > /dev/amrd1s1d   133G  748M  121G   1%  /wal
> >
> > /dev/amrd2s1d   663G  596G   14G  98%  /db
> >
> > /dev/amrd0s1d   3.9G  184M  3.4G   5%  /var
> 
> 
> 
> -- 
> Adrian Klaver
> akla...@comcast.net
> 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Joshua D. Drake
On Wed, 2009-04-08 at 22:14 -0400, Jeff Brenton wrote:
> There are no filesystem level content size restrictions that I am aware
> of on this system.  The user pgsql should have full access to the
> filesystems indicated except for the root filesystem. 

Inodes?

> 
> Where is the temporary location?  I am searching around to see if I can
> specify it anywhere in the config files but can't seem to find anything
> which leads me to believe that it's part of the postgres data directory.
> 
> 
> -Original Message-
> From: Adrian Klaver [mailto:akla...@comcast.net] 
> Sent: Wednesday, April 08, 2009 10:10 PM
> To: pgsql-general@postgresql.org
> Cc: Jeff Brenton
> Subject: Re: [GENERAL] database corruption
> 
> On Wednesday 08 April 2009 6:32:06 pm Jeff Brenton wrote:
> > I've encountered some db corruption after restarting postgres on my
> > database server running 8.2.4.  I think that postgres did not shut down
> > cleanly.  Postgres started appropriately but crashed 45 minutes later.
> > I used pg_resetxlog after the crash to get the db to start again but it
> > appears that the database is not running properly now.  When users try
> > to access some of the tables in the db they get the error below;
> >
> > ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> > block 3155408
> >
> > HINT:  Please REINDEX it.}>  > record
> >
> > SQL connection is null
> >
> > SQL statement diagnostic: XX002 7 {Error while executing the query;
> >
> > ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> > block 3155408
> >
> > HINT:  Please REINDEX it.}
> >
> > I've attempted to re-index the pkey listed but after an hour it fails
> > with
> >
> > REINDEX INDEX testrun_log_pkey;
> >
> > ERROR:  could not write block 1832079 of temporary file: No space left
> > on device
> >
> > HINT:  Perhaps out of disk space?
> >
> > There is currently 14GB free on the disk that postgres is installed on.
> > Does anyone know what I can do to get the db up and running again?
> 
> I guess the first question is, does the db have permissions(access) to
> all that space?
> 
> > /dev/amrd0s1a   3.9G  2.7G  898M  75%  /
> >
> > /dev/amrd0s1e   115G   43G   63G  40%  /backup
> >
> > /dev/amrd1s1d   133G  748M  121G   1%  /wal
> >
> > /dev/amrd2s1d   663G  596G   14G  98%  /db
> >
> > /dev/amrd0s1d   3.9G  184M  3.4G   5%  /var
> 
> 
> 
> -- 
> Adrian Klaver
> akla...@comcast.net
> 
-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Jeff Brenton
I was looking at dropping the index and recreating.  Part of the reason
that I restarted postgres was to enable WAL archiving so that I can
migrate to a larger filesystem next week.  I've got a system with a 1.3T
array that I will be migrating to.  This DB has been neglected and I am
trying to fix it but things are arguing with me. 

 

Will dropping the index have any negative consequences if the indexed
table has a primary key associated with it?  I think not but want to be
certain.   

 



From: Chris [mailto:rfu...@gmail.com] 
Sent: Wednesday, April 08, 2009 10:08 PM
To: Jeff Brenton
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] database corruption

 

I would imagine you would have better luck dropping the index and
recreating.  But considering you're 98% full on that drive, it looks
like you're about to have other problems...

 

On Wed, Apr 8, 2009 at 8:32 PM, Jeff Brenton 
wrote:

I've encountered some db corruption after restarting postgres on my
database server running 8.2.4.  I think that postgres did not shut down
cleanly.  Postgres started appropriately but crashed 45 minutes later.
I used pg_resetxlog after the crash to get the db to start again but it
appears that the database is not running properly now.  When users try
to access some of the tables in the db they get the error below;

 

ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
3155408

HINT:  Please REINDEX it.}> 

Re: [GENERAL] database corruption

2009-04-08 Thread Jeff Brenton

There are no filesystem level content size restrictions that I am aware
of on this system.  The user pgsql should have full access to the
filesystems indicated except for the root filesystem. 

Where is the temporary location?  I am searching around to see if I can
specify it anywhere in the config files but can't seem to find anything
which leads me to believe that it's part of the postgres data directory.


-Original Message-
From: Adrian Klaver [mailto:akla...@comcast.net] 
Sent: Wednesday, April 08, 2009 10:10 PM
To: pgsql-general@postgresql.org
Cc: Jeff Brenton
Subject: Re: [GENERAL] database corruption

On Wednesday 08 April 2009 6:32:06 pm Jeff Brenton wrote:
> I've encountered some db corruption after restarting postgres on my
> database server running 8.2.4.  I think that postgres did not shut down
> cleanly.  Postgres started appropriately but crashed 45 minutes later.
> I used pg_resetxlog after the crash to get the db to start again but it
> appears that the database is not running properly now.  When users try
> to access some of the tables in the db they get the error below;
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> block 3155408
>
> HINT:  Please REINDEX it.}>  record
>
> SQL connection is null
>
> SQL statement diagnostic: XX002 7 {Error while executing the query;
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at
> block 3155408
>
> HINT:  Please REINDEX it.}
>
> I've attempted to re-index the pkey listed but after an hour it fails
> with
>
> REINDEX INDEX testrun_log_pkey;
>
> ERROR:  could not write block 1832079 of temporary file: No space left
> on device
>
> HINT:  Perhaps out of disk space?
>
> There is currently 14GB free on the disk that postgres is installed on.
> Does anyone know what I can do to get the db up and running again?

I guess the first question is, does the db have permissions(access) to
all that space?

>
> /dev/amrd0s1a   3.9G  2.7G  898M  75%  /
>
> /dev/amrd0s1e   115G   43G   63G  40%  /backup
>
> /dev/amrd1s1d   133G  748M  121G   1%  /wal
>
> /dev/amrd2s1d   663G  596G   14G  98%  /db
>
> /dev/amrd0s1d   3.9G  184M  3.4G   5%  /var



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Adrian Klaver
On Wednesday 08 April 2009 6:32:06 pm Jeff Brenton wrote:
> I've encountered some db corruption after restarting postgres on my
> database server running 8.2.4.  I think that postgres did not shut down
> cleanly.  Postgres started appropriately but crashed 45 minutes later.
> I used pg_resetxlog after the crash to get the db to start again but it
> appears that the database is not running properly now.  When users try
> to access some of the tables in the db they get the error below;
>
>
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
> 3155408
>
> HINT:  Please REINDEX it.}>  record
>
> SQL connection is null
>
> SQL statement diagnostic: XX002 7 {Error while executing the query;
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
> 3155408
>
> HINT:  Please REINDEX it.}
>
>
>
> I've attempted to re-index the pkey listed but after an hour it fails
> with
>
>
>
> REINDEX INDEX testrun_log_pkey;
>
>
>
> ERROR:  could not write block 1832079 of temporary file: No space left
> on device
>
> HINT:  Perhaps out of disk space?
>
>
>
> There is currently 14GB free on the disk that postgres is installed on.
> Does anyone know what I can do to get the db up and running again?

I guess the first question is, does the db have permissions(access) to all that 
space?

>
>
>
> /dev/amrd0s1a   3.9G  2.7G  898M  75%  /
>
> /dev/amrd0s1e   115G   43G   63G  40%  /backup
>
> /dev/amrd1s1d   133G  748M  121G   1%  /wal
>
> /dev/amrd2s1d   663G  596G   14G  98%  /db
>
> /dev/amrd0s1d   3.9G  184M  3.4G   5%  /var



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database corruption

2009-04-08 Thread Chris
I would imagine you would have better luck dropping the index and
recreating.  But considering you're 98% full on that drive, it looks like
you're about to have other problems...

On Wed, Apr 8, 2009 at 8:32 PM, Jeff Brenton  wrote:

>  I’ve encountered some db corruption after restarting postgres on my
> database server running 8.2.4.  I think that postgres did not shut down
> cleanly.  Postgres started appropriately but crashed 45 minutes later.  I
> used pg_resetxlog after the crash to get the db to start again but it
> appears that the database is not running properly now.  When users try to
> access some of the tables in the db they get the error below;
>
>
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
> 3155408
>
> HINT:  Please REINDEX it.}>  record
>
> SQL connection is null
>
> SQL statement diagnostic: XX002 7 {Error while executing the query;
>
> ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
> 3155408
>
> HINT:  Please REINDEX it.}
>
>
>
> I’ve attempted to re-index the pkey listed but after an hour it fails with
>
>
>
> REINDEX INDEX testrun_log_pkey;
>
>
>
> ERROR:  could not write block 1832079 of temporary file: No space left on
> device
>
> HINT:  Perhaps out of disk space?
>
>
>
> There is currently 14GB free on the disk that postgres is installed on.
> Does anyone know what I can do to get the db up and running again?
>
>
>
> /dev/amrd0s1a   3.9G  2.7G  898M  75%  /
>
> /dev/amrd0s1e   115G   43G   63G  40%  /backup
>
> /dev/amrd1s1d   133G  748M  121G   1%  /wal
>
> /dev/amrd2s1d   663G  596G   14G  98%  /db
>
> /dev/amrd0s1d   3.9G  184M  3.4G   5%  /var
>
>
>



-- 
Chris Spotts
rfu...@gmail.com


Re: [GENERAL] Is it possible for PostgreSQL to support mmddyyyy date format?

2009-04-08 Thread Justin




This is from the link:

  1/8/1999    January 8 in MDY mode; August 1 in DMY mode
  1/18/1999   January 18 in MDY mode; rejected in other modes
  01/02/03    January 2, 2003 in MDY mode; February 1, 2003 in DMY mode;
              February 3, 2001 in YMD mode

You can run this:

select '01/01/2009'::date, '01-01-2009'::date;

You can change the output style for the client:

http://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-DATESTYLE

Bernard Barton wrote:
> I'm currently porting an Informix based application to PostgreSQL 8.3.
> All of the dates processed by the application are in the mmddyyyy
> format.  According to the date input table at the link below, the
> mmddyyyy format is not listed.  The yyyymmdd format is supported, but
> I'd have to modify a LOT of code to change the format, which I'm trying
> to avoid.  Is there any way to get PG to work with the mmddyyyy date
> format?  I know I can use the to_date function, but again, it would be
> MUCH simpler if PG worked with this format.
>
> -Thanks
>
> http://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-DATETIME-DATE-TABLE

[GENERAL] database corruption

2009-04-08 Thread Jeff Brenton
I've encountered some db corruption after restarting postgres on my
database server running 8.2.4.  I think that postgres did not shut down
cleanly.  Postgres started appropriately but crashed 45 minutes later.
I used pg_resetxlog after the crash to get the db to start again but it
appears that the database is not running properly now.  When users try
to access some of the tables in the db they get the error below;

 

ERROR:  index "testrun_log_pkey" contains unexpected zero page at block
3155408

HINT:  Please REINDEX it.}> 

Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Leif B. Kristensen
On Wednesday 8. April 2009, Ron Mayer wrote:
>Sam Mason wrote:
>> On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
>>> One more thing:  hey, did you hear?  I just got some advice from
>>> Tom Lane!
>>
>> Statistically speaking; he's the person most likely to answer you by
>
>Even so, this might be the #1 advantage of Postgres over Oracle (cost
>being #2).

I'll subscribe to that. Like almost everybody else on this list, I've 
got some excellent replies from Tom. And of course it's one of the 
major advantages of PostgreSQL. (The cost being a close second.)
-- 
Leif Biberg Kristensen | Registered Linux User #338009
Me And My Database: http://solumslekt.org/blog/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery after segmentation fault

2009-04-08 Thread Craig Ringer
Martijn van Oosterhout wrote:
> On Wed, Apr 08, 2009 at 05:24:08PM +0200, Ivan Sergio Borgonovo wrote:
>> How on Debian?
>> Debian does all its automagic stuff in init. I never learned how to
>> start pg manually.
> 
> What might be easier is turning on core dumps (ulimit -S -c unlimited)
> and then start postgres and see if it drops a core dump, which you can
> then feed to gdb.

Note that ulimit is inherited by child processes; it doesn't apply
system wide. You'll need to set the ulimit somewhere like the postgresql
init script, where the postmaster is a child of the shell in which the
ulimit command is run.

Also, because Debian strips its binaries by default, you might need to
rebuild the postgresql packages with debugging enabled and without
stripping to get a useful backtrace. Worth a try anyway, though.

Does Debian have a repository full of debug symbol packages like Ubuntu
does?

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is it possible for PostgreSQL to support mmddyyyy date format?

2009-04-08 Thread Bernard Barton
I'm currently porting an Informix based application to PostgreSQL 8.3. 
All of the dates processed by the application are in the mmddyyyy
format.  According to the date input table at the link below, the
mmddyyyy format is not listed.  The yyyymmdd format is supported, but
I'd have to modify a LOT of code to change the format, which I'm trying
to avoid.  Is there any way to get PG to work with the mmddyyyy date
format?  I know I can use the to_date function, but again, it would be
MUCH simpler if PG worked with this format.

-Thanks

http://www.postgresql.org/docs/current/static/datatype-datetime.html#DATATYPE-DATETIME-DATE-TABLE

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery after segmentation fault

2009-04-08 Thread Ivan Sergio Borgonovo
On Wed, 8 Apr 2009 23:59:43 +0200
Martijn van Oosterhout  wrote:

> What might be easier is turning on core dumps (ulimit -S -c
> unlimited) and then start postgres and see if it drops a core

thanks.

> > Is there a way to just kill recovery for one DB? Just don't
> > start it at all?
> 
> Unfortunately, the XLOG is shared between all databases on one
> cluster.

bwaaa. That's a bit of a pain.

I'm trying to understand this a bit better...
I think nothing terrible really happened since:
a) the DB that has the higher write load was actually the one that
caused the problem and I restored from a backup.
b) the other DBs have some writes too... but the software using them
doesn't have any idea about transactions so it is built with atomic
statement in mind... No operation I can think of was writing in more
than one table and I'd think most (all?) the operations were atomic
at the statement level.

So if I lost some writes in logs for the other DBs... that shouldn't
be a problem, right? I just lost some data... not coherency? right?

> > This is the same DB having problem with recreation of gin index
> > BTW... and I've the feeling that the problem is related to that
> > index once more... I was vacuuming full, I aborted...

> > I think the DB is trying to recreate the index but due to some
> > problem (can I say bug or is it too early?) it segfaults.

> Interesting, hope you can get a good backtrace.

I backed up all the data dir.
I'm currently transferring it to my dev box.
I've already the same DB... but it is on lenny.
And it never gave me a problem.
Version are slightly different anyway:

Version: 8.3.6-1 (working)
Version: 8.3.4-1~bpo40+1 (sometimes problematic[1])

8.4 is at the door... and the only choice I have to fix the problem
on that box is:
- upgrade to lenny
- build postgresql from source, that is going to be a maintenance
  pain.

Could anything related to vacuum and/or gin index have been fixed
between 8.3.4 and 8.3.6?

I think that if I stick with some rituals I can live with it:
avoid vacuum full when there is load, and restart the server before
doing it.


[1] slow vacuum full and gin index update

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Chris


> I still would like to understand why the feedback table cannot be backed
> up by itself.


Because there is no actual data in the feedback table.

It's being stored in the "feedback_active", "feedback_archived_7000" and 
your other tables instead.


--
Postgresql & php tutorials
http://www.designmagick.com/


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-08 Thread Erik Jones


On Apr 7, 2009, at 7:32 PM, Kenneth Tilton wrote:


> Scott Marlowe wrote:
>> On Tue, Apr 7, 2009 at 7:12 PM, Kenneth Tilton wrote:
>>> Scott Marlowe wrote:
>>>> You can use a different method if you need a table available to the
>>>> same session.  Create a schema based on the session id, and put your
>>>> temp tables there, only don't call them temp tables.  You'll either
>>>> need to make sure you always clean up your temp schema your session
>>>> created or come up with a daemon that comes along every hour or so and
>>>> kills off old schemas that aren't in use anymore.
>>> I am LMAO because Lisp (my server-side lang) does this to noobs, too:
>>> three (at least) ways to do everything. Well, if all things are equal
>>> dropping one schema and not kludging up mangled table names has a lot
>>> of appeal. Thx.
>> Schemas, search_path and views together can let you do some pretty
>> cool things in terms of integrating external postgresql based apps
>> with each other.
>
> Or between XHRs? It just occurred to me that if I go with a schema
> instead of temp tables then I do not need to worry about hanging on
> to a connection/pgsession, or even worry about routing a web session
> to the same process if all state is stored in pg under the session id.
>
> ken *coming up to speed slowly, going to look up search_path*

If you're using pg_dump for backups then you'll probably want at least
a standard prefix on your "temp" schemas so that you can easily have
pg_dump ignore them when doing backups with its -N flag.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ron Mayer
Robert Treat wrote:
> 
> You can be sure that discussion of this topic in this forum will soon be 
> visited by religious zealots, but the short answer is "nulls are bad, mmkay". 
>  
> A slightly longer answer would be that, as a general rule, attributes of your 
> relations that only apply to 1% of the rows are better represented as a one 

To fulfill your prophecy of zealotry, I've got a number of tables
with columns that are mostly null that I can't think of that nice a
way of refactoring.  I'd love ideas to improve the design, though.

One example's an address table. Most addresses have a few fields
that are typically present (building number, city, state, etc).
Others, as described in various government's address standards,
are fields that are typically absent.  For example in US addressing
rules, the "Urbanization Name" line:
http://www.usps.com/ncsc/addressstds/addressformats.htm
MRS MARIA SUAREZ  Name
URB LAS GLADIOLAS Urbanization name
150 CALLE A   House no. and st. name
SAN JUAN PR 00926-3232City, state, and ZIP+4
Similarly sparse columns in my address tables are,
titles, division/department Names and mailstop codes.
(described here: http://pe.usps.gov/text/pub28/pub28c3_011.htm)

While I realize I could stick in some string (empty string, or
some other magic string like "urbanization name doesn't apply to
this address") into a table, it sure is convenient to put nulls
in those columns.

I'm quite curious what you'd suggest a well-designed address table
would look like without nulls.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Thomas Kellerer

Robert Treat wrote on 08.04.2009 23:06:


http://www.databasedesign-resource.com/null-values-in-a-database.html


That is a very - hmm - strange article.

One of the proofs that nulls are bad is that "SELECT * FROM theTable" (theTable 
being empty) returns nothing, whereas SELECT COUNT(*) FROM theTable returns a 
single row with 0 (zero):


"This last example is even worse: The SELECT * returns 'No rows selected', but 
the SELECT COUNT(*) returns ONE row with the value 0!"


I stopped reading the article at that point...


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Sam Mason
On Wed, Apr 08, 2009 at 05:06:44PM -0400, Robert Treat wrote:
> A slightly longer answer would be that, as a general rule, attributes 
> of your relations that only apply to 1% of the rows are better
> represented as a one to N relationship using a second table.  

Have you tried to maintain a non-trivial schema that does this?  I'd be
interested to know how it works because I've only tried to work with
small examples that do this and it gets difficult to maintain very
quickly.

> For a longer answer, see
> http://www.databasedesign-resource.com/null-values-in-a-database.html
> or http://www.dbazine.com/ofinterest/oi-articles/pascal27

Both of those articles seem to be written by people who struggle with,
or have incomplete mental models of, the semantics of NULL values.  The
second also appears to be designed to sell a book so is of course going
to be presenting biased viewpoints.

How would outer joins work without some concept of a missing value?
Once you allow these missing values as the result of an outer join
you would be deliberately introducing limits if you couldn't also save
these values back into tables.  I would say that defaulting columns to
allowing NULLs was a mistake though.

I'd be happy without NULLs in databases if there was some other way
to handle missing values.  Parametric polymorphism and some sort of
option[1] or Maybe[2] type is what springs to mind for me.  NULL would
be represented as NONE or Nothing respectively and non-NULL values as
(SOME v) or (Just v).

-- 
  Sam  http://samason.me.uk/

 [1] http://www.standardml.org/Basis/option.html
 [2] http://www.haskell.org/onlinereport/maybe.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recovery after segmentation fault

2009-04-08 Thread Martijn van Oosterhout
On Wed, Apr 08, 2009 at 05:24:08PM +0200, Ivan Sergio Borgonovo wrote:
> How on Debian?
> Debian does all its automagic stuff in init. I never learned how to
> start pg manually.

What might be easier is turning on core dumps (ulimit -S -c unlimited)
and then start postgres and see if it drops a core dump, which you can
then feed to gdb.

All the binaries are in /usr/lib/postgresql/8.3/bin/ (Debian supports
parallel installs of multiple versions of postgres).

> What if I just don't care about recovery of *one* DB (that is maybe
> the culprit) and just see the server restart then just do a restore
> from a VERY recent backup?
> 
> Is there a way to just kill recovery for one DB? Just don't start it
> at all?

Unfortunately, the XLOG is shared between all databases on one cluster.

> This is the same DB having problem with recreation of gin index
> BTW... and I've the feeling that the problem is related to that
> index once more... I was vacuuming full, I aborted...
> 
> I think the DB is trying to recreate the index but due to some
> problem (can I say bug or is it too early?) it segfaults.

Interesting, hope you can get a good backtrace.

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Please line up in a tree and maintain the heap invariant while 
> boarding. Thank you for flying nlogn airlines.




Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Robert Treat
On Wednesday 08 April 2009 15:30:28 Ian Mayo wrote:
> On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
>
>  wrote:
> > Maybe I've been reading too much Pascal again lately, but if only 1% of
> > your rows are going to have data in this column, personally, I'd put it
> > in a separate table.
>
> thanks for that Robert - it does match my (completely groundless)
> first impression.
>
> In the nature of debate, would you mind passing on the pascal-related
> reasons why you'd put the data in another table?
>

You can be sure that discussion of this topic in this forum will soon be 
visited by religious zealots, but the short answer is "nulls are bad, mmkay".  
A slightly longer answer would be that, as a general rule, attributes of your 
relations that only apply to 1% of the rows are better represented as a one 
to N relationship using a second table. For a longer answer, see
http://www.databasedesign-resource.com/null-values-in-a-database.html
or http://www.dbazine.com/ofinterest/oi-articles/pascal27

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Chris Browne
ianm...@tesco.net (Ian Mayo) writes:
> On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
>  wrote:
>> Maybe I've been reading too much Pascal again lately, but if only 1% of your
>> rows are going to have data in this column, personally, I'd put it in a
>> separate table.
>
> thanks for that Robert - it does match my (completely groundless)
> first impression.
>
> In the nature of debate, would you mind passing on the pascal-related
> reasons why you'd put the data in another table?

Fabian Pascal's thesis is that you shouldn't have NULLs altogether, as
this leads to having to support the 3-or-more-valued logic of NULLs.
The "Third Manifesto" declines to support having NULLs in relations.

Hugh Darwen wrote the relevant paper explaining how to avoid them:
   
http://web.onetel.com/~hughdarwen/TheThirdManifesto/Missing-info-without-nulls.pdf

I tend to agree that it is reasonable to go to *some* effort to avoid
having NULL values.

Unfortunately, it seems to me that the elaboration in Darwen's paper
doesn't present a solution that is without points to criticize.
He does nicely describe how you may indicate various reasons why you
might have missing information.  This is both good and bad...

 - It is good because it provides unambiguous ways to determine why
   the data was missing.  NULL leaves that ambiguous.

 - It is Not So Good because it replaces the 3-value-logic of NULLs
   with an "as many values for logic as we have kinds of unknown
   values," which is more like a 5- or 6-value logic.

http://en.wikipedia.org/wiki/Fabian_Pascal

"Pascal is known for his sharp criticisms of the data management
industry, trade press, current state of higher education, Western
culture and alleged media bias. Pascal advocates strict adherence to
the principles of the relational model, and argues that departing from
the model in the name of pragmatism is responsible for serious data
management troubles. Criticism of Pascal's advocacy often centers
around his polemical style, which some perceive as overly
confrontational and unprofessional.

He has retired from the technological industry and now does political
commentary, specially on Middle East issues."
-- 
let name="cbbrowne" and tld="cbbrowne.com" in name ^ "@" ^ tld;;
http://linuxdatabases.info/info/linuxxian.html
"The only thing  better than TV with the  sound off is  Radio with the
sound off." -- Dave Moon

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] some external sql not working in psql

2009-04-08 Thread Tom Lane
Kashmir  writes:
> being a sql-lamer, i used some query builder help to build my query (which 
> served me quite well in the past for all my 'complicated' sqls), and was 
> suggested for f_rrd_id=444 to use something as:
> SELECT
> td_fetch1m_by_rrd_id.f_timestamp,
> td_fetch_by_rrd_id.f_ds,
> td_fetch_by_rrd_id.f_ds,
> td_fetch1m_by_rrd_id.f_ds,
> td_fetch1m_by_rrd_id.f_us
> FROM td_fetch_by_rrd_id
> RIGHT JOIN td_fetch1m_by_rrd_id ON 
> td_fetch_by_rrd_id.f_timestamp=td_fetch1m_by_rrd_id.f_timestamp
> WHERE td_fetch1m_by_rrd_id.f_rrd_id=444
> ORDER BY td_fetch1m_by_rrd_id.f_timestamp;

Seems like that should be a LEFT JOIN, if you're expecting there to be
missing values in td_fetch_by_rrd_id rather than the other.  The WHERE
and ORDER BY clauses don't look right either unless LEFT was meant.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread James B. Byrne

On Wed, April 8, 2009 16:06, Tom Lane wrote:
> "James B. Byrne"  writes:
>> I just need another set of eyes to see whatever it is that I am
>> overlooking.
>
> The RETURN is inside the EXCEPTION clause.
>
> You really need two BEGINs here, one for the outer function body and
> one for the exception block around the INSERT.
>
>   regards, tom lane
>

So, something like:

CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
  RETURNS TRIGGER AS $pg_fn$
-- ROW AFTER TRIGGER
-- trigger passes identifier_type and _description
-- received as ARGV[0] and ARGV[1]
  BEGIN
    BEGIN
      INSERT INTO identifiers(...
      ...
    EXCEPTION
      WHEN unique_violation THEN
        NULL; -- do nothing
    END;
    RETURN NULL;
  END;

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread hubert depesz lubaczewski
On Wed, Apr 08, 2009 at 04:13:58PM -0400, James B. Byrne wrote:
> Does this mean that the example given on pg. 798 of the manual is in
> error, or have I misread it?
>  BEGIN
>  EXCEPTION
>  WHEN UNIQUE_VIOLATION THEN
>  -- do nothing
>  END;

please notice that the BEGIN ... EXCEPTION ... END; is within the main
function's BEGIN ... END.
so it has to be like this:

create function x() returns ... as $$
declare
begin
   whatever;
   begin
     do something
   exception
   end;
   whatever;
end;
$$ language plpgsql;

notice 2 pairs of begin/end.

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread James B. Byrne

On Wed, April 8, 2009 16:05, hubert depesz lubaczewski wrote:
> On Wed, Apr 08, 2009 at 03:59:22PM -0400, James B. Byrne wrote:
>> EXCEPTION
>>   WHEN unique_violation THEN
>> -- NULL -- do nothing
>>
>> RETURN NULL; -- AFTER trigger results are ignored anyway
>>   END;
>
> exception is part of begin/exception/end; block
> so you will need one more begin and one more end;
>

Does this mean that the example given on pg. 798 of the manual is in
error, or have I misread it?


-- Insert or update the summary row with the new values.
<<insert_update>>
LOOP
 UPDATE sales_summary_bytime
 SET amount_sold = amount_sold + delta_amount_sold,
 units_sold = units_sold + delta_units_sold,
 amount_cost = amount_cost + delta_amount_cost
 WHERE time_key = delta_time_key;
 EXIT insert_update WHEN found;
 BEGIN
 INSERT INTO sales_summary_bytime (
  time_key,
  amount_sold,
  units_sold,
  amount_cost)
 VALUES (
  delta_time_key,
  delta_amount_sold,
  delta_units_sold,
  delta_amount_cost
);
 EXIT insert_update;
 EXCEPTION
 WHEN UNIQUE_VIOLATION THEN
 -- do nothing
 END;
END LOOP insert_update;
RETURN NULL;
END;

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread Tom Lane
"James B. Byrne"  writes:
> I just need another set of eyes to see whatever it is that I am
> overlooking.

The RETURN is inside the EXCEPTION clause.

You really need two BEGINs here, one for the outer function body and
one for the exception block around the INSERT.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread Guillaume Lelarge
On Wednesday, April 8, 2009 at 21:59:22, James B. Byrne wrote:
> I just need another set of eyes to see whatever it is that I am
> overlooking.
>
> This is the function:
>
>   CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
>   RETURNS TRIGGER AS $pg_fn$
> -- ROW AFTER TRIGGER
> -- trigger passes identifier_type and _description
> -- received as ARGV[0] and ARGV[1]
>   BEGIN
> INSERT INTO identifiers(
> entity_id,
> identifier_type,
> identifier_value,
> identifier_description,
> changed_at,
> changed_by,
> created_at,
> created_by,
> effective_from)
>   VALUES(
> NEW.id,
> TG_ARGV[0],
> NEW.entity_common_name,
> TG_ARGV[1],
> current_timestamp,
> 'trigger',
> current_timestamp,
> 'trigger',
> current_timestamp);
>
>   -- Assume the INSERT fails because of a unique key violation,
>   --   (identifier_type + identifier_value + entity_id)
>   --
>   -- This does not matter since we only need ensure that this
>   -- alias exists, so handle the exception and return:
> EXCEPTION
>   WHEN unique_violation THEN
> -- NULL -- do nothing
>
> RETURN NULL; -- AFTER trigger results are ignored anyway
>   END;
>   $pg_fn$ LANGUAGE plpgsql;
>
> This is the trigger:
>
>   CREATE TRIGGER hll_pg_tr_entity_identifier_akna
> AFTER INSERT OR UPDATE ON entities
> FOR EACH ROW EXECUTE PROCEDURE hll_pg_fn_ident_insert(
>   "AKNA", "Common Name auto-insert");
>
>
> I am getting this error:
>
>   PGError: ERROR:  control reached end of trigger procedure
> without RETURN
>   CONTEXT:  PL/pgSQL function "hll_pg_fn_ident_insert"
>
>   : INSERT INTO "entities" ("entity_legal_name",
>
> "entity_legal_name_key", "changed_by", "entity_common_name",
> "created_by", "lock_version", "changed_at",
> "entity_legal_form", "created_at") VALUES(E'My Entity Legal
> Name', E'myentitylegalname', E'not available', E'my entity',
> E'not available', 0, '2009-04-08 19:46:49', E'PERS',
> '2009-04-08 19:46:49.446650')
>
> Help??
>

The RETURN NULL is in the exception block. You need to put one before this 
block.


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] some external sql not working in psql

2009-04-08 Thread Kashmir

i'm in need of some psql advise,
believe its rather a trivial issue, but confusing for me...
(and hope this is the correct list for this?)

facing following issue:
got 2 tables like:
CREATE TABLE td_fetch_by_rrd_id (
  f_rrd_id numeric NOT NULL,
  f_timestamp numeric NOT NULL,
  f_ds numeric,
  f_us numeric,
  CONSTRAINT td_fetch_by_rrd_id_pkey PRIMARY KEY (f_rrd_id, f_timestamp)
)
and:
CREATE TABLE td_fetch1m_by_rrd_id (
  f_rrd_id numeric NOT NULL,
  f_timestamp numeric NOT NULL,
  f_ds numeric,
  f_us numeric,
  CONSTRAINT td_fetch1m_by_rrd_id_pkey PRIMARY KEY (f_rrd_id, f_timestamp)
)

only difference is:
first table stores data per 'f_rrd_id' every 5min, and the second table every 
single minute.
I want to run a query that would return for the same 'f_rrd_id' all values from 
both tables sorted by f_timestamp, of course a set would only have values from 
the 5m table if the timestamp was present there too (every 5th set only)

being a sql-lamer, i used some query builder help to build my query (which 
served me quite well in the past for all my 'complicated' sqls), and was 
suggested for f_rrd_id=444 to use something as:
SELECT
td_fetch1m_by_rrd_id.f_timestamp,
td_fetch_by_rrd_id.f_ds,
td_fetch_by_rrd_id.f_ds,
td_fetch1m_by_rrd_id.f_ds,
td_fetch1m_by_rrd_id.f_us
FROM td_fetch_by_rrd_id
RIGHT JOIN td_fetch1m_by_rrd_id ON 
td_fetch_by_rrd_id.f_timestamp=td_fetch1m_by_rrd_id.f_timestamp
WHERE td_fetch1m_by_rrd_id.f_rrd_id=444
ORDER BY td_fetch1m_by_rrd_id.f_timestamp;

and this works quite fine and as expected in the source env (some gui-sqler). 
but when i take this into psql, i get totally messed up results, the values 
just dont make any sense...

assume it is a simple 'shoot the monkey messing with sql' type of an issue, and 
was hoping real sqlers could help out quickly?
:)

an example set in the 1m table would look like:
444;20090408135500;15049;3898
444;20090408135600;11760;1023
444;20090408135700;21956;13913
444;20090408135800;14313;3427
444;20090408135900;12876;1007
444;20090408140000;13307;2101
444;20090408140100;25905;5611

and the other table would only have every 5th ts matching, 
with minor diffs in the f_us/f_ds columns, e.g. like:
444;20090408135500;15054;3958
444;20090408140000;13322;2131

many tia!
-k



  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ron Mayer
Sam Mason wrote:
> On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
>> One more thing:  hey, did you hear?  I just got some advice from Tom Lane!
> 
> Statistically speaking; he's the person most likely to answer you by

Even so, this might be the #1 advantage of Postgres over Oracle (cost
being #2).

Unless you're one of their ten biggest customers, I imagine it'd take
quite some time to get similar support from the core team's counterparts
at the other big databases.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No return from trigger function

2009-04-08 Thread hubert depesz lubaczewski
On Wed, Apr 08, 2009 at 03:59:22PM -0400, James B. Byrne wrote:
> EXCEPTION
>   WHEN unique_violation THEN
> -- NULL -- do nothing
> 
> RETURN NULL; -- AFTER trigger results are ignored anyway
>   END;

exception is part of begin/exception/end; block
so you will need one more begin and one more end;

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] No return from trigger function

2009-04-08 Thread James B. Byrne

I just need another set of eyes to see whatever it is that I am
overlooking.

This is the function:

  CREATE OR REPLACE FUNCTION hll_pg_fn_ident_insert()
  RETURNS TRIGGER AS $pg_fn$
-- ROW AFTER TRIGGER
-- trigger passes identifier_type and _description
-- received as ARGV[0] and ARGV[1]
  BEGIN
INSERT INTO identifiers(
entity_id,
identifier_type,
identifier_value,
identifier_description,
changed_at,
changed_by,
created_at,
created_by,
effective_from)
  VALUES(
NEW.id,
TG_ARGV[0],
NEW.entity_common_name,
TG_ARGV[1],
current_timestamp,
'trigger',
current_timestamp,
'trigger',
current_timestamp);

  -- Assume the INSERT fails because of a unique key violation,
  --   (identifier_type + identifier_value + entity_id)
  --
  -- This does not matter since we only need ensure that this
  -- alias exists, so handle the exception and return:
EXCEPTION
  WHEN unique_violation THEN
-- NULL -- do nothing

RETURN NULL; -- AFTER trigger results are ignored anyway
  END;
  $pg_fn$ LANGUAGE plpgsql;

This is the trigger:

  CREATE TRIGGER hll_pg_tr_entity_identifier_akna
AFTER INSERT OR UPDATE ON entities
FOR EACH ROW EXECUTE PROCEDURE hll_pg_fn_ident_insert(
  "AKNA", "Common Name auto-insert");


I am getting this error:

  PGError: ERROR:  control reached end of trigger procedure
without RETURN
  CONTEXT:  PL/pgSQL function "hll_pg_fn_ident_insert"
  : INSERT INTO "entities" ("entity_legal_name",
"entity_legal_name_key", "changed_by", "entity_common_name",
"created_by", "lock_version", "changed_at",
"entity_legal_form", "created_at") VALUES(E'My Entity Legal
Name', E'myentitylegalname', E'not available', E'my entity',
E'not available', 0, '2009-04-08 19:46:49', E'PERS',
'2009-04-08 19:46:49.446650')

Help??

-- 
***  E-Mail is NOT a SECURE channel  ***
James B. Byrnemailto:byrn...@harte-lyne.ca
Harte & Lyne Limited  http://www.harte-lyne.ca
9 Brockley Drive  vox: +1 905 561 1241
Hamilton, Ontario fax: +1 905 561 0757
Canada  L8E 3C3


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ian Mayo
On Wed, Apr 8, 2009 at 8:13 PM, Robert Treat
 wrote:
> Maybe I've been reading too much Pascal again lately, but if only 1% of your
> rows are going to have data in this column, personally, I'd put it in a
> separate table.

thanks for that Robert - it does match my (completely groundless)
first impression.

In the nature of debate, would you mind passing on the pascal-related
reasons why you'd put the data in another table?

cheers,
Ian

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Steve Crawford

Radcon Entec wrote:


> You are, of course, correct.  "select count(*) from only feedback"
> returns 0.  I have never used (or even seen) PostgreSQL rules before.
>
> When I run the query "select * from feedback where charge = 23017", I
> get 538 records.  Adding the word "only" gives me zero records, as
> expected, and querying the feedback_active table gets me my 538
> records.  But the feedback table only has the INSERT rules you quoted
> above.  I clicked on the feedback table's Rules leaf and selected "New
> Rule", and saw that I can create SELECT, INSERT, UPDATE and DELETE
> rules.  But even though I don't have a SELECT rule explicitly defined,
> PostgreSQL appears to be smart enough to retrieve data from the
> correct actual table when I think I'm selecting from the feedback
> table.  Is that standard behavior?
>
> Of course, my next step will be to read the documentation.
 
You won't find the cause of your surprise reading up on rules. Read up 
on inheritance, instead. It can do a lot, but a typical simple use is 
table-partitioning as appears to be the case in your situation.


You start with the main (parent) table, say "events" then create a bunch 
of child tables that inherit events, say events_jan, events_feb, 
events_mar...


You never put actual data in "events" but create a rule or trigger that 
looks at the month information and puts January events in events_jan, 
February in events_feb and so on.


Select * from events is more-or-less equivalent to:
select * from events_jan union
select * from events_feb union...

Inheritance goes far beyond the simple case shown above. There's a lot 
of nifty stuff you can do and a number of things that can bite you. But, 
as you say, that's where the documentation comes in.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Robert Treat
On Wednesday 08 April 2009 11:56:35 Ian Mayo wrote:
> Cheers Tom,
>
> On Wed, Apr 8, 2009 at 4:47 PM, Tom Lane  wrote:
> > Ian Mayo  writes:
> >> [snip]
> >
> > No.  You'd basically be manually reinventing the TOAST mechanism;
> > or the large object mechanism, if you choose to store the blob
> > as a large object rather than a plain bytea field.  Either way,
> > it won't physically be in the same table as the main row data.
>
> fine, that keeps the design simpler
>

Maybe I've been reading too much Pascal again lately, but if only 1% of your 
rows are going to have data in this column, personally, I'd put it in a 
separate table. 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Greg Smith

On Wed, 8 Apr 2009, Jennifer Trey wrote:


> shared_buffer = 1024MB # Kept it


As mentioned a couple of times here, this is a really large setting for 
Windows.  Something like 256MB would work better, and you might even find 
some people making a case for 64MB or less on Windows.  I don't really 
know for sure myself.



> Is the effective cache only the one for the OS? not for them combined?


It is sizing the combination of the shared_buffers *plus* what you expect 
in the OS buffer cache.  I normally look at the size of the OS buffer 
cache before the PostgreSQL server is started as a rough estimate here.



> Since I use Java, prepared statements are quite natural.


Prepared statements are not prepared transactions.  It's unlikely you've 
got any code that uses PREPARE TRANSACTION, so you shouldn't need to 
increase max_prepared_transactions.


All three of the above are not really clear in the tuning guide on the 
wiki, I'll do an update to improve those sections when I get a minute.


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Scott Marlowe
On Wed, Apr 8, 2009 at 8:01 AM, Jennifer Trey  wrote:

> I would like to further tune the tuning wizards recommendations though. I
> think it put itself on the lower scale.

OK, instead of blindly guessing at better values, and making a lot of
concurrent changes, you need to set up some kind of simple yet
realistic benchmark for your database.  It doesn't have to be perfect,
but it should realistically reflect the number of clients you'll have
connecting at once and the types of queries they're likely to run.
Write a simple pgbench script and use it to test your changes.
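
Something along these lines, for example (database name and script file
are placeholders):

    pgbench -i -s 10 testdb                        # initialize, scale 10
    pgbench -c 50 -t 200 testdb                    # 50 clients, 200 xacts each
    pgbench -c 50 -t 200 -f myqueries.sql testdb   # replay your own queries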

> I have 8GB memory, Intel Quad Core 2.4Ghz with 8MB L2 cache. I am running
> Windows Web Server 2008 x64 and will be running a Java (64 bit version)
> application.

Note that the very first thing you could do to performance tune your
server would be to run it on something other than windows.  This is
not to bash windows, it's a simple fact of postgresql's architecture
not being a great match for windows under heavy load.

> I want to give the java app room for working on 2-3GB. The operating system
> is currently consuming around 1GB but lets give it a little more room. Lets
> give it a total of 2GB.
>
> That leaves 8-3-2 = 3 GB for PostgreSQL alone. Sounds good?

Note that PostgreSQL relies on the OS caching as much as its own, and
this tends to be even more true in windows environments.

> Here is my config file :
>
> max_connections = 100
>
> What does this do? Should I be looking at this as max simultaneous queries?
> is 100 really enough? I think I want to max this more.

Max connections is the maximum number of clients that can connect at
the same time.  Each connection uses up a few megs of memory and can
start a query independent of other connections.  Generally a hundred
or so is a reasonable place to start.  But capacity planning will tell
you how many you really need.  If you find yourself going over 100 by
much, start looking at connection pooling.

> I am looking for a worst scenario around like 50-100 simultaneous user
> clicks (per second?). But the querying might be around like 200 queries per
> second, not really, but I want to be prepared. :)

OK, the real issue here will be whether or not you have persistent
pooled connections.  Creating PostgreSQL connections is expensive,
especially so on Windows.  Definitely look at pooling

> I would appreciate if could have a discussion on these topics. On whats
> important and whats not.
>
> Here is some other settings I am thinking about :
>
> effective_cache_size = 449697
>
> is this kilo bytes ? Is this a good value?

In 8.3 you can put the actual unit after, so 400MB would be a nicer
way to put that number in.  Effective cache size just tells the
planner about how much cache there is in the OS, and postgresql.  It's
not a very fine grained control, so just guestimate it at say 3000MB
or something for now.

> maintenance_work_mem = 16384

Again, it's a good idea to put units after in 8.3.  Since
maintenance_work_mem is only used by things like vacuum, you can set
it pretty high (256MB is common) and not worry too much.

> work_mem = 1024  # I think this is kb. Way too low, right? What is a better
> value?

Again, throw a unit on the end.  default is kb.  1M is fine for now.
Again, implement some kind of benchmark, increase it when it provably
makes a difference for most of your queries.  If there's a lone query
that can use a lot more, then set work_mem higher in that session or
for a special user so it's not higher for everybody.
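
For instance (the user name and the 64MB figure are made up):

    SET work_mem = '64MB';   -- only for this session
    -- ... run the one big sorting/reporting query ...
    RESET work_mem;

    -- or persistently, for a dedicated reporting user:
    ALTER USER report_user SET work_mem = '64MB';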

work_mem is PER SORT type op / PER SESSION.  So, if you have 100 users
doing 2 sorts each you can theoretically use up 100x2xwork_mem memory.
 A machine that's running fine one moment can collapse under load as
the number of processes increase and memory gets allocated out of
control.

That said, on my servers, with 100 to 200 connections, it's set to 8
meg.  That machine has 32Gig of ram, so 800 to 1600 Meg of ram
theoretically getting used won't cause some kind of swap storm.

> shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too low.
> Right? I've got 3GB to work with!

But, you're on Windows, and the shared_buffer implementation there
doesn't scale as well as it does on linux or other flavors of unix.
So, while setting it a bit higher is good, don't set it any higher
than it needs to be to hold the current working set of all queries,
which is usually in the hundreds of megabytes, not the gigabyte range.
 Again, benchmark and test, but a good starting point is likely in the
128MB to 512MB range for windows.

> Please give your thoughts. I was also wondering about the Vacuum, force
> reindex and stuff. Are those things good to run once in a while? Force
> sounds a little brutal though!

It's much better to monitor your db for such things and vacuum full /
reindex only when / if needed, and do what you can to head those
things off.

> Something else I should consider?

If your dataset can fit in memory, consider lowering random_page_cost.

Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Radcon Entec


From: Steve Crawford 
To: Radcon Entec 
Cc: "pgsql-general@postgresql.org" 
Sent: Wednesday, April 8, 2009 1:15:55 PM
Subject: Re: [GENERAL] Table has 22 million records, but backup doesn't see them

Radcon Entec wrote:
> 
>  Here is the text that results from dumping my 22-million-row feedback table:
>  ...
> 
> CREATE RULE feedback_partition_active AS ON INSERT TO feedback WHERE 
> (new.charge > 7000) DO INSTEAD INSERT INTO feedback_active (charge, 
> elapsed_time, tag_type, stack, tag_value, heating, status) VALUES 
> (new.charge, new.elapsed_time, new.tag_type, new.stack, new.tag_value, 
> new.heating, new.status);
> 
> ...
> 
> CREATE RULE feedback_partition_archived AS ON INSERT TO feedback WHERE 
> (new.charge <= 7000) DO INSTEAD INSERT INTO feedback_archived_7000 (charge, 
> elapsed_time, tag_type, stack, tag_value, heating, status) VALUES 
> (new.charge, new.elapsed_time, new.tag_type, new.stack, new.tag_value, 
> new.heating, new.status);
> 

Are you certain that feedback actually contains any data or is it just the 
parent table and the real data is in the child tables? What is the output of 
"select count(*) from only feedback;" ?

Cheers,
Steve


Steve,

You are, of course, correct.  "select count(*) from only feedback" returns 0.  
I have never used (or even seen) PostgreSQL rules before.  

When I run the query "select * from feedback where charge = 23017", I get 538 
records.  Adding the word "only" gives me zero records, as expected, and 
querying the feedback_active table gets me my 538 records.  But the feedback 
table only has the INSERT rules you quoted above.  I clicked on the feedback 
table's Rules leaf and selected "New Rule", and saw that I can create SELECT, 
INSERT, UPDATE and DELETE rules.  But even though I don't have a SELECT rule 
explicitly defined, PostgreSQL appears to be smart enough to retrieve data from 
the correct actual table when I think I'm selecting from the feedback table.  
Is that standard behavior?  
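
In other words, with the tables above:

    SELECT count(*) FROM feedback;        -- parent plus children
    SELECT count(*) FROM ONLY feedback;   -- the parent alone: 0 here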

Of course, my next step will be to read the documentation.

Thank you yet again!

RobR


  

Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ivan Sergio Borgonovo
On Wed, 8 Apr 2009 17:39:02 +0100
Sam Mason  wrote:

> On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
> > One more thing:  hey, did you hear?  I just got some advice from
> > Tom Lane!
> 
> Statistically speaking, he's the person most likely to answer you
> by quite a long way.  Out of the ~24k emails going back to Oct
> 2007 I've got from pgsql-general the most common people who wrote
> them are:

>        who                num mails   of total
>   Tom Lane                   1,935      8.0%
>   Scott Marlowe              1,077      4.5%
>   Alvaro Herrera               521      2.2%
>   Joshua Drake                 468      1.9%
>   Richard Huxton               432      1.8%
>   Craig Ringer                 338      1.4%
>   Ivan Sergio Borgonovo        314      1.3%

I just wrote privately to Tom that I'm ashamed I ask so much and
answer so few.
But well I'm an exception ;) I'm the top of non-contributors. 

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Steve Crawford

Radcon Entec wrote:


 
Here is the text that results from dumping my 22-million-row feedback 
table:
 
...


CREATE RULE feedback_partition_active AS ON INSERT TO feedback WHERE 
(new.charge > 7000) DO INSTEAD INSERT INTO feedback_active (charge, 
elapsed_time, tag_type, stack, tag_value, heating, status) VALUES 
(new.charge, new.elapsed_time, new.tag_type, new.stack, new.tag_value, 
new.heating, new.status);


...

CREATE RULE feedback_partition_archived AS ON INSERT TO feedback WHERE 
(new.charge <= 7000) DO INSTEAD INSERT INTO feedback_archived_7000 
(charge, elapsed_time, tag_type, stack, tag_value, heating, status) 
VALUES (new.charge, new.elapsed_time, new.tag_type, new.stack, 
new.tag_value, new.heating, new.status);




Are you certain that feedback actually contains any data or is it just 
the parent table and the real data is in the child tables? What is the 
output of "select count(*) from only feedback;" ?


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Radcon Entec
By the way, a full backup and restore using PGAdmin and accepting all default 
settings worked successfully, including all 22 million feedback records.

I still would like to understand why the feedback table cannot be backed up by 
itself.  The technique of backing up and restoring only selected tables will be 
frequently useful for many customers, and I would like to know how to get 
around this problem if it shows up again.

RobR



  

Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread John R Pierce

Jennifer Trey wrote:

Scott, thank you.

I think I might have misunderstood the effective cache size. Its 
measured in 8kB blocks. So the old number 449697 equals 3.5 GB, which 
is quite much. Should I lower this? I had plans to use 2.75GB max. Can 
I put 2.75GB there? Should I leave it?


effective_cache_size is an estimate of how much disk data the OS is 
likely to have cached in memory.   postgres uses this to guess whether 
or not recently read data is likely to be 'fast' (in the system cache) 
or 'slow' (on the physical disk, hence requiring disk IO to read).   
This value is used in some fairly abstract heuristics; it does NOT need 
to be that accurate, it's just a ballpark estimate.


you should run your system under your expected workload, then view the 
actual working cache size in Task Manager ("System Cache" on the 
Performance tab of the task manager in XP, I dunno about 2008 
Server)...  Now some of that cache probably belongs to other processes 
than postgres, so round down a bit.   On my desktop system at the 
moment, I'm showing 1.3GB




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Radcon Entec
From: Steve Crawford scrawf...@pinpointresearch.com
 
And what was the result? Zero-size file? If not, what was in the file?


Here is the text that results from dumping my 22-million-row feedback table:

--
-- PostgreSQL database dump
--
-- Started on 2009-04-08 10:10:49 Eastern Daylight Time
SET client_encoding = 'UTF8';
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- TOC entry 1417 (class 1259 OID 7667616)
-- Dependencies: 5
-- Name: feedback; Type: TABLE; Schema: public; Owner: caps; Tablespace: 
--
CREATE TABLE feedback (
feedback_key bigserial NOT NULL,
charge integer,
elapsed_time smallint,
tag_type character varying(16),
stack smallint,
tag_value real,
heating smallint,
status smallint
);
 
ALTER TABLE public.feedback OWNER TO caps;
--
-- TOC entry 1783 (class 0 OID 0)
-- Dependencies: 1416
-- Name: feedback_feedback_key_seq; Type: SEQUENCE SET; Schema: public; Owner: 
caps
--
SELECT pg_catalog.setval(pg_catalog.pg_get_serial_sequence('feedback', 
'feedback_key'), 22326846, true);
 
--
-- TOC entry 1780 (class 0 OID 7667616)
-- Dependencies: 1417
-- Data for Name: feedback; Type: TABLE DATA; Schema: public; Owner: caps
--
COPY feedback (feedback_key, charge, elapsed_time, tag_type, stack, tag_value, 
heating, status) FROM stdin;
\.
 
--
-- TOC entry 1779 (class 2606 OID 7667620)
-- Dependencies: 1417 1417
-- Name: feedback_pkey; Type: CONSTRAINT; Schema: public; Owner: caps; 
Tablespace: 
--
ALTER TABLE ONLY feedback
ADD CONSTRAINT feedback_pkey PRIMARY KEY (feedback_key);
 
--
-- TOC entry 1777 (class 1259 OID 7829003)
-- Dependencies: 1417
-- Name: feedback_charge_idx; Type: INDEX; Schema: public; Owner: caps; 
Tablespace: 
--
CREATE INDEX feedback_charge_idx ON feedback USING btree (charge);
 
--
-- TOC entry 1514 (class 2618 OID 7667631)
-- Dependencies: 1417 1417 1418 1417 1417 1417 1417 1417 1417 1417 1417
-- Name: feedback_partition_active; Type: RULE; Schema: public; Owner: caps
--
CREATE RULE feedback_partition_active AS ON INSERT TO feedback WHERE 
(new.charge > 7000) DO INSTEAD INSERT INTO feedback_active (charge, 
elapsed_time, tag_type, stack, tag_value, heating, status) VALUES (new.charge, 
new.elapsed_time, new.tag_type, new.stack, new.tag_value, new.heating, 
new.status);
 
--
-- TOC entry 1515 (class 2618 OID 7667632)
-- Dependencies: 1417 1417 1419 1417 1417 1417 1417 1417 1417 1417 1417
-- Name: feedback_partition_archived; Type: RULE; Schema: public; Owner: caps
--
CREATE RULE feedback_partition_archived AS ON INSERT TO feedback WHERE 
(new.charge <= 7000) DO INSTEAD INSERT INTO feedback_archived_7000 (charge, 
elapsed_time, tag_type, stack, tag_value, heating, status) VALUES (new.charge, 
new.elapsed_time, new.tag_type, new.stack, new.tag_value, new.heating, 
new.status);
 
--
-- TOC entry 1782 (class 0 OID 0)
-- Dependencies: 1417
-- Name: feedback; Type: ACL; Schema: public; Owner: caps
--
REVOKE ALL ON TABLE feedback FROM PUBLIC;
REVOKE ALL ON TABLE feedback FROM caps;
GRANT ALL ON TABLE feedback TO caps;
GRANT ALL ON TABLE feedback TO anneal_operator;
GRANT ALL ON TABLE feedback TO anneal_supervisor;
GRANT ALL ON TABLE feedback TO anneal_administrator;
GRANT SELECT ON TABLE feedback TO anneal_metallurgist;
GRANT SELECT ON TABLE feedback TO anneal_guest;
 
--
-- TOC entry 1784 (class 0 OID 0)
-- Dependencies: 1416
-- Name: feedback_feedback_key_seq; Type: ACL; Schema: public; Owner: caps
--
REVOKE ALL ON TABLE feedback_feedback_key_seq FROM PUBLIC;
REVOKE ALL ON TABLE feedback_feedback_key_seq FROM caps;
GRANT ALL ON TABLE feedback_feedback_key_seq TO caps;
GRANT ALL ON TABLE feedback_feedback_key_seq TO PUBLIC;
 
-- Completed on 2009-04-08 10:10:52 Eastern Daylight Time
--
-- PostgreSQL database dump complete
--
 
I ran a DOS batch file.  Here's the first few lines:
 
"\program files\postgresql\8.1\bin\pg_dump" -f schema.sql -v -s -h 
159.138.80.150 -U postgres -X disable-triggers Anneal > backup_in_pieces.log
"\program files\postgresql\8.1\bin\pg_dump" -f adhoc_query.sql -v -a -t 
adhoc_query -h 159.138.80.150 -U postgres -X disable-triggers Anneal 
"\program files\postgresql\8.1\bin\pg_dump" -f base_cycle_compatibility.sql -v 
-a -t base_cycle_compatibility -h 159.138.80.150 -U postgres -X 
disable-triggers Anneal 
"\program files\postgresql\8.1\bin\pg_dump" -f base_type.sql -v -a -t base_type 
-h 159.138.80.150 -U postgres -X disable-triggers Anneal 

Thanks again for your help!
 
RobR
>  When I ran a batch file dumping the schema and about forty tables into 
>separate files, no problems were encountered.  All of the resulting files have 
>reasonable sizes.
>  
What method did you use for that process?



  

Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
Well, no... I don't know that. But in a worst-case scenario, where everything
is using its max, there won't be 3.5 GB for the OS. But for the OS + Postgres
(combined) there will be 2.5 + 2.75. It seems there is no great danger in the
effective cache setting, though, but a good value would be nice :)
Is the effective cache only the one for the OS? Not for them combined?

Sincerely / Jen


On Wed, Apr 8, 2009 at 7:44 PM, David Wilson wrote:

> On Wed, Apr 8, 2009 at 12:24 PM, Jennifer Trey 
> wrote:
>
> > I think I might have misunderstood the effective cache size. Its measured
> in
> > 8kB blocks. So the old number 449697 equals 3.5 GB, which is quite much.
> > Should I lower this? I had plans to use 2.75GB max. Can I put 2.75GB
> there?
> > Should I leave it?
>
> The effective cache size setting is merely letting postgres know how
> much caching it can expect the OS to be doing. If you know that the OS
> isn't going to have more than 2.75 GB available for caching DB files,
> then by all means reduce it. The setting by itself doesn't affect
> postgres memory usage at all, though.
>
> --
> - David T. Wilson
> david.t.wil...@gmail.com
>


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread David Wilson
On Wed, Apr 8, 2009 at 12:24 PM, Jennifer Trey  wrote:

> I think I might have misunderstood the effective cache size. Its measured in
> 8kB blocks. So the old number 449697 equals 3.5 GB, which is quite much.
> Should I lower this? I had plans to use 2.75GB max. Can I put 2.75GB there?
> Should I leave it?

The effective cache size setting is merely letting postgres know how
much caching it can expect the OS to be doing. If you know that the OS
isn't going to have more than 2.75 GB available for caching DB files,
then by all means reduce it. The setting by itself doesn't affect
postgres memory usage at all, though.

-- 
- David T. Wilson
david.t.wil...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Sam Mason
On Wed, Apr 08, 2009 at 04:56:35PM +0100, Ian Mayo wrote:
> One more thing:  hey, did you hear?  I just got some advice from Tom Lane!

Statistically speaking, he's the person most likely to answer you by
quite a long way.  Out of the ~24k emails going back to Oct 2007 I've
got from pgsql-general the most common people who wrote them are:

       who                num mails   of total
  Tom Lane                   1,935      8.0%
  Scott Marlowe              1,077      4.5%
  Alvaro Herrera               521      2.2%
  Joshua Drake                 468      1.9%
  Richard Huxton               432      1.8%
  Craig Ringer                 338      1.4%
  Ivan Sergio Borgonovo        314      1.3%
  Sam Mason                    310      1.3%
  Raymond O'Donnell            270      1.1%
  Martijn van Oosterhout       264      1.1%
  Greg Smith                   252      1.0%

The remaining ~2000 distinct addresses were less than one percent each.

I didn't expect to see myself there; ho hum, maybe I should spend less
time on email!  It's also somewhat biased as I only have archives as
long as I've posted.  I just tried pulling numbers from markmail.org and
get somewhat different results.  Here it knows about 161k messages and
the top twenty posters are:

  Tom Lane                  14,147      8.8%
  Bruce Momjian              3,400      2.1%
  Scott Marlowe              3,112      1.9%
  Richard Huxton             2,738      1.7%
  Martijn van Oosterhout     2,480      1.5%
  Alvaro Herrera             1,853      1.2%
  Stephan Szabo              1,783      1.1%
  Joshua D. Drake            1,720      1.1%
  Peter Eisentraut           1,488      0.9%
  Michael Fuhr               1,328      0.8%
  Bruno Wolff III            1,201      0.7%
  Andrew Sullivan              985      0.6%
  Doug McNaught                773      0.5%
  Jan Wieck                    764      0.5%
  Ron Johnson                  764      0.5%
  Jim C. Nasby                 745      0.5%
  Magnus Hagander              665      0.4%
  Marc G. Fournier             630      0.4%
  Dennis Gearon                602      0.4%
  The Hermit Hacker            601      0.4%

I've not been able to merge people where they use different email
address like with my own archive, but manually fiddled Scott Marlowe as
he appeared as "scott.marlowe" as well.  I'm glad to see I drop off the
bottom now!

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
Scott, thank you.

I think I might have misunderstood the effective cache size. Its measured in
8kB blocks. So the old number 449697 equals 3.5 GB, which is quite much.
Should I lower this? I had plans to use 2.75GB max. Can I put 2.75GB there?
Should I leave it?


Also, Greg. Since I use Java, prepared statements are quite natural. And I
read this part on the guide which I understand you are part of :

http://www.postgresql.org/docs/current/static/runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS

Should I change this value? Not sure... :S

Worried about the locks... what's your thought on this? Should I just leave it
alone?


Sincerely / Jennifer


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Scott Mead
On Wed, Apr 8, 2009 at 12:05 PM, Jennifer Trey wrote:

# max_connections = 150 # A compromise :)
>
> Scott, you mentioned :
>
> You can also use the pg_stat_all_indexes table to look at index scans
> vs. tuples being read, this can sometimes hint at index 'bloat'. I
> would also recommend pgstattuple, which has a pgstatindex function
> for looking at index fragmentation.
>
> From where can I see these stats? Is there any graphic tool?


   From pgAdmin, you could:

   select * from pg_stat_all_indexes;

  You will see this system view in pgAdmin by:

database +
Catalogs +
  PostgreSQL (pg_catalog) +
Views +

You should be able to see the structure there.
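
A slightly more targeted query might look like this, for example:

    SELECT schemaname, relname, indexrelname,
           idx_scan, idx_tup_read, idx_tup_fetch
    FROM pg_stat_all_indexes
    WHERE schemaname NOT IN ('pg_catalog', 'pg_toast')
    ORDER BY idx_scan;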


--Scott


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Steve Crawford
Please remember to reply-all so others can help or see the solution as 
well (also, the convention on this list is to bottom-post, not top-post).


Radcon Entec wrote:

Steve,
 
Here's the exact command and output, taken from the DOS command window:
 
C:\Documents and Settings\entec>"\program files\postgresql\8.1\bin\pg_dump" -f feedback.sql -v -a -t feedback -h 159.138.80.150 -U postgres Anneal
pg_dump: reading schemas
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined conversions
pg_dump: reading user-defined tables
pg_dump: reading table inheritance information
pg_dump: reading rewrite rules
pg_dump: reading type casts
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "feedback"
pg_dump: finding default expressions of table "feedback"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading indexes for table "feedback"
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading dependency data
pg_dump: saving encoding
pg_dump: executing SEQUENCE SET feedback_feedback_key_seq
pg_dump: restoring data for table "feedback"
pg_dump: dumping contents of table feedback

And what was the result? Zero-size file? If not, what was in the file?
 
When I ran a batch file dumping the schema and about forty tables into 
separate files, no problems were encountered.  All of the resulting 
files have reasonable sizes.
 

What method did you use for that process?
I haven't tried psql yet, nor pg_dumpall.  I've got a full backup 
running now.
Via what utility if you aren't using pg_dumpall? Also, running a full 
backup won't impede testing your connection with psql.
 
I am sure that I'm looking at a table.
 
I'm not sure where to find the server logs (which gives you some idea 
of my knowledge of PostgreSQL administration).  I'll go check them now.
 
RobR



*From:* Steve Crawford 
*To:* Radcon Entec 
*Cc:* pgsql-general@postgresql.org
*Sent:* Wednesday, April 8, 2009 11:25:20 AM
*Subject:* Re: [GENERAL] Table has 22 million records, but backup 
doesn't see them


Radcon Entec wrote:

Greetings!
 
I'm running PostgreSQL 8.1 under Windows XP, looking at a database 
hosted on a machine running PostgreSQL under Windows Server 2003.
 
The database has a table with three simple columns and 22 million 
rows.  I am trying to back up that table by itself.  However, pg_dump 
finishes almost instantly, obviously not backing up any data from the 
table.  I've tried it from the DOS command line with and without the 
-a (data only) option, and from inside PGAdmin.  Can anyone suggest 
what might cause this behavior?
 
What is the exact command and what is the output (I'll be surprised if 
there is no output at all to either stdout or stderr)? Does pg_dumpall 
run fine from the same machine? How about psql? Are you sure you are 
hitting a base-table and not a view? Do the server logs show anything 
interesting?


Cheers,
Steve





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SOLVED: tsearch2 dictionary for statute cites

2009-04-08 Thread Kevin Grittner
Oleg Bartunov  wrote: 
>> I probably just need to have that "Aha!" moment, slap my forehead, and
>> move on; but I'm not quite understanding something.  The answer to
>> this question could be it: Can I use a different set of dictionaries
>> for creating the tsquery than I did for the tsvector?
> 
> Sure ! For example, you want to index all words, so your dictionaries
> doesn't have stop word lists, but forbid people to search common words.
> Or, if you want to search 'to be or not to be' you have to use 
> dictionaries without stop words.
 
I found a creative solution which I think meets my needs.  I'm posting
both to help out anyone with similar issues who finds the thread, and
in case someone sees an obvious defect.  By creating one function to
generate the "legal" tsvector (which recognizes statute cites) and
another function to generate the search values, with casts from text
to the ts objects, I can get more targeted results than the parser and
dictionary changes alone could give me.
 
I'm still working on the dictionaries and the query function, but the
vector function currently looks like the attached.
 
Thanks to Oleg and Tom for assistance; while neither suggested quite
this solution, their comments moved me along to where I found it.
 
-Kevin



to_legal_tsvector.sql
Description: Binary data

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
max_connections = 150 # A compromise :)

effective_cache_size = 2048MB # Old value 439MB --> Even older : 128MB
# Is this too high?

maintenance_work_mem = 96MB # Old 16MB. Would 64MB be better? Updates
and therefore re-indexing of tuples happen quite frequently.

work_mem = 3MB
# Old was 1MB!? That is too low.
# Scott, you mentioned an example with 1 GB. I guess this is the work
# memory to work on per user query to sort, join and so on. I will be
# doing those things quite often.
# After all, if I understand the concept correctly, it will only use
# it if it needs to, otherwise performance will take a hit.
# Scott, you say that I might need to change this later on when I have
# several gigs of data. But will it hurt when I don't?
# I think 4-8MB should be enough and relatively safe to start with. I
# am scared of going higher. But 1MB is low.

shared_buffer = 1024MB # Kept it

random_page_cost = 3 # I have pretty fast disks.

wal_buffers = 1024KB

Scott, you mentioned :

You can also use the pg_stat_all_indexes table to look at index scans
vs. tuples being read, this can sometimes hint at index 'bloat'. I
would also recommend pgstattuple, which has a pgstatindex function
for looking at index fragmentation.

From where can I see these stats? Is there any graphic tool?

Thanks all / Jennifer


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ian Mayo
Cheers Tom,

On Wed, Apr 8, 2009 at 4:47 PM, Tom Lane  wrote:
> Ian Mayo  writes:
>> [snip]
> No.  You'd basically be manually reinventing the TOAST mechanism;
> or the large object mechanism, if you choose to store the blob
> as a large object rather than a plain bytea field.  Either way,
> it won't physically be in the same table as the main row data.

fine, that keeps the design simpler

> If you're curious, this goes into some of the gory details:
> http://www.postgresql.org/docs/8.3/static/storage-toast.html

Oooh, no, much too gory for me.

>                        regards, tom lane

thanks again.

One more thing:  hey, did you hear?  I just got some advice from Tom Lane!

Ian

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Greg Smith

On Wed, 8 Apr 2009, Massa, Harald Armin wrote:


"documenting" that for the wiki is still on my backlog; so, here:
shared_buffers of PostgreSQL on Windows != shared_buffers of PostgreSQL on Unix


There's already comments about that in the shared_buffers section of 
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Tom Lane
Ian Mayo  writes:
> I've got a fairly straightforward table that's similar to a blog table
> (entryId, date, title, author, etc).  There is, however, the
> requirement to allow a single, fairly bulky binary attachment to
> around 1% of the rows.

> There will be a few million rows, and I value efficient searches by
> date, title, and author.

> Would there be a performance advantage in storing the attachment in a
> separate table - linked by entryId foreign key?

No.  You'd basically be manually reinventing the TOAST mechanism;
or the large object mechanism, if you choose to store the blob
as a large object rather than a plain bytea field.  Either way,
it won't physically be in the same table as the main row data.

If you're curious, this goes into some of the gory details:
http://www.postgresql.org/docs/8.3/static/storage-toast.html

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread John R Pierce

Jennifer Trey wrote:
*As far as I know,* I have one user and that is the Administrator. Not 
sure if that is the commander in chief, but obviously not, because he 
is screwing with me a lot. 


unlike Unix derivatives, the Windows Administrator account does NOT have 
file access override rights, it has to obey rights just like any other 
user.   The one special power Administrator does have is the right to 
change file ownership, but they made this as obfuscated as possible.


The standard Windows PostgreSQL install creates a special unprivileged 
user 'postgres' which must own the data directory and have read/write 
access to all files in it, and is used to run the postgresql server 
process.   This postgres account needs and has no special privileges, 
except the right to run as a service.


In Unix derived systems, when you update an existing file, it tends to 
NOT create a new file, just replace the data in the existing file, so by 
default ownership and privileges remain the same.In Windows, 
instead, its more common to create a new file, then remove the old one 
and rename the new one, this means when user X edits an existing file, 
he tends to end up being the owner.  So, if user "Administrator" 
runs this tuning tool, odds are pretty good that the user running the tool 
will own the postgresql.conf file it creates.   AFAIK, there's no 
special requirement that Postgres owns this file, it just needs read 
privileges, so I'd check the file access rights after running the tool.  
yeah, I just checked on my XP system here, my own account 'owns' the 
file, but the postgres user has modify/read/write/execute privileges on it.


   C:\> cacls D:\postgres\8.3\data\postgresql.conf
   D:\postgres\8.3\data\postgresql.conf BUILTIN\Administrators:F
NT AUTHORITY\SYSTEM:F
PORKER\pierce:F
BUILTIN\Users:R
PORKER\postgres:C

one method of 'fixing' this, then, would be...

C:\> cacls \path\to\data\postgresql.conf /e /g:postgres:RWC

now try starting the postgres service process...
   C:\> net start pgsql-8.3
   The PostgreSQL Database Server 8.3 service is starting.
   The PostgreSQL Database Server 8.3 service was started successfully.
  





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Are there performance advantages in storing bulky field in separate table?

2009-04-08 Thread Ian Mayo
Hi all,
I'm designing a Postgresql database, and would appreciate this design advice.

I've got a fairly straightforward table that's similar to a blog table
(entryId, date, title, author, etc).  There is, however, the
requirement to allow a single, fairly bulky binary attachment to
around 1% of the rows.

There will be a few million rows, and I value efficient searches by
date, title, and author.

Would there be a performance advantage in storing the attachment in a
separate table - linked by entryId foreign key?  Or shall I just
include it as an ALLOW NULL field my blog table?

[of course, I'd appreciate redirection to the 'right' list if this is
the wrong one].

cheers,
Ian

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Steve Crawford

Radcon Entec wrote:

Greetings!
 
I'm running PostgreSQL 8.1 under Windows XP, looking at a database 
hosted on a machine running PostgreSQL under Windows Server 2003.
 
The database has a table with three simple columns and 22 million 
rows.  I am trying to back up that table by itself.  However, pg_dump 
finishes almost instantly, obviously not backing up any data from the 
table.  I've tried it from the DOS command line with and without the 
-a (data only) option, and from inside PGAdmin.  Can anyone suggest 
what might cause this behavior?
 
What is the exact command and what is the output (I'll be surprised if 
there is no output at all to either stdout or stderr)? Does pg_dumpall 
run fine from the same machine? How about psql? Are you sure you are 
hitting a base-table and not a view? Do the server logs show anything 
interesting?


Cheers,
Steve



Re: [GENERAL] recovery after segmentation fault

2009-04-08 Thread Ivan Sergio Borgonovo
On Wed, 08 Apr 2009 10:59:54 -0400
Tom Lane  wrote:

> Ivan Sergio Borgonovo  writes:
> > 2009-04-08 16:36:53 CEST LOG:  startup process (PID 3176) was
> > terminated by signal 11: Segmentation fault 2009-04-08 16:36:53
> > CEST LOG:  aborting startup due to startup process failure
> 
> Hmm, what Postgres version is this?  Can you get a stack trace from
> the startup process crash?

How on Debian?
Debian does all its automagic stuff in init. I never learned how to
start pg manually.

> The only simple way out of this is to delete the presumably-corrupt
> WAL log by running pg_resetxlog.  That will destroy the evidence

I couldn't find it... mmm what a strange place for an executable:
/usr/lib/postgresql/8.3/bin/pg_resetxlog

> about what went wrong, though, so if you'd like to contribute to
> preventing such problems in future you need to save a copy of
> everything beforehand (eg, tar up all of $PGDATA).  Also you might
> have a corrupt database afterwards :-(

What if I just don't care about recovery of *one* DB (that is maybe
the culprit) and just see the server restart then just do a restore
from a VERY recent backup?

Is there a way to just kill recovery for one DB? Just don't start it
at all?

This is the same DB having problem with recreation of gin index
BTW... and I've the feeling that the problem is related to that
index once more... I was vacuuming full, I aborted...

I think the DB is trying to recreate the index but due to some
problem (can I say bug or is it too early?) it segfaults.

I think this could be of some help:

2009-04-08 16:47:13 CEST LOG:  database system was not properly shut
down; automatic recovery in progress
2009-04-08 16:47:13 CEST LOG: redo starts at 72/9200EBC8

BTW:
Linux amd64, debian stock kernel
Debian etch/backport: Version: 8.3.4-1~bpo40+1


Now let's learn how to use pg_resetxlog
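
Roughly this, I suppose (run as the postgres user with the server
stopped; the data directory shown is the Debian default and may differ):

    tar czf /root/pgdata-before-reset.tar.gz /var/lib/postgresql/8.3/main
    /usr/lib/postgresql/8.3/bin/pg_resetxlog /var/lib/postgresql/8.3/main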

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Table has 22 million records, but backup doesn't see them

2009-04-08 Thread Radcon Entec
Greetings!

I'm running PostgreSQL 8.1 under Windows XP, looking at a database hosted on a 
machine running PostgreSQL under Windows Server 2003.

The database has a table with three simple columns and 22 million rows.  I am 
trying to back up that table by itself.  However, pg_dump finishes almost 
instantly, obviously not backing up any data from the table.  I've tried it 
from the DOS command line with and without the -a (data only) option, and from 
inside PGAdmin.  Can anyone suggest what might cause this behavior?

I'm going to have to do a full database backup, which I've been trying to avoid 
because there are other large tables I don't need and don't want to waste time 
getting.  A full backup of this database takes something like three hours.

Thanks again!


  

Re: [GENERAL] recovery after segmentation fault

2009-04-08 Thread Tom Lane
Ivan Sergio Borgonovo  writes:
> 2009-04-08 16:36:53 CEST LOG:  startup process (PID 3176) was
> terminated by signal 11: Segmentation fault 2009-04-08 16:36:53 CEST
> LOG:  aborting startup due to startup process failure

Hmm, what Postgres version is this?  Can you get a stack trace from
the startup process crash?

The only simple way out of this is to delete the presumably-corrupt
WAL log by running pg_resetxlog.  That will destroy the evidence
about what went wrong, though, so if you'd like to contribute to
preventing such problems in future you need to save a copy of everything
beforehand (eg, tar up all of $PGDATA).  Also you might have a corrupt
database afterwards :-(

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
On Wed, Apr 8, 2009 at 5:38 PM, Massa, Harald Armin  wrote:

> Bill, Jennifer,
>
> > *shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too
> > low.
> > Right? I've got 3GB to work with!*
>>
>> Assuming that's equating to 1G, then the value is about right.  Common
>> best practice is to set this value to 1/4 - 1/3 of the memory available
>> for PostgreSQL.  You're saying you'll have ~3G for PG, so 1G is about
>> right to start with.
>>
>>
> "documenting" that for the wiki is still on my backlog; so, here:
>
> shared_buffers of PostgreSQL on Windows != shared_buffers of PostgreSQL on
> Unix
>
> My experience is that raising shared_memory on Windows above minimum+~20%
> is not helping performance; it's more effective to have that memory at
> Windows for caching. (at least up to server 2003)

I forgot to comment on this on Bill so its good you brought it up again.
This guide : http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
says under shared_buffers:
*"If you have a system with 1GB or more of RAM, a reasonable starting value
for shared_buffers is 1/4 of the memory in your system."*
*in your system* ... that means I should count from 8GB, right? Bill
mentioned counting from the 3GB. What would you say, Harald? Is perhaps 1.5
GB more suitable, a compromise for my gigabyte greed :P haha!


>
>
> Harald
>
> --
> GHUM Harald Massa
> persuadere et programmare
> Harald Armin Massa
> Spielberger Straße 49
> 70435 Stuttgart
> 0173/9409607
> no fx, no carrier pigeon
> -
> LASIK good, steroids bad?
>
When it comes to the effective_cache I think this might be of great
importance for me since similar tuples will be fetched quite often by
different users. So caching could become quite important here. 439 MB is not
so much. The same guide as mentioned seconds ago says this :
*Setting effective_cache_size to 1/2 of total memory would be a normal
conservative setting, and 3/4 of memory is a more aggressive but still
reasonable amount.*
3/4 of total memory!? It's at 439 MB now. Could someone give me a better
offer?
Other things to consider?
Sincerely / Jennifer


[GENERAL] recovery after segmentation fault

2009-04-08 Thread Ivan Sergio Borgonovo
postgresql suddenly died...

during recovery 

2009-04-08 16:35:34 CEST FATAL:  the database system is starting up
^^^ several
2009-04-08 16:35:34 CEST LOG:  incomplete startup packet
2009-04-08 16:36:53 CEST FATAL:  the database system is starting up
2009-04-08 16:36:53 CEST LOG:  startup process (PID 3176) was
terminated by signal 11: Segmentation fault 2009-04-08 16:36:53 CEST
LOG:  aborting startup due to startup process failure

It could be something wrong with the recovery process in an aborted
transaction that is causing the segfault...

How can I resurrect the server and load a backup?
It was serving more than one DB and I assume that only one is
causing problems. Can I skip just that one from recovery and start
from backup?

thanks

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Scott Mead
On Wed, Apr 8, 2009 at 10:23 AM, Bill Moran wrote:

> In response to Jennifer Trey :
>
>
> > *maintenance_work_mem = 16384 *


   If your vacuums and / or create index are taking ages, considering a
higher value here may be useful.  I would need to know more about the
database before suggesting one, though.  I have a gut feeling that this may be a
good starting place.


> >
> > *work_mem = 1024  # I think this is kb. Way too low, right? What is a
> > better value?*
>
> Be careful with work_mem.  For every connection to the database, it is
possible to consume up to work_mem so:

   If your application makes 100 connections to the database and your
work_mem =1GB, IF you are running big nasty order by's... you would be
swapping 100 GB.  This is a pretty extreme example, but I think it's
important.

   As a rule of thumb, I like to start with 1 - 2 Mb and almost always leave
it there.  If you're doing joins and order by's on many many gigs later on,
then it could be an issue.


>
> > *shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too
> > low.
> > Right? I've got 3GB to work with!*
>
> Assuming that's equating to 1G, then the value is about right.  Common
> best practice is to set this value to 1/4 - 1/3 of the memory available
> for PostgreSQL.  You're saying you'll have ~3G for PG, so 1G is about
> right to start with.


  The idea here is to be conservative with shared_buffers and then use
effective_cache_size to tell the optimizer how much ram the OS can use for
buffering data.  1 GB is a good start place.


>
>
> Once the system is up and running, you can install pg_buffercache to
> monitor usage and help tune it.


 Good advice


>
>
> > *wal_buffers = 256 # Also kB...*
> >
> > Please give your thoughts. I was also wondering about the Vacuum, force
> > reindex and stuff. Are those things good to run once in a while? Force
> > sounds a little brutal though!
>
> Turn on autovacuum.  I've found it's the best way to go in 99% of installs
> (the corner cases being servers that have _very_ predictable workloads ...
> in which case explicit, scheduled vacuums are better).


 + 1

>
>
> REINDEXing is an occasional topic of discussion.  Doing it occasionally
> definitely saves disk space on frequently updated databases, but the
> impact (if any) on performance is a subject for debate.  I've yet to see
> any drastic performance improvement from REINDEXing, but if you've got
> obvious off-peak times (i.e., if nobody uses the system over weekends or
> something) it probably doesn't hurt to reindex everything on a regular
> schedule.  Don't obsess over it, though.


  Just remember that the REINDEX command is a locking command, so using
'create index concurrently' is recommended.
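
That is, for a plain (non-constraint) index, something like this instead
of REINDEX (index, table and column names are placeholders):

    CREATE INDEX CONCURRENTLY my_idx_new ON my_table (my_col);
    DROP INDEX my_idx;
    ALTER INDEX my_idx_new RENAME TO my_idx;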

   You can also use the pg_stat_all_indexes table to look at index scans vs.
tuples being read, this can sometimes hint at index 'bloat'.  I would also
recommend pgstattuple, which has a pgstatindex function for looking at
index fragmentation.


--Scott


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
On Wed, Apr 8, 2009 at 5:23 PM, Bill Moran  wrote:

> In response to Jennifer Trey :
> >
> > I have 8GB memory, Intel Quad Core 2.4Ghz with 8MB L2 cache. I am running
> > Windows Web Server 2008 x64 and will be running a Java (64 bit version)
> > application.
> >
> > I want to give the java app room for working on 2-3GB. The operating
> system
> > is currently consuming around 1GB but lets give it a little more room.
> Lets
> > give it a total of 2GB.
> >
> > That leaves 8-3-2 = 3 GB for PostgreSQL alone. Sounds good?
> >
> > Here is my config file :
> >
> > http://85.235.31.35/resources/postgresql.conf
> >
> > I see there is a setting
> >
> > *max_connections = 100*
> >
> > What does this do? Should I be looking at this as max simultaneous
> > queries?
> > is 100 really enough? I think I want to max this more.
> >
> > I am looking for a worst scenario around like 50-100 simultaneous user
> > clicks (per second?). But the querying might be around like 200 queries
> > per second, not really, but I want to be prepared. :)
>
> Depends on how long your "clicks" take to process.  If you're doing 100
> page views (clicks) /second and each view takes 2 seconds to process,
> you're
> tying up 200 connections on a continual basis.
>
> Unless you're using some sort of connection pooling ... I'm no Java expert,
> but doesn't Java have connection pooling built in?  If so, it becomes
> more difficult to estimate the # of simultaneous connections because each
> instance of a running script might share a connection with other scripts.
>
> In that case, you'll probably have to test to see what a good max is, as
> it's going to be difficult or impossible to estimate.
>
> In any event, 100 is probably a good starting point (based on my
> experience).  Note that if you find that you have to raise that value too
> high, (much over a few hundred) then you probably want to investigate some
> form of connection pooling, such as pgpool.
>
*Yes.* I think Java uses these things. Looking at jConsole I can see that
there are these things (pools) going on.
I think I will increase this to 175. Just to be on the safe side...


> > Here is some other settings I am thinking about :
> >
> > *effective_cache_size = 449697*
>
> What version of Postgres?  In modern versions, you can specify MB, GB, etc.
>
I use 8.3.7 for windows.
I think this is kb since no MB is specified afterwards, which makes it 439
MB. The old value before tuning wizard was 128 MB.


>
>
> This value should be the memory that's left unused when everything is
> running (including Postgres).  It helps the planner estimate how much of
> the filesystem is cached in memory.  Based on the other numbers you've
> mentioned, this should probably be set to about 2G.
>
> > *maintenance_work_mem = 16384 *
> >
> > *work_mem = 1024  # I think this is kb. Way too low, right? What is a
> > better value?*
>
> I haven't noticed much value in tweaking this.  It only affects a few
> commands, such as vacuum and analyze.  Test to see if tweaking it speeds
> up vacuum without pushing the system into swap.

Yes, I will leave those as is then. But is it possible to set a time for when
the auto vacuum should kick in? Perhaps late at night would be better than
in the day.


>
>
> > *shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too
> > low.
> > Right? I've got 3GB to work with!*
>
> Assuming that's equating to 1G, then the value is about right.  Common
> best practice is to set this value to 1/4 - 1/3 of the memory available
> for PostgreSQL.  You're saying you'll have ~3G for PG, so 1G is about
> right to start with.

Yes, about 3GB, but now I started to think about the OS cache as well, which I
believe will be involved, so perhaps put this at 2.75 GB, with Java at 2.75 GB
and 2.5 GB for the OS.


>
>
> Once the system is up and running, you can install pg_buffercache to
> monitor usage and help tune it.
>
> > *wal_buffers = 256 # Also kB...*
> >
> > Please give your thoughts. I was also wondering about the Vacuum, force
> > reindex and stuff. Are those things good to run once in a while? Force
> > sounds a little brutal though!
>
> Turn on autovacuum.  I've found it's the best way to go in 99% of installs
> (the corner cases being servers that have _very_ predictable workloads ...
> in which case explicit, scheduled vacuums are better).

I will :) But as I mentioned earlier, is there a way to set a more suitable
time for this to happen (autovacuum)?


>
>
> REINDEXing is an occasional topic of discussion.  Doing it occasionally
> definitely saves disk space on frequently updated databases, but the
> impact (if any) on performance is a subject for debate.  I've yet to see
> any drastic performance improvement from REINDEXing, but if you've got
> obvious off-peak times (i.e., if nobody uses the system over weekends or
> something) it probably doesn't hurt to reindex everything on a regular
> schedule.  Don't obsess over it, though.
>
> --
> Bill Moran
> http://www.potentialtech.com

Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Massa, Harald Armin
Bill, Jennifer,

> > *shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too
> > low.
> > Right? I've got 3GB to work with!*
>
> Assuming that's equating to 1G, then the value is about right.  Common
> best practice is to set this value to 1/4 - 1/3 of the memory available
> for PostgreSQL.  You're saying you'll have ~3G for PG, so 1G is about
> right to start with.
>
>
"documenting" that for the wiki is still on my backlog; so, here:

shared_buffers of PostgreSQL on Windows != shared_buffers of PostgreSQL on
Unix

My experience is that raising shared_memory on Windows above minimum+~20% is
not helping performance; it's more effective to have that memory at Windows
for caching. (at least up to server 2003)

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
LASIK good, steroids bad?


Re: [GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Bill Moran
In response to Jennifer Trey :
> 
> I have 8GB memory, Intel Quad Core 2.4Ghz with 8MB L2 cache. I am running
> Windows Web Server 2008 x64 and will be running a Java (64 bit version)
> application.
> 
> I want to give the java app room for working on 2-3GB. The operating system
> is currently consuming around 1GB but lets give it a little more room. Lets
> give it a total of 2GB.
> 
> That leaves 8-3-2 = 3 GB for PostgreSQL alone. Sounds good?
> 
> Here is my config file :
> 
> http://85.235.31.35/resources/postgresql.conf
> 
> I see there is a setting
> 
> *max_connections = 100*
> 
> What does this do? Should I be looking at this as max simultaneous queries?
> is 100 really enough? I think I want to max this more.
> 
> I am looking for a worst scenario around like 50-100 simultaneous user
> clicks (per second?). But the querying might be around like 200 queries per
> second, not really, but I want to be prepared. :)

Depends on how long your "clicks" take to process.  If you're doing 100
page views (clicks) /second and each view takes 2 seconds to process, you're
tying up 200 connections on a continual basis.

Unless you're using some sort of connection pooling ... I'm no Java expert,
but doesn't Java have connection pooling built in?  If so, it becomes
more difficult to estimate the # of simultaneous connections because each
instance of a running script might share a connection with other scripts.

In that case, you'll probably have to test to see what a good max is, as
it's going to be difficult or impossible to estimate.

In any event, 100 is probably a good starting point (based on my
experience).  Note that if you find that you have to raise that value too
high, (much over a few hundred) then you probably want to investigate some
form of connection pooling, such as pgpool.

> Here is some other settings I am thinking about :
> 
> *effective_cache_size = 449697*

What version of Postgres?  In modern versions, you can specify MB, GB, etc.

This value should be the memory that's left unused when everything is
running (including Postgres).  It helps the planner estimate how much of
the filesystem is cached in memory.  Based on the other numbers you've
mentioned, this should probably be set to about 2G.
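
I.e., in postgresql.conf (8.3 accepts units directly):

    effective_cache_size = 2GB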

> *maintenance_work_mem = 16384 *
> 
> *work_mem = 1024  # I think this is kb. Way too low, right? What is a better
> value?*

I haven't noticed much value in tweaking this.  It only affects a few
commands, such as vacuum and analyze.  Test to see if tweaking it speeds
up vacuum without pushing the system into swap.

> *shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too low.
> Right? I've got 3GB to work with!*

Assuming that's equating to 1G, then the value is about right.  Common
best practice is to set this value to 1/4 - 1/3 of the memory available
for PostgreSQL.  You're saying you'll have ~3G for PG, so 1G is about
right to start with.

Once the system is up and running, you can install pg_buffercache to
monitor usage and help tune it.

> *wal_buffers = 256 # Also kB...*
> 
> Please give your thoughts. I was also wondering about the Vacuum, force
> reindex and stuff. Are those things good to run once in a while? Force
> sounds a little brutal though!

Turn on autovacuum.  I've found it's the best way to go in 99% of installs
(the corner cases being servers that have _very_ predictable workloads ...
in which case explicit, scheduled vacuums are better).

REINDEXing is an occasional topic of discussion.  Doing it occasionally
definitely saves disk space on frequently updated databases, but the
impact (if any) on performance is a subject for debate.  I've yet to see
any drastic performance improvement from REINDEXing, but if you've got
obvious off-peak times (i.e., if nobody uses the system over weekends or
something) it probably doesn't hurt to reindex everything on a regular
schedule.  Don't obsess over it, though.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump/psql: Select a server and automate password

2009-04-08 Thread Adrian Klaver
On Wednesday 08 April 2009 6:51:44 am Radcon Entec wrote:
> Greetings!
>
> On my computer, I have a copy of a customer's database for which a full
> backup file would be about 300 megabytes long.  There are several history
> tables that are not needed for day-to-day operation of our system.  I came
> up with a DOS batch file that consists of a sequence of pg_dump commands. 
> The first dumps the database's schema, and all the rest dump the required
> tables into individual files.  A companion batch file issues psql
> repeatedly to recreate the database and the tables. 
>
> On my computer, the batch files work fine.  I run them and they happily
> call pg_dump or psql as many times as they need to, and I don't need to do
> anything. 
>
> I just uploaded the batch files to our customer's machine, and a couple of
> questions occured to me.  First, pg_dump asks me for a password every time
> it is invoked.  That doesn't happen on my computer.  What is likely to be
> set up differently on my computer so that I don't need a password?

See pgpass
http://www.postgresql.org/docs/8.3/interactive/libpq-pgpass.html
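
A sketch of the file (on Windows it lives at %APPDATA%\postgresql\pgpass.conf,
elsewhere at ~/.pgpass with 0600 permissions); host, database, and password
below are placeholders:

    # hostname:port:database:username:password
    production.example.com:5432:customerdb:postgres:s3cret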

>
> Second, I see no way to select a server in pg_dump or psql.  I am connected
> to a computer that was set up for my use specifically for testing.  In
> PGAdmin on that machine, I have a server talking to localhost and a server
> talking to the machine that hosts the production database.  I would like to
> run my batch file on the testing machine, but how do I tell pg_dump to use
> the server that talks to the production machine?

Use the -h (host) switch, plus -p if the port differs, to point pg_dump at
the appropriate server.
http://www.postgresql.org/docs/8.3/interactive/app-pgdump.html
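
For example, the batch file lines might look like this (hostname, database,
and table names are placeholders):

    pg_dump -h production.example.com -p 5432 -U postgres -s -f schema.sql customerdb
    pg_dump -h production.example.com -p 5432 -U postgres -t some_table -f some_table.sql customerdb

With a matching pgpass entry, neither command will prompt for a password.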

>
> Thank you very much!
>
> RobR



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Multidatabase query

2009-04-08 Thread Neanderthelle Jones
On Fri, 27 Mar 2009, Mauro Bertoli wrote:

> SELECT 
>   db1.a.id FROM db1.a 
> UNION SELECT
>   db2.b.id FROM db2.b
> 
> Where "db1" is a database and "db2" is another database. "a" is a
> table in database "db1" and "b" is a table in database "db2"

You might be able to create the equivalent of a union by having a
front-end program connect to both databases.  You can have two open
connections and query them in turn and put the union together.
Particularly if it is just a union of keys.
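
As an aside, if you really need it in a single query, the contrib dblink
module can do the second fetch from inside db1; a minimal sketch, assuming
dblink is installed there and both id columns are integers:

    SELECT id FROM a
    UNION
    SELECT id FROM dblink('dbname=db2', 'SELECT id FROM b') AS t(id integer);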

But the essential idea is that the database is the universe of data.

E.g. you want to minimize data redundancy in a db, but you wouldn't
want to do that across databases, because they are different,
independent worlds.  What is the business of one is not the business
of the other.

A query like the above seems to defeat this idea.  What you are
calling databases, or what the other DBMS calls databases, arguably
are not.

--
Elle

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Now I am back, next thing. Final PGS tuning.

2009-04-08 Thread Jennifer Trey
Ok,

I have left the previous thread. After changing the last permissions, even
though it said Access Denied, suddenly PostgreSQL started to work again. I
will not dig any further into the strangeness.

I copied the content of the .conf from the tuning wizard and restarted. Still
working!

I want to say thanks to several people on that thread :) Thank you!

I would like to further tune the tuning wizard's recommendations though. I
think it put itself on the low end of the scale.

I have 8GB memory, Intel Quad Core 2.4Ghz with 8MB L2 cache. I am running
Windows Web Server 2008 x64 and will be running a Java (64 bit version)
application.

I want to give the Java app room to work with 2-3GB. The operating system
is currently consuming around 1GB, but let's give it a little more room; let's
give it a total of 2GB.

That leaves 8-3-2 = 3 GB for PostgreSQL alone. Sounds good?

Here is my config file :

http://85.235.31.35/resources/postgresql.conf

I see there is a setting

*max_connections = 100*

What does this do? Should I be looking at this as max simultaneous queries?
Is 100 really enough? I think I want to raise this.

I am looking at a worst-case scenario of around 50-100 simultaneous user
clicks (per second?). The querying might be around 200 queries per
second; not really, but I want to be prepared. :)

I would appreciate it if we could have a discussion on these topics, on
what's important and what's not.

Here is some other settings I am thinking about :

*effective_cache_size = 449697*

Is this kilobytes? Is this a good value?

*maintenance_work_mem = 16384 *

*work_mem = 1024  # I think this is kb. Way too low, right? What is a better
value?*

*shared_buffers = 1024 # min 128kB or max_connections*16kB  ## Also too low.
Right? I've got 3GB to work with!*

*wal_buffers = 256 # Also kB...*

Please give your thoughts. I was also wondering about the Vacuum, force
reindex and stuff. Are those things good to run once in a while? Force
sounds a little brutal though!

Something else I should consider?

/ Jennifer


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
After doing a lot of clicking around in the permissions, it automagically
fixed itself. Fantastic!

The root problem was Windows itself!

I would recommend that the EnterpriseDB folks consider this though, because
in some way it's affecting your software.

Thank you all. Now join my new thread for real discussions, instead of
problem solving :)


On Wed, Apr 8, 2009 at 4:31 PM, Massa, Harald Armin  wrote:

> Jennifer,
>
> 1.) you should state the exact version of your operating system
>>
>> *Windows Web Server 2008 x64*
>>
>>>
> 2.) you should state the exact PostgreSQL Version
>>
>> *The latest available, installed this : postgresql-8.3.7-1-windows.exe*
>>
>>
>>> 3.) exactly describe your Windows Authorization. Is there some Domain /
>>> ActiveDirectory going on? Do you really have MasterChiefAdministrator
>>> rights?
>>
>> *As far as I know,* I have one user and that is the Administrator. Not
>> sure if that is the commander in chief, but obviously not, because he is
>> screwing with me a lot.
>>
>
> You have dug down to the root of the problem:
>
> FATAL: could not open configuration file
>> "E:/PostgreSQLDATA/postgresql.conf": Permission denied
>>
>
> so to sum it up: you ran the Tuning Wizard, which gave you a new
> postgresql.conf. Tuning wizard, running as your user, copied this
> postgresql.conf OR you put back your backed-up postgresql.conf, also as
> "Administrator"
>
> The permissions of that directory seem to be very screwed :)
>
>>
>> 5.) Check for the logfiles of PostgreSQL within the pg_log directory.
>> *I cannot enter that folder! *Crazy! I command you to let me in!
>> No, I try to change the settings and permissions (there are like a thousand
>> entries to those things), I give full access but no difference! I don't get
>> it! Make me the super chief damn it!
>>
>
> A quick google for NTFS permissions points to a lot of second source
> manuals; that is a hint that it is complicated enough for people to make
> money to write books about it. As I am always working WITH the system
> administrators when implementing PostgreSQL on Win32, I have no MCP or MSC or
> whatever that title for knowing about Windows-Permissions is.
>
> So my hints to solve your problem:
>
> a) check if there is some higher authority - as "this computer is member of
> a domain / Active Directory", and rights for those file systems are managed
> elsewhere out of your control
>
> b) read up on Windows NTFS File Permissions on Server 2008. Do you have to
> use another account? Do you have to elevate privileges in any way?
>
> c) after learning enough about windows File Permissions, make sure that the
> user "Postgres" -> PostgreSQL Service Account has the appropriate
> permissions to access all the files within the PostgreSQL Data directory.
>
>> *You seem to be on to something here. Keep going! :)*
>>
>
> Now it's you and windows. Good luck!
>
> Harald
>
> --
> GHUM Harald Massa
> persuadere et programmare
> Harald Armin Massa
> Spielberger Straße 49
> 70435 Stuttgart
> 0173/9409607
> no fx, no carrier pigeon
> -
> LASIK good, steroids bad?
>


[GENERAL] pg_dump/psql: Select a server and automate password

2009-04-08 Thread Radcon Entec
Greetings!

On my computer, I have a copy of a customer's database for which a full backup 
file would be about 300 megabytes long.  There are several history tables that 
are not needed for day-to-day operation of our system.  I came up with a DOS 
batch file that consists of a sequence of pg_dump commands.  The first dumps 
the database's schema, and all the rest dump the required tables into 
individual files.  A companion batch file issues psql repeatedly to recreate 
the database and the tables.  

On my computer, the batch files work fine.  I run them and they happily call 
pg_dump or psql as many times as they need to, and I don't need to do 
anything.  

I just uploaded the batch files to our customer's machine, and a couple of 
questions occurred to me.  First, pg_dump asks me for a password every time it 
is invoked.  That doesn't happen on my computer.  What is likely to be set up 
differently on my computer so that I don't need a password?

Second, I see no way to select a server in pg_dump or psql.  I am connected to 
a computer that was set up for my use specifically for testing.  In PGAdmin on 
that machine, I have a server talking to localhost and a server talking to the 
machine that hosts the production database.  I would like to run my batch file 
on the testing machine, but how do I tell pg_dump to use the server that talks 
to the production machine?

Thank you very much!

RobR


  

Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Massa, Harald Armin
Jennifer,

1.) you should state the exact version of your operating system
>
> *Windows Web Server 2008 x64*
>
>>
2.) you should state the exact PostgreSQL Version
>
> *The latest available, installed this : postgresql-8.3.7-1-windows.exe*
>
>
>> 3.) exactly describe your Windows Authorization. Is there some Domain /
>> ActiveDirectory going on? Do you really have MasterChiefAdministrator
>> rights?
>
> *As far as I know,* I have one user and that is the Administrator. Not
> sure if that is the commander in chief, but obviously not, because he is
> screwing with me a lot.
>

You have dug down to the root of the problem:

FATAL: could not open configuration file
> "E:/PostgreSQLDATA/postgresql.conf": Permission denied
>

so to sum it up: you ran the Tuning Wizard, which gave you a new
postgresql.conf. Tuning wizard, running as your user, copied this
postgresql.conf OR you put back your backed-up postgresql.conf, also as
"Administrator"

The permissions of that directory seem to be very screwed :)

>
> 5.) Check for the logfiles of PostgreSQL within the pg_log directory.
> *I cannot enter that folder!* Crazy! I command you to let me in!
> No, I try to change the settings and permissions (there are like a thousand
> entries to those things), I give full access but no difference! I don't get
> it! Make me the super chief damn it!
>

A quick google for NTFS permissions points to a lot of second source
manuals; that is a hint that it is complicated enough for people to make
money to write books about it. As I am always working WITH the system
administrators when implementing PostgreSQL on Win32, I have no MCP or MSC or
whatever that title for knowing about Windows-Permissions is.

So my hints to solve your problem:

a) check if there is some higher authority - as "this computer is member of
a domain / Active Directory", and rights for those file systems are managed
elsewhere out of your control

b) read up on Windows NTFS File Permissions on Server 2008. Do you have to
use another account? Do you have to elevate privileges in any way?

c) after learning enough about windows File Permissions, make sure that the
user "Postgres" -> PostgreSQL Service Account has the appropriate
permissions to access all the files within the PostgreSQL Data directory.

*You seem to be on to something here. Keep going! :)*
>

Now it's you and windows. Good luck!

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
LASIK good, steroids bad?


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
It seems like my long message didn't make it :S Here it is again. (Perhaps it
was because of the attached pic; I will leave it out this time.)

On Wed, Apr 8, 2009 at 3:17 PM, Massa, Harald Armin  wrote:

> Jennifer,
>
> obviously you are on Windows. And the problem you are describing seems to
> be:
>
> "after running the tuning wizzard, the PostgreSQL Service does not start
> any more"
>
> AT the same time you are describing some problem of
>
> "you are not able to delete some Files / Directories connected with
> PostgreSQL-Databases, and you are not able to change ownership / take
> ownership of those files."
>
> THAT gives reason to the assumption that you are using something Vistaish
> or Windows-7ish.
>
> AND that the rights structure of your file system around PostgreSQL's files
> is rather screwed up; it's not clear by what or whom.
>
> SO, to help you to get to some solution:
>
> 1.) you should state the exact version of your operating system


*Windows Web Server 2008 x64*

>
> 2.) you should state the exact PostgreSQL Version


*The latest available, installed this : postgresql-8.3.7-1-windows.exe*


>
> 3.) exactly describe your Windows Authorization. Is there some Domain /
> ActiveDirectory going on? Do you really have MasterChiefAdministrator
> rights?


*As far as I know,* I have one user and that is the Administrator. Not sure
if that is the commander in chief, but obviously not, because he is screwing
with me a lot.

Running *lusrmgr.msc* I can see under Users that there is a username:

postgre

with description : PostgreSQL service account

There are a lot of changes that can be made in the properties here.


>
>
> and to continue your diagnostics:
>
> 4.) Check the application event log of Windows. "eventvwr" from the Command
> line, or dig through the Administration Panels


*Ok, I did that.* The export generated ugliness, so I will be pasting the
ones I found relevant, and there were a few!

By Date ASC (Oldest first, and only errors)

FATAL: could not open configuration file
"E:/PostgreSQLDATA/postgresql.conf": Permission denied
FATAL: could not open configuration file
"E:/PostgreSQLDATA/postgresql.conf": Permission denied
FATAL: could not open configuration file
"E:/PostgreSQLDATA/postgresql.conf": Permission denied ... and some
more of these ...
Timed out waiting for server startup
pg_ctl: PID file "E:/PostgreSQLDATA/postmaster.pid" does not exist
Is server running?
pg_ctl: PID file "E:/PostgreSQLDATA/postmaster.pid" does not exist

5.) Check for the logfiles of PostgreSQL within the pg_log directory.

*I cannot enter that folder!* Crazy! I command you to let me in!
No, I try to change the settings and permissions (there are like a thousand
entries to those things), I give full access but no difference! I don't get
it! Make me the super chief damn it! Freaking Windows! Just crap and I am
stuck with it.
Most likely reason is that postgres.exe is not allowed to open some file or
port; to write to a log file or read its databases or even read its
configuration file postgresql.conf

*I enter the properties.* postgres is among the users. The only permission
checked (for the entire folder) is 'Special Permissions', nothing else. (Oh
yeah, postgre is the owner.) Changing to full access says Access denied.
Please look at the JPEG I attached to this email; it's marked with the steps.

So, next of this: check the file-ownerships and rights of your PostgreSQL
Data directory. Especially, is the user owning the postgresql service
allowed to read and change all the files within the PostgreSQL Data
directory? Especially the postgresql.conf?

*You seem to be on to something here. Keep going! :)*

Best luck,

Harald

Sincerely / Jennifer


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
I just tried to change the permissions for all users, from admin to
postgres. Access Denied for all! The machines have taken over!


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Massa, Harald Armin
Jennifer,

obviously you are on Windows. And the problem you are describing seems to
be:

"after running the tuning wizzard, the PostgreSQL Service does not start any
more"

AT the same time you are describing some problem of

"you are not able to delete some Files / Directories connected with
PostgreSQL-Databases, and you are not able to change ownership / take
ownership of those files."

THAT gives reason to the assumption that you are using something Vistaish or
Windows-7ish.

AND that the rights structure of your file system around PostgreSQL's files is
rather screwed up; it's not clear by what or whom.

SO, to help you to get to some solution:

1.) you should state the exact version of your operating system
2.) you should state the exact PostgreSQL Version
3.) exactly describe your Windows Authorization. Is there some Domain /
ActiveDirectory going on? Do you really have MasterChiefAdministrator
rights?

and to continue your diagnostics:

4.) Check the application event log of Windows. "eventvwr" from the Command
line, or dig through the Administration Panels
5.) Check for the logfiles of PostgreSQL within the pg_log directory.

Most likely reason is that postgres.exe is not allowed to open some file or
port; to write to a log file or read its databases or even read its
configuration file postgresql.conf

So, next of this: check the file-ownerships and rights of your PostgreSQL
Data directory. Especially, is the user owning the postgresql service
allowed to read and change all the files within the PostgreSQL Data
directory? Especially the postgresql.conf?

Best luck,

Harald




On Wed, Apr 8, 2009 at 1:50 PM, Jennifer Trey wrote:

> Hehe, I am not saying that you are untruthful, but obviously there is more
> to this issue than just the .conf file.
>
> Because when returning to the old conf the problem is still there and I am
> also certain that this problem was not there before I just installed the
> tuning wizard.
>
> Would you at least agree that there is something fishy going on, and that
> it's related to the tuning wizard? Things seldom work as expected.
>
>
> I just un-installed the tuning-wizard (I still have the generated config
> file, so I don't have to repeat this step once more), restarted, the problem
> is still there. The problem does not seem to be login related. But more to
> that the server doesn't start up.
>
> Running the program, 'Start Server', it just sits there and does nothing.
> After a long time it says:
>
> The service did not report an error.
>
> More help is available by typing NET HELPMSG 3534
>
> Not sure what is going on. Server refuses to start it seems. Please help me
> debug this problem :)
>
> Sincerely / Jen
>
>  On Wed, Apr 8, 2009 at 2:31 PM, Dave Page  wrote:
>>
>> On Wed, Apr 8, 2009 at 12:16 PM, Jennifer Trey wrote:
>> > Look, I am telling now what I did.
>> >
>> > I don't want to hear claims about how it's not the tuning wizard's fault.
>> > It is!
>> >
>> > And it does more than the postgresql.conf because that's not true.
>>
>> I'm sorry to hear you think I'm being untruthful. The source code for the
>> tuning wizard is at
>> http://www.enterprisedb.com/openDownloads.do?productId=417&redirectReason=true&productVersion=otherDownload
>> - you can see exactly what it modifies there.
>>
>> --
>> Dave Page
>> EnterpriseDB UK: http://www.enterprisedb.com
>>
>
>



-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
LASIK good, steroids bad?


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Kevin Kempter
Jennifer;

Can you send  a copy of both your before and after config files to the list?



On Wednesday 08 April 2009 05:16:35 Jennifer Trey wrote:
> Look, I am telling now what I did.
>
> I don't want to hear claims about how it's not the tuning wizard's fault. It
> is!
>
> And it does more than the postgresql.conf because that's not true.
> Just until today I have been able to login and logout, using pgAdmin to my
> DB.
>
> Now, after considering what you said, I tried once more! And I am getting
> tired of this whole process.
>
> I installed the tuning wizard. I got myself a new postgresql.conf and it
> renamed the old one which I noted.
>
> I opened my pgAdmin, logged in, worked fine (I guess the changes require
> some kind of restart). I restarted my computer and we are back with the old
> pattern where I cannot login, not with pgAdmin, not with my java-app and
> not with sql shell :
> Username [postgres]:
> psql: could not connect to server: Connection refused (0x274D/10061)
>   Is the server running on host "localhost" and accepting
>   TCP/IP connections on port 5432?
>
> Press any key to continue . . .
>
> looking at the pgadmin.log i see this :
> 2009-04-08 02:03:01 ERROR : server closed the connection unexpectedly
>  This probably means the server terminated abnormally
>  before or while processing the request.
>
> I replaced the new postgresql.conf with the old one. Restarted my computer
> again. And still the problems are there. Last time I had to uninstall
> Postgres (the time before I reformatted the computer) ... so I would like
> this to work this time around.
>
> About taking ownership, it's not working for me either (I am trying to
> delete the data folder that was left from the previous installation), but it
> refuses to let me delete two folders (base, pg_log). I guess that's more
> a Windows Server issue.
> Lets concentrate on the first problem.
>
> How can I fix it ?
> / Jennifer



Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Scott Mead
On Wed, Apr 8, 2009 at 7:50 AM, Jennifer Trey wrote:

>
> The service did not report an error.
>
> More help is available by typing NET HELPMSG 3534
>
> Not sure what is going on. Server refuses to start it seems. Please help me
> debug this problem :)
>

  My first guess as to why this is happening:

 If you've been manually placing the config files, it probably means
that the windows user who is logging on and running the postgres service
does not have permission to read the postgresql.conf file.  Try to set the
permissions of that file to the same as the other files in the directory
before trying to start the service.
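
For example, a sketch assuming the data directory and service account names
seen in this thread (E:\PostgreSQLDATA, postgres):

    rem show who can currently read the file
    icacls E:\PostgreSQLDATA\postgresql.conf
    rem grant the service account read access
    icacls E:\PostgreSQLDATA\postgresql.conf /grant postgres:R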

--Scott

> Sincerely / Jen
>
>  On Wed, Apr 8, 2009 at 2:31 PM, Dave Page  wrote:
>>
>> On Wed, Apr 8, 2009 at 12:16 PM, Jennifer Trey wrote:
>> > Look, I am telling now what I did.
>> >
>> > I don't want to hear claims about how it's not the tuning wizard's fault.
>> > It is!
>> >
>> > And it does more than the postgresql.conf because that's not true.
>>
>> I'm sorry to hear you think I'm being untruthful. The source code for the
>> tuning wizard is at
>> http://www.enterprisedb.com/openDownloads.do?productId=417&redirectReason=true&productVersion=otherDownload
>> - you can see exactly what it modifies there.
>>
>> --
>> Dave Page
>> EnterpriseDB UK: http://www.enterprisedb.com
>>
>
>


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Sam Mason
On Wed, Apr 08, 2009 at 02:16:35PM +0300, Jennifer Trey wrote:
> looking at the pgadmin.log i see this :
> 2009-04-08 02:03:01 ERROR : server closed the connection unexpectedly
>  This probably means the server terminated abnormally
>  before or while processing the request.

Why aren't you looking at the postmaster's log?  Not sure where this
lives under Windows, but it should tell you why the server is closing
the connection.
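
(A hedged pointer: the one-click installer usually writes it to a pg_log
subdirectory of the data directory, so assuming the layout from your event
log, something like:

    dir /O-D E:\PostgreSQLDATA\pg_log
    type E:\PostgreSQLDATA\pg_log\postgresql-2009-04-08_000000.log

where the file name is whatever the listing shows as newest.)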

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
Hehe, I am not saying that you are untruthful, but obviously there is more
to this issue than just the .conf file.

Because when returning to the old conf the problem is still there and I am
also certain that this problem was not there before I just installed the
tuning wizard.

Would you at least agree that there is something fishy going on, and that
it's related to the tuning wizard? Things seldom work as expected.

I just un-installed the tuning-wizard (I still have the generated config
file, so I don't have to repeat this step once more), restarted, the problem
is still there. The problem does not seem to be login related. But more to
that the server doesn't start up.

Running the program, 'Start Server', it just sits there and does nothing.
After a long time it says:

The service did not report an error.

More help is available by typing NET HELPMSG 3534

Not sure what is going on. Server refuses to start it seems. Please help me
debug this problem :)

Sincerely / Jen

 On Wed, Apr 8, 2009 at 2:31 PM, Dave Page  wrote:
>
> On Wed, Apr 8, 2009 at 12:16 PM, Jennifer Trey wrote:
> > Look, I am telling now what I did.
> >
> > I don't want to hear claims about how it's not the tuning wizard's fault.
> > It is!
> >
> > And it does more than the postgresql.conf because that's not true.
>
> I'm sorry to hear you think I'm being untruthful. The source code for the
> tuning wizard is at
> http://www.enterprisedb.com/openDownloads.do?productId=417&redirectReason=true&productVersion=otherDownload
> - you can see exactly what it modifies there.
>
> --
> Dave Page
> EnterpriseDB UK: http://www.enterprisedb.com
>


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Dave Page
On Wed, Apr 8, 2009 at 12:16 PM, Jennifer Trey  wrote:
> Look, I am telling now what I did.
>
> I don't want to hear claims about how it's not the tuning wizard's fault. It
> is!
>
> And it does more than the postgresql.conf because that's not true.

I'm sorry to hear you think I'm being untruthful. The source code for
the tuning wizard is at
http://www.enterprisedb.com/openDownloads.do?productId=417&redirectReason=true&productVersion=otherDownload
- you can see exactly what it modifies there.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
Look, I am telling now what I did.

I don't want to hear claims about how it's not the tuning wizard's fault. It
is!

And it does more than the postgresql.conf because that's not true.
Just until today I have been able to login and logout, using pgAdmin to my
DB.

Now, after considering what you said, I tried once more! And I am getting
tired of this whole process.

I installed the tuning wizard. I got myself a new postgresql.conf and it
renamed the old one which I noted.

I opened my pgAdmin, logged in, worked fine (I guess the changes require
some kind of restart). I restarted my computer and we are back with the old
pattern where I cannot log in, not with pgAdmin, not with my Java app and not
with SQL Shell:
Username [postgres]:
psql: could not connect to server: Connection refused (0x274D/10061)
  Is the server running on host "localhost" and accepting
  TCP/IP connections on port 5432?

Press any key to continue . . .

looking at the pgadmin.log i see this :
2009-04-08 02:03:01 ERROR : server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.

I replaced the new postgresql.conf with the old one. Restarted my computer
again. And still the problems are there. Last time I had to uninstall
Postgres (the time before I reformatted the computer) ... so I would like this
to work this time around.

About taking ownership, it's not working for me either (I am trying to
delete the data folder that was left from the previous installation), but it
refuses to let me delete two folders (base, pg_log). I guess that's more a
Windows Server issue.
Lets concentrate on the first problem.

How can I fix it ?
/ Jennifer


Re: [GENERAL] How to split timestamps values by 24h slices ?

2009-04-08 Thread Bruno Baguette

On 30/03/09 05:39, Osvaldo Kussama wrote:

2009/3/29 Bruno Baguette :

I would like to do a SELECT of that table, but split by 24h days:

Try:

bdteste=# \set ini '\'2009-03-30 14:50\'::timestamp'
bdteste=# \set fim '\'2009-04-01 19:00\'::timestamp'

bdteste=# SELECT CASE WHEN (:ini)::date + s.a = (:ini)::date THEN :ini
bdteste-#             ELSE (:ini)::date + s.a * '1 day'::interval
bdteste-#        END AS "Inicio",
bdteste-#        CASE WHEN (:ini)::date + s.a < (:fim)::date
bdteste-#             THEN ((:ini)::date + (s.a+1) * '1 day'::interval) - '1 second'::interval
bdteste-#             ELSE :fim
bdteste-#        END AS "Fim"
bdteste-#   FROM generate_series(0, extract(day from (:fim - :ini))::int) AS s(a);
       Inicio        |         Fim
---------------------+---------------------
 2009-03-30 14:50:00 | 2009-03-30 23:59:59
 2009-03-31 00:00:00 | 2009-03-31 23:59:59
 2009-04-01 00:00:00 | 2009-04-01 19:00:00
(3 rows)

Osvaldo


Really nice!

I under-estimated the power of the generate_series() function, and I
didn't think of using that function for date manipulation.


The calendar suggestion of Artacus is also interesting, but it needs to
be regularly populated.


Regards,

--
Bruno Baguette

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Dave Page
On Wed, Apr 8, 2009 at 10:48 AM, Jennifer Trey  wrote:
> Hi,
> What does the Tuning Wizard do? The one you can run on PGS? Does it just
> touch up the postgresql.conf? Does it play with other stuff as well?

It only touches postgresql.conf, and only after you're given a chance
to review those changes.

> I am asking because, every time I run the wizard I cannot login with the
> password I chose when I installed PGS. What is going on? Replacing the
> generated postgresql.conf with the old to "go back" does not help either.
> First time I reformatted my whole disk, and tried to install PGS and ran Tuning
> Wizard again. Same thing happened!

If replacing postgresql.conf with the old version doesn't help, then
it's clearly not the tuning wizard at fault.

> This time I just uninstalled, and installed PGS again without the tuning
> wizard and I haven't had any problems since, but I need the tuning!
>
> The un-installation left some of the Data files behind, which I cannot
> delete; I have no access to them, but I am logged in as Administrator.
> What is going on!?

Just take ownership of them and then delete them.
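
On Server 2008 that can be scripted from an elevated prompt; a sketch,
assuming the leftover directory is E:\PostgreSQLDATA as in your logs:

    takeown /F E:\PostgreSQLDATA /R /D Y
    icacls E:\PostgreSQLDATA /grant Administrators:F /T
    rmdir /S /Q E:\PostgreSQLDATA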


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PGS Tuning Wizard destroys my login

2009-04-08 Thread Jennifer Trey
Hi,
What does the Tuning Wizard do? The one you can run on PGS? Does it just
touch up the postgresql.conf? Does it play with other stuff as well?

I am asking because, every time I run the wizard I cannot login with the
password I chose when I installed PGS. What is going on? Replacing the
generated postgresql.conf with the old to "go back" does not help either.

First time I reformatted my whole disk, and tried to install PGS and ran Tuning
Wizard again. Same thing happened!

This time I just uninstalled, and installed PGS again without the tuning
wizard and I haven't had any problems since, but I need the tuning!

The un-installation left some of the Data files behind, which I cannot
delete; I have no access to them, but I am logged in as Administrator.

What is going on!?

Jennifer


Re: [GENERAL] nooby Q: temp tables good for web apps?

2009-04-08 Thread Greg Smith

On Tue, 7 Apr 2009, John Cheng wrote:

One concern I have with SSD drives is that the performance degrades over 
time.


The bigger concern I have with them is that even the Intel drives have a 
volatile write cache in them.  You either have to turn off the write cache 
(which degrades performance substantially and might even have a longevity 
impact) or use a battery-backed disk controller for them to be safe 
database storage.  There's a good article about this at 
http://www.mysqlperformanceblog.com/2009/03/02/ssd-xfs-lvm-fsync-write-cache-barrier-and-lost-transactions/


If there's a disk controller with a write cache involved, that narrows the 
gap between SSD and regular drives quite a bit.
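
(On Linux, a drive's volatile write cache can typically be toggled with
hdparm; a sketch, assuming the SSD shows up as /dev/sda:

    hdparm -W0 /dev/sda   # -W0 disables the drive's write cache

Expect raw write throughput to drop sharply once it's off.)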


--
* Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general