Re: [GENERAL] Blocking access by remote users for a specific time period

2014-12-19 Thread Scot Kreienkamp
Wouldn't it be a lot easier to set the database logins for those user accounts 
to zero allowed connections, then terminate their existing sessions, rather 
than mess with pg_hba.conf?

You could schedule a cron job to run a query to make the change to the user, 
then later the same to undo the change.
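
A minimal sketch of that approach (the role name "webapp_user" is a
placeholder; on servers older than 9.2 the pg_stat_activity column is
procpid rather than pid):

-- start of the maintenance window
ALTER ROLE webapp_user CONNECTION LIMIT 0;
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity WHERE usename = 'webapp_user';

-- end of the maintenance window; -1 restores the default (unlimited)
ALTER ROLE webapp_user CONNECTION LIMIT -1;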

Scot Kreienkamp

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Atkins
Sent: Sunday, December 14, 2014 1:02 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Blocking access by remote users for a specific time 
period


On Dec 13, 2014, at 10:38 PM, Michael Nolan  wrote:

> Yeah, a cron job to swap pg_hba.conf files is the best solution I've come up 
> with so far.  It's not one web app, it's closer to two dozen of them, on 
> multiple sites.

If they use persistent connections you'll also have to kill existing 
connections after you've prevented new connections via pg_hba.conf

This ...

psql -q -c 'select application_name as "Client", procpid as "PID", 
pg_terminate_backend(procpid) as "Disconnected" from pg_stat_activity where 
procpid <> pg_backend_pid()' database_name

... will kill all connections to the server. You can use application_name, 
client_addr, datname or usename to be more selective about who lives and who 
dies.
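
One caveat for newer servers: pg_stat_activity's procpid column was renamed 
to pid in 9.2.  A sketch of the same idea there, narrowed to one role 
('webapp' is only a placeholder):

select pid, pg_terminate_backend(pid) as "Disconnected"
from pg_stat_activity
where pid <> pg_backend_pid() and usename = 'webapp';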

Cheers,
  Steve

> --
> Mike Nolan
>
> On Sat, Dec 13, 2014 at 11:10 PM, Adrian Klaver  
> wrote:
> On 12/13/2014 08:13 PM, Michael Nolan wrote:
> I have several web apps that access our Postgresql database that I'd
> like to lock out of the database for about an hour during a weekly
> maintenance interval. (There are some internal users that do not get
> locked out, because they're running the maintenance tasks.)
>
> There are no time-of-day access limitation parameters in the
> pg_hba.conf file, are there any simple ways to do this?
>
> Use a cron job that at beginning of period swaps out the pg_hba.conf with one 
> that denies access, reloads server and then at end of time period reverse 
> procedure ?
>
> --
> Mike Nolan
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com





[GENERAL] table insert/primary key question

2014-04-02 Thread Scot Kreienkamp
Hey everyone,

I have a table like so:

Receiptlimitid: BIGINT (Primary Key)
Profitcenterid: BIGINT
Receiptnumber: INTEGER

All are set to Not Null also.


My question is, if I have an insert that goes idle in transaction for a while 
before it commits, will it stop all other inserts from happening on the table?  
If so, is that because of the possible violation of the primary key by the 
following pending inserts?  Just trying to confirm the behavior I'm seeing and 
make sure I understand what's going on.
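
For what it's worth, the conflict is easy to reproduce in two psql sessions.  
A sketch, assuming the table above is named receiptlimit (the name is a 
guess):

-- session 1
begin;
insert into receiptlimit values (1, 100, 5000);  -- key 1 held, uncommitted

-- session 2
insert into receiptlimit values (2, 100, 5001);  -- different key: returns at once
insert into receiptlimit values (1, 100, 5002);  -- same key: blocks until
                                                 -- session 1 commits or aborts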

Thanks!


Scot Kreienkamp | Senior Systems Engineer | La-Z-Boy Incorporated ®
1284 North Telegraph Road | Monroe, Michigan 48162 | Phone: 734-384-6403 | 
Email: skre...@la-z-boy.com
www.la-z-boy.com | facebook.com/lazboy | twitter.com/lazboy | youtube.com/lazboy






Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-26 Thread Scot Kreienkamp

> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Albe Laurenz
> Sent: Wednesday, September 26, 2012 5:15 AM
> To: Scot Kreienkamp; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] idle in transaction query makes server unresponsive
>
> Scot Kreienkamp wrote:
> > I have a problem that I've been struggling with for quite some time.
> Every once in a while I will get
> > a connection that goes to idle in transaction on an in-house
> programmed application that connects with
> > JDBC.  That happens fairly regularly and the programmers are trying to
> clean that up, but sometimes
> > the idle in transaction connection makes the PG server entirely
> unresponsive.  I'm not getting
> > connection refused, nothing.  All connections existing or new, JDBC or
> psql, just hang.  I've already
> > got full query logging on to try to catch the problem query or
> connection so I can give the developers
> > somewhere to look to resolve their issue with the application, but
> since queries are logged with
> > runtimes I'm assuming they are only logged after they are complete.
> And since it's idle in
> > transaction it never completes so it never gets logged.  Our
> application is connecting as an
> > unprivileged user named rmstomcat, and the database is limited to 400
> connections out of 512.  I'm not
> > running out of connections as I've got reserved connections set, and
> even connecting as user postgres
> > with psql the connection just hangs.  The server doesn't appear to be
> running out of memory when this
> > happens and nothing is printed in the log.  The only thing that
> resolves it is doing a kill on the PID
> > of any idle in transaction connections existing at the time causing
> them to roll back.  Then
> > everything else picks up right where it left off and works again.
> >
> > Can anyone give me any hints about why PG becomes unresponsive?  Or
> how to fix it so it doesn't?
> >
> > My server is 9.1.2 right now.  I will be upgrading to the latest 9.1
> series soon, but until 9.2 can be
> > run through our development/testing cycle I can't upgrade to 9.2.
> That will take about 6-10 months.
>
> Yes, see if upgrading to the latest 9.1 minor release makes the problem
> disappear.
>
> It is surprising that you cannot even start new connections.
>
> You could try to "strace" the postmaster during a connection attempt
> and see what happens.  Maybe that helps to spot the place where
> things go wrong.
>
> Yours,
> Laurenz Albe

[Scot Kreienkamp]
I'm willing to give it a try, but I've never done it before.  What do I need to 
do?
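
A minimal sketch of what that looks like (output path is arbitrary; the 
first line of postmaster.pid in the data directory holds the postmaster PID):

head -1 $PGDATA/postmaster.pid              # find the postmaster PID
strace -f -tt -o /tmp/postmaster.trace -p <postmaster_pid>

# in another terminal, make a connection attempt while the trace runs:
psql -U postgres -d postgres -c 'select 1'

-f follows the forked backend processes and -tt timestamps each call.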





Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-26 Thread Scot Kreienkamp


> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Thomas Kellerer
> Sent: Tuesday, September 25, 2012 5:25 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] idle in transaction query makes server unresponsive
>
> Scot Kreienkamp wrote on 25.09.2012 22:35:
> > The application is using a pooler and generally runs around 100
> > connections, but I've seen it as high as 200 during the day for
> > normal use.  It's on a large server; 64 cores total and about 500
> > gigs of memory.  That's one of the reasons I left it at 512
> > connections.
>
> We had several web applications where performance was *improved*
> by configuring the connection pool have a a lot less connections.
>
> There is a threshold where too many connections
> will simply flood the server. Lowering the number of processes
> fighting for resource makes each process faster.
>
> You might want to give it a try.
>

[Scot Kreienkamp]
Can I get the same effect by limiting the number of logons for the user that my 
application is connecting as to 250?  Or do I need to lower the number in 
postgresql.conf?  I'd rather go with the first option because I can adjust it 
live without editing and reloading config files.
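
The per-role form is a single statement and takes effect for new 
connections without a reload.  A sketch using the rmstomcat role named 
earlier in the thread:

ALTER ROLE rmstomcat CONNECTION LIMIT 250;

-- verify:
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'rmstomcat';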





Re: [GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread Scot Kreienkamp
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of John R Pierce
> Sent: Tuesday, September 25, 2012 3:53 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] idle in transaction query makes server unresponsive
>
> On 09/25/12 12:23 PM, Scot Kreienkamp wrote:
> >
> > I have a problem that I've been struggling with for quite some time.
> > Every once in a while I will get a connection that goes to idle in
> > transaction on an in-house programmed application that connects with
> > JDBC.  That happens fairly regularly and the programmers are trying to
> > clean that up, but sometimes the idle in transaction connection makes
> > the PG server entirely unresponsive.  I'm not getting connection
> > refused, nothing.  All connections existing or new, JDBC or psql, just
> > hang.  I've already got full query logging on to try to catch the
> > problem query or connection so I can give the developers somewhere to
> > look to resolve their issue with the application, but since queries
> > are logged with runtimes I'm assuming they are only logged after they
> > are complete.  And since it's idle in transaction it never completes
> > so it never gets logged.  Our application is connecting as an
> > unprivileged user named rmstomcat, and the database is limited to 400
> > connections out of 512.  I'm not running out of connections as I've
> > got reserved connections set, and even connecting as user postgres
> > with psql the connection just hangs.  The server doesn't appear to be
> > running out of memory when this happens and nothing is printed in the
> > log.  The only thing that resolves it is doing a kill on the PID of
> > any idle in transaction connections existing at the time causing them
> > to roll back. Then everything else picks up right where it left off
> > and works again.
> >
> > Can anyone give me any hints about why PG becomes unresponsive?  Or
> > how to fix it so it doesn't?
> >
>
>
> that is a LOT of connections.   you likely should be limiting that with
> a connection pooler, and configuring your application to ...
>
> 1) get connection from pool
> 2) execute transaction
> 3) release connection to pool
>
> then configure the pool to stall the requester when some sane number of
> connections has been reached, like no more than 2-3X the number of CPU
> cores or hardware threads you have.   you'll likely get better overall
> throughput.
>
> if you have jobs that execute long-running queries for reporting etc,
> have those use a separate smaller pool.
>
> re: your logging ... "idle in transaction" means that connection has no
> query running but started a transaction.  there's no pending query on
> that connection.  these are normally only a concern when they go on
> for a long time, say 10 minutes or more. however, if that transaction
> has gotten locks on resources, and is then sitting on its thumbs doing
> nothing, OTHER connections likely will block.   join pg_stat_activity
> with pg_locks to find out what all is going on..
>
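
A minimal sketch of that join, using the pre-9.2 column names that match 
the rest of this thread:

select a.procpid, a.usename, l.locktype, l.mode, l.granted, a.current_query
from pg_locks l
join pg_stat_activity a on l.pid = a.procpid
order by l.granted, a.procpid;

Rows with granted = false are the waiters; granted rows whose 
current_query is '<IDLE> in transaction' are the usual suspects holding 
them up.
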
[Scot Kreienkamp]

Hi John,

The application is using a pooler and generally runs around 100 connections, 
but I've seen it as high as 200 during the day for normal use.  It's on a large 
server; 64 cores total and about 500 gigs of memory.  That's one of the reasons 
I left it at 512 connections.  The idle in transaction connections are getting 
locks and then going idle in transaction causing the queries to be waiting in 
that database.  That I can understand.  My problem is that I can't run a query 
to see what exactly it's doing because the entire Postgres server is 
unresponsive.  I can't even use psql to connect to the postgres user database 
as user postgres so I can query pg_stat_activity, that hangs also until I kill 
the idle in transaction query PID.  That's what my dilemma is.  The server 
hardware itself is not being stressed when that's happening though, so it 
doesn't appear to be a resource problem, but I can't check because I can't see 
what PG is doing.

The problem is how do I investigate this when PG is entirely unresponsive?  Why 
is it becoming unresponsive, and how do I prevent the PG server from becoming 
unresponsive?

Thanks!





[GENERAL] idle in transaction query makes server unresponsive

2012-09-25 Thread Scot Kreienkamp
Hi everyone,

I have a problem that I've been struggling with for quite some time.  Every 
once in a while I will get a connection that goes to idle in transaction on an 
in-house programmed application that connects with JDBC.  That happens fairly 
regularly and the programmers are trying to clean that up, but sometimes the 
idle in transaction connection makes the PG server entirely unresponsive.  I'm 
not getting connection refused, nothing.  All connections existing or new, JDBC 
or psql, just hang.  I've already got full query logging on to try to catch the 
problem query or connection so I can give the developers somewhere to look to 
resolve their issue with the application, but since queries are logged with 
runtimes I'm assuming they are only logged after they are complete.  And since 
it's idle in transaction it never completes so it never gets logged.  Our 
application is connecting as an unprivileged user named rmstomcat, and the 
database is limited to 400 connections out of 512.  I'm not running out of 
connections as I've got reserved connections set, and even connecting as user 
postgres with psql the connection just hangs.  The server doesn't appear to be 
running out of memory when this happens and nothing is printed in the log.  The 
only thing that resolves it is doing a kill on the PID of any idle in 
transaction connections existing at the time causing them to roll back.  Then 
everything else picks up right where it left off and works again.

Can anyone give me any hints about why PG becomes unresponsive?  Or how to fix 
it so it doesn't?

My server is 9.1.2 right now.  I will be upgrading to the latest 9.1 series 
soon, but until 9.2 can be run through our development/testing cycle I can't 
upgrade to 9.2.  That will take about 6-10 months.

Thanks!

Scot Kreienkamp






Re: [GENERAL] list blocking queries

2012-01-31 Thread Scot Kreienkamp
Tom,

There's something wrong with the query that I've written based on what you gave 
me.

Here's what I have:

select bl.pid as Blocked_PID,
a.usename as Blocked_User,
kl.pid as Blocking_PID,
ka.usename as Blocking_User,
to_char(age(now(), a.query_start),'HH24h:MIm:SSs') as Age
from pg_catalog.pg_locks bl
join pg_catalog.pg_stat_activity a on bl.pid = a.procpid
join pg_catalog.pg_locks kl
join pg_catalog.pg_stat_activity ka on bl.locktype = kl.locktype
and bl.database is not distinct from kl.database
and bl.relation is not distinct from kl.relation
and bl.page is not distinct from kl.page
and bl.tuple is not distinct from kl.tuple
and bl.virtualxid is not distinct from kl.virtualxid
and bl.transactionid is not distinct from kl.transactionid
and bl.classid is not distinct from kl.classid
and bl.objid is not distinct from kl.objid
and bl.objsubid is not distinct from kl.objsubid
and bl.pid != kl.pid
where kl.granted and not bl.granted

When I run that I get:
ERROR:  syntax error at or near "where"
LINE 20: where kl.granted and not bl.granted

I think I'm missing something in one of the joins.
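
For the archives, the missing piece is that the join on kl needs its own 
ON clause pairing kl.pid with ka.procpid.  A corrected sketch assembled 
from Tom's reply quoted below:

select bl.pid as Blocked_PID,
a.usename as Blocked_User,
kl.pid as Blocking_PID,
ka.usename as Blocking_User,
to_char(age(now(), a.query_start),'HH24h:MIm:SSs') as Age
from pg_catalog.pg_locks bl
join pg_catalog.pg_stat_activity a on bl.pid = a.procpid
join pg_catalog.pg_locks kl on bl.locktype = kl.locktype
and bl.database is not distinct from kl.database
and bl.relation is not distinct from kl.relation
and bl.page is not distinct from kl.page
and bl.tuple is not distinct from kl.tuple
and bl.virtualxid is not distinct from kl.virtualxid
and bl.transactionid is not distinct from kl.transactionid
and bl.classid is not distinct from kl.classid
and bl.objid is not distinct from kl.objid
and bl.objsubid is not distinct from kl.objsubid
and bl.pid != kl.pid
join pg_catalog.pg_stat_activity ka on kl.pid = ka.procpid
where kl.granted and not bl.granted;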

Thanks!

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Monday, January 30, 2012 9:49 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] list blocking queries

Scot Kreienkamp  writes:
> My apologies for the slightly novice post on this, but I'm a bit stumped.  I 
> have this query that I found on the net and adapted a little to find the 
> queries that were blocking:

> "select bl.pid as \"Blocked PID\", a.usename as \"Blocked User\", kl.pid as 
> \"Blocking PID\", ka.usename as \"Blocking User\", to_char(age(now(), 
> a.query_start),'HH24h:MIm:SSs') as \"Age\"  from pg_catalog.pg_locks bl join 
> pg_catalog.pg_stat_activity a on bl.pid = a.procpid join pg_catalog.pg_locks 
> kl join pg_catalog.pg_stat_activity ka on kl.pid = ka.procpid on 
> bl.transactionid = kl.transactionid and bl.pid != kl.pid where not 
> bl.granted;"

Hm, that would only have worked for rather small values of "work",
because it's matching pg_locks entries on the basis of the transactionid
field, which means it will only detect conflicts for locks on
transaction IDs.  There are a lot of other types of locks.  You need
something more like

join ... on bl.locktype = kl.locktype
and bl.database is not distinct from kl.database
and bl.relation is not distinct from kl.relation
and bl.page is not distinct from kl.page
and bl.tuple is not distinct from kl.tuple
and bl.virtualxid is not distinct from kl.virtualxid
and bl.transactionid is not distinct from kl.transactionid
and bl.classid is not distinct from kl.classid
and bl.objid is not distinct from kl.objid
and bl.objsubid is not distinct from kl.objsubid
and bl.pid != kl.pid

Since most of these fields will be nulls in any specific rows, you have
to use "is not distinct from" not just "=".  Tedious, I know.

The WHERE clause seems a few bricks shy of a load as well; you need

where kl.granted and not bl.granted

if you don't want it to claim that fellow blockees are blocking each
other.  (In some cases that would actually be a fair statement, but
I don't think it's possible to tell from pg_locks who's queued behind
whom in the wait-list for a lock, so it's probably best not to try
to show those relationships.)

regards, tom lane





[GENERAL] list blocking queries

2012-01-30 Thread Scot Kreienkamp
Hey everyone,

My apologies for the slightly novice post on this, but I'm a bit stumped.  I 
have this query that I found on the net and adapted a little to find the 
queries that were blocking:

"select bl.pid as \"Blocked PID\", a.usename as \"Blocked User\", kl.pid as 
\"Blocking PID\", ka.usename as \"Blocking User\", to_char(age(now(), 
a.query_start),'HH24h:MIm:SSs') as \"Age\"  from pg_catalog.pg_locks bl join 
pg_catalog.pg_stat_activity a on bl.pid = a.procpid join pg_catalog.pg_locks kl 
join pg_catalog.pg_stat_activity ka on kl.pid = ka.procpid on bl.transactionid 
= kl.transactionid and bl.pid != kl.pid where not bl.granted;"

To test it I ran this twice simultaneously:
begin work;
lock table traffic in EXCLUSIVE mode;
select pg_sleep(300);
commit work;

PG_stat_activity does show the second lock table query as waiting, but the 
query I pasted above returns zero results.  I have not run this same test on 
8.3 as pg_sleep didn't exist then, but I know the query works because I've used 
it in my 8.3 production environment.  Bottom line is I'm not sure if the query 
to list blocked/blocking PID's doesn't work on 9.1 or if it's the way I'm 
testing it.

That query is a bit above my head for now as I'm only a part time DBA at my 
job.  Thank you for any insights you can give me.

BTW, the streaming replication and hot standby on 9.1 rocks!!!  Thanks to 
everyone involved for that!!

Scot Kreienkamp
skre...@la-z-boy.com






Re: [GENERAL] initdb with lc-collate=C

2011-12-13 Thread Scot Kreienkamp
Thanks Tom.  I had tried it with su before I mailed the list and it didn't 
work.  When I tried it again, I noticed that I had typed a lower-case c, 
which it wouldn't accept.  I tried it again with an upper-case C and it 
worked.  That's probably why my first attempt with su failed and I didn't 
catch the error message before it rolled off the screen.

The initdb on the initscript used to honor the extra switches.  That's how 
I've been doing it since 8.2.  I'll run it with su from now on.

Thanks!

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Tom Lane
Sent: Tuesday, December 13, 2011 12:00 AM
To: Scot Kreienkamp
Cc: Adrian Klaver; Postgres General (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] initdb with lc-collate=C

Scot Kreienkamp  writes:
> There wasn't any output from the initdb other than OK.
> [root@dvrv5030 9.1]# /etc/init.d/postgresql-9.1 initdb --lc-collate=C
> Initializing database: [  OK  ]

Um.  This isn't running initdb: this is running the package's initscript
and hoping it will pass the switch you supplied through to initdb.
If it doesn't, you'll get whatever locale is default in root's environment.

I gather from the package name that you're using Devrim's packages not
mine, so I don't know for sure what will happen here ... but I'm pretty
sure that that extra switch would *not* be honored in the Red Hat/Fedora
initscripts.

My suggestion for this would be to "su - postgres" and then run initdb
directly.

regards, tom lane
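
A sketch of that, assuming the default PGDG 9.1 package paths:

su - postgres
/usr/pgsql-9.1/bin/initdb --lc-collate=C -D /var/lib/pgsql/9.1/data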



Re: [GENERAL] initdb with lc-collate=C

2011-12-12 Thread Scot Kreienkamp
There wasn't any output from the initdb other than OK.

[root@dvrv5030 9.1]# /etc/init.d/postgresql-9.1 initdb --lc-collate=C
Initializing database: [  OK  ]
[root@dvrv5030 9.1]# /etc/init.d/postgresql-9.1 start
Starting postgresql-9.1 service:   [  OK  ]

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
Sent: Monday, December 12, 2011 3:56 PM
To: Scot Kreienkamp
Cc: Postgres General (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] initdb with lc-collate=C

On 12/12/2011 12:37 PM, Scot Kreienkamp wrote:
> Nope  no clusters.  I never got past the initial install and configure.
>
> All I did was install, initdb, alter a few things in postgresql.conf (nothing 
> relating to locale) and pg_hba.conf, start postgres using the init script, 
> and run the query to check the collation setting.  Nothing more.

Did you happen to catch the initdb output message?

>
> Scot Kreienkamp
> Senior Systems Engineer
> skre...@la-z-boy.com
>

--
Adrian Klaver
adrian.kla...@gmail.com


Re: [GENERAL] initdb with lc-collate=C

2011-12-12 Thread Scot Kreienkamp
Nope  no clusters.  I never got past the initial install and configure.

All I did was install, initdb, alter a few things in postgresql.conf (nothing 
relating to locale) and pg_hba.conf, start postgres using the init script, and 
run the query to check the collation setting.  Nothing more.

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com]
Sent: Monday, December 12, 2011 3:26 PM
To: Scot Kreienkamp
Cc: Postgres General (pgsql-general@postgresql.org)
Subject: Re: [GENERAL] initdb with lc-collate=C

On 12/12/2011 12:15 PM, Adrian Klaver wrote:
> On 12/12/2011 10:49 AM, Scot Kreienkamp wrote:
>> Hey guys,
>>
>> In PG 8.x, when I did an initdb with --lc-collate=c it was always
>> effective in setting it server wide so it would apply to all databases.
>> However, in 9.1.2, when I run initdb like so: /etc/init.d/postgresql-9.1
>> initdb --lc-collate=C, it doesn't seem to have any effect.
>>
>> [root@dvrv5030 9.1]# psql -U postgres -Aqtc "select setting from
>> pg_settings where name='lc_collate'"
>>
>> en_US.UTF-8
>>
>> Not sure why the difference in behavior. Our application depends on
>> collation being set to C. I can create the individual database with that
>> option, but why won't it stick setting it server-wide with initdb?
>
> Databases are created using template1 as the default template. Did the
> lc_collate get changed for template1?

Wrong question :(  I read the above too quickly and did not pick up that
you were getting the setting from pg_settings.

Try again.
Is there more than one database cluster on the machine and if so are you
sure you did the initdb on the same cluster as the select..?

>
>>
>> Thanks!
>>
>> Scot Kreienkamp
>>
>
>
>


--
Adrian Klaver
adrian.kla...@gmail.com


[GENERAL] initdb with lc-collate=C

2011-12-12 Thread Scot Kreienkamp
Hey guys,

In PG 8.x, when I did an initdb with --lc-collate=c it was always effective in 
setting it server wide so it would apply to all databases.  However, in 9.1.2, 
when I run initdb like so:  /etc/init.d/postgresql-9.1 initdb --lc-collate=C, 
it doesn't seem to have any effect.

[root@dvrv5030 9.1]# psql -U postgres -Aqtc "select setting from pg_settings 
where name='lc_collate'"
en_US.UTF-8

Not sure why the difference in behavior.  Our application depends on collation 
being set to C.  I can create the individual database with that option, but why 
won't it stick setting it server-wide with initdb?
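
For comparison, the per-database values, as opposed to the server-wide 
default reported by pg_settings, can be checked with:

select datname, datcollate, datctype from pg_database;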

Thanks!

Scot Kreienkamp
Senior Systems Engineer
skre...@la-z-boy.com



[GENERAL] Help with ERROR: character 0xc280 of encoding "UTF8" has no equivalent in "WIN1252"

2011-06-13 Thread Scot Kreienkamp
Hi everyone,



I have a database that is UTF8, and a client that is connecting using
client encoding WIN1252.  There are a few records in my database that
somehow have gotten characters in them that are not viewable.  When
viewing the records from the command line using UTF8 client encoding
there are strange looking characters, and the records cause an error
when viewing under WIN1252 encoding.



I cannot modify the data, so my alternative was this query, which I
found an example for on a list that Google was kind enough to translate
from French for me:



set client_encoding to 'WIN1252' ;

SELECT soldnotesid, soldid,
       regexp_replace(notes, E'\u008C', ' ', 'g') as notes,
       privatenote, modified, userloginid, notetype, sourcekey, hidden,
       notesmodified, notesmodifiedby, created
FROM soldnotes
WHERE soldid < '317773002' AND soldid > '317771002'



However, it still errors out:  psql:sql3:7: ERROR:  character 0xc280 of
encoding "UTF8" has no equivalent in "WIN1252"



Is there any way to mask these "bad" characters from the field in a
query to allow the client to work?  I'll put it in a view once I get it
working so the end users can use the view.



PG 8.3.7.
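
Since 0xc2 0x80 is the UTF-8 encoding of U+0080, a C1 control character 
just like the U+008C already being replaced, one possible variation is to 
widen the replacement to the whole C1 range.  A sketch (column list 
shortened; the \\x escapes are resolved by the regex engine, not the 
string parser):

set client_encoding to 'WIN1252';
SELECT soldnotesid, soldid,
       regexp_replace(notes, E'[\\x80-\\x9f]', ' ', 'g') as notes
FROM soldnotes
WHERE soldid < '317773002' AND soldid > '317771002';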



Thanks for any help you can provide.



Scot Kreienkamp










Re: [GENERAL] archiver process problem

2010-10-18 Thread Scot Kreienkamp
Never mind, it picked up again.  It took a few hours, not sure why.

 

Scot Kreienkamp

skre...@la-z-boy.com

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Scot Kreienkamp
Sent: Monday, October 18, 2010 11:29 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] archiver process problem

 

Hey guys,

 

My archive became unavailable for a short while, causing the archiver
process to stop copying files out.  Now it seems to be stuck although
the archive location is available again.

 

This is from ps -ax:  postgres: archiver process   last was
0001032300F8

 

How do I wake up the archiver process again so it starts copying files
out, short of restarting postgres?  Running PG 8.3.7.

 

Thanks!

 

Scot Kreienkamp

skre...@la-z-boy.com

 



[GENERAL] archiver process problem

2010-10-18 Thread Scot Kreienkamp
Hey guys,

 

My archive became unavailable for a short while, causing the archiver
process to stop copying files out.  Now it seems to be stuck although
the archive location is available again.

 

This is from ps -ax:  postgres: archiver process   last was
0001032300F8

 

How do I wake up the archiver process again so it starts copying files
out, short of restarting postgres?  Running PG 8.3.7.

 

Thanks!

 

Scot Kreienkamp

skre...@la-z-boy.com

 



Re: [GENERAL] help with error "unexpected pageaddr"

2010-09-15 Thread Scot Kreienkamp
Shouldn't have, the only thing we did to the server was restart it and
run our database queries.  Clearing out all the wal files from pg_xlog
along with a new base backup did fix it though.

Thanks for the help Tom!

Scot Kreienkamp
skre...@la-z-boy.com




Re: [GENERAL] help with error "unexpected pageaddr"

2010-09-15 Thread Scot Kreienkamp

"Scot Kreienkamp"  writes:
> We have a PG 8.3.7 server that is doing WAL log shipping to 2 other
> servers that are remote mirrors.  This has been working well for almost
> two years.  Last night we did some massive data and structure changes to
> one of our databases.  Since then I get these errors on the two mirrors:

> 2010-09-15 08:35:05 EDT: LOG:  restored log file
> "0001030100D9" from archive

> 2010-09-15 08:35:27 EDT: LOG:  restored log file
> "0001030100DA" from archive

> 2010-09-15 08:35:40 EDT: LOG:  restored log file
> "0001030100DB" from archive

> 2010-09-15 08:35:40 EDT: LOG:  unexpected pageaddr 301/4700 in log
> file 769, segment 219, offset 0

This appears to indicate that you archived the wrong contents of log
file 0001030100DB.  If you don't still have the correct
contents on the master, I think the only way to recover is to take a
fresh base backup so you can make the slaves roll forward from a point
later than this log segment.  There's no reason to suppose that there's
data corruption on the master, just bad data in the WAL archive.

You'd probably be well advised to look closely at your WAL archiving
script to see if it has any race conditions that might be triggered by
very fast generation of WAL.

> Also, one additional question.  I don't have a 1.history file which
> makes the PITRTools complain constantly.  Is there any way to
> regenerate this file?

Just ignore that, it's cosmetic (the file isn't supposed to exist).

regards, tom lane


Tom,

I tried to take a new base backup about 45 minutes ago.  The master has
rolled forward a number of WAL files since I last tried, but it still
fails.  

LOG:  restored log file "0001030100FE" from archive
LOG:  restored log file "00010302" from archive
LOG:  restored log file "000103020001" from archive
LOG:  restored log file "000103020002" from archive
LOG:  restored log file "000103020003" from archive
LOG:  unexpected pageaddr 301/5000 in log file 770, segment 3,
offset 0
LOG:  redo done at 302/2BCE828
LOG:  last completed transaction was at log time 2010-09-15
15:07:01.040854-04
LOG:  restored log file "000103020002" from archive
LOG:  selected new timeline ID: 2

My entire WAL archiving script is 4 cp %p %f commands.  It's so short I
don't even have a script, it's directly in the postgresql.conf archive
command.
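
For comparison, the archive_command example in the PostgreSQL 
documentation refuses to overwrite a segment that already exists in the 
archive, which makes races fail loudly instead of silently (paths 
assumed):

archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'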




[GENERAL] help with error "unexpected pageaddr"

2010-09-15 Thread Scot Kreienkamp
Hey everyone,

 

We have a PG 8.3.7 server that is doing WAL log shipping to 2 other
servers that are remote mirrors.  This has been working well for almost
two years.  Last night we did some massive data and structure changes to
one of our databases.  Since then I get these errors on the two mirrors:

 

2010-09-15 08:35:05 EDT: LOG:  restored log file
"0001030100D9" from archive

2010-09-15 08:35:27 EDT: LOG:  restored log file
"0001030100DA" from archive

2010-09-15 08:35:40 EDT: LOG:  restored log file
"0001030100DB" from archive

2010-09-15 08:35:40 EDT: LOG:  unexpected pageaddr 301/4700 in log
file 769, segment 219, offset 0

2010-09-15 08:35:40 EDT: LOG:  redo done at 301/DA370780

2010-09-15 08:35:40 EDT: LOG:  last completed transaction was at log
time 2010-09-15 08:30:01.24936-04

2010-09-15 08:35:40 EDT: LOG:  restored log file
"0001030100DA" from archive

2010-09-15 08:36:26 EDT: LOG:  selected new timeline ID: 2

2010-09-15 08:37:11 EDT: LOG:  archive recovery complete

 

I've taken two separate file level backups and tried to restart the
mirrors, and every time on both servers I get a similar error message.
I seem to recall reading that it may have something to do with
corruption in the timeline, which is why it's jumping to a new timeline
ID.

 

1.  Can anyone tell me what this means?  

2.  Is there some corruption in the database?  

3.  If so, is there an easy way to fix it?  

 

Also, one additional question.  I don't have a 1.history file which
makes the PITRTools complain constantly.  Is there any way to regenerate
this file?

 

Any help would be much appreciated.  I'm rather worried that I've got
corruption, and not having the mirrors running puts us at risk for data
loss.



Re: [GENERAL] CIDR data type query help

2010-05-20 Thread Scot Kreienkamp
Thanks Steve, that works nicely in the testing I've done so far.  

I'll keep the pgfoundry project in mind.  I don't see this growing
overly large, but you never know.  I didn't realize the native CIDR type
couldn't be usefully indexed for "contains" queries.

Scot Kreienkamp
skre...@la-z-boy.com


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Atkins
Sent: Wednesday, May 19, 2010 2:18 PM
To: pgsql-general General
Subject: Re: [GENERAL] CIDR data type query help


On May 19, 2010, at 10:32 AM, Scot Kreienkamp wrote:

> Hi everyone,
> 
> I have a column of type CIDR in a table that I am querying that contains
> the values of 10/8, 10.1/16, 10.1.28/24, and 10.1.28.95.  I am trying to
> return only the one record that's most specific compared to the IP
> address I'm currently on as this is being done in a CGI script.  If
> there's no specific match for the IP of the originating workstation then
> it should return the /24 if it's there, then the /16 if it's there, etc.
> I have never worked with the CIDR type, and am a novice when it comes to
> SQL query language, so I have no idea how to approach this.

Something like this (untested):

select foo from table where foo >>= '10.1.28.14' order by masklen(foo)
desc limit 1;

You likely want to look at http://pgfoundry.org/projects/ip4r/ as an
alternative,
if the table is likely to grow beyond a few dozen rows. It's usefully
indexable
for "contains" queries, unlike the native cidr type,

Cheers,
  Steve
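
A quick demo of that query shape against the values from the question 
(table and column names are made up):

create table networks (net cidr);
insert into networks values ('10/8'), ('10.1/16'), ('10.1.28/24'), ('10.1.28.95');

select net from networks
where net >>= inet '10.1.28.95'
order by masklen(net) desc limit 1;
-- returns 10.1.28.95/32; for 10.1.28.200 it would return 10.1.28.0/24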






[GENERAL] CIDR data type query help

2010-05-19 Thread Scot Kreienkamp
Hi everyone,

I have a column of type CIDR in a table that I am querying that contains
the values of 10/8, 10.1/16, 10.1.28/24, and 10.1.28.95.  I am trying to
return only the one record that's most specific compared to the IP
address I'm currently on, as this is being done in a CGI script.  If
there's no specific match for the IP of the originating workstation then
it should return the /24 if it's there, then the /16 if it's there, etc.
I have never worked with the CIDR type, and am a novice when it comes to
SQL query language, so I have no idea how to approach this.

Thanks for any help.

Scot Kreienkamp
skre...@la-z-boy.com




Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
 

ISTM that 9.0's read-only standby feature may be of use to you.  I know
it doesn't help you *today* but have you looked at it yet?

 

[Scot Kreienkamp] 

I had considered it and it will make my life easier for my reporting
server, but unfortunately in this case I need a read-write copy.  

 

 



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
Would the stats come across in WAL log shipping to a physically separate 
server?  My understanding is that they won't.

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com


-Original Message-
From: gsst...@gmail.com [mailto:gsst...@gmail.com] On Behalf Of Greg Stark
Sent: Tuesday, March 09, 2010 9:39 AM
To: Scot Kreienkamp
Cc: Tom Lane; Scott Mead; pgsql-general@postgresql.org
Subject: Re: autovacuum question

On Tue, Mar 9, 2010 at 1:47 PM, Scot Kreienkamp  wrote:
> I found a way to do it very easily using LVM snapshots and WAL log
> shipping, but the net effect is I'm bringing a new LVM snapshot copy of
> the database out of recovery every 1-2 hours.  That means I'd have to
> spend 15 minutes, or one-quarter of the time, doing an analyze every
> time I refresh the database.  That's fairly painful.  The LVM snap and
> restart only takes 1-2 minutes right now.

Your snapshot should have the same stats that the server does, so this
doesn't actually seem to explain the discrepancy.

You may be running into performance problems with LVM if the snapshot is
the one paying the price for all the CoW copies. Or it could be that
doing retail block copies as needed results in them being fragmented
and destroying the sequential scan performance. You might be able to
reduce the difference by making sure to do a vacuum and a checkpoint
immediately prior to the snapshot. That would hopefully achieve
setting most hint bits so that read-only queries on the snapshot don't
cause writes to blocks just to set them.
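
Concretely, that is just two statements on the master right before taking
the snapshot (a sketch):

vacuum;      -- sets hint bits, so reads on the snapshot don't turn into writes
checkpoint;  -- flushes dirty buffers, so the snapshot starts clean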

There might be an option in LVM to materialize the entire snapshot
which might be able to bring the performance up to the same level and
hopefully allocate all the blocks sequentially.


-- 
greg



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
I'm gonna take a scientific wild-assed guess that the real issue here
is caching, or more specifically, lack thereof when you first start up
your copy of the db.

[Scot Kreienkamp] 
That is definitely one of the problems.  No way to help that that I'm
aware of.  



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
 

ISTM that 9.0's read-only standby feature may be of use to you.  I know
it doesn't help you *today* but have you looked at it yet?

 

 

   Okay, so the RO database won't work.  How much data are we talking?
How much growth do you see between snapshots?

 

The initial database size is 31 gigs.  I give it 5 gigs in the snapshot
to grow in, but I would be surprised if more than 200 megs of data
changes.  The actual change rate should be very low.  



Re: [GENERAL] autovacuum question

2010-03-09 Thread Scot Kreienkamp
Wish I could, Tom.  I need a non-production, read-write copy of the
database that is updated every 1-2 hours from production.  I don't set
this requirement, the business does; I just have to do it if it's
technically possible.

I found a way to do it very easily using LVM snapshots and WAL log
shipping, but the net effect is I'm bringing a new LVM snapshot copy of
the database out of recovery every 1-2 hours.  That means I'd have to
spend 15 minutes, or one-quarter of the time, doing an analyze every
time I refresh the database.  That's fairly painful.  The LVM snap and
restart only takes 1-2 minutes right now.  

If you have any other ideas how I can accomplish or improve this I'm all
ears.  

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, March 08, 2010 10:32 PM
To: Scot Kreienkamp
Cc: Scott Mead; pgsql-general@postgresql.org
Subject: Re: [GENERAL] autovacuum question 

"Scot Kreienkamp"  writes:
>> Why not just add an 'analyze' as the last step of the restore job?

> Due to the amount of time it takes.  The disks are slow enough to make
a
> database-wide analyze painful since I would have to repeat it every
1-2
> hours, IE every reload time.  

You claimed that before.  It didn't make any sense then and it doesn't
now.  There is no way that an analyze is expensive compared to a
database reload.

Maybe what you need to be doing is rethinking the strategy that involves
reloading every couple of hours...

regards, tom lane



Re: [GENERAL] autovacuum question

2010-03-08 Thread Scot Kreienkamp
 

On Mon, Mar 8, 2010 at 5:13 PM, Scot Kreienkamp 
wrote:

Hi everyone,

I have a database that is constantly getting reloaded several times per
day from production backups and is used for reporting purposes.  The
problem I'm having with it is that the database seems to be much slower
than the others I have that are more static.  I suspect that is due to
the lack of query planner statistics (analyze) having been done after
each restore, which is that way due to the amount of time it takes to
accomplish.

First, I'm hoping someone here can validate my theory.  

 

  It would seem likely that this could be the problem...

 

Cool Thanks.  Glad I'm on the right track.

 

Second, if that's true, is there any way to trigger an auto-analyze on a
table automatically the first time a query touches that table?

(I ask because there is no way to shrink the amount of time a
database-wide analyze would take into the window I have to do it in.
The expectations may be a bit unrealistic here, I know.)

 

Why not just add an 'analyze' as the last step of the restore job?

 

Due to the amount of time it takes.  The disks are slow enough to make a
database-wide analyze painful since I would have to repeat it every 1-2
hours, IE every reload time.  

 

Third, what parameters can I set to make analyze go as fast as possible,
knowing that the disks are slow on it because of the hardware?  (Can't
do anything about that either, FYI.)  Obviously more memory the better,
and setting maintenance work memory higher also.  Doing a vacuum is kind
of pointless because it gets reloaded every 2 hours, so all I really
need is an analyze --I think--.

 

Sounds like you've done what you can.  How long does an analyze take?

 

Last I tried it, it took 15 minutes on a 30 gig database while it was
being used.  

 

I'm on 8.3.7.  Since 8.3.10 is due out I'll probably upgrade to it after
it's shaken out a bit.

 

  Why not move up to 8.4? 

 

Because I'm constrained by our application developers who don't have the
time to vet our app against 8.4 yet.  I've been pushing for it for the
last 2 months.  

 

--Scott M

 

    Thanks for your help.  Any suggestions are welcome.

Scot Kreienkamp
skre...@la-z-boy.com




 



[GENERAL] autovacuum question

2010-03-08 Thread Scot Kreienkamp
Hi everyone,

I have a database that is constantly getting reloaded several times per
day from production backups and is used for reporting purposes.  The
problem I'm having with it is that the database seems to be much slower
than the others I have that are more static.  I suspect that is due to
the lack of query planner statistics (analyze) having been done after
each restore, which is that way due to the amount of time it takes to
accomplish.  

First, I'm hoping someone here can validate my theory.  Second, if
that's true, is there any way to trigger an auto-analyze on a table
automatically the first time a query touches that table?  (I ask because
there is no way to shrink the amount of time a database-wide analyze
would take into the window I have to do it in.  The expectations may be
a bit unrealistic here, I know.)  Third, what parameters can I set to
make analyze go as fast as possible, knowing that the disks are slow on
it because of the hardware?  (Can't do anything about that either, FYI)
Obviously more memory the better, and setting maintenance work memory
higher also.  Doing a vacuum is kind of pointless because it gets
reloaded every 2 hours, so all I really need is an analyze --I think--.


I'm on 8.3.7.  Since 8.3.10 is due out I'll probably upgrade to it after
it's shaken out a bit.  
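
One possible middle ground, if a database-wide analyze won't fit the 
window, is analyzing only the tables the reports actually touch.  A 
sketch (database and table names are placeholders):

psql -d reporting -c 'analyze verbose soldtrx'
psql -d reporting -c 'analyze verbose soldnotes'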

Thanks for your help.  Any suggestions are welcome.  

Scot Kreienkamp
skre...@la-z-boy.com





Re: [GENERAL] pg_dump native format will not restore correctly

2009-11-10 Thread Scot Kreienkamp

I'm still not buying the theory that the dump/restore method has
something to do with it.  You might try the test yourself: run
pg_restore's output into a text file and compare with the text dump.

regards, tom lane



Haven't figured out exactly why yet, but I did find the difference as to
why text based restore works for me but native format doesn't.  In the
text based restore permissions are granted to all restored tables and
the schema itself.  Using the native format permissions are granted to
restored tables, but NOT the schema, which makes all the tables in the
restored schema inaccessible to all but the postgres user.  This is of
course when restoring only the schema and its contents, not the entire
database.  

My text based method does a dump to a file from the production database
immediately prior to the restore, then cats the file to psql for the
restore.  My native format restore I'm simply restoring from last nights
native format backup of the database and using the schema only switch to
pg_restore.  Not sure if that makes the difference or not.

Thanks for the help in trying to figure this out.  I'm still looking
into it.
 
Scot Kreienkamp



Re: [GENERAL] pg_dump native format will not restore correctly

2009-11-09 Thread Scot Kreienkamp


-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Sam Mason
Sent: Thursday, November 05, 2009 10:50 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump native format will not restore correctly

On Thu, Nov 05, 2009 at 10:41:54AM -0500, Scot Kreienkamp wrote:
> Here's the query in the sandbox:
> psql -U postgres -d rms-prod -c "explain select * from soldtrx"
>  QUERY PLAN
> 
>  Seq Scan on soldtrx  (cost=0.00..223459.51 rows=4833151 width=278)
> (1 row) 
> 
> Another strange thing I just found when composing this email... If I set
> limit 5 on the query it works on the soldtrx table, but if I don't set a
> limit it just hangs.

That's to be expected; it's trying to copy all 5 million rows into the
psql process and that'll take a while.  I presume you weren't trying
a similar test on the original box as you'd have got exactly the same
thing.  Running COUNT(*) on the table is probably a better way to make
sure you've got all the rows you'd expect in there as you'll only have
to copy a single row over to psql--it'll still take a while for PG to
churn through things though. iostat or vmstat are good tools to see what
the database box is doing while it's working.


[Scot Kreienkamp] 
I guess I could buy that, but why would it show up suddenly in one
night?  We've never had a problem with this prior to last night.
Otherwise our reporting would have found it.  Those queries didn't
change from yesterday to today.

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com



[GENERAL] question about pg_dump

2009-10-22 Thread Scot Kreienkamp
Hi everyone,

 

I have a question about pg_dump backups.  Will changes made in the
database made during a pg_dump operation be reflected in the dumped
datafile?  Or is the data the exact dataset that was in the database as
of the start of the backup?

 

From what I understand of the documentation, any changes made during the
backup would NOT be in the dumped datafile.   That is very important for
a month-end process where I need consistent data as of exactly the time
the dump starts, which is why I'm confirming that is indeed the case.

 

Thanks,

 

Scot Kreienkamp

skre...@la-z-boy.com

 



Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp

On 16/10/2009 19:38, Scot Kreienkamp wrote:
> Hey everyone,
> 
>  
> 
> I apologize in advance for going slightly off topic, but I have never
> setup a centralized authentication scheme under Linux.  My question is,
> what do most people do for centralized command line, X, and PG
> authentication?  From what I've read the main choices are NIS or LDAP.
> LDAP would be problematic as I would have to embed a login and plain
> text password in the ldap.conf file for binding to the MS AD.  On the
> other hand, it seems like NIS is old, inflexible, outdated, and possibly
> nearing end of life.  We are a largely Windows shop with many app and
> database servers running Linux.  The Linux environment is growing too
> large not to do centralized authentication of some kind.  
> 
>  
> 
> At this point I'm open to suggestions or comments.  SSH and X are
> required, PG would be nice to be able to auth centrally as well while
> I'm at it.

Does "PG" = PostgreSQL? If so, it can do LDAP, Kerberos and PAM, among
other things:

[Scot Kreienkamp] 
But of course.  :)  

So I guess what I see taking shape is setting up everything to auth
against PAM locally, then setting up local PAM to auth to a remote
source.  
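
For the PG piece I'm guessing that would be a pg_hba.conf line along
these lines (the subnet and the PAM module are assumptions on my part):

host    all    all    10.0.0.0/8    pam

with /etc/pam.d/postgresql (the default PAM service name) delegating to
the remote source, e.g.:

auth     required   pam_ldap.so
account  required   pam_ldap.so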


Thanks,

Scot Kreienkamp
skre...@la-z-boy.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp
These are all RH4 and 5, so they do all have PAM.  I thought PAM had to
interface with something else to authenticate to another server, though,
which is where NIS and LDAP enter the picture.  Otherwise I'm not sure
how it works.

 

Thanks,

 

Scot Kreienkamp

skre...@la-z-boy.com

 

From: Scott Mead [mailto:scott.li...@enterprisedb.com] 
Sent: Friday, October 16, 2009 2:50 PM
To: Scot Kreienkamp
Cc: pgsql-general
Subject: Re: [GENERAL] slightly off-topic: Central Auth

 

 

On Fri, Oct 16, 2009 at 2:38 PM, Scot Kreienkamp 
wrote:

Hey everyone,

 

I apologize in advance for going slightly off topic, but I have never
setup a centralized authentication scheme under Linux.  My question is,
what do most people do for centralized command line, X, and PG
authentication?  From what I've read the main choices are NIS or LDAP.
LDAP would be problematic as I would have to embed a login and plain
text password in the ldap.conf file for binding to the MS AD.  


It sounds like PAM would be useful for you.  That's really what it
was built for.
--Scott



[GENERAL] slightly off-topic: Central Auth

2009-10-16 Thread Scot Kreienkamp
Hey everyone,

 

I apologize in advance for going slightly off topic, but I have never
setup a centralized authentication scheme under Linux.  My question is,
what do most people do for centralized command line, X, and PG
authentication?  From what I've read the main choices are NIS or LDAP.
LDAP would be problematic as I would have to embed a login and plain
text password in the ldap.conf file for binding to the MS AD.  On the
other hand, it seems like NIS is old, inflexible, outdated, and possibly
nearing end of life.  We are a largely Windows shop with many app and
database servers running Linux.  The Linux environment is growing too
large not to do centralized authentication of some kind.  

 

At this point I'm open to suggestions or comments.  SSH and X are
required, PG would be nice to be able to auth centrally as well while
I'm at it.

 

Thanks,

 

Scot Kreienkamp 

 



Re: [GENERAL] VMWare file system / database corruption

2009-09-21 Thread Scot Kreienkamp
We have Fibre Channel in production and iSCSI in development.  Both
work well, but iSCSI is definitely slower.  In our case the iSCSI disks
themselves are slower too, so that likely contributes to our speed on iSCSI.


Sent from my iPod

On Sep 21, 2009, at 4:24 PM, "John R Pierce"   
wrote:



Scot Kreienkamp wrote:
> On the contrary, we've been running PG in production for years now
> under VMWare.  Same with MSSQL.  We've never had any problems.  Less
> so than an actual physical machine actually since we can move the
> server to different physical hardware on demand.  Also makes
> disaster recovery MUCH easier.
>
> However, VMWare does have its places.  A high usage database is
> not one of them, IMHO.  A moderately or less used one, depending on
> requirements and the hardware backing it, is often a good fit.  And
> I agree with Scott about the snapshots.  They do tend to cause
> temporary communication issues with a running virtual machine
> occasionally, regardless of OS or DB type.  (The benefits outweigh
> the risks 99% of the time though, with backups being that 1%.)  In
> my experience the level of interference from snapshotting a virtual
> machine also depends on the type and speed of your physical disks
> backing the VMWare host and the size of the virtual machine and any
> existing snapshot.  I've been told that in vSphere (VMWare 4.0) this
> will be significantly improved.
>

does your VMWARE server use NFS to communicate with the disks?   It was
my understanding most folks used SAN logical units for the virtual disks
with VMware ESX, and not NFS/NAS


Re: [GENERAL] VMWare file system / database corruption

2009-09-21 Thread Scot Kreienkamp
On the contrary, we've been running PG in production for years now under 
VMWare.  Same with MSSQL.  We've never had any problems.  Less so than an 
actual physical machine actually since we can move the server to different 
physical hardware on demand.  Also makes disaster recovery MUCH easier.

However, VMWare does have its places.  A high usage database is not one of 
them, IMHO.  A moderately or less used one, depending on requirements and the 
hardware backing it, is often a good fit.  And I agree with Scott about the 
snapshots.  They do tend to cause temporary communication issues with a running 
virtual machine occasionally, regardless of OS or DB type.  (The benefits 
outweigh the risks 99% of the time though, with backups being that 1%.)  In my 
experience the level of interference from snapshotting a virtual machine also 
depends on the type and speed of your physical disks backing the VMWare host 
and the size of the virtual machine and any existing snapshot.  I've been told 
that in vSphere (VMWare 4.0) this will be significantly improved.  

My $.02 worth as we are a heavy VMWare user.  

Thanks,

Scot Kreienkamp
skre...@la-z-boy.com

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alex Gadea
Sent: Monday, September 21, 2009 2:11 PM
To: Scott Marlowe
Cc: pgsql-general@postgresql.org; Tom Duffey
Subject: Re: [GENERAL] VMWare file system / database corruption

I'd be careful using VMWARE as a database server for anything other than 
production.  I've had problems with SQL Server and mySQL databases and I am 
sure that Postgres will experience problems as well.  

One thing to look at is whether snapshots are set to be taken while the 
database is active.  If they are, ask the system admin to find another way to 
make backups of the VM.   My experience has been that snapshots cause many 
problems with databases.

Alex

- Original Message -
From: "Scott Marlowe" 
To: "Tom Duffey" 
Cc: pgsql-general@postgresql.org
Sent: Monday, September 21, 2009 1:40:33 PM GMT -05:00 US/Canada Eastern
Subject: Re: [GENERAL] VMWare file system / database corruption

On Mon, Sep 21, 2009 at 11:09 AM, Tom Duffey  wrote:
> Hi All,
>
> We're having numerous problems with a PostgreSQL 8.3.7 database running on a
> virtual Linux server w/VMWare ESX.  This is not by choice and I have been
> asking the operator of this equipment for details about the disk setup and
> here's what I got:
>
> "We have a SAN that is presenting an NFS share.  VMWare sees that share and
> reads the VMDK file that make up the virtual file system."
>
> Does anyone with a better understanding of PostgreSQL and VMWare know if
> this is an unreliable setup for PostgreSQL?  I see things like "NFS" and
> "VMWare" and start to get worried.

I see VMWare and think performance issues, I see NFS and think dear
god help us all.  Even if properly set up, NFS is a problem waiting to
happen, and it's not reliable storage for a database in my opinion.
That said, lots of folks do it.  Ask for the NFS mount options from
the sysadmin.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Idle in transaction help

2009-07-10 Thread Scot Kreienkamp
We have a java web page that will give us the stack trace of all the
running JDBC connections inside our system.  The problem is that we
currently have no way of relating those stack traces back to a PID so
the programmers can get the stack trace of the hung database connection.
We use JDBC connection pooling so there's no way to be sure which
stack trace goes with which PID.  I gave the developers the postgres call
to get that backend PID through the JDBC connection a few days ago, but
they don't have the resources to get the additional call built into
their programs for up to 1-2 months.  
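
(For reference, the call in question is just a one-row query, along the
lines of

SELECT pg_backend_pid();

run once per pooled connection and logged next to the stack trace.)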

I'm working on the business side to get priorities changed, but it
hasn't happened yet.  Mostly because I've got Xymon watching for those
conditions so I can correct them before we get calls into the helpdesk.
Sorry, I'm rambling.

Anyway, I'm trying to attack it from the database side out since I am
not a programmer and can't help with that part.  I can do simple CGIs
with bash, but I don't know Java or C or even Perl yet for that matter.
Since you guys are the experts, I'm looking for any way to attack this
problem from the database side.  The tips I've gotten about the JDBC
driver and commits are helpful in that it gives our programmers things
to watch out for that we didn't realize, and I'm open to any suggestions
from the list about how I can help attack this.  

Since I'm ultimately responsible for database performance and I don't
like being reduced to sitting on the sidelines I'm trying to see what if
anything else my skills can contribute.  

As for patting you on the head, I was being sincere.  And trying not to
come off sounding like a cocky SOB.  :-)

Thanks,

 

Scot Kreienkamp
 


-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Friday, July 10, 2009 7:02 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Idle in transaction help

On Fri, Jul 10, 2009 at 4:40 PM, Scot Kreienkamp
wrote:
> Thanks scott, but I wrote a cgi to combine all of the process info and
allow
> me to kill errant queries. So I know how to track down the pid. Thanks
for
> trying to help though. :-)

So, what are you looking for, a stack trace dump from java to look
through maybe? (the one that kill -3 (SIGQUIT) or whatever generates?
It's been a few years.)  That'll usually give you the context to find
out which thread is where.  P.S. no need to pat me on the head like the
doggie. :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Idle in transaction help

2009-07-10 Thread Scot Kreienkamp
Thanks scott, but I wrote a cgi to combine all of the process info and allow me 
to kill errant queries. So I know how to track down the pid. Thanks for trying 
to help though. :-)

- Original Message -
From: Scott Marlowe 
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org 
Sent: Fri Jul 10 18:34:14 2009
Subject: Re: [GENERAL] Idle in transaction help

On Fri, Jul 10, 2009 at 2:05 PM, Scot Kreienkamp wrote:
> Hi everyone,
>
> I need some help with tracking down idle in transaction problems.  We have a
> custom application that is leaving queries in idle in transaction status for
> unknown reasons.  The developers are working on ways to track it down, but
> right now the options on their end are limited and it will be at least 1-2
> months until they have something working.  I am trying to track down the
> problem from the PG end in the meantime.  Is there any way to tell what
> query is hanging in idle in transaction status?  Or what the current or
> previous query was/is, since idle in transaction doesn’t tell me anything?
> I’m kind of at a loss on what if anything I can do from the database end to
> help (read push) the programmers to find and fix this problem.
>
>
>
> My first priority is helping them find and fix the problem if I can.  My
> second priority is finding an automated way to deal with the idle in
> transactions as they are locking tables and rows, causing other transactions
> to hang also.  None of the timeouts appear to affect idle in transactions.

Assuming that tracking down the process that's connected might help,
you can use pg_stat_activity to find the port that the client is
connecting from, then on the client machine, use lsof to hunt down the
process that is connecting via that port.

For instance, I connect from my laptop with two connections.  One I do
a begin; in and in the other I look it up like so:

select * from pg_stat_activity where current_query ilike
'%idle%trans%' and current_query not ilike 'select%';
 datid | datname  | procpid | usesysid | usename  |     current_query     | waiting |          xact_start           |          query_start          |         backend_start         | client_addr  | client_port
-------+----------+---------+----------+----------+-----------------------+---------+-------------------------------+-------------------------------+-------------------------------+--------------+-------------
 11511 | postgres |   24893 |    16413 | smarlowe | <IDLE> in transaction | f       | 2009-07-10 16:20:15.056385-06 | 2009-07-10 16:20:15.056385-06 | 2009-07-10 15:27:48.944738-06 | 192.168.0.74 |       48727

The client port is 48727.  Now, on my laptop I can do:

sudo lsof |grep 48727 and I have this line in there:

psql      27964  smarlowe    3u  IPv4  1114765  TCP steamboat:48727->192.168.0.247:postgresql (ESTABLISHED)

Note that 27964 is the pid of the psql command that's connected to the
server.  Hope that helps a little.


Re: [GENERAL] Idle in transaction help

2009-07-10 Thread Scot Kreienkamp
Hi John,

It is Java.  I asked our programmers to check on the JDBC version as I
had seen that on the list previously.  It is using postgresql-8.2-504.
Is that one of the problem versions? I had thought it was new enough
that it would not be subject to that problem.

The unexplained part is why there are locks acquired, sometimes at the
row level, before the connection goes to idle in transaction status.
That makes me think it's not the JDBC driver.  

Thanks,
 
Scot Kreienkamp

-Original Message-
From: John R Pierce [mailto:pie...@hogranch.com] 
Sent: Friday, July 10, 2009 4:21 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Idle in transaction help

Scot Kreienkamp wrote:
>
> Hi everyone,
>
> I need some help with tracking down idle in transaction problems. We 
> have a custom application that is leaving queries in idle in 
> transaction status for unknown reasons. The developers are working on 
> ways to track it down, but right now the options on their end are 
> limited and it will be at least 1-2 months until they have something 
> working. I am trying to track down the problem from the PG end in the 
> meantime. Is there any way to tell what query is hanging in idle in 
> transaction status? Or what the current or previous query was/is, 
> since idle in transaction doesn't tell me anything? I'm kind of at a 
> loss on what if anything I can do from the database end to help (read 
> push) the programmers to find and fix this problem.
>


There is no active query, that's why it's idle.  They did a "BEGIN" to 
start a transaction, then left the connection idle.

Is this software, by any chance, Java based?  Older versions of the 
Postgres JDBC module had a nasty habit of doing this, as JDBC 
autogenerates the BEGIN if it's not in autocommit mode.  The older version 
would generate the BEGIN immediately after a COMMIT or ROLLBACK to 
prepare for the next transaction, and if the app simply stopped using 
the connection, it was left IDLE IN TRANSACTION.  The updated version 
postpones the BEGIN until you issue your first query.

If you enable statement logging and set up a log prefix to show the 
process ID (and I usually prefix with a timestamp, database name and 
other useful stuff), then you can grep the logs for the PID of the IDLE 
IN TRANSACTION process.  Note that logging all statements is pretty CPU 
and disk intensive, so it will likely impact your system performance and 
should only be done for debug purposes.
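
For example, something along these lines in postgresql.conf (the exact 
prefix is a matter of taste):

log_line_prefix = '%t [%p] %d %u '   # timestamp, PID, database, user
log_statement = 'all'                # expensive; enable only while debugging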

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Idle in transaction help

2009-07-10 Thread Scot Kreienkamp
Hi everyone,

 

I need some help with tracking down idle in transaction problems.  We
have a custom application that is leaving queries in idle in transaction
status for unknown reasons.  The developers are working on ways to track
it down, but right now the options on their end are limited and it will
be at least 1-2 months until they have something working.  I am trying
to track down the problem from the PG end in the meantime.  Is there any
way to tell what query is hanging in idle in transaction status?  Or
what the current or previous query was/is, since idle in transaction
doesn't tell me anything?  I'm kind of at a loss on what if anything I
can do from the database end to help (read push) the programmers to find
and fix this problem.   

 

My first priority is helping them find and fix the problem if I can.  My
second priority is finding an automated way to deal with the idle in
transactions as they are locking tables and rows, causing other
transactions to hang also.  None of the timeouts appear to affect idle
in transactions.  

 

I'm on PG 8.3.7.  Upgrading to 8.4 won't be an option for several
months, enough time for the developers to test and develop against 8.4.

 

Thanks for any help you can give me.

 

Scot Kreienkamp



Re: [GENERAL] WAL and master multi-slave replication

2009-07-01 Thread Scot Kreienkamp


Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alvaro Herrera
Sent: Wednesday, June 24, 2009 1:51 PM
To: Eduardo Morras
Cc: Scott Marlowe; pgsql-general@postgresql.org
Subject: Re: [GENERAL] WAL and master multi-slave replication

Eduardo Morras wrote:
> At 19:25 24/06/2009, you wrote:
>> On Wed, Jun 24, 2009 at 11:22 AM, Eduardo Morras wrote:
>> > Yes, there will be 3 masters recolleting data (doing updates, inserts and
>> > deletes) for now and 5 slaves where we will do the searches. The 
>> slaves must
>> > have all the data recollected by the 3 masters and the system must be 
>> easily
>> > upgradable, adding new masters and new slaves.
>>
>> You know you can't push WAL files from > 1 server into a slave, right?
>
> No, i didn't know that.

I guess you don't know either that you can't query a slave while it is
on recovery (so it's only a "warm" standby, not hot).  And if you bring
it up you can't afterwards continue applying more segments later.

What you can do is grab a filesystem snapshot before bringing it online,
and then restoring that snapshot when you want to apply some more
segments to bring it up to date (so from Postgres' point of view it
seems like it was never brought up in the first place).


That is what I do.  I actually have two separate copies of Postgres running at 
any given time on one of my mirrors.  The first is running recovery constantly. 
The second is an LVM snapshot that is mounted on a different directory and 
listens on the network IP address.  Every hour I have a script that shuts down 
both copies of Postgres, re-creates and remounts the new snapshot, alters the 
postgresql.conf listen address, brings the LVM snapshot Postgres out of 
recovery, and then starts both copies of Postgres again.  It takes about 60 
seconds for the whole process, with a few sleep statements to smooth things 
out.  It guarantees my PITR mirror is still running and allows the mirror to 
be queryable.  That's the best solution I could figure out to fit my 
requirements.
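
The script is essentially this shape (a rough sketch; the volume names, 
mount points, and trigger mechanism are placeholders, not the real paths):

#!/bin/bash
pg_ctl -D /mnt/pgsnap/data stop -m fast      # stop the queryable snapshot copy
pg_ctl -D /pg/data stop -m fast              # stop the recovering mirror
umount /mnt/pgsnap
lvremove -f /dev/vg0/pgsnap                  # drop last hour's snapshot
lvcreate -s -L 10G -n pgsnap /dev/vg0/pglv   # take a fresh one
mount /dev/vg0/pgsnap /mnt/pgsnap
# the snapshot listens on the network address, the mirror stays local
sed -i "s/^#*listen_addresses.*/listen_addresses = '10.0.0.50'/" \
    /mnt/pgsnap/data/postgresql.conf
touch /mnt/pgsnap/data/failover.trigger      # pg_standby-style trigger ends recovery
pg_ctl -D /pg/data start                     # mirror resumes WAL replay
pg_ctl -D /mnt/pgsnap/data start             # snapshot comes up queryable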
 

BTW, PITRtools is very nice.  I had it scripted in 8.2; when 8.3 came out I 
switched to PITRtools so it would delete the WAL logs I no longer needed.  
Very nice, and much easier than my old scripts.  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres Clustering

2009-05-28 Thread Scot Kreienkamp
We didn't have much luck with Continuent.  They had to make multiple
code-level changes to get their product to work correctly with our app
on PG 8.2, and we never did get it successfully implemented.  At this
point I'm stuck with WAL shipping as I can't find anything that fits my
constraints.

Thanks,
 
Scot Kreienkamp

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alan McKay
Sent: Wednesday, May 27, 2009 1:57 PM
To: pgsql-performa...@postgresql.org; pgsql-general@postgresql.org
Subject: [GENERAL] Postgres Clustering

Hey folks,

I have done some googling and found a few things on the matter.  But
am looking for some suggestions from the experts out there.

Got any good pointers for reading material to help me get up to speed
on PostgreSQL clustering?   What options are available?  What are the
issues?  Terminology.  I'm pretty new to the whole data-warehouse
thing.   And once I do all the reading, I'll even be open to product
recommendations :-)

And in particular since I already have heard of this particular
product - are there any opinions on Continuent?

thanks,
-Alan

-- 
"Mother Nature doesn't do bailouts."
 - Glenn Prickett

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres with phpScheduleIt

2009-03-06 Thread Scot Kreienkamp
I did get it working with 8.3, but it took some time and several posts
to the forums there.  The changes were fairly minor on the phpscheduleit
side to make it work.  They were specific to differences between mysql
and Postgres; there weren't any changes that were Postgres version
specific.  It does require a working knowledge of Postgres to get it
working though.  If anyone is interested in what I had to do, search for
posts from skreien on the forums at phpscheduleit website.

From what I saw of how it uses Postgres there shouldn't be any reason it
can't use almost any recent version; the database usage is fairly
simple.  The setup script never did work properly though.

Still using phpscheduleit with Postgres 8.3, I haven't touched it since
I built it some months ago.  

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Peter
Eisentraut
Sent: Thursday, March 05, 2009 2:05 PM
To: pgsql-general@postgresql.org
Cc: Stuart Luppescu
Subject: Re: [GENERAL] Postgres with phpScheduleIt

On Thursday 05 March 2009 18:39:11 Stuart Luppescu wrote:
> Hello, I'm trying to install phpScheduleIt (an on-line calendaring
> application) on my system that has postgres 8.0.15 running. I have to
> admit upfront that I have very little idea how postgres works; my
> small-business accounting system uses it, but it's sort of like a
black
> box to me.
>
> phpScheduleIt uses mySQL by default, but they claim is should work
with
> postgres. The setup uses a script that starts like this:
>
> # phpScheduleIt 1.2.0 #
> drop database if exists phpScheduleIt;
> create database phpScheduleIt;
> use phpScheduleIt;

I downloaded phpscheduleit to check it out, and the brief answer is, it
won't 
work out of the box with PostgreSQL.  It looks like you could make it
work 
with a bit of effort, but if you are very new to PostgreSQL, this might
not 
be the best use of your time.
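
For what it's worth, the rough PostgreSQL equivalent of that header would
be the following (IF EXISTS needs 8.2 or later, and there is no "use" --
you reconnect instead, e.g. with psql's \c):

DROP DATABASE IF EXISTS "phpScheduleIt";
CREATE DATABASE "phpScheduleIt";
\c "phpScheduleIt"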

Btw., the latest phpscheduleit version is 1.2.11, and the latest
PostgreSQL 
version of that branch is 8.0.20.  Make sure you run up to date
software.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Warm Standby question

2009-02-02 Thread Scot Kreienkamp
You probably can, but you're talking about disabling off-host archiving.
The whole point behind this is protection in case a host hard drive
fails: if it fails and you don't use off-host archiving, then you've
lost the files you need to rebuild the database along with the original
database.
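
The off-host part is just archive_command in postgresql.conf copying each
segment somewhere else, along these lines (host and path are made-up
examples; on 8.3 you also need archive_mode = on):

archive_command = 'rsync -a %p standby:/var/lib/pgsql/wal_archive/%f'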

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Thomas Kellerer
Sent: Sunday, February 01, 2009 7:47 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Warm Standby question

Hi,

(Note: I have never used log shipping before, I'm just interested in the

concepts, so I'm might be missing a very important aspect)

I was reading the blog entry about HA and warm standby:
http://scale-out-blog.blogspot.com/2009/02/simple-ha-with-postgresql-poi
nt-in-time.html

The image that explained how log shipping works strikes me as being a
bit too complex.
<http://1.bp.blogspot.com/_26KnjtB2MFo/SYVDrEr1HXI/AEY/ncq_AW-Vv-w/s1600-h/pg_warm_standby.png>

According to the picture it basically works like this:

Master -> Copy master archive directory -> Copy to standby archive dir
-> copy 
to pg_xlogs.

When I look at this chain I'm asking myself, why do I need the two
archive 
directories?

Why can't the master copy the files directly into the pg_xlogs directory
of the 
standby server?

Thanks
Thomas



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problems

2009-01-06 Thread Scot Kreienkamp
Yep... dummy me.  That works.  I tried that before with the reindexdb
command, where it doesn't work.  I didn't try it with the psql command.
Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com

-Original Message-
From: Adrian Klaver [mailto:akla...@comcast.net] 
Sent: Tuesday, January 06, 2009 12:02 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org; Scott Marlowe
Subject: Re: [GENERAL] Vacuum problems



----- "Scot Kreienkamp"  wrote:

> Thanks for the advice Scott.  I've taken out the vacuum fulls
> entirely.
> I've now got a nightly vacuum analyze as well as reindex.  I'll
> probably
> drop both to every other night.  
> 
> BTW, the database shrunk by 2 gigs just from reindexing last night. 
> I
> expect I'll see a performance gain from actually doing reindexing
> since
> this database has never been reindexed since it was put in production
> 6
> months ago.  
> 
> I've got about 12 tables that get caught by the autoanalyze and about
> 6
> that get caught by autovacuum on a daily basis.  I'm not sure how
> often
> the autovacuum and autoanalyze runs on those tables.  I probably need
> to
> up the logging to find out.  I'm not worried about making it more
> aggressive yet.  
> 
> One other problem though... my database has a "-" in the name... when
> I
> try to run:
> 
> psql -U postgres -d rms-prod -c "REINDEX SYSTEM rms-prod"

You need to quote the db name:
psql -U postgres -d rms-prod -c 'REINDEX SYSTEM "rms-prod"'

> 
> I get this:
> 
> ERROR:  syntax error at or near "-"
> LINE 1: REINDEX SYSTEM rms-prod
> 
> The user tables run fine.  Should I reindex the system tables also? 
> If
> so, how do I get around the dash in the db name?
> 
> Thanks,
>  
> Scot Kreienkamp
> La-Z-Boy Inc.
> skre...@la-z-boy.com
> 734-242-1444 ext 6379
> 

Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problems

2009-01-06 Thread Scot Kreienkamp
Thanks for the advice Scott.  I've taken out the vacuum fulls entirely.
I've now got a nightly vacuum analyze as well as reindex.  I'll probably
drop both to every other night.  

BTW, the database shrunk by 2 gigs just from reindexing last night.  I
expect I'll see a performance gain from actually doing reindexing since
this database has never been reindexed since it was put in production 6
months ago.  

I've got about 12 tables that get caught by the autoanalyze and about 6
that get caught by autovacuum on a daily basis.  I'm not sure how often
the autovacuum and autoanalyze runs on those tables.  I probably need to
up the logging to find out.  I'm not worried about making it more
aggressive yet.  

One other problem though... my database has a "-" in the name... when I
try to run:

psql -U postgres -d rms-prod -c "REINDEX SYSTEM rms-prod"

I get this:

ERROR:  syntax error at or near "-"
LINE 1: REINDEX SYSTEM rms-prod

The user tables run fine.  Should I reindex the system tables also?  If
so, how do I get around the dash in the db name?

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Monday, January 05, 2009 1:37 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems

On Mon, Jan 5, 2009 at 10:15 AM, Scot Kreienkamp 
wrote:
> Scott,
>
> Would the "alter user postgres set statement_timeout=0;" be a
permanent
> change?  I ask because our application is using that for its login to
> the database.  (No lectures please, I inherited the system that way.
I
> already read the riot act to our developers about that.)  If so I'll
> have to set it back after the vacuum is done.

Then you can just issue a "set statement_timeout=0" before you run
vacuum / vacuum full.
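
Something like this works, since SET only lasts for the session (database
name taken from your earlier mail):

psql -U postgres -d rms-prod <<'EOF'
SET statement_timeout = 0;
VACUUM FULL VERBOSE ANALYZE;
EOF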

The update versus insert ratio isn't as important as how many rows are
updated out of the total between each run of vacuum analyze.  Vacuum
full is definitely NOT a regular, recommended practice.  I don't think
the docs really say it is.  But a few other people have seemed to get
the same idea from the docs, so there must be some gray area I'm not
seeing when I read them.  Given the usage pattern you described
earlier, I'd say vacuum full is definitely NOT called for, but regular
vacuum should be plenty.

The best thing to do is to keep track of how many dead tuples you've got,
and if that number keeps rising then figure out if max_fsm_pages needs
to be bumped up, and / or autovacuum needs more aggressive settings.
Note that autovacuum is kind of handcuffed on pg versions before 8.3
because it was single threaded, and one really big table could throw it
behind, leaving other more frequently updated tables bloated while the
vacuum thread runs against that one large table.

Use vacuum verbose to get an idea of how many dead tuples there are in
the database, and see if they rise to a plateau, or just keep rising.
For most usage patterns with autovacuum enabled, you'll see a steady
rise to about 10-20% dead tuples and then it should level off.
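
On 8.3 the per-table counters make this easy to watch; something like the
query below (8.2, which you're on, doesn't have these columns, so there
you're stuck reading the vacuum verbose output):

SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;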

> FYI, when I inherited the system it was doing nightly vacuum fulls.
It
> was that way for several months.  If that causes bloated indexes, then
> that's fairly likely a problem I have.  Sounds like I should quit
> running vacuum fulls altogether except maybe once or twice per year.

A lot of times a pgsql install doing nightly fulls is a sign of someone
who started out with an old version that only supported full vacuum and
then applied the faulty knowledge gained there to a newer version which
likely doesn't need it.

If you do find one table that really needs full vacuums because of its
usage pattern, it's best to cron up a single vacuum (regular) to run
more often on it, or make autovacuum more aggressive, or, failing those
two, to make a regular nightly vacuum full / cluster / reindex for
that one relation.

Usually cluster is a better choice, as it doesn't bloat indexes and
puts the table into index order (on the index you clustered on).
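
For example (index and table names here are placeholders; the index comes
first in the 8.2/8.3 syntax):

CLUSTER soldtrx_pkey ON soldtrx;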

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Nothing.  I ran a query to see what other queries were running and what
other locks existed at the time.  No queries running, no locks.  Nothing
running at that time of night except the dump, which is finished before
the vacuum job starts.
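
The lock check was along these lines, roughly reconstructed from memory:

SELECT l.pid, a.usename, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;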

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: Monday, January 05, 2009 1:05 PM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems 

"Scot Kreienkamp"  writes:
> The nightly vacuums have been working flawlessly, but about three
weeks
> ago the vacuum full started failing.  It was taking about 5-10 minutes
> normally, but all of a sudden it started hitting the command timeout
> that I have set, which is at 60 minutes.  I thought that it may be a
> corrupt table or a large amount of content had been deleted from a
> database, so I built a script to loop through each database and run a
> vacuum full analyze on each table individually thinking I would find
my
> problem table.  The script finished in 5 minutes!  

Maybe the vacuum was blocking waiting for someone's lock on a table?
What else is running in parallel with your nightly vacuum script?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Scott,

Would the "alter user postgres set statement_timeout=0;" be a permanent
change?  I ask because our application is using that for its login to
the database.  (No lectures please, I inherited the system that way.  I
already read the riot act to our developers about that.)  If so I'll
have to set it back after the vacuum is done.
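
(I'm assuming the undo would be a RESET, something like:

ALTER USER postgres RESET statement_timeout;

but corrections welcome.)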

FYI, when I inherited the system it was doing nightly vacuum fulls.  It
was that way for several months.  If that causes bloated indexes, then
that's fairly likely a problem I have.  Sounds like I should quit
running vacuum fulls altogether except maybe once or twice per year.  

I'll try adding a reindex for tonight's vacuum run.

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com

-Original Message-
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: Monday, January 05, 2009 11:35 AM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems

On Mon, Jan 5, 2009 at 7:46 AM, Scot Kreienkamp 
wrote:
> Hi everyone...
>
>
>
> I have a database that is currently about 25 gigs on my primary DB
server
> running Postgres 8.2.9, and two others that are less than 1 gig
apiece.  The
> DB server is a quad proc, quad core, 64 gigs of memory, 5 drive RAID5
array,
> so it has plenty of horsepower.  Until about three weeks ago I was
running a
> nightly vacuum analyze and a vacuum full analyze once per week.

Did you have a compelling reason for running vacuum full?  It's
generally discouraged unless you've got a usage pattern that demands
it.  If you are running vacuum full you likely have bloated indexes,
so you might need to reindex the db as well.

> This is what I was running for the vacuum full command:
>
> vacuumdb -a -e -f -z -v  -U postgres
>
>
>
> The nightly vacuums have been working flawlessly, but about three
weeks ago
> the vacuum full started failing.  It was taking about 5-10 minutes
normally,
> but all of a sudden it started hitting the command timeout that I have
set,
> which is at 60 minutes.

Since I assume vacuum is running under the superuser account you can try
this:

alter user postgres set statement_timeout=0;

To give it all the time it needs to finish.


>  I thought that it may be a corrupt table or a large
> amount of content had been deleted from a database, so I built a
script to
> loop through each database and run a vacuum full analyze on each table
> individually thinking I would find my problem table.  The script
finished in
> 5 minutes!

It might be that the previous vacuum full cleaned up enough stuff that
the next one ran faster.  But again, vacuum full is usually a bad idea
as regular maintenance.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Nope, on RHEL5.  This is the backend of an application that's not available or 
visible to the public.  

Thanks,
 
Scot Kreienkamp
La-Z-Boy Inc.
skre...@la-z-boy.com
734-242-1444 ext 6379
-Original Message-
From: Grzegorz Jaśkiewicz [mailto:gryz...@gmail.com] 
Sent: Monday, January 05, 2009 11:19 AM
To: Scot Kreienkamp
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum problems

btw, is that on windows? (looking at la-z-boy.com, it uses .net).

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Vacuum problems

2009-01-05 Thread Scot Kreienkamp
Hi everyone...

 

I have a database that is currently about 25 gigs on my primary DB
server running Postgres 8.2.9, and two others that are less than 1 gig
apiece.  The DB server is a quad-proc, quad-core box with 64 gigs of memory
and a 5-drive RAID5 array, so it has plenty of horsepower.  Until about three
weeks ago I was running a nightly vacuum analyze and a vacuum full
analyze once per week.  

 

This is what I was running for the vacuum full command:

vacuumdb -a -e -f -z -v  -U postgres

 

The nightly vacuums have been working flawlessly, but about three weeks
ago the vacuum full started failing.  It was taking about 5-10 minutes
normally, but all of a sudden it started hitting the command timeout
that I have set, which is at 60 minutes.  I thought that it may be a
corrupt table or a large amount of content had been deleted from a
database, so I built a script to loop through each database and run a
vacuum full analyze on each table individually thinking I would find my
problem table.  The script finished in 5 minutes!  

 

This is what I'm running on each table now in my script:

vacuumdb -d $DB -t $TABLE -e -f -z -v  -U postgres
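
The loop around it is essentially this, with the table-list query
simplified here as a stand-in for the real one:

for DB in $(psql -U postgres -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
  for TABLE in $(psql -U postgres -At -d "$DB" -c "SELECT schemaname || '.' || tablename FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema')"); do
    vacuumdb -d "$DB" -t "$TABLE" -e -f -z -v -U postgres
  done
done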

 

As I understand it, the "vacuumdb -a" command basically does the same
thing as my script.  So why is it timing out while my script finishes in
5 minutes or less?  Is the "vacuumdb -a" command doing something that
I'm not?  

 

Now that the holiday season is past I will be upgrading to the latest
8.2 release as soon as possible, but I need to get this figured out
first unless it's a bug that the upgrade fixes.

 

I'd appreciate any advice or explanations you guys can send my way.  

 

 

Thanks,

 

Scot Kreienkamp

La-Z-Boy Inc.

skre...@la-z-boy.com

734-242-1444 ext 6379