Re: [ADMIN] Random server overload

2013-10-08 Thread Kevin Grittner
Viktor Juhanson sp...@planet.ee wrote:

 Btw, we have a max pool size of 50 connections per web
 application, and since we have 4 instances of the application
 running, it makes 200 max.

 I don't really get how the database pool gets full when the
 application can use 200 connections max and the postgresql config
 has max_connections set to 250?

If the client application (or pooler) doesn't close the database
connections gracefully, it might take some time before the server
figures out the connection is dead and frees the resources on the
server side.  Does the server log say anything about broken
connections or client not responding?

Also, the pooler might maintain some *minimum* number of
connections but go beyond that on demand.  Without knowing what
pooler and how it is configured, it's hard to say what might be
going on.
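
If you are on 9.2 or later, a query along these lines (a sketch,
untested here; the state column does not exist before 9.2) can
show connections the server still considers open but which have
been idle for a long time:

SELECT pid, client_addr, state, now() - state_change AS idle_for
  FROM pg_stat_activity
 WHERE state = 'idle'
   AND state_change < now() - interval '10 minutes'
 ORDER BY idle_for DESC;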

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Reg. Restore

2013-10-07 Thread Kevin Grittner
Venakata Ramana ramana@gmail.com wrote:

 I am using PostgreSQL 9.1.5 on Windows XP.

You should update to a more current minor release.

http://www.postgresql.org/support/versioning/

 1. Restore of the DB is very slow.
 How can I improve the speed of the restore?

You need to provide more information on how you backed up, how you
are restoring, and what the bottleneck seems to be.

http://www.postgresql.org/docs/9.1/interactive/backup.html


 2. Without indexes, retrieval from the table is fast.
 After creating an index on that table, retrieval
 becomes slow.

There's not enough information to give much advice.  Please read
this page and start a new thread on the pgsql-performance list:

http://wiki.postgresql.org/wiki/SlowQueryQuestions

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Dumping a database that is not accepting commands?

2013-09-18 Thread Kevin Grittner
Natalie Wenz nataliew...@ebureau.com wrote:

 autovacuum_freeze_max_age   | 800000000

 We talked a little bit about lowering the
 autovacuum_freeze_max_age, at least some, but there was concern
 that it would end up doing a lot more lengthy full-table scans.
 Is that a legitimate concern?

It will cause full-table scans to occur more often, but possibly
not as much as you fear if your baseline is based on how it behaved
before recent bug fixes.  It will tend to flush hot data from the
caches, at least to some degree.  The up side of doing it more
often is that it will have fewer writes to do each time it is run,
which might avoid write gluts that hurt performance more than the
reads.
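
To see which tables are closest to forcing one of those
full-table freeze scans, something like this should work:

SELECT c.oid::regclass AS table_name,
       age(c.relfrozenxid) AS xid_age
  FROM pg_class c
 WHERE c.relkind = 'r'
 ORDER BY xid_age DESC
 LIMIT 10;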

 Would it be prudent to change any of the other values back to
 their defaults at the same time? For example, we have the
 autovacuum_vacuum_cost_delay set to 0, with the idea that we
 don't mind if we take a hit on performance while the autovacuum
 is running; our priority is that it be able to finish as quickly
 as possible. If we start the vacuum earlier, though, maybe that
 should be bumped up too?

Maybe, but you do have a lot of machine there.  You might not
notice the hit very much.


 Does the autovacuum do different work when it is vacuuming to
 prevent wraparound (and that's triggered when a table passes the
 autovacuum_freeze_max_age, right?) than a vacuum triggered by the
 table changing in size by a certain amount, or a manually-invoked
 vacuum?

A normal vacuum just visits pages which need work based on the
visibility map, so those really are almost a fixed amount of work
per week regardless of the frequency of runs.  It's just a matter
of whether you wait until a lot of work needs to be done and do it
all at once, or do smaller runs that nibble away at it.  The latter
usually has less noticeable impact.

 (Are there any books, or articles, that cover Vacuuming and
 Autovacuuming: the gory details?)

The best book I know of for this is Greg Smith's PostgreSQL 9.0
High Performance:

http://www.postgresql.org/docs/books/

(Full disclosure, I was one of the technical editors, but don't get
money from sales.)


 Rereading the Routine Vacuuming page in the docs, this sentence
 caught my eye:

 However, for static tables (including tables that receive
 inserts, but no updates or deletes), there is no need to vacuum
 for space reclamation, so it can be useful to try to maximize the
 interval between forced autovacuums on very large static tables.
 Obviously one can do this either by increasing
 autovacuum_freeze_max_age or decreasing vacuum_freeze_min_age.

 We generally never delete from this database at all. This case
 was unusual; I was migrating the data from one table to another
 because we added some columns, and changed the datatype of many
 of the columns from text to more appropriate types (timestamp,
 int, uuid, inet, etc). Ideally, even then we wouldn't have
 preferred to delete anything until the whole table was migrated,
 but disk space became an issue. Bleh.

That might be a reason to tweak the settings, but since recent
versions of vacuum skip heap pages that won't benefit from vacuum,
I probably wouldn't.

 With that in mind, would you still recommend putting the
 autovacuum_freeze_max_age back to 200 million?

Where possible, I'm a big fan of incremental change.  I might nudge
it in that direction a little at a time and watch the behavior.  I
do think that periodic VACUUM ANALYZE statements (weekly?) of the
database might be a good supplement to the autovacuum jobs,
especially if you have a time when load tends to be lower to
schedule that in.
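
For example, a crontab entry along these lines (the day, time,
and options are only illustrative):

0 3 * * 0  vacuumdb --all --analyze --quiet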

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Dumping a database that is not accepting commands?

2013-09-17 Thread Kevin Grittner
Natalie Wenz nataliew...@ebureau.com wrote:

 I have a large database from our test environment that got into
 trouble with some high volume and some long-running queries
 about…six weeks ago? We have a buffer mechanism that has been
 storing the new data since the database stopped accepting
 connections, so we haven't really lost any data, which is good.
 But the single-user backend vacuum freeze is still grinding away,
 using 100% cpu most of the time, except when it's doing a lot of
 writes. We did cancel the vacuum once so we could stop the
 database and take a snapshot of the filesystem (we are using ZFS
 on FreeBSD) and copy the snapshot to another machine. This allowed
 us to get a fresh database started (with 9.3!) where we could
 unspool the last six weeks of data into a fresh database, and be
 able to access at least that much of our data.

I'm not sure how you could have done that without dealing with the
wraparound before the upgrade.

 Now:
 I have a copy of the database (with data from all time up until
 the database shut itself down six weeks ago) that I just need the
 data from. I am becoming impatient with the vacuum, as it appears
 to have not even started working on the files for one of the
 largest relations in the database (that table was about 14TB last
 I saw). I'm trying to find alternatives to waiting another who
 knows how many weeks for the vacuum to finish just to have the
 database in a state where I can dump the data out, since this is
 no longer the live version. This copy is running on hardware with
 plenty of space to work with. The database has about a million
 transactions before it wraps.

The copy running on 9.3, or the original?

 Is it possible to somehow just dump the data using pg_dump or pg_dumpall? I
 haven't tried to see if those utilities will work when the database is
 protecting itself from data loss. If it were possible, would it be wise (do
 those utilities use tons of transactions to dump, or would it be safely within
 the million or so that I have)? I suppose I could use copy?

pg_dump uses COPY (by default, anyway), and does all its work in a
single transaction.  But it needs to start that transaction.
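
If the server will still start that transaction, a sketch of
dumping just the data might look like this (options and file
name illustrative):

pg_dump -Fc --data-only -f /backup/olddb.dump olddb

Note that a hand-rolled COPY needs a transaction on the source
side too, so it runs into the same wraparound protection.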

 Would there be any risks if I were to do that? Maybe none of this
 is risky at this point because we can always clone the original
 snapshot, and try again.

I'm not sure that pg_dump followed by restore would be expected to
be faster than finishing the VACUUM, unless that is configured to
pace itself way too slowly.

 Any ideas or suggestions?

After getting past this crisis, I would take a close look at your
vacuuming regimen -- it sounds like it is not aggressive enough to
keep you out of trouble.

I'm sorry that I don't have a better suggestion for resolving the
crisis than running VACUUM at maximum speed.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] New autovacuum messages in postgres log after upgrade

2013-09-17 Thread Kevin Grittner
Benjamin Krajmalnik k...@servoyant.com wrote:

 During a maintenance window, we upgraded our systems to Postgres
 9.0.13 from 9.0.3 running on FreeBSD 8.1 amd64.
 When we restarted the postgres server, I noticed, and continue to
 notice, a recurrence of messages in the log.
 
 2013-09-16 21:15:58 MDT LOG:  automatic vacuum of table ishield.public.tbltmptests: could not (re)acquire exclusive lock for truncate scan
 2013-09-16 21:15:59 MDT LOG:  automatic vacuum of table ishield.public.tbltmptests: could not (re)acquire exclusive lock for truncate scan
 2013-09-16 21:16:00 MDT LOG:  automatic vacuum of table ishield.public.tbltmptests: could not (re)acquire exclusive lock for truncate scan
 
 The tables on which I am seeing these messages are essentially
 temporary work tables into which we insert records, which are
 processed by triggers, and then deleted.
 Performance of the system does not seem to have been affected by
 the upgrade.
 Is this simply caused by a higher level of verbosity in the
 autovacuum logging, as a result of the autovacuum fix in 9.0.12?

Yes, table truncation after a large number of deletes is now
smarter, getting more done with less effort and blocking.  This
message, which was useful for developing the fix, made it into
production at the LOG level.  In the next minor release it will be
changed to the DEBUG level to avoid cluttering the log with entries
about routine activities.

-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Dumping a database that is not accepting commands?

2013-09-17 Thread Kevin Grittner
Natalie Wenz nataliew...@ebureau.com wrote:

 Sorry; my description of what is going on was a little unclear.
 We didn't upgrade the existing database. We moved it to different
 hardware, and just created a brand new database to accept the
 data that had been backing up in sqlite files while our original
 database was offline. I'm still dealing with the wraparound on
 the original, just on a different machine.

OK, to restate to be sure I understand, the original database is
being vacuumed in a new location, and a new 9.3 database in the
original location has absorbed the queued data?  Once you complete
the vacuum, you will copy the old data back to the new database at
the old location?

 autovacuum_freeze_max_age   | 800000000

 Normally the autovacuum seem to keep up sufficiently. We got into
 trouble with a bad combination of not-typical long-running
 queries, a disk failure and subsequent zpool repair, and the only
 person who checks the log files regularly (me) was out of the
 office. This has been so painful and slow to recover from, I
 don't think we'll ever get into this mess again. (At least not
 quite like this. I seem to have a knack for finding *new* ways to
 break things.)

A perfect storm of events, eh?  It's hard to have things always go
smoothly in the face of such events, but I see a couple things you
might want to consider.  Increasing autovacuum_freeze_max_age
reduces the amount of time you have to get back on track.  You
might want to take that back down to the default.  There was a bug
causing wraparound prevention autovacuums to trigger too
frequently, which is now fixed in the latest minor releases, so
making that change might not be as painful as you expect.  Make
sure you are monitoring for long-running transactions, so you don't
get burned by one that is accidental.  Also, if you have a
processing cycle where there are off-peak hours on a daily or
weekly basis, you might want to run a VACUUM ANALYZE command
during those windows, to get some of the freezing done before it
is critical.

 I will also take this opportunity to mention again that if anyone
 is considering making a patch for 64-bit xids, you would make
 at least one small group of people very, very happy.  :)

While 64-bit xids isn't likely, there is some work taking a more
creative approach to the issue which might make you even happier in
a year or so.  :-)

 Side question: some of the longer queries that were running when
 the database yakked were deletes of old data that had been
 manually migrated to a new table with a more appropriate format.
 We were running out of disk space, so we were trying to clear up
 some space by removing data we now had in two places. Does a
 delete of previously-frozen rows unfreeze them, or anything like
 that? Because in a series of maybe a dozen queries or so, we
 deleted billions of rows. Does that generate a significant amount
 of extra work for the autovacuumer? 

The pages which had tuples deleted would need to be cleaned up by
vacuum, and rewritten.  It would also remove all index entries for
all deleted rows.  It might also scan backward from the end of the
table to release space to the OS.  That could conceivably be enough
avoidable work to make your idea of copying out the remaining data
feasible.  What percentage of the rows were deleted?  Could your
copy out be to a separate set of drives?

If you know at the time a row is added what group it will be in for
deletion, it might pay to move to partitioning, so that a group of
rows could be deleted pretty much as fast as you can drop a table.
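
A minimal sketch of that pattern using inheritance partitioning
(table and column names hypothetical):

CREATE TABLE events_2013_09
  (CHECK (event_time >= DATE '2013-09-01'
      AND event_time <  DATE '2013-10-01'))
  INHERITS (events);

-- when the group expires, reclaim everything at once:
DROP TABLE events_2013_09;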

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] wrong database name in error message?

2013-09-16 Thread Kevin Grittner
Rural Hunter ruralhun...@gmail.com wrote:

 This was changed quite a long time ago, when I saw too-frequent
 autovacuums to prevent wraparound on a very busy/large table,
 which slowed down performance. I will change it back to the
 default to see how it works.

There was a long-standing bug which could cause over-frequent
wraparound prevention autovacuums.  As long as you are on the
latest minor release, things should be much better now.

 I will try the parameters as you suggested too.

Possibly.  As I said before, I think the symptoms might better fit
a situation where the table in need of VACUUM was a shared table
and it just happened to mention db1 because that was the database
it was scanning at the time.  (Every database includes the shared
system tables in its catalog.)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] wrong database name in error message?

2013-09-15 Thread Kevin Grittner
Rural Hunter ruralhun...@gmail.com wrote:

 I'm on Ubuntu 12.04.1 64bit with 32 cores and 377G memory. The
 data is stored on several RAID 10 SAS 15k disks.

With a machine that beefy I have found it necessary to make the
autovacuum settings more aggressive.  Otherwise the need for
vacuuming can outpace the ability of autovacuum to keep up.

  autovacuum_freeze_max_age  | 2000000000 | configuration file

  vacuum_freeze_table_age    | 1000000000 | configuration file

There's your problem.  You left so little space between when
autovacuum would kick in for wraparound prevention (2 billion
transactions) and when the server stops accepting new transactions
in order to protect your data (2^31 minus about a million
transactions) that autovacuum didn't have enough time to complete
its work.

Changing a setting to ten times its default value is something
which should always be approached with caution.  In this case you
changed the threshold for starting the work to prevent data loss
from a little under 10% of the distance to the disastrous condition
to a little under 100% of that distance.

You could play with non-standard setting for these, but if you go
anywhere near this extreme you risk downtime like you have just
experienced.  Personally, I have never had a reason to change these
from the defaults.

To ensure that autovacuum can keep up with the activity on a
machine like this, I have generally gone to something like:

autovacuum_vacuum_cost_limit = 800

If you have more than a couple large tables which take long enough
to scan to prevent small, frequently-updated tables from getting
attention soon enough, you might want to boost
autovacuum_max_workers, too.
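
In postgresql.conf terms, a starting point might look like this
(the worker count is illustrative):

autovacuum_vacuum_cost_limit = 800
autovacuum_max_workers = 6        # default is 3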
 
-- 
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] wrong database name in error message?

2013-09-14 Thread Kevin Grittner
Rural Hunter ruralhun...@gmail.com wrote:

 Let me lay out the whole scenario:
 1. Our application users called to report that all updates were
 failing. So I went to check the db. Any update transaction, including
 a manual vacuum, was blocked by this error message:
 ERROR: database is not accepting commands to avoid wraparound data loss
 in database db1
   Suggestion: Stop the postmaster and use a standalone backend to
 vacuum that database.

 2. Since db1 is a very large database (it is the main db the users
 are using), I cannot afford the long time a vacuum full would take
 on it. So I thought I would try the other, smaller dbs first.

Why in the world would you want to use VACUUM FULL in this circumstance?

 3. I stop the instance.

 4. I used echo 'vacuum full;' | postgres --single -D /pgdata [other
 dbs] to vacuum some other dbs. I still got several warning messages
 when vacuuming the first database (let's say db2):
 2013-09-14 08:56:44 CST [5536]: [453-1] user=,db=,host= WARNING:
 database db1 must be vacuumed within 999775 transactions
 2013-09-14 08:56:44 CST [5536]: [454-1] user=,db=,host= HINT:  To avoid
 a database shutdown, execute a database-wide VACUUM in that database.

WARNING, not error, so the VACUUM would have run.

 Here the error message still points to db1.

I'm not sure which database would be referenced if the table which
needed the VACUUM was a shared table, like pg_database or
pg_authid.

 5. When I ran the single-connection vacuum on other dbs (not db1),
 there were no error/warning messages. So I tried to start the whole
 instance.

 6. I started the instance and found everything was fine.

 So actually I have 3 questions here:
 1. Was the db name in the error message wrong?

Probably not, to the extent that running VACUUM (FULL is not
necessary) against that database would have solved the problem.  If
it was a shared catalog table it might be that it was not the
*only* database which would work.

 2. How did that happen? Shouldn't autovacuum handle it and avoid
 such problem?

There are two possibilities -- either you had a long-running
transaction in the cluster or your autovacuum is not configured to
be aggressive enough to keep you out of trouble.

 3. How to detect such problem earlier?

We would need a description of the machine (cores, RAM, storage
system) and the output of these queries to be able to make good
suggestions on tuning autovacuum:

SELECT version();
SELECT name, current_setting(name), source
  FROM pg_settings
  WHERE source NOT IN ('default', 'override');

In addition, make sure that you are monitoring for long-running
transactions.  A reasonable monitoring scheme might be to alert
when either of these queries returns any rows:

select * from pg_stat_activity where xact_start < (now() - interval '1 hour');
select * from pg_prepared_xacts where prepared < (now() - interval '1 minute');

You can, of course, adjust the intervals to what makes the most
sense for your environment.  If you have max_prepared_transactions
set to zero, the latter query is not really necessary.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] several questions about pg_dumpall, pg_start_backup, pg_basebackup and WAL

2013-08-27 Thread Kevin Grittner
Patrick Dung patrick_...@yahoo.com.hk wrote:
 Albe Laurenz laurenz.a...@wien.gv.at
 Patrick Dung wrote:

 It is possible that files change (files added or file sizes
 changed) between pg_start_backup and pg_stop_backup.

 Yes.

 Is the backup consistent?

 The tar or snapshot itself will not be consistent, it will have
 to be recovered at least until the end of the online backup.

 I should ask: is the backup crash consistent?

PITR restore procedures will use the crash recovery mechanism to
make the database consistent, if that's what you mean.

 Which means it is consistent at the time that the pg_start_backup
 is run.

No.  It will be consistent with the time that pg_stop_backup was
run, or any later point in time that you choose, as long as you
have WAL to that point in time.
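
In other words, the procedure is (a sketch; the label and copy
mechanism are up to you):

SELECT pg_start_backup('nightly');
-- file-level copy or filesystem snapshot of $PGDATA here
SELECT pg_stop_backup();

and you must keep all WAL from pg_start_backup through at least
pg_stop_backup for the backup to be restorable.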

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] unexpected EOF on client connection during pg_dumpall

2013-08-01 Thread Kevin Grittner
Arnold, Sandra L. arnol...@ornl.gov wrote:

 I am currently getting an “unexpected EOF on client connection”
 error when running pg_dumpall from a bash shell script in cron.  I
 have looked online to see if anyone else is having or has had this
 problem, without any luck.  I am hoping someone can point me in
 the right direction to determine what is happening.  When I run
 the command manually it works.

 Please let me know if I need to provide any other information.

What does pg_dumpall write to stdout and stderr?  I have only seen
the server-side error you show when the pg_dumpall client had an
error writing, due to permissions problems or disk space
exhaustion, and the reason needs to be found on the client side,
not in the server log.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] 9.2.2 - semop hanging

2013-07-16 Thread Kevin Grittner
Rafael Domiciano rafael.domici...@gmail.com wrote:

 Yeah, disabling THP seems to lower the severity of the situation.
 Thanks. Right now it has been about 1 hour without any episode.

 Googling, I've found that others had the same problem and
 resolved it by disabling THP. Is that the right way?

That is the only way I am aware of to correct this problem when it
appears.  I have seen recommendations to disable THP defrag
instead, but where I have seen people do that, they wound up
entirely disabling THP support later.

Huge pages should benefit performance in general, but some
implementations seem to have problems.
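
For reference, THP is typically disabled at runtime with
something like the following (the sysfs path varies; RHEL/CentOS
6 kernels use redhat_transparent_hugepage instead):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

To survive reboots, those lines usually go in /etc/rc.local.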

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] 9.2.2 - semop hanging

2013-07-15 Thread Kevin Grittner
Rafael Domiciano rafael.domici...@gmail.com wrote:

 PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc
 (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), 64-bit

 CentOS release 6.3 (Final)

 For 2 weeks I've been stuck in a very strange situation: from
 time to time (sometimes at intervals of less than 10 minutes), the
 server gets stuck/hangs (I don't know what to call it) and
 every connection on postgres (no matter if it's SELECT,
 UPDATE, DELETE, INSERT, startup, authentication...) seems to
 get paused; after some seconds (say ~10 or ~15 sec, sometimes
 less) everything goes OK again.

During these episodes, do you see high system CPU time?  If so, try
disabling transparent huge page support, and see whether it affects
the frequency or severity of the episodes.

 So, my first step was to check the disks. Running iostat
 apparently showed that the disks were OK.

Did you run iostat during an episode of slowness?  What did it
show?  Reporting your interpretation that it was apparently OK
doesn't provide much useful information.

 It's a RAID 10 of four 600GB SAS disks, on an IBM Storage DS3512,
 over FC. IBM DS Storage Manager says the disks are OK.

Are there any reports to show you when writing was saturated?

                     total    used    free  shared  buffers  cached
 Mem:               145182  130977   14204       0       43  121407
 -/+ buffers/cache:          9526  135655
 Swap:                6143      65    6078

 Following is what I've tried:
 1) Emre Hasegeli has suggested reducing my shared_buffers, but
 it's already low:
   total server memory: 141 GB
   shared_buffers: 16 GB

On a machine with nearly twice that RAM, I've had to decrease
shared_buffers to 2GB to avoid the symptoms you describe.  That is
in conjunction with making the background writer more aggressive
and making sure the checkpoint completion target is set to 0.9.

 Maybe it's too low? I've been thinking to increase to 32 GB.

Well, you could try that; if the symptoms get worse, then you might
be willing to go the other direction.

 max_connections = 500 and ~400 connections average

How many cores (not hardware threads) does the machine have?  You
will probably have better throughput and latency if you use
connection pooling to limit the number of active database
transactions to somewhere around two times the number of cores, or
slightly above that.
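
If you don't have a pooler yet, a minimal sketch of a pgbouncer
configuration (assuming pgbouncer; names and numbers
illustrative):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 32   ; roughly 2x the cores on a 16-core box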

-- 
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Creating new cluster by copying directory?

2013-07-11 Thread Kevin Grittner
Wells Oliver wellsoli...@gmail.com wrote:

 So it's safe to copy the files in /var/lib/postgresql/9.1/main to
 /var/lib/postgresql/9.1/test while main is running?

As long as you copy the whole $PGDATA tree while the source cluster
is stopped, or use PITR recovery techniques:

http://www.postgresql.org/docs/9.2/interactive/backup.html

Of course, the machines must be of the same architecture.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Dump/Reload pg_statistic to cut time from pg_upgrade?

2013-07-10 Thread Kevin Grittner
Jerry Sievers gsiever...@comcast.net wrote:

 Planning to pg_upgrade some large (3TB) clusters using hard link
 method.  Run time for the upgrade itself takes around 5 minutes.
 Nice!!  Origin version 8.4 and destination version 9.1.

 Unfortunately the post-upgrade analyze of the entire cluster is going
 to take a minimum of 1.5 hours running several threads to analyze all
 tables.  This was measured in an R&D environment.

 Got to thinking, what if we make a SAN snapshot a few hours prior to
 upgrade time, upgrade that... then analyze it and then dump the stats
 table?

 Has anyone else attempted anything similar?  Any feedback is
 appreciated.

I certainly understand the motivation, and it may be a good option
if you test carefully beforehand.

What I have done in a similar situation, to minimize down time, is
to run a database ANALYZE with a very small target.  I forget the
particular value we used -- it may have been 3.  Then we ran an
ANALYZE with the normal default target on a few key tables
(identified in testing to be the ones eating most of the scan time
with no statistics), and let users in.  The database ANALYZE with
the normal default target was done while normal production hit the
database, without too much of a performance hit.  With this
technique we were able to let users in with near-normal performance
with 10 or 15 minutes of down time rather than hours.
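
A sketch of that approach in psql (the target value and table
names are illustrative):

SET default_statistics_target = 3;
ANALYZE;                      -- fast, coarse stats for every table
RESET default_statistics_target;
ANALYZE key_table_1;          -- normal target for the hot tables
ANALYZE key_table_2;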

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Connecting to a remote db server

2013-07-10 Thread Kevin Grittner
Struckhoff, Kevin kstruckh...@ebay.com wrote:

 I've installed postgres 9.2 on a server, call it db01. I now want
 to access postgres from my app server, call it app01.
 
 What do I install on the app01 server? I've installed postgres
 9.2 on it and set the postgresql.conf file's listen_addresses to
 a value of '*' on both machines. I've also modified the
 pg_hba.conf file to be wide open for now on both machines.

The postgresql.conf and pg_hba.conf files aren't used on the
client side -- those only matter on the server.

 It seems that something else is missing or needs to be done.
 Googling wasn't much help, the results weren't current.

How are you trying to connect, and what happens when you try?

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] PANIC during VACUUM

2013-04-30 Thread Kevin Grittner
[please don't top-post]

German Becker german.bec...@gmail.com wrote:
 Albe Laurenz laurenz.a...@wien.gv.at wrote:
 German Becker wrote:

 I am testing version 9.1.9 before putting it in production. One
 of my tests involved deleting the contents of a big table (~13 GB)
 and then VACUUMing it. During VACUUM it PANICs.

 If you mess with the database files, errors like this are to be
 expected.

 Thanks for your reply. In which sense did I mess with the
 database files?

You didn't say how you deleted the contents of that big table, and
it appears that Albe assumed you deleted or truncated the
underlying disk file rather than using the DELETE or TRUNCATE SQL
statement.

In any event, more details would help people come up with ideas on
what might be wrong.

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] after 9.2.4 patch vacuumdb -avz not analyzing all tables

2013-04-12 Thread Kevin Grittner
Scott Marlowe scott.marl...@gmail.com wrote:

 Does this behavior only affect the 9.2 branch? Or was it ported
 to 9.1 or 9.0 or 8.4 as well?

After leaving it on master for a while to see if anyone reported
problems in development, I back-patched as far as 9.0 in time for
the 9.2.3 (and related) patches.  Prior to that the code was too
different for it to be the same patch, and (perhaps not entirely
coincidentally) I had not seen the problems before 9.0.  From 9.0
on I have seen multiple sites (all using queuing from Slony or a
JMS implementation) with recurring problems when the queue
temporarily got large, shrank again, and then wrapped around to the
beginning of the table's file space.  In some cases performance was
so impaired that when such an event was triggered they would shut
down their application until a manual VACUUM could be run.

-- 
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] after 9.2.4 patch vacuumdb -avz not analyzing all tables

2013-04-11 Thread Kevin Grittner
Tom Lane t...@sss.pgh.pa.us wrote:

 However I've got to say that both of those side-effects of
 exclusive-lock abandonment seem absolutely brain dead now that I
 see them.  Why would we not bother to tell the stats collector
 what we've done?  Why would we think we should not do ANALYZE
 when we were told to?

 Would someone care to step forward and defend this behavior? 
 Because it's not going to be there very long otherwise.

I'm pretty sure that nobody involved noticed the impact on the
VACUUM ANALYZE command; all discussion was around autovacuum impact, and
Jan argued that this was leaving things in a status quo for that,
so I conceded the point and left it for a follow-on patch if
someone felt the behavior needed to change.  Sorry for the miss.

http://www.postgresql.org/message-id/50bb700e.8060...@yahoo.com

As far as I'm concerned all effects on the explicit command were
unintended and should be reverted.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Invalid SQL not rejected?

2013-04-11 Thread Kevin Grittner
Julian Glass temp...@internode.on.net wrote:

 You might want to explicitly reference tables (and alias).

I agree.  I find that the best defense against several classes of
silent misbehavior is to alias all table references and qualify all
column references with the alias -- or use a tool which does this
for you.

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] VACUUM ANALYZE AND ANALYZE ISSUE

2013-03-28 Thread Kevin Grittner
suhas.basavaraj12 suha...@verse.in wrote:

 This was observed in postgres 9.0.1.

You might want to review the bug fixes in 9.0 maintenance releases
and see if any of them could be responsible for what you saw:

http://www.postgresql.org/docs/9.0/static/release-9-0-2.html
http://www.postgresql.org/docs/9.0/static/release-9-0-3.html
http://www.postgresql.org/docs/9.0/static/release-9-0-4.html
http://www.postgresql.org/docs/9.0/static/release-9-0-5.html
http://www.postgresql.org/docs/9.0/static/release-9-0-6.html
http://www.postgresql.org/docs/9.0/static/release-9-0-7.html
http://www.postgresql.org/docs/9.0/static/release-9-0-8.html
http://www.postgresql.org/docs/9.0/static/release-9-0-9.html
http://www.postgresql.org/docs/9.0/static/release-9-0-10.html
http://www.postgresql.org/docs/9.0/static/release-9-0-11.html
http://www.postgresql.org/docs/9.0/static/release-9-0-12.html

... or just apply all of those bug fixes and see if you can make it
happen again.

In general, it pays to apply fixes as they become available.

http://www.postgresql.org/support/versioning/

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Pg 9.1 master-slave replication

2013-02-21 Thread Kevin Grittner
Prashanth Ranjalkar prashant.ranjal...@gmail.com wrote:

 Hello, could somebody explain what will happen if the slave [...]

 It depends on the type of replication used.

 If it's a slony replication then master continues to work and
 will catch up when slave is available.

 However if the Streaming replication is used, [...] once it fills
 all the space, master may go down.

That strikes me as a false distinction -- Slony also stores data to be
replicated until the slave becomes available.  Either way you can
control where that is stored, and you need to watch out for space
problems on an extended outage of a replica.

An issue I don't think I've seen mentioned is that if you use
synchronous replication you are telling PostgreSQL not to return an
indication of success for a data-modifying transaction until the
work of that transaction has been persisted on at least one
replica.  To avoid stalls on the master, you may want to define
multiple synchronous replicas, so that when one goes down you keep
running without DBA intervention.

-- 
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] update Timestamp updated whenever the table is updated

2013-02-12 Thread Kevin Grittner
Campbell, Lance la...@illinois.edu wrote:

 I would like to have a generic strategy for updating a timestamp
 field on some tables whenever the table is updated.  Is there a
 recommended strategy for doing this other than via the SQL UPDATE
 command?

 Example table:
 CREATE TABLE test_table
 (
   id integer NOT NULL,
   field1 character varying NOT NULL,
   field2 character varying NOT NULL,
   updated_timestamp timestamp with time zone DEFAULT now(),
   created_timestamp timestamp with time zone DEFAULT now()
 );

A BEFORE UPDATE trigger is probably what you want.  You could write
a single trigger function which could be attached to all tables
with the updated_timestamp column.  For example:

CREATE FUNCTION set_updated_timestamp()
  RETURNS TRIGGER
  LANGUAGE plpgsql
AS $$
BEGIN
  NEW.updated_timestamp := now();
  RETURN NEW;
END;
$$;

Note that the above function depends on a column name, but not a
table name.  You link it to each table like this:

CREATE TRIGGER test_table_update_timestamp
  BEFORE UPDATE ON test_table
  FOR EACH ROW EXECUTE PROCEDURE set_updated_timestamp();
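
To illustrate (assuming a row with id = 1 exists):

UPDATE test_table SET field1 = 'changed' WHERE id = 1;
SELECT updated_timestamp FROM test_table WHERE id = 1;
-- now reflects the time of the UPDATE's transaction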

 I think I read something about RULES.

You generally want to avoid RULES, especially where a trigger works
so well.

-- 
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [ADMIN] Schema design question as it pertains to performance

2013-01-22 Thread Kevin Grittner
Benjamin Krajmalnik wrote:

 From a performance standpoint, is there a big hit on select performance
 if a query ends up utilizing more than one index, taking into account
 that an index has been used already to reduce the data set of potential
 records, and the secondary index would mostly be used in the ordering of
 the result set (such as a last updated time)?

That depends on so many variables it is hard to give a simple answer.

 I also assume that if no data has changed in an index, nothing is done
 when the record is updated as pertains to the particular index - am I
 correct in this assumption?

No. If the update doesn't affect *any* indexed column, and there is
room in the page, it will do a HOT update and can skip all index
updates. If any indexed column is updated, it must expire the old
tuple and create a new tuple to represent the updated version of
the row, and this new tuple is not likely to land in the same page
as the old tuple; so it needs new entries in all the indexes. The
old index entries must remain until they can no longer be the
visible version of the row for any database transaction, so both
versions of the row will be on the index for a while.
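
You can see how often updates are taking the HOT path with
something like:

SELECT relname, n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 ORDER BY n_tup_upd DESC
 LIMIT 10;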

-Kevin




Re: [ADMIN] Schema design question as it pertains to performance

2013-01-22 Thread Kevin Grittner
Benjamin Krajmalnik wrote:
 Kevin Grittner wrote:
 Benjamin Krajmalnik wrote:

 I also assume that if no data has changed in an index, nothing
 is done when the record is updated as pertains to the
 particular index - am I correct in this assumption?
 
 No. [...] If any indexed column is updated, [...] it needs new
 entries in all the indexes.

 That was my intent - if no column of an index changes in an
 update then no changes are done on the index.

I don't think you understood -- if any column referenced by any
index is updated, then all indexes must get a new entry for that
row whether or not they include that column. The new version of the
row will be at a new location, so new entries are needed in all
indexes to point to the new location.

I did *not* say that only indexes referencing the updated column
need a new entry.

-Kevin




Re: [ADMIN] Need assistance in incremental backup for my environment

2013-01-17 Thread Kevin Grittner
Vinod V wrote:

 Below were the error messages that we were getting ... (while
 restarting the server).
 
 
 2013-01-10 01:58:46 PST LOG: could not bind IPv6 socket: No error
 2013-01-10 01:58:46 PST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
 2013-01-10 01:58:46 PST LOG: could not bind IPv4 socket: No error
 2013-01-10 01:58:46 PST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
 2013-01-10 01:58:46 PST WARNING: could not create listen socket for *
 2013-01-10 01:58:46 PST FATAL: could not create any TCP/IP sockets

It would appear that something is probably already (or still)
running on port 5432 when you try to start.

-Kevin




Re: [ADMIN] Question concerning replicated server using streaming replication used as a read-only reporting server

2013-01-17 Thread Kevin Grittner
Benjamin Krajmalnik wrote:

 I have 2 servers which are using streaming replication (pg 9.0.4).
 
 The secondary server is there primarily as a disaster recovery server,
 but we are also using it for reporting, so as not to place undue load on
 the primary server.
 
 As I review the logs on the secondary server, I frequently see the
 following:

 2013-01-17 06:05:47 MST [local]ERROR: canceling statement due to
 conflict with recovery
 2013-01-17 06:05:47 MST [local]DETAIL: User query might have needed to
 see row versions that must be removed.
 2013-01-17 06:05:47 MST [local]STATEMENT: Select statement goes here
 2013-01-17 06:05:47 MST [local]FATAL: terminating connection due to
 conflict with recovery
 2013-01-17 06:05:47 MST [local]DETAIL: User query might have needed to
 see row versions that must be removed.
 2013-01-17 06:05:47 MST [local]HINT: In a moment you should be able to
 reconnect to the database and repeat your command.

 Is there anything that can be done to mitigate this situation?

You need to decide how stale you're willing to let the hot
standby get. To preserve an image of the database which can allow
the query to keep running, the standby server might need to pause
replay of transactions. To allow long transactions, you need to
allow it to pause the transaction stream for a long time, but that
means that it's getting out of date for disaster recovery purposes.
It might be worthwhile to keep two standby clusters, one that is
aggressive about applying the latest transactions, and another
which allows long-running queries.

-Kevin




Re: [ADMIN] Question concerning replicated server using streaming replication used as a read-only reporting server

2013-01-17 Thread Kevin Grittner
Benjamin Krajmalnik wrote:

 It is ok if I am a little bit behind. What setting do
 I need to tweak to allow it to get further behind?

The relevant settings are described here:

http://www.postgresql.org/docs/9.0/interactive/runtime-config-wal.html#RUNTIME-CONFIG-REPLICATION
http://www.postgresql.org/docs/9.0/interactive/runtime-config-wal.html#RUNTIME-CONFIG-STANDBY

The ones that you might want to look at are:

vacuum_defer_cleanup_age
max_standby_archive_delay
max_standby_streaming_delay

Leaving some gaps for catch up time between long-running requests
can help prevent cancelations, since the lag can otherwise
accumulate.
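
For a reporting-oriented standby, illustrative (not
one-size-fits-all) values might be:

max_standby_archive_delay = 60min
max_standby_streaming_delay = 60min

at the cost of letting the standby fall up to an hour behind
when queries conflict with replay.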

-Kevin




Re: [ADMIN] pg_dump and restore

2013-01-11 Thread Kevin Grittner
suhas.basavaraj12 wrote:

 We will be dumping data from version 9.0 and restoring it to 9.1.

That should work fine, as long as you use the pg_dump from version
9.1 to dump the 9.0 database.
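
For example (paths and names illustrative):

/usr/pgsql-9.1/bin/pg_dump -Fc -h old_host -f mydb.dump mydb
/usr/pgsql-9.1/bin/pg_restore -d mydb mydb.dump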

-Kevin




Re: [ADMIN] Using pre-configured vs building Postgres

2013-01-09 Thread Kevin Grittner
Armin Resch wrote:

 one needs to evaluate to what extent an upgrade of postgres is contained

PostgreSQL minor releases (where the version number matches to the
left of the second dot) only contain fixes for bugs and security
vulnerabilities. Dependencies on other packages should not change.

http://www.postgresql.org/support/versioning/

-Kevin




Re: [ADMIN] Autovacuum issues with truncate and create index ...

2012-12-31 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 These queries are very simple : delete from table where
 start_date  availableTimestamp. We performed an EXPLAIN to try
 to understand what could be the problem. The query planner said
 that the index on start_date could not be used because it was not
 up-to-date.

Could you show that output you base that on?

 How a server (8 CPUs) with a 0.56 load average over the last 15
 minutes could fail to handle 3 autovacuum processes is, for me,
 very confusing.

When the bottleneck is disk I/O, the CPU count is not going to
help. Threads which have not been context-switched out, but are
sitting waiting for the electric motors to drag the disk arm to the
right cylinder probably don't count against the load average.

Note that while three autovacuum processes normally don't cause any
grief, you seem to be near the tipping point anyway, so it may be a
case of the straw that broke the camel's back. Especially since
you made autovacuum many times more resource-hungry than it is by
default.

-Kevin




Re: [ADMIN] Autovacuum issues with truncate and create index ...

2012-12-30 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 Just so we know how to interpret that, how many minutes, hours,
 or days did you wait to see whether it would ever end?
 
 I have waiting for 15 minutes in this state. I can not wait more
 time without losing some data for our client.

Thanks. I wasn't suggesting you increase the duration; I just
wanted perspective on whether it could be the result of unusually
long run times rather than blocking, and how severe that increase
was known to be.

 If it doesn't cause too much pain to let it get into this state
 for a few minutes, it might help diagnose the issue if you could
 start `vmstat 1` before you let it get into this state, and
 capture `ps aux | grep postgres`, pg_stat_activity, and pg_locks at
 intervals while it is in this state. Looking at all of the above
 might suggest a cause. If we can find the cause, we can almost
 certainly fix it.

 [information captured as requested]

Thank you very much.  With that much information we should be much
better able to get a sense of the nature of the problem. It will
take a while to sift through it and properly analyze it. But even
on a first pass I think there is a lot of valuable information that
jumps out:

(1) There is no sign of unusual pressure on OS memory; OS cache
usage remains stable from before the incident to the end of the
monitored period.

(2) Disk wait time climbed early in the incident and remained high
to the end.

(3) Disk read volume *before* the incident shows a peak of about
the same as during the incident, with somewhat lower disk wait
time. (Do we know what was causing that? It ended before the other
results were captured.)

(4) Not a single incident of blocking was captured in any of the
lists of pg_stat_activity or pg_locks.

(5) The TRUNCATE statements completed and moved on to CREATE INDEX,
which continued to accrue CPU time during the episode.

(6) Of the three autovacuum worker processes, two were running just
an ANALYZE on every sample, and were moving on from table to table.

(7) One autovacuum process was running VACUUM ANALYZE against a
single table for the entire duration of the incident. It was slowly
accumulating CPU time during the incident.

On the face of it, it appears that with your normal production
settings your storage system is right at the edge of what it can
handle, and making autovacuum more aggressive to try to keep the
statistics on the second type of table more up-to-date is pushing
the load past its capacity. You might be able to change the
autovacuum thresholds and scale factors without changing
autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit (or
making smaller changes to them). You could probably throw hardware
at it to fix the problem. Even with settings which work fine when
everything is up-to-date you may experience some impact on
production when you first turn it on and autovacuum is attempting
to catch up.
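
Since 8.4 those thresholds can also be set per table, which is
often gentler than changing them globally; a sketch (table name
hypothetical):

ALTER TABLE flow_stats SET (
  autovacuum_vacuum_scale_factor = 0.05,
  autovacuum_analyze_scale_factor = 0.02
);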

I'm not actually clear, when I look back, at what the problem is
that you are trying to solve -- you say that a particular type of
query is running for 2 to 3 minutes, and note that statistics on a
particular type of table are only being re-sampled once every 5 to
6 days. It's not clear that more frequent statistical sampling of
the tables would change the plans. Perhaps you should post one such
query to the performance list, with supporting data, and see
whether someone can suggest a way to speed that query.

http://wiki.postgresql.org/wiki/SlowQueryQuestions

-Kevin




Re: [ADMIN] Regarding Migaration from Mysql procedures to Postgresql Functions

2012-12-26 Thread Kevin Grittner
satish kumar wrote:

 How to convert Mysql procedures to Postgresql Functions using migration
 tools.

http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL

-Kevin




Re: [ADMIN] Autovacuum issues with truncate and create index ...

2012-12-20 Thread Kevin Grittner
Baptiste LHOSTE wrote:

  - finally we delete old data of the second kind of tables

 Then the autovacuum process starts to work on the second kind of
 tables, but our process blocks at step 3 (truncate) or step 5
 (create index).
 
 As soon as I reset the autovacuum thresholds for the second kind
 of tables, our process run again perfectly.
 
 Could our process create a deadlock with the autovacuum
 process?

Would it be possible for you to create such a situation and capture
the contents of pg_stat_activity and pg_locks while it is going on?
What messages related to autovacuum or deadlocks do you see in the
server log while this is going on?

  PostgreSQL 8.4.11

Would it be possible to update your 8.4 installation to the latest
bug fix (currently 8.4.15) to rule out the influence of any bugs
which have already been fixed?

-Kevin




Re: [ADMIN] Autovacuum issues with truncate and create index ...

2012-12-20 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 Here's the pg_stat_activity during the issue :
 [no processes waiting]

 Here's the pg_locks during the issue :
 [all locks granted]

Was the blocking you described occurring at the time you captured
this? It doesn't seem to be showing any problem.

 Is there a way to upgrade without having to dump all data and
 restore them after the upgrade ?

A minor release (where the version number doesn't change before the
second dot) never requires a dump and restore. There is sometimes a
need to do some cleanup work; for example, if a bug is fixed which
could corrupt a particular type of index, the release notes may
recommend rebuilding all indexes of that type to repair any damage
which may have occurred before the bug was fixed.

http://www.postgresql.org/support/versioning/

-Kevin




Re: [ADMIN] Autovacuum issues with truncate and create index ...

2012-12-20 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 Was the blocking you described occurring at the time you
 captured this? It doesn't seem to be showing any problem.
 
 Yes indeed. We have noticed that no process seems to be in a
 waiting situation, but:
  - before the autovacuum process starts to work on both kinds
    of tables, truncate and index creation take less than 2
    seconds
  - after the autovacuum process starts to work on both kinds
    of tables, truncate and index creation never end

Just so we know how to interpret that, how many minutes, hours, or
days did you wait to see whether it would ever end?

 We have to stop our process, then reset the autovacuum threshold
 for the second kind of tables, then restart our process.
 
 Is it possible that out-of-date postgresql statistics could
 explain this behavior?
 Is it possible that the autovacuum process does not stop itself
 when we perform a truncate or a create index?

At the time you captured data from pg_stat_activity and pg_locks,
there were three autovacuum processes active, all running ANALYZE,
and eight TRUNCATE commands active on normal connections. All the
TRUNCATE statements started in the same second. One of the ANALYZE
tasks started about a minute and a half before that, the other two
started about a minute after the TRUNCATE statements. All are on
different tables, and no heavyweight locking is showing up.

I've heard enough reports of behavior like this to believe that
there is some sort of bug here, but the nature of it is not
apparent. We could really use more clues.

If it doesn't cause too much pain to let it get into this state for
a few minutes, it might help diagnose the issue if you could start
`vmstat 1` before you let it get into this state, and capture
`ps aux | grep postgres`, pg_stat_activity, and pg_locks at intervals while
it is in this state. Looking at all of the above might suggest a
cause. If we can find the cause, we can almost certainly fix it.

-Kevin




Re: [ADMIN] PostgreSQL Performance Tuning / Postgresql.conf and on OS Level

2012-12-18 Thread Kevin Grittner
Shams Khan wrote:

 select now()-query_start as runtime,client_addr,pid,query from
 pg_stat_activity where not query like '%IDLE%' order by 1;

 When I check the idle sessions running question, it shows many queries running,
 but at the end of the query it shows Rollback and Commit, which take a lot of time.

No, you need to adjust that query. Add the state column and maybe
the xact_start column to your output, and it should then be obvious
how to modify your where clause. People felt it would be useful to
see what the last statement was which had been run on a connection
which was idle or (especially) idle in transaction. The query
column no longer shows anything other than a query.
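
For example, a sketch of the adjusted query (trim the column list to
taste):

  select now() - query_start as runtime, client_addr, pid, state, query
    from pg_stat_activity
    where state <> 'idle'
    order by 1 desc;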

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL Performance Tuning / Postgresql.conf and on OS Level

2012-12-17 Thread Kevin Grittner
Shams Khan wrote:

 Question 1. How do we correlate our memory with kernel parameters? I mean
 to say, is there any connection between shared_buffers and the kernel's SHMMAX? For
 example, if I define my shared_buffers as more than my current SHMMAX value,
 would it not allow me to use that? Or vice versa? Please throw some light on this.

If SHMMAX is not large enough to allow the PostgreSQL service to
acquire the amount of shared memory it needs based on your
configuration settings, the PostgreSQL server will log an error and
fail to start. Please see the docs for more information:

http://www.postgresql.org/docs/current/static/kernel-resources.html

 Question 2. I want to show the result of the last query before and after
 changing the parameters; I found performance was degraded.

  Total runtime: 142.812 ms

  Total runtime: 145.127 ms

The plan didn't change and the times were different by less than
2%. There can easily be that much variation from one run to the
next. If you try the same query many times (say, 10 or more) with
each configuration and it is consistently faster with one than the
other, then you will have pretty good evidence which configuration
is better for that particular query. If the same configuration wins
in general, use it.
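
For example (a sketch; the table name is hypothetical, and work_mem
just stands in for whatever setting differs between the two
configurations):

  SET work_mem = '1MB';
  EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- repeat ~10 times
  SET work_mem = '100MB';
  EXPLAIN ANALYZE SELECT count(*) FROM big_table;  -- repeat ~10 times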

Since performance differences which are that small are often caused
by very obscure issues, it can be very difficult to pin down the
reason. It's generally not anything to fret over.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] ERROR: index row size exceeds maximum 2712 for index

2012-12-16 Thread Kevin Grittner
amjad usman wrote:

 ERROR: index row size 3176 exceeds maximum 2712 for 
 index description_department_of_aeronautics_and_astronautics_5_pkey

Can you show us the definitions of the table and the index?

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL Performance Tuning / Postgresql.conf and on OS Level

2012-12-14 Thread Kevin Grittner
Shams Khan wrote:

 *Need to improve the response time of running queries on the
 server...*

 8 CPU's and 16 cores

 [64GB RAM]

 HDD 200GB
 Database size = 40GB

Without more info, there's a bit of guesswork, but...

 maintenance_work_mem = Not initialised

I would say probably 1GB

 effective_cache_size = Not initialised

48GB

 work_mem = Not initialised

You could probably go 100MB on this.

 wal_buffers = 8MB

16MB

 checkpoint_segments = 16

Higher. Probably not more than 128.

 shared_buffers = 32MB (have read should 20% of Physical memory)

16GB to start. If you have episodes of high latency, where even
queries which normally run very quickly all pause and then all
complete close together after a delay, you may need to reduce this
and/or increase the aggressiveness of the background writer. I've
had to go as low as 1GB to overcome such latency spikes.

 max_connections = 100

Maybe leave alone, possibly reduce. You should be aiming to use a
pool to keep about 20 database connections busy. If you can't do
that in the app, look at pgbouncer.

  checkpoint_completion_target = Not initialised

It is often wise to increase this to 0.8 or 0.9

If I read this right, you have one 200GB drive for writes? That's
going to be your bottleneck if you write much data. You need a RAID
for both performance and reliability, with a good controller with
battery-backed cache configured for write-back. Until you have one
you can be less crippled on performance by setting
synchronous_commit = off. The trade-off is that there will be a
slight delay between when PostgreSQL acknowledges a commit and when
the data is actually persisted.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL Performance Tuning / Postgresql.conf and on OS Level

2012-12-14 Thread Kevin Grittner
Shams Khan wrote:

 *PERFORMANCE WAS BOOSTED UP DRASTICALLY* ---when I edited the
 work_mem to 100 MB---just look at the difference;

You only showed EXPLAIN output, which only shows estimated costs.
As already suggested, try running both ways with EXPLAIN ANALYZE --
which will show both estimates and actual.

 One more thing, Kevin: could you please help me out to understand
 how you calculated those parameters?

My own experience and reading about the experiences of others. If
you follow the pgsql-performance list, you will get a better gut
feel on these issues as well as picking up techniques for problem
solving. Speaking of which, that would have been a better list to
post this on. The one actual calculation I did was to make sure
work_mem was less than RAM * 0.25 / max_connections. I didn't go
all the way to that number because 100MB is enough for most
purposes and your database isn't very much smaller than your RAM.
You know, the melding of a routine calculation with gut feel.  :-)
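
Spelled out with the numbers from this thread (64GB RAM and
max_connections = 100):

  RAM * 0.25 / max_connections = 64GB * 0.25 / 100 = ~160MB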

 Without more info, there's a bit of guesswork, but...
 What extra info is required... please let me know...

The main things I felt I was missing was a description of your
overall workload and EXPLAIN ANALYZE output from a typical slow
query.

There's a page about useful information to post, though:

http://wiki.postgresql.org/wiki/SlowQueryQuestions

Now that you have somewhat reasonable tuning for the overall
server, you can look at the EXPLAIN ANALYZE output of queries which
don't run as fast as you think they should be able to, and see
what adjustments to cost factors you might need to make. With the
numbers you previously gave, a wild guess would be that you'll get
generally faster run-times with these settings:

seq_page_cost = 0.1
random_page_cost = 0.1
cpu_tuple_cost = 0.5

Be sure to look at actual run times, not EXPLAIN cost estimates.
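
A low-risk way to try them is per-session, before touching
postgresql.conf (the query here is just a hypothetical stand-in for
one of your slow ones):

  SET seq_page_cost = 0.1;
  SET random_page_cost = 0.1;
  SET cpu_tuple_cost = 0.5;
  EXPLAIN ANALYZE SELECT count(*) FROM big_table WHERE id < 1000;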

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Confuse about the behaveior of PreparedStatement.executeBatch (jdbc)

2012-12-13 Thread Kevin Grittner
Haifeng Liu wrote:

 I want to know if executeBatch really keeps all the records in a
 batch untouched when the batch fails.

I recommend asking on the pgsql-jdbc list. You might want to
mention what autoCommit is set to during the attempt.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Backup

2012-11-29 Thread Kevin Grittner
Sabry Sadiq wrote:

 Does it work well with version 9.1.3?

It might work better in 9.1.6:

http://www.postgresql.org/support/versioning/

And it would probably pay to keep up-to-date as new minor releases
become available.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Postgre Eating Up Too Much RAM

2012-11-14 Thread Kevin Grittner
Aaron Bono wrote:

 (there are currently a little over 200 active connections to the
 database):

How many cores do you have on the system? What sort of storage
system? What, exactly, are the symptoms of the problem? Are there
200 active connections when the problem occurs? By active, do you
mean that there is a user connected or that they are actually running
something?

http://wiki.postgresql.org/wiki/Guide_to_reporting_problems

 max_connections = 1000

If you want to handle a large number of clients concurrently, this is
probably the wrong way to go about it. You will probably get better
performance with a connection pool.

http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 shared_buffers = 256MB

Depending on your workload, a Linux machine with 32GB RAM should
probably have this set somewhere between 1GB and 8GB.

 vacuum_cost_delay = 20ms

Making VACUUM less aggressive usually backfires and causes
unacceptable performance, although that might not happen for days or
weeks after you make the configuration change.

By the way, the software is called PostgreSQL. It is often shortened
to Postgres, but Postgre is just wrong.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Autoanalyze of the autovacuum daemon ...

2012-11-09 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 Today I consulted the log of my PostgreSQL server and I saw that
 autovacuum tasks took too much time to do their work. I thought that
 ANALYZE was a fast operation ?

That depends on configuration settings and on whether the computer
(or VM) is so swamped that the autovacuum task is starved for cycles.
Also on any overrides of statistics targets for those tables.

Please show us the output from running this query:

http://wiki.postgresql.org/wiki/Server_Configuration
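
The query on that page boils down to listing the non-default
settings, roughly:

  SELECT version();
  SELECT name, current_setting(name), source
    FROM pg_settings
    WHERE source NOT IN ('default', 'override');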

Have you overridden any statistics targets?

A description of the environment would also be good. Hardware? Load?

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Autoanalyze of the autovacuum daemon ...

2012-11-09 Thread Kevin Grittner
Baptiste LHOSTE wrote:

 Please show us the output from running this query:
 
 http://wiki.postgresql.org/wiki/Server_Configuration

 [very reasonable settings except for a very large work_mem]

Make sure that work_mem setting isn't driving you into swapping or
near-zero caching. A shortage of cache space could explain this
because it looks like about 8.5 ms for each page read. About the only
other thing I can think to recommend is to decrease
autovacuum_cost_delay to 10ms and see if that helps.

 These tables have two timestamp columns and a btree index on both
 timestamp column. Will it be more efficient for us to configure the
 autovacuum daemon analyze task only on those columns ?

No.

 4 * 300 GB RAID 0

You do realize that if any of those four drives fail you will need to
use your backup, right?

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Cannot close 'an error has occurred' dialogue box

2012-10-25 Thread Kevin Grittner
Adrian Heath wrote:

 I am unable to click on the Ok button to close the dialog box or
 either of the pgAdmin screens. I can drag the dialog box around the
 screen but cannot close it. The only option is to terminate the pgAdmin
 process.

You might want to try posting this on the pgadmin-support list.

-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] what is maximum allowed value of autovacuum_freeze_max_age

2012-09-16 Thread Kevin Grittner
Bruce Momjian  wrote:
 On Wed, Sep 12, 2012 at 10:13:38PM -0500, Kevin Grittner wrote:
 Radovan Jablonovsky wrote:

 In documentation
 http://www.postgresql.org/docs/9.1/static/routine-vacuuming.html
 is this information about autovacuum_freeze_max_age: if
 autovacuum_freeze_max_age is set to its maximum allowed value of
 a little less than two billion. What is the exact maximum allowed
 value of autovacuum_freeze_max_age?
 
 Perhaps we should drop "a little less than" from the docs.
 
 Agreed.
 
Hearing no objection, done.  Back-patched to 8.3 on the basis that
this was a bug in the documentation, and one which has actually
caused end-user confusion.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_restore problem Found unexpected Block id

2012-09-13 Thread Kevin Grittner
[copying the list; please keep the list copied on all replies]

Ramana Panda ramana@gmail.com wrote:
 
 I am using the Version : *PostgreSQL 8.0.0 on i686-pc-mingw32,
 compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)*
 
You didn't even keep up on bug fix releases?  It's amazing that your
database is still running.
 
 It is a running application. It is very difficult to update all at
 once.  I need to change the front end also.
 
You are in danger of losing all your data without having a usable
backup.  Consider how difficult that position will be by comparison.
 
 Sorry for the inconvenience.
 
No need to apologize; it won't have much impact on me when your data
is lost.
 
 I have taken backup on 09-09-2012. it is working fine.
 
Whatever you do, don't overwrite or delete that backup!  That may be
something you will need to use for recovery.
 
 with 10th Sep  Backup,  while restoring it is giving error message
 after some tables restored:
 
 *pg_restore: [custom archiver] found unexpected block ID (0) when
 reading data -- expected 4034
 *
 O.S. Version : Windows XP with Service Pack 3
 
If the data in this database has any value to you, you should
upgrade to *at least* the last version of 8.2 *IMMEDIATELY*, since
that's the release where some very serious bugs related to Windows
were fixed.  Try running the pg_dump version from the newer version,
and you might still be able to back up recent work; otherwise you
will need to restore from your last usable backup and redo all work
from that point on.  8.2 went out of support last year, but it would
be a huge step forward in stability on Windows without any major
incompatibilities for application code, and should buy you a little
time to deal with working through some of the bumps in converting to
later versions.  You should really be targeting the 9.1 release at
this point.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] pg_restore problem Found unexpected Block id

2012-09-12 Thread Kevin Grittner
ramana.pls ramana@gmail.com wrote:
 
 I am getting an error while restoring the database backup. I am on
 PostgreSQL 8.0 with Windows XP as the O.S.
 
PostgreSQL version 8.0 has been out of support overall for years,
and out of support for Windows for years before that.
 
http://archives.postgresql.org/pgsql-announce/2008-01/msg5.php
 
http://www.postgresql.org/support/versioning/
 
Once you get past the immediate problems, it is extremely important
that you update to a supported version and stay up-to-date with
minor (bug-fix) releases.
 
Regarding the immediate problem, your report could use a lot more
detail.  What steps have you taken to get to where you are now? 
Exactly what version of PostgreSQL is this (8.0.what?)?
 
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
 
Without knowing more, it's hard to give much advice.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] what is maximum allowed value of autovacuum_freeze_max_age

2012-09-12 Thread Kevin Grittner
Radovan Jablonovsky  wrote:
 
 In documentation
 http://www.postgresql.org/docs/9.1/static/routine-vacuuming.html
 is this information about autovacuum_freeze_max_age: if
 autovacuum_freeze_max_age is set to its maximum allowed value of a
 little less than two billion. What is the exact maximum allowed value
 of autovacuum_freeze_max_age?
 
test=# select name, setting, min_val, max_val
test-#   from pg_settings where name = 'autovacuum_freeze_max_age';
           name            |  setting  | min_val |  max_val
---------------------------+-----------+---------+------------
 autovacuum_freeze_max_age | 200000000 |  100000 | 2000000000
(1 row)
 
Perhaps we should drop "a little less than" from the docs.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Upgrading from 9.1.2 to 9.1.5

2012-09-10 Thread Kevin Grittner
Craig James cja...@emolecules.com wrote:
 Sergey Konoplev gray...@gmail.com wrote:
 Bruce Momjian br...@momjian.us wrote:
 On Thu, Sep  6, 2012 at 05:55:05PM -0500, Antoine Guidi wrote:
 Is it possible to do a pg_upgrade from 9.1.2 to 9.1.5 just
 using pg_upgrade?  For what I could read, the only exception
 would be if I was using a citext column (which I am not).

 You cannot use pg_upgrade for this.
 
Cannot or don't need to?
 
 You just need to stop the server, install the binaries, and
 restart the server.

 AFAIU it is not necessary to stop the server when updating
 binaries if one is not going to create extensions, PLs or
 anything else that will be dynamically linked after the binaries
 update and before the server restart.

 So with the process

 1. update binaries
 2. postgres restart

 the downtime will be shorter.
 
 I'm just guessing, but this is probably a bad idea.  This could
 happen...
 
 1. Postgres master and a bunch of clients are running
 
 2. You start updating binaries
 
 3. In the middle of your update, an new client connects and a new
 backend process starts.
 
 4. The 9.1.2 executable links to the 9.1.5 binaries.  Or a 9.1.5
 executable links to the 9.1.2 libraries.  Or a 9.1.5 executable
 starts with the right binaries, but is talking to a 9.1.2
 postmaster process, which might not have the same shared-memory
 map.  Or ...
 
 ... and so forth.
 
That's why we put each minor release into a separate location.
 
1.  PostgreSQL master and a bunch of clients are running against
executables deployed with a prefix of /usr/local/pgsql-9.1.4.  The
prefix is specified in the service script for the server; clients
use a symlink at /usr/local/pgsql.
 
2.  We make and install a new build with prefix
/usr/local/pgsql-9.1.5.
 
3.  We change the symlink to point to the new build.
 
4.  We change the appropriate service script(s) to point to the new
prefix.
 
5.  We stop and then start the server(s).  (We don't use pg_ctl
restart because that seems to stay on the same prefix.)
 
6.  Later, when we confirm that nothing is still referencing the old
prefix, we remove its subdirectory.
 
PostgreSQL is down only for the time it takes for a restart.  We
normally do this during off-hours; but even if this is done during
normal operations, properly coded clients (which retry a database
transaction if it fails with a broken connection, without giving the
client any error) only see a short stutter in response time.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Canot access PostgreSQL via psql -h (Vmware Fusion)

2012-08-24 Thread Kevin Grittner
CS DBA cs_...@consistentstate.com wrote:
 
 I've fired up 2 CentOS 6.2 VM's via vmware fusion 5 (on a mac).
 
 psql -h 192.168.91.145
 psql: could not connect to server: No route to host
 
That problem has nothing to do with PostgreSQL; you might have
better luck on a list related to the other technologies.
 
FWIW, on Linux I would start with `netstat -plnt` to see if the
process was listening on the expected port and host address.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] When I executed type cast functions. The postgres normal concatenation operator query was breaking.

2012-08-20 Thread Kevin Grittner
Saravanakumar Ramasamy r...@zoniac.com wrote:
 
 Now I am using Postgres version 9.1.3. Before, I used 8.2.22.
 
 ERROR:  function to_number(unknown, numeric) does not exist
 
 HINT:  No function matches the given name and argument types. You
 might need to add explicit type casts.
 
 I found a solution: 13 implicit type casts. When I
 executed the type cast functions, [concatenations started breaking]
 
 ERROR:  operator is not unique: numeric || unknown
 
 HINT:  Could not choose a best candidate operator. You might need
 to add explicit type casts.
 
This was also posted on StackOverflow:
 
http://stackoverflow.com/questions/12007988/when-i-executed-type-cast-functions-the-postgres-normal-concatenation-operator
 
The accepted solution there was basically to drop the 13 implicit
casts which were added in an attempt to maintain pre-8.3 behavior,
and to change code which counted on the implicit casts which were
eliminated in 8.3.
 
-Kevin


-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-09 Thread Kevin Grittner
Craig Ringer ring...@ringerc.id.au wrote:
 On 08/09/2012 04:24 AM, Kevin Grittner wrote:
 http://wiki.postgresql.org/wiki/Number_Of_Database_Connections

 Can we please please PLEASE link to that as a comment above
 max_connections?

 Last time this came up nobody was happy with wording of a comment
 so nothing got done. It's a real usability wart - causing
 real-world performance and reliability problems - that people
 unwittingly raise max_connections to absurd levels because they
 get no warnings, hints or guidance of any sort.
 
I see that we currently have five links to wiki.postgresql.org in
release notes and four more in the rest of the docs.  Are people OK
with adding this link to the docs on max_connections?  (Feel free to
improve it before answering if you have qualms about the specifics
on that page.)
 
We do seem to get an awful lot of posts (between here and
StackOverflow) from people who assume they need one database
connection per active user, and then are surprised that performance
is horrible.
 
If we get consensus on linking to this I'll put together a patch to
make a brief comment in the docs with a link to the Wiki.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-08 Thread Kevin Grittner
Radovan Jablonovsky radovan.jablonov...@replicon.com wrote:
 
 PostgreSQL version 9.1.1 with 32GB of RAM
 
 shared_buffers = 8GB
 temp_buffers = 32MB
 work_mem = 64MB
 maintenance_work_mem = 512MB
 
 Currently there are maximum 600 connections.
 
Please read:
 
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-03 Thread Kevin Grittner
Radovan Jablonovsky radovan.jablonov...@replicon.com wrote:
 
 Under usual load there is not much pressure on memory, but it is
 possible to have all clients start running heavy reports. They are
 valid requests and could consume all memory.
 
Your clients will get their results back faster if you can arrange
some way to queue these sorts of requests when they get beyond some
reasonable limit.  You might be able to do that using a connection
pool, or you might want to create some sort of job queue which
releases a limited number of such jobs at a time; but I guarantee
that every single person to submit a job to such a queue, including
the last person, will get their results sooner with such queuing
than turning loose a thundering herd of requests which puts the
system into swapping.  I guarantee it.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] VACUUM ANALYZE block the whole database

2012-08-02 Thread Kevin Grittner
Majid Azimi majid.merk...@gmail.com wrote:
 
 ran VACUUM ANALYZE on it(it is not VACUUM FULL). but this cause
 the database to completely block.
 
Please show the results from running the query here:
 
http://wiki.postgresql.org/wiki/Server_Configuration
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17

2012-08-01 Thread Kevin Grittner
Radovan Jablonovsky radovan.jablonov...@replicon.com wrote:
 
 We are running PostgreSQL version 9.1.1
 
You should apply the latest bug fixes by updating to 9.1.4.
 
http://www.postgresql.org/support/versioning/
 
 with 32GB of RAM, 32GB of SWAP and during high load we could reach
 the (swap + RAM) memory limit.
 
If you're even *starting* to swap you're doing something wrong, much
less exhausting swap space equal to actual RAM.  What is your
configuration?
 
http://wiki.postgresql.org/wiki/Server_Configuration
 
While it's probably a good idea to configure the OOM killer to
behave more sanely, we tend to ignore it in favor of ensuring that
it never comes into play.  We run about 200 databases 24/7 and I
think I've seen it kick in about twice -- when we ran queries that
leaked memory on each row in a big query.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] a very slow SQL

2012-06-25 Thread Kevin Grittner
wangqi  wrote:
 
 An SQL query's execution is very slow.
 What can I do to make it faster?
 
Without knowing more about the version of PostgreSQL, your PostgreSQL
configuration, your schema (including indexes), and your hardware,
it's hard to give advice.
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
A wild guess on the evidence we have is that you might benefit from
an index on MCL.ctid if you don't already have one.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] starting postgres with an empty px_xlog folder

2012-06-23 Thread Kevin Grittner
Mike Broers  wrote:
 Mike Broers  wrote:
 
 We shut down our postgres 8.3 server last night cleanly for some
 hosted services maintenance. When we got our server back, it didnt
 have the pg_xlog mount with files and now when we start the
 server, it complains
 
 Since we had a clean shutdown, is there an easy or relatively
 clean way to start back up? The pg_xlog folder is there, just with
 no files. I wouldn't think it would be needed for a clean restart.
 
It is.
 
 I was able to duplicate the error in a test environment and get the
 database started back up with the pg_resetxlog $PGDATA command, but
 I am concerned about the fallout and not sure if I need to use any
 of the option switches. We also have archived log files up to the
 point of server shutdown if that provides a better option.
 
I would make a copy of the directory tree of the database cluster
while the server is stopped before attempting any recovery.
 
http://wiki.postgresql.org/wiki/Corruption
 
The archived WAL files may provide a better option than
pg_resetxlog.
 
If it were my data, I would be inclined to use pg_dump to create a
fresh version of the database after recovery, although if you really
have a full set of archived WAL files and the server starts up
without any indications of problems in the server log, you might be
OK without doing that.
 
As previously stated, make sure you understand how this happened, so
you can make sure it doesn't happen again.  The contents of the
pg_xlog directory are an integral part of your database cluster.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] autovac hitting too many tables at once

2012-06-22 Thread Kevin Grittner
Greg Williamson gwilliamso...@yahoo.com wrote:
 
 I've got an 8.4.11 system that I am relatively new to and I am
 seeing multiple autovac processes kick off on several of the
 largest tables at once and it is causing pain.
 
 Are there any suggestions to
 
   a) quickly relieve the immediate pain
 
Set autovacuum_cost_limit to a smaller value.  (Change the
postgresql.conf file and do a reload.)  I would start by cutting the
current value in half.
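
For example (a sketch; note the parameter's full name is
autovacuum_vacuum_cost_limit, and pg_reload_conf() triggers the
reload from SQL):

  SHOW autovacuum_vacuum_cost_limit;  -- -1 means it falls back to
                                      -- vacuum_cost_limit
  -- halve it in postgresql.conf, then:
  SELECT pg_reload_conf();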
 
 and
 
   b) prevent such issues in the future (other than going to manual
  vacuuming on a schedule).
 
If it is suddenly doing this on lots of big tables at once, it seems
somewhat likely that you've hit the transaction wraparound
protection threshold.  Because the vacuum necessary for this can be
painful, and they tend to come at the worst possible time (the more
your workload looks like a really heavy OLTP workload at any given
moment, the higher the probability that this is about to happen), I
always follow a bulk load (like from restoring pg_dump output) with
a VACUUM FREEZE ANALYZE.
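
That is, immediately after the restore completes:

  VACUUM FREEZE ANALYZE;  -- scans every table, so run it before
                          -- production load resumes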
 
You might also want to consider running off-hours vacuums to
supplement autovacuum.  Upgrading to a more recent version of
PostgreSQL is likely to help some, too.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] backup

2012-06-18 Thread Kevin Grittner
lohita nama namaloh...@gmail.com wrote:
 
 I am working as a SQL DBA. Recently our team had the opportunity to work
 on Postgres databases. I have experience with SQL Server on the
 Windows platform, and now our company has Postgres databases on the
 Solaris platform.

 Can you please suggest, step by step, how to take a backup of Postgres
 databases?
 
As a new user of PostgreSQL you may not yet have noticed how good
the manuals are.  You should find what you need in this chapter:
 
http://www.postgresql.org/docs/current/interactive/backup.html
 
If you are on an older version of PostgreSQL, click the link at the
top for the version you are using.
 
If you still have questions after reading the manual, feel free to
post with a more specific question.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Question about PITR backup

2012-06-08 Thread Kevin Grittner
sgm sgm...@yahoo.com.cn wrote:
 
 I have a question about PITR backup on a single server. The method
 is to make a base backup, and back up the WAL archive log (e.g., every
 day at 11:30 pm). But if the OS hard disk breaks (e.g., at 14:00),
 the system can't start, and we have to recover the database on
 another server using the base backup and the WAL backup. So in this
 case, suppose that we can only restore the database to yesterday's
 11:30 pm state; am I right?
 
You can't recover to a point past your last available WAL record.
 
 I want to back up the pg_xlog folder every minute by crontab, but my
 concern is that the data will be inconsistent, because the latest log in
 pg_xlog is being updated all the time. Am I right? Any
 suggestions?
 
Use streaming replication?  Or at the very least, set a short
archive_timeout value and copy from the archive target location
frequently.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data split -- Creating a copy of database without outage

2012-06-06 Thread Kevin Grittner
Igor Shmain igor.shm...@gmail.com wrote:
 
 Would it be possible for you to mention what hardware (cpu, ram,
 disks, etc.) and software your system uses to support this db size
 and number of transactions?
 
We have 4 Intel Xeon  X7350 @ 2.93GHz for 16 cores with 128GB RAM. 
We've got a pair of drives in RAID 1 for OS on its own controller,
four drives in RAID 10 for xlog directories on its own controller,
and a couple RAID 5 arrays, each about 40 drives, for our two
databases (3TB and 2TB).  I'm not exactly clear on the controller
configuration there except that I understand there are separate
paths from two controllers to each drive.  All controllers are using
battery-backed cache configured for write-back.
 
A machine like that is still capable of handling our current load;
but the load is always increasing so we step up the hardware each
time we replace a machine.  The new server (able to handle about
twice the load of the one I just described for our normal
transaction mix) has 4 Intel Xeon X7560 @ 2.27GHz for 32 cores with
256GB RAM.
 
We are replicating to each of the databases on these boxes using a
pool of 6 database connections to process data from 72 circuit court
databases and on the 2TB from other sources, like Supreme Court and
Court of Appeals, Board of Bar Examiners, etc.  For the read-only
web load we have a pool of 30 database connections.  Checking the
monitoring system for the read-only web application, at the moment
we are showing:
 
Active Requests: 3
Requests Per Second: 148.66
Active Sessions: 9081
 
This is running through a firewall to an apache web server in our
DMZ which just redirects through another firewall to an apache web
server which just functions as a load balancer which sends the
requests to renderers (well, currently just one, since on the latest
hardware one renderer handles the load) which runs Tomcat connecting
to our custom Java middle tier on the database server machine which
provides the connection pooling and manages each database
transaction.  Requests for boilerplate content are served before
it gets to this point where it would show in this monitoring; this
is just requests which require database content.  One request
above may run up to about 15 queries, many of which contain a large
number of joins. 

While the load I show above would amount to about 13 million web
requests if it went on 24 hours per day, load does drop at night. 
Last I heard, we had about 5 million requests per day, but that was
a couple years ago and it seems to grow pretty steadily.

Last I checked, the replication consisted of about two million
database transactions per day, many of which have dozens (or
hundreds) of statements modifying data.  When idle time is detected
on a replication source, it is used to compare source data to
target, apply fixes to the target, and log the fixes for review. 
(These are infrequent, but I'm not comfortable running multi-master
replication without such automated review.)
 
 Buying a super computer, hoping that one day it will run at full
 throttle is not for startups. Getting such a powerful computer
 quickly and moving the database there is unrealistic. It makes
 more sense to design the system in a way so it can be easily and
 quickly distributed across many relatively inexpensive servers.
 That is why the sharding is needed. 
 
I understand the scaling need, and certainly don't want to discount
that.  Cloud resources might be an alternative to sharding in that
fashion, at least to a point.
 
Before we moved to PostgreSQL we were using a commercial database
which could not keep up with demand using just one box, so we load
balanced between identical servers.  Since the replication is
asynchronous and we didn't want people potentially jumping around in
time, we used session affinity from the renderers to particular
database servers to keep a consistent timeline for each user
session.  This sort of approach is a viable alternative to sharding
in some cases.
 
I hope that helps.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Data split -- Creating a copy of database without outage

2012-06-02 Thread Kevin Grittner
Igor Shmain  wrote:
 
 I need to design a solution for a database which will grow and will
 require horizontal split at some moment.
 
Just one more bit of food for thought -- we have a database with
3TB processing approximately 50 million database transactions per day
(some with a great many statements or affecting many rows) running
quite comfortably on a single machine (actually sharing that machine
with a 2TB database on a separate drive array), without partitioning.
 
We have done a lot of tuning.
 
I'm not sure what your basis is for the assumption that you will need
to split the database across machines; you might be right, but you
might be engaging in premature optimization.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Advice/guideline on increasing shared_buffers and kernel parameters

2012-05-10 Thread Kevin Grittner
Gnanakumar gna...@zoniac.com wrote:
 
 I've already thought of converting this into a document and keeping
 it handy so that I can refer back to it whenever I need.
 
I've put up a first cut at such a document as a Wiki page:
 
http://wiki.postgresql.org/wiki/Number_Of_Database_Connections
 
Everyone should feel free to improve upon it.  I'll probably add a
thought experiment I've used a few times which seems to help some
people understand the issue.
 
 A formula which has held up pretty well across a lot of
 benchmarks for years is that for optimal throughput the number of
 active connections should be somewhere near 
 ((core_count * 2) + effective_spindle_count).
 
 Our entire production application stack is set up in the Amazon EC2
 cloud environment; that includes the database server also.  So, in
 that case, how do I find out effective_spindle_count?  I know
 core_count can be determined from Amazon EC2 instance type.  Per
 Amazon EC2, EBS volumes are reportedly a shared resource.
 
I think you need to experiment with different pools sizes.  Please
post results and/or update the Wiki page.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Advice/guideline on increasing shared_buffers and kernel parameters

2012-05-09 Thread Kevin Grittner
Gnanakumar gna...@zoniac.com wrote:
 We get very good performance dealing with thousands of concurrent
 users with a pool of 35 connections to the database.
 
 If you want to handle more users than you can currently support,
 you probably need to use fewer database connections.
 
 First, please excuse me that I'm not able to understand this
 particular point clearly. How can reducing/using fewer
 connections in the connection pooler support more concurrent
 incoming connection requests? If this is so critical to revisit
 (reducing), then I may have to convince/justify my peers also,
 before making this change in the Production server. Can you throw
 some light on this subject?
 
 Thanks for bringing this idea to notice.
 
There have been numerous discussions of this on the lists, so you
can probably find a more in-depth discussion of the topic if you
search the archives, and this may motivate me to put together a Wiki
page on the topic, but here's the general concept.
 
A database server only has so many resources, and if you don't have
enough active connections active to use all of them, your throughput
will generally improve by using more connections.  Once all of the
resources are in use, you won't push any more through by having more
connections competing for the resources.  In fact, throughput starts
to fall off due to the overhead from that contention.  If you look
at any graph of PostgreSQL performance with number of connections on
the x axis and tps on the y axis (with nothing else changing), you
will see performance climb as connections rise until you hit saturation,
and then you have a knee after which performance falls off.  A lot
of work has been done for version 9.3 to push that knee to the right
and make the fall-off more gradual, but the issue is intrinsic --
without a built-in connection pool or at least an admission control
policy, the knee will always be there.
 
Now, this decision not to include a connection pooler inside the
PostgreSQL server itself is not capricious and arbitrary.  In many
cases you will get better performance if the connection pooler is
running on a separate machine.  In even more cases (at least in my
experience) you can get improved functionality by incorporating a
connection pool into client-side software.  Many frameworks,
including the ones we use at Wisconsin Courts, do the pooling in a
Java process running on the same server as the database server (to
minimize latency effects from the database protocol) and make
high-level requests to the Java process to run a certain function
with a given set of parameters as a single database transaction. 
This ensures that network latency or connection failures can't cause
a transaction to hang while waiting for something from the network,
and provides a simple way to retry any database transaction which
rolls back with a serialization failure (SQLSTATE 40001 or 40P01).
 
Since a pooler built in to the database engine would be inferior
(for the above reasons), the community has decided not to go that
route.
 
I know I won't be able to remember *all* of the reasons that
performance *falls off* after you reach the knee rather than just
staying level, but I'll list the ones which come to mind at the
moment.  If anyone wants to add to the list, feel free to reply, or
look for a Wiki page to appear this week and add them there.
 
 - Context switches.  The processor is interrupted from working on
one query and has to switch to another, which involves saving state
and restoring state.  While the core is busy swapping states it is
not doing any useful work on any query.
 
 - Cache line contention.  One query is likely to be working on a
particular area of RAM, and the query taking its place is likely to
be working on a different area; causing data cached on the CPU chip
to be discarded, only to need to be reloaded to continue the other
query.  Besides that the various processes will be grabbing control
of cache lines from each other, causing stalls.  (Humorous note, in
one oprofile run of a heavily contended load, 10% of CPU time was
attributed to a 1-byte noop; analysis showed that it was because it
needed to wait on a cache line for the following machine code
operation.)
 
 - Lock contention.  This happens at various levels: spinlocks, LW
locks, and all the locks that show up in pg_locks.  As more
processes compete for the spinlocks (which protect LW locks
acquisition and release, which in turn protect the heavyweight and
predicate lock acquisition and release) they account for a high
percentage of CPU time used.
 
 - RAM usage.  The work_mem setting can have a big impact on
performance.  If it is too small, hash tables and sorts spill to
disk, bitmap heap scans become lossy, requiring more work on each
page access, etc.  So you want it to be big.  But work_mem RAM can
be allocated for each node of a query on each connection, all at the
same time.  So a big work_mem with a large number of connections can
cause a lot of the OS cache to be pushed out, hurting performance
across the board.

Re: FW: [ADMIN] pg_dump: schema with OID 2200 does not exist

2012-05-09 Thread Kevin Grittner
Elizandro Gallegos elizandro...@hotmail.com wrote:
 
  Please can I be removed from the mailing list?
 
The answer was in the email to which you responded.  Did you have
trouble using the referenced page?
 
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-admin
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Advice/guideline on increasing shared_buffers and kernel parameters

2012-05-08 Thread Kevin Grittner
Gnanakumar gna...@zoniac.com wrote:
 
 our web-based application has crossed more than 500 concurrent
 users.  Hence we've already upgraded RAM and now we want to
 upgrade the max_connections parameter too.  Yes, we're already using
 pgpool-II v3.1.1 for connection pooling.
 
The main point of using a connection pooler is to funnel a large
number of client connection into the pooler into a small number of
database connections.  We get very good performance dealing with
thousands of concurrent users with a pool of 35 connections to the
database.  We originally had a larger pool, but contention was
reducing performance, and we found that throughput and latency both
improved with a smaller pool of database connections.
 
If you want to handle more users than you can currently support, you
probably need to use fewer database connections.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] retaining useful information on my screen

2012-05-08 Thread Kevin Grittner
Fred Parkinson fr...@abag.ca.gov wrote:
 
 2. Is there a way to tell psql NOT to clear the screen, so I can
 subsequently view it while I work?
 
\pset pager off
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] increasing max_pred_locks_per_transaction, what shuold I look for?

2012-05-08 Thread Kevin Grittner
Brian Ferhle bri...@consistentstate.com wrote:
 
 I've got a situation where I need to increase
 max_pred_locks_per_transaction value to allow the addition of a
 slony node to complete on postgres 9.1. We had this issue before
 and we increased the default value from 64 to 128, but now we have
 a sync that takes even longer, and the value of 128 isn't enough
 either.
 
 Looking at the activity I saw in pg_locks, I believe I need to set
 it to at least 256, but a value this high I want to make sure that
 I don't cause other issues with the system. The docs mention that
 it may need to request more system v shared memory, but should I
 also re-calculate other parameters such as work_mem,
 shared_buffers, etc?
 
 Current System:
 Postgres 9.1.3
 252 GB system memory
 shared_buffers 40GB
 work_mem 200MB
 max_connections = 300
 max_prepared_transactions = 0
 
 free -g
              total       used       free     shared    buffers     cached
 Mem:           252        239         12          0          1        221
 
 I have a lot of cached memory, but I just want to make sure this
 isn't something that cascades out of control and I'm suddenly
 having major 'out of memory' issues.
 
Given all the other settings, doubling
max_pred_locks_per_transaction would probably increase the shared
memory needs by something less than 24MB.  As long as the OS isn't
near its limit on shared memory allocations, this would come out of
OS cache without any other significant effects.
 
The other issue that jumps out at me, though, is max_connections =
300.  How many cores are in this machine, and what does the storage
system look like?  You might well benefit from a connection pool
which limits the number of concurrent transactions which are active.
In general, 300 connections is more than what is optimal for both
throughput and latency, and when using serializable transactions you
might see the impact of too many connections rather sooner than at
other transaction isolation levels.
 
We have a web application which at any one moment typically has
several thousand users actively hitting it, and we incrementally
changed our connection pool size until we found the sweet spot
where we got best performance for that load -- it was with 35
connections for the web application and 6 connections for
replicating data from the 72 sources that feed into the database. 
That's on a system with 16 cores, 40 spindles, and a moderate level
of caching (5TB of database and 128GB RAM).
 
On the other hand, I would really like to improve the heuristic used
for promoting predicate locks of one granularity to another, to
allow a more graceful performance degradation when predicate locks
get tight, but I've lacked data on what sort of workloads hit this. 
If you could send me (off list) a copy of your pg_locks data when you
are at or approaching this problem, it would be helpful in adjusting
this.  A rough description of the workload would help, too.
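
In the meantime, a quick way to watch how close you are getting
(predicate locks show up in pg_locks with mode SIReadLock):

  SELECT count(*) FROM pg_locks WHERE mode = 'SIReadLock';
  -- capacity is roughly max_pred_locks_per_transaction
  --   * (max_connections + max_prepared_transactions)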
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] SQLSTATE 53100 could not extend file / disk full

2012-05-07 Thread Kevin Grittner
Jan-Peter Seifert jan-peter.seif...@gmx.de wrote:
 
 I wonder whether extra measures are necessary to recover from a
 disk full error besides freeing enough disk space?
 
 Is it no problem if the WAL is within the same disk space and
 can't be written as well?
 
 Should you free enough disk space, do a checkpoint, restart the
 server and check the server log?
 
At our shop, we have rarely run out of disk space on a production
machine, due to close monitoring; but in those few cases, and in the
somewhat more frequent case that it happened in a development
environment, we have simply freed space and continued.  From what I
have seen, the PostgreSQL development community tries very hard to
ensure that you can do this safely.  On the other hand, it would
obviously be very hard to thoroughly test every possible code path
under such conditions, so I always try to watch for possible
problems afterward.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Very long IDLE in transaction query

2012-05-04 Thread Kevin Grittner
k...@rice.edu k...@rice.edu wrote:
 
 You may also want to consider setting a statement_timeout to
 prevent this until you can find the problem with the application.
 
How would a statement timeout help close a transaction on an idle
connection?  It's idle because no statements are being run.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] DELETE and UPDATE triggers on parent table of partioned table not firing.

2012-05-03 Thread Kevin Grittner
Plugge, Joe R. jrplu...@west.com wrote:
 
 Using postgres 9.0.7 on RHEL 5.4.  I have a parent table that is
 partitioned by day.  My inserts are working correctly and are
 being directed to the correct child table.  I also have both an
 UPDATE and DELETE trigger on the parent table that are defined as
 AFTER triggers.  The actual update and delete operation works,
 however the triggers do not seem to be firing.  What am I doing
 wrong?
 
The DELETE and UPDATE triggers need to be on the child tables.  An
operation on a child doesn't fire the triggers of the parent.
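
A sketch, with hypothetical names for the child table and an existing
trigger function:

  CREATE TRIGGER mytable_20120503_upd_del
    AFTER UPDATE OR DELETE ON mytable_20120503
    FOR EACH ROW EXECUTE PROCEDURE my_audit_func();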
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] unexpected EOF on client connection / could not send data to client: Broken pipe

2012-05-02 Thread Kevin Grittner
Hariraman Jayaraj hariraman@gmail.com wrote:
 
 We are using postgres 8.3 in Open Suse 11 server.
 Front end - Java, struts
 Middle ware - Jboss
 Backend - Postgres 8.3 DB.
 
It helps to know the exact version number and PostgreSQL
configuration settings.
 
http://wiki.postgresql.org/wiki/Server_Configuration
 
 Recently I have been encountering the error mentioned below. Also, by the time
 this error occurs, the Linux server / Postgres server is very slow,
 and at times it is not responding at all.
 
Can you run `vmstat 1` while things are slow?  It sounds like you
might be getting into heavy swapping due to memory over-commit.  Can
you show us the result of running `free -m`?
 
 2012-05-02 12:14:09 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:09 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:09 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:10 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:11 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:11 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:12 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:12 IST LOG:  unexpected EOF on client connection
 2012-05-02 12:14:13 IST LOG:  could not send data to client:
 Broken pipe
 
Those messages mean that the TCP connection from the client was
broken by something other than the PostgreSQL server.
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] grant select pg 9.0.3

2012-05-02 Thread Kevin Grittner
Tony Capobianco tcapobia...@prospectiv.com wrote:
 I've issued the following statement:
 
 grant select on all tables in schema support to backup;
 
 How can I avoid having to issue the grant each time I create a new
table?
 
http://www.postgresql.org/docs/9.0/interactive/sql-alterdefaultprivileges.html
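
For the case above, that boils down to something like this (the
default privileges apply to tables later created by the role that
runs it):

  ALTER DEFAULT PRIVILEGES IN SCHEMA support
    GRANT SELECT ON TABLES TO backup;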
 
Also, please consider updating to 9.0.7.
 
http://www.postgresql.org/support/versioning/
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


[ADMIN] Re: [BUGS] pg_dump: aborting because of server version mismatch

2012-05-02 Thread Kevin Grittner
Mitesh Shah mitesh.s...@stripes39.com wrote:
 
 *pg_dump: server version: 9.1.2; pg_dump version: 9.0.5*
 *pg_dump: aborting because of server version mismatch*
 
This is not a bug.  Use a version of pg_dump which is at least as
new as the server.  The older version of pg_dump is unlikely to be
able to recognize everything in the newer server,
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin



Re: [ADMIN] Any public dataset for benchmarking?

2012-05-01 Thread Kevin Grittner
Bèrto ëd Sèra wrote:
 
 I'm asked to benchmark a PG-related product. I was wondering if
 there is any sort of standard public dataset for such operations.
 
You might want to take a look at pgbench:
 
http://www.postgresql.org/docs/9.1/interactive/pgbench.html
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Query REST Service

2012-04-25 Thread Kevin Grittner
Ricardo Bayley ricardo.bay...@gmail.com wrote:
 
 Does anybody know if it is possible to create a PL which sends an
 http GET request and retrieves its response ?
 
Have you looked at PL/Python?
 
http://www.postgresql.org/docs/current/interactive/plpython.html
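
A minimal sketch, assuming the plpythonu language is installed
(urllib2 is the Python 2 standard library):

  CREATE FUNCTION http_get(url text) RETURNS text AS $$
      import urllib2
      return urllib2.urlopen(url).read()
  $$ LANGUAGE plpythonu;

  SELECT http_get('http://www.postgresql.org/');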
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] Partial substrings in FTS

2012-04-23 Thread Kevin Grittner
A J s5...@yahoo.com wrote:
 In FTS, how do I search for partial substrings that don't form an
 English word?
 For example, in the text 'one hundred thirty four', I want to find
 the records based on 'hun'.
 
 SELECT to_tsvector('one hundred thirty four') @@
 to_tsquery('hun'); does not return anything.
 
It sounds like trigrams might be a better fit for you than text
search.
 
http://www.postgresql.org/docs/9.1/static/pgtrgm.html
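
A sketch with hypothetical table and column names (as of 9.1 a
trigram index can also speed up LIKE '%hun%' searches):

  CREATE EXTENSION pg_trgm;
  CREATE INDEX t_txt_trgm_idx ON t USING gin (txt gin_trgm_ops);
  SELECT * FROM t WHERE txt LIKE '%hun%';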
 
-Kevin

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] What is the role of pg_filenode.map ?

2012-04-22 Thread Kevin Grittner
F. BROUARD / SQLpro  wrote:
 
 in every database there is a file named pg_filenode.map which I
 suppose gives the map of the real filenode when some command makes
 a divergence between the actual object OID and the new filenode...
 Am I right?
 
If you want to understand internals like this, the best thing to do
is probably to read the source code.
 
http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/utils/cache/relmapper.c;h=6f214957bf831c4a5c17ebc630f5151adc860135;hb=HEAD
 
-Kevin



Re: [ADMIN] 9.0.4/7 WAL Archiving: archive_command add date-time to cp of filename

2012-04-11 Thread Kevin Grittner
L'Huillier, Jeff jeff.lhuill...@onstar.com wrote:
 
 When enabling WAL archiving and setting up the archive_command, is
 it possible to add the date and time as an extension to the %f
 copied to the archive directory in order to avoid overwriting a
 file of the same name?
 
The recommended behavior is for the archive command (or the script
run as the command) to fail with a nonzero exit code when the
target file already exists, rather than overwrite it.  This should
never happen unless you accidentally point two different database
clusters to the same target directory.
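
The documented example does exactly that, by refusing to copy over
an existing file (the archive directory is a placeholder):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'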
 
 In adding a date-time stamp to the archived file name, would this
 adversely affect the restore_command and render the %f file coming
 back unusable?
 
Well, you could use a wildcard in the restore command to find a
matching file, but if there is more than one because of the
timestamps added to the filename, which one do you pick?
 
-Kevin



Re: [ADMIN] Recovery mode for a WAL-based slave

2012-04-11 Thread Kevin Grittner
Wells Oliver wellsoli...@gmail.com wrote:
 
 Looking at PGAdmin, in recovery? is yes, but replay location
 is the same as receive location, and the data is absolutely up to
 date.
 
 Is the recovery bit an issue, or just SOP?
 
SOP.  It's due to the gradual evolution of the hot standby feature
from the transaction logs which were originally used just for crash
recovery.
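
You can confirm the standby's state with these functions (available
in 9.0 and later):

SELECT pg_is_in_recovery();
SELECT pg_last_xlog_receive_location(), pg_last_xlog_replay_location();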
 
-Kevin



Re: [ADMIN] Why would queries fail with 'could not stat file' after CLUSTER?

2012-04-10 Thread Kevin Grittner
Sashbeer Bhandari sashb...@gmail.com wrote:
 
 I am using PostgreSQL 8.2 and my encoding is SQL_ASCII.  I want
 to convert it to UTF8.  Please help me with it.
 
This has nothing to do with the thread on which you posted it. 
Please start a new thread with an appropriate subject line.
 
By the way, PostgreSQL version 8.2 is out of support.  Perhaps you
should install 9.1 and create a database with the desired encoding,
and make that transition during your upgrade.
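
A rough sketch of that transition (database names are placeholders;
any bytes in the SQL_ASCII database which are not valid UTF8 will
need to be cleaned up before the restore will succeed):

CREATE DATABASE newdb ENCODING 'UTF8' TEMPLATE template0;

Then, from the shell:

pg_dump olddb | psql newdb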
 
-Kevin



Re: [ADMIN] Setting up streaming replication w/ a big ole database

2012-04-10 Thread Kevin Grittner
Wells Oliver wellsoli...@gmail.com wrote:
 
 I admit to being scared as crap of rsync'ing a live database to
 another server. Like chills are running down my spine even typing
 it. Is this an approved, safe thing?
 
It is fine, as long as you're doing it between the pg_start_backup()
and pg_stop_backup() calls.  We do it on 100 production databases
every week.  It is approved, documented, and safe.
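
A bare-bones sketch, with paths and host names as placeholders (see
the Continuous Archiving chapter of the documentation for the
details, including WAL handling):

psql -c "SELECT pg_start_backup('weekly_base', true)"
rsync -a /var/lib/pgsql/data/ standby:/var/lib/pgsql/data/
psql -c "SELECT pg_stop_backup()"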
 
-Kevin



Re: [ADMIN] Writing to a database or schema on a slave

2012-04-10 Thread Kevin Grittner
Wells Oliver wellsoli...@gmail.com wrote:
 
 I'd like to create a schema on my slave so that users who do not
 have access to the master can create some data. Clearly this data
 won't be replicated, since it's on the slave, but will it cause
 any problems w/ data integrity to have it on the slave?
 
What are you using for replication?
 
-Kevin



Re: [ADMIN] Writing to a database or schema on a slave

2012-04-10 Thread Kevin Grittner
[rearranged; please don't top-post]
 
Wells Oliver wellsoli...@gmail.com wrote: 
 Kevin Grittner kevin.gritt...@wicourts.gov wrote:
 Wells Oliver wellsoli...@gmail.com wrote:

 I'd like to create a schema on my slave so that users who do not
 have access to the master can create some data. Clearly this
 data won't be replicated, since it's on the slave, but will it
 cause any problems w/ data integrity to have it on the slave?

 What are you using for replication?
 
 I am using the WAL method w/ 9.1.
 
Then the replica must be read-only; you can't create anything on
that PostgreSQL instance because it is a block-level copy of the
master database cluster.  You would need to use logical replication
(for example, Slony) to do what you describe, or store the new
schema on another cluster and use dblink or a foreign data wrapper.
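
A minimal dblink sketch (the connection string, table, and column
types are all invented for illustration):

CREATE EXTENSION dblink;

SELECT *
  FROM dblink('host=scratchhost dbname=scratch',
              'SELECT id, note FROM user_notes')
       AS t(id int, note text);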
 
-Kevin



Re: [ADMIN] about multiprocessingmassdata

2012-04-04 Thread Kevin Grittner
superman0920 superman0...@gmail.com wrote:
 
 I have a table which has 850 rows of records, and I run 30
 threads to update them.
 I find the database's processing speed very slow; each thread
 needs 260s to update 1000 rows.
 How can I configure the database to make processing faster?
 
Performance issues are best addressed on the pgsql-performance list,
not pgsql-admin.  Before posting there, please read the following
page so that you can post enough information for people to make
useful suggestions:
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
For perspective, in benchmarks on my own machines I have seen
complex data-modifying transactions running at 3000 transactions per
second, and we have production systems applying millions of complex
transactions per day against tables with hundreds of millions of
rows while serving web applications running tens of millions of
queries.  So, my first thought is to wonder what the differences are
in your environment, and which of them might be causing problems. 
To figure that out, I need to know more about your environment.
 
-Kevin



Re: [ADMIN] custom conflict resolution information request

2012-04-02 Thread Kevin Grittner
amador alvarez aalva...@d2.com wrote:
 
 I am trying to find any kind of information or examples to deal
 with custom conflict resolution on swap syncs in a master-master
 replication.
 
What are you using for replication?
 
-Kevin



Re: [ADMIN] about encoding

2012-03-29 Thread Kevin Grittner
superman0920 superman0...@gmail.com wrote:
 
 I want to insert a report into PostgreSQL; the report contains
 some Chinese characters, and the PostgreSQL database is UTF-8.
 
 The response from the DB is this:
 ERROR:  invalid byte sequence for encoding UTF8: 0xb1 
 
That's not a valid byte sequence for a character under the UTF8
character encoding scheme.  You said that PostgreSQL is using utf-8,
but what encoding is used for the report?  Is your client connecting
using the character encoding scheme used by the report?
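
For example, if the report were in GBK (a common encoding for
Chinese text, though that is only a guess here), telling the server
so would let it convert on the fly:

SET client_encoding TO 'GBK';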
 
-Kevin



Re: [ADMIN] german sort is wrong

2012-03-22 Thread Kevin Grittner
Reinhard Asmus reinhard.as...@spdfraktion.de wrote:
 On 21.03.2012 at 14:51, Kevin Grittner wrote:
 Reinhard Asmus reinhard.as...@spdfraktion.de wrote:

 When I make a sort, this is the result:

 [vowel with umlaut sorts equal to vowel without]

 In German this is wrong. What is the problem?

 It appears to be one of three different right ways:

 http://en.wikipedia.org/wiki/German_alphabet#Sorting

 Is there a different collation available on your OS to sort
 names?
 
 When I do the same thing in Oracle I get:
 
 Ätna
 Anton
 
 Is it possible to get the same with PostgreSQL, and if so, how?
 
PostgreSQL doesn't implement collations itself; it can only use
collations available from your OS.  It appears that your OS is
defaulting to the dictionary collation and you would prefer the
phone book collation.  The Wikipedia link mentions that Windows
provides both collations; I suspect it's not the only OS that does,
but have no direct knowledge about that.
 
Starting in version 9.1 PostgreSQL can support collation overrides,
for example at the column level.  Provided that your OS provides
both, you could use one for your default collation and override that
for specific columns, which sounds like it might make sense for
German.
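
A sketch of both options, assuming your OS provides a suitable
locale (the collation name varies by platform, and not every OS
ships a separate phone book variant):

CREATE TABLE phonebook (name text COLLATE "de_DE.utf8");

SELECT name FROM phonebook ORDER BY name COLLATE "de_DE.utf8";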
 
-Kevin



Re: [ADMIN] pg_dump: schema with OID 145167 does not exist

2012-03-22 Thread Kevin Grittner
Paul Wouters paul.wout...@resilion.be wrote:
 
 We have some problems using pg_dump. We get the following error:
 
 pg_dump: schema with OID 145167 does not exist
 
Make sure you have a copy of the entire PostgreSQL data directory
tree before trying to fix corruption.
 
 In the table pg_depend I also have a reference to 124208.
 
 Can I also remove that dependency?
 
You might need to delete those, after making that backup.
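
Before deleting anything, it may be worth confirming what, if
anything, those OIDs still point to; for example:

SELECT * FROM pg_namespace WHERE oid = 145167;

SELECT classid::regclass, objid, refobjid
  FROM pg_depend
 WHERE refobjid = 145167;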
 
 We are using the 8.2.5 version of PG.
 
For starters you should upgrade to the last version of the
now-out-of-support 8.2 release.  After getting past the immediate
issue with the dump, you should look at upgrading to a supported
release.
 
www.postgresql.org/support/versioning/
 
-Kevin



Re: [ADMIN] german sort is wrong

2012-03-21 Thread Kevin Grittner
Reinhard Asmus reinhard.as...@spdfraktion.de wrote:
 
 When I make a sort, this is the result:
 
 [vowel with umlaut sorts equal to vowel without]
 
 In German this is wrong. What is the problem?
 
It appears to be one of three different right ways:
 
http://en.wikipedia.org/wiki/German_alphabet#Sorting
 
Is there a different collation available on your OS to sort names? 
 
-Kevin



Re: [ADMIN] Update actions (with user name) inside PostgreSQL DB - any version on postgreSQL

2012-03-14 Thread Kevin Grittner
Khangelani Gama kg...@argility.com wrote:
 
 The issue we have is that many Linux users have root access to
 the system.
 
Which gives them rights to impersonate any other user on the system
and to erase any audit trail written on that system.
 
 Auditors want PostgreSQL to tell who updated what inside the
 database.
 
You might be able to create something which looks plausible without
solving the first problem, but it wouldn't be at all trustworthy. 
Consider limiting access to root on your database servers and, in
general, pay attention to the concept of separation of duties[1].
 
-Kevin
 
[1] http://en.wikipedia.org/wiki/Separation_of_duties



Re: [ADMIN] triggers are not shared between parent and child tables?

2012-03-12 Thread Kevin Grittner
Rural Hunter ruralhun...@gmail.com wrote:
 
 Triggers are not shared between parent and child tables. Is this
 true?
 
Yes.
 
You can use the same trigger *function* for more than one trigger
though.
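
For example (the names here are invented), one function can back a
trigger on each table:

CREATE TRIGGER parent_audit BEFORE INSERT OR UPDATE ON parent_tbl
    FOR EACH ROW EXECUTE PROCEDURE audit_fn();

CREATE TRIGGER child_audit BEFORE INSERT OR UPDATE ON child_tbl
    FOR EACH ROW EXECUTE PROCEDURE audit_fn();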
 
-Kevin



Re: [ADMIN] How to back up selected rows using

2012-03-12 Thread Kevin Grittner
Piyush Lenka lenka.piy...@gmail.com wrote:
 
 How can I back up only 5 records from that table using pg_dump or
 psql or both?
 
In psql:
 
\copy ( select * from that_table where ... ) to 'filename'
 
http://www.postgresql.org/docs/9.1/interactive/app-psql.html
 
Search the page for \copy
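
For example, to save five rows (the ORDER BY and output format are
placeholders; adjust to taste):

\copy (SELECT * FROM that_table ORDER BY id LIMIT 5) TO 'five_rows.csv' CSV HEADER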
 
-Kevin



Re: [ADMIN] triggers are not shared between parent and child tables?

2012-03-12 Thread Kevin Grittner
Rural Hunter ruralhun...@gmail.com wrote:
 
 Is it worth mentioning in the docs?
 
Do you have a suggestion for what language you would have found
helpful, or which section(s) of the docs should be modified?
 
-Kevin



Re: [ADMIN] Disable TRUST authentication mode

2012-03-10 Thread Kevin Grittner
c k  wrote:
 
 One of our customers found that a few of its employees are trying
 to change the data without having the proper rights. The simplest
 way is to get control of the server, change the authentication
 mode to trust, and restart the server.
 
There's your problem right there.
 
I once had the great pleasure and honor of attending a luncheon where
Admiral Grace Hopper[1] spoke.  One of the topics she addressed was
security.  She emphasized that if someone has physical access to your
hardware, the game is over.  She asserted that if anyone in the room
gave her ten minutes alone with their computer, she could breach
security, and dared those in attendance to let her prove it.
(Nobody took her up on it.)
 
Without getting into gory details, I realize that there are
techniques which could make certain types of attack difficult even
with physical access, but there are some absolute security
deal-breakers.  If someone can log on to the OS running your database
as the root user, you had better trust that person, because they can
do pretty much anything.  Any sense that you're secure in the face of
an untrusted user with root access is purely illusory.  What's to
stop them from creating a custom version of any software (including
PostgreSQL) which has a back-door access that lets them in?
 
It seems to me that you either need to look at providing your
software as a service, so that you retain control of the hardware, or
educate your customers on security principles.
 
-Kevin
 
[1] http://en.wikipedia.org/wiki/Grace_Hopper





Re: [ADMIN] Postgres server crashing unexpectedly.

2012-03-08 Thread Kevin Grittner
umashankar narayanan umashan...@graffiti.net wrote:
 Version : 8.3
 
 
  Below is the log from the server.
 
  
 -
 --
 
The above is everything that showed up on your post.  Make sure
you're not attaching large images, files which can only be viewed
on a particular operating system, or anything else unusual.
 
Please read this and try again:
 
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
 
-Kevin


