What do you want to have happen to the timestamp/user?
You can obviously do:
update test set t4 = 9, user = someuser, timestamp = sometimestamp
where t1 = 001;
I'm assuming you're trying to store the user and timestamp somewhere
else, though?
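If the goal is to keep those columns current automatically on every update, a BEFORE UPDATE trigger is one option. A minimal sketch, using the old 8.x EXECUTE PROCEDURE syntax; the column names (modified_by, modified_at) are assumptions, not from the original schema:

```sql
-- Hypothetical: stamp the modifying user and time on every UPDATE.
CREATE OR REPLACE FUNCTION stamp_row() RETURNS trigger AS $$
BEGIN
    NEW.modified_by := current_user;
    NEW.modified_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_test
    BEFORE UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE stamp_row();
```

With that in place the application never has to set those columns itself.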
Robin

On Wed, 2012-03-14 at 14:39 +0200, Khangelani Gama wrote:
Hi, anyone with an idea based on my latest comments below? Thanks
-Original Message-
From: Khangelani Gama [mailto:kg...@argility.com]
Sent: Wednesday, March 14, 2012 1:25 PM
To: 'Robin Iddon'; 'pgsql-admin
Constrained sudo is no substitute for proper security. If I was in
charge of a database where personal details or credit card or financial
information could be compromised I would not rely on constrained sudo.
The reason is that no matter how smart you think you are, some smarty
pants always
Check out the GRANT and REVOKE commands:
http://www.postgresql.org/docs/8.1/static/sql-grant.html
http://www.postgresql.org/docs/8.1/static/sql-revoke.html
You may want to do some revoking of PUBLIC rights followed by some
granting of rights to specific users on specific databases.
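A minimal sketch of that pattern; the database name (mydb), table (mytable), and role (appuser) are placeholders, not names from the thread:

```sql
-- Strip the default PUBLIC rights, then grant only what a
-- specific role actually needs.
REVOKE ALL ON DATABASE mydb FROM PUBLIC;
GRANT SELECT, INSERT, UPDATE ON mytable TO appuser;
```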
But, I
How do I know whether autovacuum is reclaiming space or my max_fsm
setting is just plain wrong? (I understand that autovacuum marks tuples
for reuse, but shouldn't pg_total_relation_size take that into account?)
You can find out if your FSM settings are big enough by running vacuumdb
-av (and yes, it
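The last few lines of `vacuumdb -av` output report how many FSM page slots are needed versus configured. A sketch of checking those two numbers; the sample text is an assumption of the 8.x log format, not output from a real server:

```shell
# Compare "page slots required" against the configured FSM limit.
SAMPLE="768 page slots are required to track all free space.
Current limits are:  20000 page slots, 1000 relations, using 182 KB."

needed=$(echo "$SAMPLE" | awk '/required to track/ {print $1}')
limit=$(echo "$SAMPLE" | awk '/Current limits/ {print $4}')

if [ "$needed" -gt "$limit" ]; then
    echo "FSM too small: need $needed slots, have $limit"
else
    echo "FSM OK: need $needed slots, have $limit"
fi
```

If "required" exceeds the limit, dead space is going untracked and tables will bloat no matter how often vacuum runs.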
Adam Radlowski wrote:
I can't find in the documentation, if it is possible to define our own
sort order for already compiled PostgreSQL.
I need it, because the best option for the application I am now building
would be to sort with the normal LATIN2 sort order (for non-alphanumeric
chars compatible with
Can I delete these files? Or please give me some advice.
When I had a similar problem it turned out that I had exceeded the
limits of my FSM and thus the stats table in particular was growing as
it was unable to reuse rows.
Check your FSM size in postgresql.conf and see what:
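On 8.x the relevant knobs live in postgresql.conf; a sketch with illustrative values only (size them from your own vacuumdb -av output):

```
# postgresql.conf (8.x) -- illustrative values, not recommendations
max_fsm_pages = 200000      # min 1000; ~6 bytes of shared memory each
max_fsm_relations = 1000    # min 100; ~70 bytes each
```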
Hi Steve,
If you can afford to move forwards to 8.x then you can benefit from PITR
(Point In Time Recovery).
See http://www.postgresql.org/docs/8.0/interactive/backup-online.html
You can do this without making a PITR live replica (as I understand it,
you're happy to trust your hardware so
Hi,
It is currently running on a 3GHz Xeon HT, 2GB RAM, dual 72GB disks
running RAID 1. This server is a 1U with only 2 drive bays, so I
have a potential issue with drive space.
As a result, I will be moving the db server to a Dell 1650 with 3 146GB
SCSI drives running RAID 0. System is a
Assuming that presently the server's capability as far as drives is 3
drives, would you recommend doing a RAID-1 on 2 drives and an additional
non RAIDed drive to hold the WAL? Anything else which should be moved to
the other spindle?
I would RAID-1 the WAL. Assuming you have a mechanism
Anytime you're looking at a high write load on a database (not just
PostgreSQL) you should be looking at a hardware RAID controller with
battery backed cache and RAID 1 or RAID 1+0.
We recently benchmarked the latest and greatest PCI-X SATA mega raid
controller (with NCQ support, so
Your problem here is quoting. You need to escape the quotes around the
SQL strings so that the shell (I assume you're using Unix of some
flavour) doesn't interpret them:
/usr/local/pgsql/bin/psql -d temp -c 'insert into
temp.ip_mapping_tb(ip,location)
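One way to see the quoting problem in isolation: keep the SQL in a double-quoted shell string so the single quotes around the values survive expansion. The table and column names are from the thread; the values ('10.0.0.1', 'office') are made up for illustration, and echo stands in for the actual psql call:

```shell
# The outer double quotes protect the inner single quotes from the shell.
SQL="insert into temp.ip_mapping_tb(ip, location) values ('10.0.0.1', 'office')"
echo "/usr/local/pgsql/bin/psql -d temp -c \"$SQL\""
```

Without the outer quoting layer the shell strips the single quotes before psql ever sees them, which is exactly the symptom described.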
Brendan Duddridge wrote:
What I'd like to see is a table exclusion list. I have a few very
large history tables that are never updated or deleted, only inserts
and selects.
Such a table will never trigger the vacuum rules as I understand them
(vacuum only happens on tables that have obsolete
Marc G. Fournier wrote:
I know ppl are using it to do replication, but has anyone documented
what is involved in doing so?
thanks ...
We use linux HA and linux DRBD (~RAID1 mirror between disks across a
LAN) to provide a similar replication mechanism that runs underneath
the database
Andy Shellam wrote:
Robin,
On my part it's simply the fact that I currently have two servers in
different geographical locations - and cost of new hardware is a huge issue.
I have, however, recently developed an interest in rsync but I'm unsure as
to how PG on the standby server would handle
In java, this big fat exception:
org.postgresql.util.PSQLException: An I/O error occurred while
sending to the backend.
Call stack:
org.postgresql.util.PSQLException: An I/O error occurred while
sending to the backend.
at
Try looking at the manual for the postmaster process ...
A simple way of achieving what you want is adding the -i option to the
postmaster command line which allows TCP/IP connections. Otherwise only
localhost can connect.
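From 8.0 onward the same thing is usually done in postgresql.conf rather than on the command line; a sketch, with the subnet in pg_hba.conf being a placeholder for your own network:

```
# postgresql.conf (8.0+) -- equivalent of postmaster -i
listen_addresses = '*'        # default is 'localhost'

# pg_hba.conf -- still needed to authorize the remote clients
host  all  all  192.168.0.0/24  md5
```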
There are possibly other ways of configuring this same option, but
-
From: Robin Iddon [EMAIL PROTECTED]
To: sandhya [EMAIL PROTECTED]
Cc: Postgres pgsql-admin@postgresql.org
Sent: Monday, March 20, 2006 2:28 PM
Subject: Re: [ADMIN] reg:conninfo
Try looking at the manual for the postmaster process ...
A simple way of achieving what you want is adding the -i
You need 100k pages minimum here.
I don't actually understand exactly what this is telling me though!
Take a look at
http://www.pervasivepostgres.com/lp/newsletters/2005/Insights_opensource_Nov.asp#3
It's a good overview of the FSM, how it works, and how to understand
vacuum verbose
Hi,
I wonder if anyone can help me to identify why my background
pg_autovacuum doesn't appear to clear out the stats table entries?
Here are the non-commented lines in my postgresql.conf:
max_connections = 100
shared_buffers = 1000 # min 16, at least max_connections*2, 8KB each
Hi Matt,
Thanks for the reply.
Matthew T. O'Connor wrote:
What do you mean by stats table entries? Are you saying there is
bloat in a specific table, or just the whole database?
The table in question is pg_statistic. Its pages as reported by
pg_class.relpages grow without bound until I
Third: Are you seeing in the autovacuum log that autovacuum is
actually trying to vacuum tables? Turn up the debugging to -d2 that
should give you some more info as to why autovac is (or is not) doing
what it's doing.
I will turn it on and take a look. I am guessing it will tell me that
Tom,
Tom Lane wrote:
Quite incorrect --- if that were so, there would be no dead tuples
for vacuum to remove, hm? Actually ANALYZE does an
update-or-insert-if-not-present fandango.
Thanks for the correction - I was basing my comments on the fact that
the tuple count was just climbing
In fact I just noticed that the number of stats tuples just climbed
from 1236 to 2634. The ins/del counts are still zero. I ran analyze
and the update counter went up only by 1232. For pg_autovacuum to
vacuum this table I need (2*2634)+1000 = 6268 updates, which is never
going to
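The arithmetic in that post follows the pg_autovacuum threshold formula of that era, base + scale * reltuples; a quick check using the base (1000) and scale factor (2) implied by the numbers quoted:

```python
# pg_autovacuum-style vacuum threshold; base and scale taken from
# the figures in the post, not from a config file.
def vacuum_threshold(reltuples, base=1000, scale=2.0):
    return base + scale * reltuples

print(vacuum_threshold(2634))  # 1000 + 2*2634 -> 6268.0
```

So at 2634 stats tuples the table indeed needs 6268 updates before pg_autovacuum will touch it, which an analyze-driven workload may never produce.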
Jim C. Nasby wrote:
Please upgrade.
Are you running with the default FSM settings? I'm guessing that
pg_statistics has just gotten so large that it's blowing out the FSM.
The last few lines from vacuumdb -av would verify that...
Jim,
I am running this in a critical production environment,