Hi there,
I am using a PostgreSQL 9.2 slave for some reports, and I noticed that
they are sometimes held up because certain activity on the master, like
vacuums, can affect long-running queries on the slave.
Looking at the pg_locks view, I can see that the recovery process on the
slave is holding some
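For what it's worth, a query along these lines shows which locks on the standby have not been granted and which sessions are involved. This is only a sketch, assuming 9.2's column names:

```sql
-- Sketch: list ungranted locks on the standby together with the sessions
-- involved. pg_stat_activity exposes "pid" as of 9.2 (it was "procpid"
-- in earlier releases).
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.pid,
       a.query,
       now() - a.query_start AS blocked_for
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```

How long the startup process waits before cancelling conflicting standby queries is governed by max_standby_streaming_delay; hot_standby_feedback (9.1+) can avoid some of these conflicts at the cost of more bloat on the master.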
Check this out:
http://www.postgresql.org/support/versioning/
Cheers,
A.A.
On Mon, Sep 9, 2013 at 8:17 PM, rajkphb rajkph...@gmail.com wrote:
Hi, I am relatively new to PostgreSQL. I did not find any detailed
documentation on patching. Please provide me some links where I can get
some
Hi there,
I am setting up a 2-node database cluster (master/standby) on PostgreSQL 9.1,
and it looks to me that the DRBD option could be easier to maintain and keep
running after several failovers in a row, especially because after a
failover operation it is not too friendly to set back up automatically
Hi there,
And why not ship older WAL files to the target on a regular basis?
On the master, a crontab job can ship the wanted WAL files (those more
than n hours old), clean up the ones already shipped (check the rsync
options), on a regular schedule.
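A minimal sketch of such a cron job, assuming GNU find and rsync; the directories, the host name, and the 3-hour cutoff are invented for illustration, not values from this thread:

```shell
#!/bin/sh
# Sketch of a WAL-shipping cron job; all paths, the host, and the cutoff
# are placeholders.
ARCHIVE_DIR=/var/lib/pgsql/wal_archive     # where archive_command drops WAL
TARGET=standby:/var/lib/pgsql/wal_restore/ # rsync destination on the slave
AGE_MIN=180                                # only ship segments >3 hours old

# Ship finished segments older than the cutoff, deleting each local copy
# only after rsync has confirmed the transfer.
find "$ARCHIVE_DIR" -maxdepth 1 -type f -mmin +"$AGE_MIN" -printf '%f\0' |
  rsync -a --from0 --files-from=- --remove-source-files "$ARCHIVE_DIR/" "$TARGET"
```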
A.A.
On 07/26/2012 02:24
Hi,
Obviously it will always depend on the constraints in your schema, but
why not dump the databases out and import them into a single one
properly? Once you have the new database, it should be easy to catch up
with the 2 databases before the switchover (with statements like
insert/update
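The dump-and-import step could be as simple as this sketch; all the database names here ("sales", "billing", "merged") are made up for illustration:

```shell
# Dump each source database in custom format, then restore both into one
# freshly created target database.
pg_dump -Fc -f sales.dump   sales
pg_dump -Fc -f billing.dump billing

createdb merged
pg_restore -d merged sales.dump
pg_restore -d merged billing.dump
```

If both sources define objects with the same names in the same schema, you would first move one side into its own schema before restoring.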
Hi there,
I have been looking for a good solution to back up a PostgreSQL 8.4
database server (not pg_dump), and the only options I seem to have are
either OmniPITR or a custom-coded solution.
I am a little bit afraid of setting up OmniPITR in production for
archiving backups,
Thanks a lot, Hari,
very resourceful; you have been very helpful.
cheers,
A.A.
On 06/07/2012 12:47 AM, hari.fu...@gmail.com wrote:
Amador Alvarez <aalva...@d2.com> writes:
Any idea how to do (COMMENT ON SCHEMA x IS 'y') with 'y' as a variable?
You could use PL/pgSQL's EXECUTE for that:
DO $$BEGIN
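Filled out, such a DO block might look like this; the schema name and the comment text are placeholders, and format()'s %I/%L take care of the quoting:

```sql
DO $$
BEGIN
  -- %I quotes the identifier, %L quotes the value as a literal, so the
  -- comment text can be built from a variable or expression.
  EXECUTE format('COMMENT ON SCHEMA %I IS %L',
                 'my_schema',
                 'refreshed on ' || now()::date);
END$$;
```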
Thanks Hari and Matthias,
It is a very good idea; however, the schema names are meaningful and not
allowed to have a date attached.
Regarding the comment solution (COMMENT ON SCHEMA x IS 'y'), it sounds
great, and I tried running different examples without a happy ending, as 'y'
must be a literal
Thanks Tom,
I will figure out, then, how to add a newly created schema to the
schema list to be backed up (dumped), though not directly as I expected.
Cheers,
A.A.
On 06/05/2012 05:43 PM, Tom Lane wrote:
Amador Alvarez <aalva...@d2.com> writes:
I would like to know if it is possible to get the
Hi,
I would start with a single high-performance, well-tuned database, focusing
mainly on dealing efficiently with concurrent activity and identifying
the real hot spots.
If you then find that you really do need more database power,
consider adding new databases and relocating some users
Hi Scott,
Why don't you replicate this master to the other location(s) using other
methods like Bucardo? You can pick the tables you really want to get
replicated there.
For the backup, turn to a hot backup (tar of $PGDATA) plus archiving: easier,
faster, and more efficient than a logical copy with
I mean Bucardo (even though there are more tools like this one) just
for the replication, and the hot database backup only for the
backup; only one bounce is needed to turn archiving on, and you
do not need to shut anything down during the backup.
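The hot backup part can be as simple as this sketch of an 8.4/9.x-era exclusive base backup; the paths are placeholders, and archive_mode must already be turned on:

```shell
#!/bin/sh
# Sketch of an exclusive base backup (pre-9.6 style); PGDATA and the
# backup destination are placeholders.
psql -U postgres -c "SELECT pg_start_backup('nightly', true);"
tar czf /backups/pgdata-$(date +%F).tar.gz -C /var/lib/pgsql data
psql -U postgres -c "SELECT pg_stop_backup();"
```

The WAL segments archived between pg_start_backup() and pg_stop_backup() must be kept alongside the tarball for the backup to be restorable.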
A.A
On 04/25/2012
Usually the standard location for data is /var/lib/pgsql/data for
PostgreSQL 8.
So try to restore this directory first, and everything underneath it.
Only with that can you hopefully restore the whole system, assuming that
any tablespaces that were created are under the standard location.
I would ask others to
, April 16, 2012 10:55 PM
To: pgsql-admin@postgresql.org
Subject: Re: Recreate primary key without dropping foreign keys?
On 04/16/2012 07:02 PM, amador alvarez wrote:
How about deferring the FKs while recreating the PK?
Or using a temporary parallel table for the other tables (FKs) to point
at, and swapping it in on the recreation.
Cheers,
A.A
On 04/16/2012 06:54 AM, Chris Ernst wrote:
On 04/16/2012 02:39 AM, Frank Lanitz wrote:
Am 16.04.2012 10:32, schrieb
What are you using for replication?
-Kevin
Hi Kevin,
I set up a master-master asynchronous replication of one database with Bucardo
4.4.5, and I am testing 4.99.3 right now.
I know the standard settings for when a conflict between rows with the same id comes up:
source - the rows on the source database
Hi there,
I am trying to find any kind of information or examples for dealing with
custom conflict resolution on swap syncs in a master-master replication.
Any clue will be very welcome,
Thanks in advance,
Amador A.
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make
Hi there,
I wonder why you are considering this solution: if something wrong
gets into the data (logical corruption, a user error), it will spread
to both locations. Would a delayed standby database not be better?
I am curious because I am setting this up right now and do not get all