Paul Breen wrote:
> Hello all, I wonder if anyone could help me.
>
> I'm currently working on a project where it is essential that we have some
> kind of replication of our production database. I've found a number of
> 'near' solutions (e.g., doing a piped 'COPY IN/OUT' using psql - one
> local
As my last try to post was stalled, here again my question.
On Fri, Oct 29, 2010 at 3:13 PM, Gerhard Hintermayer
wrote:
> is this possible (just tried this on a backup server) and should it be used ?
>
> btw I'm using gentoo. Is also a reinstallation of the same version
> (b
is this possible (just tried this on a backup server) and should it be used ?
btw I'm using gentoo. Is a reinstallation of the same version
(because of changed library dependencies) also safe or unsafe ?
thanks for any input
Gerhard
--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
Sorry for posting this again, but my last post on this problem
wasn't answered satisfactorily. (maybe I didn't find the
right words for my question ;-) )
is this (or a minor upgrade) a "safe" way to go under linux ? I'm on a
production system and I don't want to restart the DB server whi
On Tue, Nov 2, 2010 at 4:06 PM, Robert Gravsjö wrote:
>
>
> On 2010-11-02 15.51, Gerhard Hintermayer wrote:
>>
>> Thanks, I know that. But slots are reserved per major version (i.e.
>> one for 8.1, one for 8.4, one for 9.0), but you don't have multiple
>> sl
and of course install binaries somewhere else, use a different PG_DATA
dir ... you're soon losing simplicity ;-) . Some distros support this
natively. Gentoo does; I don't know about CentOS.
On Thu, Nov 4, 2010 at 4:50 PM, Szymon Guz wrote:
>
>
> On 4 November 2010 16:13, Ramiro Barreca
You could try connecting to each possible IP in your network, but to
catch each instance you should also check nonstandard ports (i.e.
different from 5432) - kind of hacking ... AFAIK (or so I read ;-) )
Rendezvous/Bonjour is implemented in the OS X port only.
regards
Gerhard
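A crude probe along those lines can be sketched in bash (the host and port
list below are made up, not from the original post; /dev/tcp needs bash
proper, and `timeout` comes from GNU coreutils):

```shell
#!/usr/bin/env bash
# Sketch: check which candidate PostgreSQL ports accept a TCP connection.
# Host and port list are assumptions for illustration only.
probe_port() {
  local host=$1 port=$2
  # bash's /dev/tcp pseudo-device attempts a TCP connect on fd 3
  if timeout 1 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

for port in 5432 5433 5434; do
  printf '%s:%s %s\n' 127.0.0.1 "$port" "$(probe_port 127.0.0.1 "$port")"
done
```

Scanning a whole subnet this way is slow and, as noted above, borders on
hacking - get permission first.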
On Sun, Nov 14, 2010 at 5:5
your distro has a similar feature.
regards
Gerhard Hintermayer
On Wed, Dec 8, 2010 at 12:05 AM, Sridhar Reddapani
wrote:
> Hi All,
>
> I am trying to install second copy of postgres on same machine [with
> different prefix, data dir, servicename, port etc..] which already has one
>
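The key to a side-by-side install is a separate data directory and a
non-default port; a minimal sketch (the paths and port below are
assumptions, not from the original post):

```
# postgresql.conf of the second instance (hypothetical values)
port = 5433

# initialise and start it against its own data directory, roughly:
#   initdb -D /var/lib/postgresql/9.0/data2
#   pg_ctl -D /var/lib/postgresql/9.0/data2 -o "-p 5433" start
```

With distinct prefix, data dir and port the two servers never touch each
other's files, and the unix socket names differ too, since the port is
part of the socket file name.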
What exactly do you mean ? Upgrade libraries which postgres depends on
while the server is running ? Depends on the distro I think. I did
this with gentoo, e.g. upgrade glibc while postgres (and other
services) still use the old version. Apart from memory issues (both
versions have to be kept in m
I managed to set up a streaming replication with 9.0.3. This works like a charm.
When I pull the trigger - I mean create the trigger file ;-) - I find
the following entries in the server log, is this normal or am I doing
something nasty ?
[ @]LOG: streaming replication successfully connected to p
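For context, a minimal recovery.conf on a 9.0 standby for this kind of
setup could look like this (host, user and trigger path are made-up
placeholders):

```
# recovery.conf on the standby (hypothetical values)
standby_mode = 'on'
primary_conninfo = 'host=primary.example port=5432 user=replication'
trigger_file = '/tmp/pg_failover_trigger'
```

Touching the trigger file ends recovery and promotes the standby.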
Hi,
I'm trying to set up at least 3 servers using hot standby streaming
replication. I'd like to have one primary and 2 secondaries (in 2
different locations in case of a disaster in the server room).
A primary
B secondary 1
C secondary 2 (at a different location than A and B)
Are the following acti
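Since 9.0 has no cascading replication (that arrived only in later
releases), both standbys have to stream directly from A; a rough sketch of
the relevant settings (host names are placeholders):

```
# on A (primary), postgresql.conf:
wal_level = hot_standby
max_wal_senders = 2          # one sender per standby

# on B and C, postgresql.conf:
hot_standby = on

# on B and C, recovery.conf (both point at A):
standby_mode = 'on'
primary_conninfo = 'host=A.example port=5432 user=replication'
```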
On Thu, Apr 7, 2011 at 11:40 AM, Gerhard Hintermayer
wrote:
> Hi,
> I'm trying to set up at least 3 servers using hot standby streaming
> replication. I'd like to have one primary and 2 secondaries (in 2
> different locations in case of a disaster in the server room).
>
the tuple is in the
table (by just listing all tuples and seeing if it is there)
Gerhard
On Mon, Apr 11, 2011 at 6:09 PM, Kevin Grittner
wrote:
> Gerhard Hintermayer wrote:
>
>> Unfortunately I had to insert 2.1 reindex database [for all
>> databases] after creating the t
On Mon, Apr 11, 2011 at 7:25 PM, Kevin Grittner
wrote:
> Gerhard Hintermayer wrote:
>
>> Because tests & docs say so:
>>
> http://www.postgresql.org/docs/9.0/static/continuous-archiving.html#CONTINUOUS-ARCHIVING-CAVEATS
>
> I asked because I didn't rememb
at 7:55 PM, Kevin Grittner
wrote:
> Gerhard Hintermayer wrote:
>
>> I have e.g. a table with:
>
>> Indexes:
>> "idx_auftrag_l1" hash (a_nr)
>
> Any *hash* index will need to be rebuilt. Like that one.
>
>> Seeing this and reading the docs abo
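The rebuild itself is short; a sketch using the index shown above (run it
after the standby leaves recovery, since 9.0 hash indexes are not
WAL-logged):

```sql
-- rebuild just the hash index
REINDEX INDEX idx_auftrag_l1;
-- or rebuild all indexes on the table at once
REINDEX TABLE auftrag_l1;
```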
t 9:06 PM, Kevin Grittner
wrote:
> Gerhard Hintermayer wrote:
>
>> Just checked my indices, looks like the only table in all my 5
>> DB's that has a hash index is the one I ran tests on.
>
> Well, actually that's pretty lucky. If you'd tested with other
>
This should of course be rsync -a ... . Much better now :-)
Sorry for this extra round.
On Tue, Apr 12, 2011 at 2:32 PM, Gerhard Hintermayer <
gerhard.hinterma...@gmail.com> wrote:
> my basebackup is done via the following ($1 is the parameter for the
> server where the basebackup i
Hi, I do rsync -a --delete new_primary_server::postgresql-data/
/var/lib/postgresql/9.0/data/
Take care to use -a; I had -r and wondered why the rsync took so long,
even though the data was nearly the same.
assuming you have configured rsyncd on the server as:
[postgresql-data]
uid = postgres
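A fuller module definition along those lines could be (everything beyond
the module name and uid is an assumption, since the original config is cut
off):

```
# /etc/rsyncd.conf on the server (hypothetical beyond [postgresql-data]/uid)
[postgresql-data]
    path = /var/lib/postgresql/9.0/data
    uid = postgres
    gid = postgres
    read only = yes
    hosts allow = 10.0.0.0/24
```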
om them
in case something goes wrong.
The recommended setup should be covered in more detail in the docs, since
this is a topic where you can make a lot of mistakes :-(
Gerhard
On Tue, Apr 19, 2011 at 11:16 AM, rudi wrote:
> On 04/19/2011 10:09 AM, Gerhard Hintermayer wrote:
>
>
I'm trying to make use of indices for the following table
auftrag_l1
a_nr | integer
ts | timestamp without time zone
and 10-15 other columns
I do have indices
"idx_auftrag_l1_anr" btree (a_nr)
"idx_auftrag_l1_ts" btree (ts)
when I query the table by
explain analyze select * fr
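Since the query itself is cut off above, here is a hedged guess at the
shape being discussed, plus the multicolumn index that typically serves
such a filter (the WHERE values are invented):

```sql
-- hypothetical query shape against auftrag_l1
EXPLAIN ANALYZE
SELECT *
FROM auftrag_l1
WHERE a_nr = 4711
  AND ts >= timestamp '2011-04-01';

-- a combined index can satisfy both predicates in one scan
CREATE INDEX idx_auftrag_l1_anr_ts ON auftrag_l1 (a_nr, ts);
```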