Re: [ADMIN] cp1250 problem

2011-02-16 Thread Achilleas Mantzios
On Wednesday 16 February 2011 00:00:10, Jan-Peter Seifert wrote:
 Hello,
 
 On 15.02.2011 12:26, Achilleas Mantzios wrote:
  On Tuesday 15 February 2011 12:44:31, Lukasz Brodziak wrote:
  Hello,
 
  How can I set the PostgreSQL locale and encoding to pl_PL.cp1250? All I
  can do is pl_PL.UTF-8.
  I have PG 8.2.4 on Ubuntu 10.04 (Polish version).
 
 There are no code pages on Ubuntu. The nearest you can get seems to be
 the LATIN2 encoding (and a compatible locale). These charsets are NOT
 identical, though.
 
  The locale for your whole cluster is defined in postgresql.conf.
 
 Not really - LC_COLLATE and LC_CTYPE are set during initialization of
 the cluster by initdb. You can verify the settings with pg_controldata:
 

That's true. Thanx.

 http://www.postgresql.org/docs/8.2/interactive/app-pgcontroldata.html
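As a quick illustration of what that check looks like, here is a small Python sketch that pulls the two locale fields out of pg_controldata's output. The sample text below is illustrative, not captured from a real cluster; in practice you would run `pg_controldata $PGDATA` and read the lines directly.

```python
# Sketch: extract LC_COLLATE / LC_CTYPE from pg_controldata output.
# The sample below is illustrative, not from a real cluster.
sample = """\
pg_control version number:            822
Catalog version number:               200611241
LC_COLLATE:                           pl_PL.UTF-8
LC_CTYPE:                             pl_PL.UTF-8
"""

def control_locales(text):
    # Collect only the two locale-related fields into a dict.
    out = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("LC_COLLATE", "LC_CTYPE"):
            out[key.strip()] = value.strip()
    return out

print(control_locales(sample))
# {'LC_COLLATE': 'pl_PL.UTF-8', 'LC_CTYPE': 'pl_PL.UTF-8'}
```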
 
 As of PostgreSQL 8.4 you can set these two locale settings per database,
 independently of the cluster-wide settings.
 If you plan to upgrade to a newer major server version, beware of the
 removal of some implicit data type casts, among other changes, as of 8.3.
 
 Peter
 
 



-- 
Achilleas Mantzios

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


Re: [ADMIN] LC_COLLATE and pg_upgrade

2011-02-16 Thread Peter Eisentraut
On Mon, 2011-02-14 at 14:18 +0100, Bernhard Schrader wrote:
 As far as I can tell, LC_COLLATE is a read-only variable that is set
 during initdb. But why doesn't the pg_upgrade script see that utf8 and
 UTF-8 are the same? Is it just a string compare?

Why don't you just reinitialize your new database cluster with the same
locale spelling as the old one?

Your points are valid, but unfortunately difficult to handle in the
general case.





Re: [ADMIN] LC_COLLATE and pg_upgrade

2011-02-16 Thread Bernhard Schrader
If I reinitialize the database cluster, won't I have to pg_dump and
restore my data? In that case the downtime would be too big, so I'm
searching for a faster way.

utf8 and UTF-8 should be the same in my opinion, so I think there must
be a way to just change this value so that pg_upgrade can do its work,
or is this impossible?

regards

On Wednesday, 16.02.2011, at 13:54 +0200, Peter Eisentraut wrote:
 On Mon, 2011-02-14 at 14:18 +0100, Bernhard Schrader wrote:
  As far as I can tell, LC_COLLATE is a read-only variable that is set
  during initdb. But why doesn't the pg_upgrade script see that utf8 and
  UTF-8 are the same? Is it just a string compare?
 
 Why don't you just reinitialize your new database cluster with the same
 locale spelling as the old one?
 
 Your points are valid, but unfortunately difficult to handle in the
 general case.
 
 
 






Re: [ADMIN] Moving the data directory

2011-02-16 Thread Dean Gibson (DB Administrator)


On 2011-02-16 04:51, Greg Smith wrote:


Debian and Ubuntu installations have a unique feature where you can 
have multiple PostgreSQL versions installed at the same time.


That's pretty easy to do with any Linux distribution.

--
Mail to my list address MUST be sent via the mailing list.
All other mail to my list address will bounce.




[ADMIN] Replication by schema

2011-02-16 Thread Armin Resch
Hi there,

what options do exist to replicate from a master by schema?

What I'm really after is this scenario:

Say, I have 100 databases out in the field. All of them have the same schema
and are autonomous masters. Now, at a central location, I want to replicate
from all masters to a central slave (which would have the combined data
footprint from all masters). Now, if schema-based replication was possible,
I would configure the slave to replicate the first master's schema 'A' to
the central slave's schema 'Master01', the second master's schema 'A' to the
central slave's schema 'Master02', etc.

Does that make sense? Is there appetite from anyone else for something like
that?
Would there be insurmountable architectural hurdles to make this a built-in
replication option?

Rgds,
-ar
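The schema mapping Armin describes could be sketched like this. This is purely my own illustration of the desired naming scheme ('A' on each master becoming 'Master01', 'Master02', ... on the central slave); nothing here is an existing replication feature, and the master identifiers are hypothetical.

```python
def central_schema_map(n_masters, source_schema="A"):
    # For each field master, map its source schema to a distinct
    # schema name on the central slave: A -> Master01, Master02, ...
    return {
        f"master{i:02d}": {source_schema: f"Master{i:02d}"}
        for i in range(1, n_masters + 1)
    }

print(central_schema_map(2))
# {'master01': {'A': 'Master01'}, 'master02': {'A': 'Master02'}}
```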


Re: [ADMIN] Replication by schema

2011-02-16 Thread Scott Marlowe
On Wed, Feb 16, 2011 at 1:33 PM, Scott Marlowe scott.marl...@gmail.com wrote:
 On Wed, Feb 16, 2011 at 12:45 PM, Armin Resch ar...@reschab.net wrote:
 Hi there,

 what options do exist to replicate from a master by schema?

 What I'm really after is this scenario:

 Say, I have 100 databases out in the field. All of them have the same schema
 and are autonomous masters. Now, at a central location, I want to replicate
 from all masters to a central slave (which would have the combined data
 footprint from all masters). Now, if schema-based replication was possible,
 I would configure the slave to replicate the first master's schema 'A' to
 the central slave's schema 'Master01', the second master's schema 'A' to the
 central slave's schema 'Master02', etc.

 Making sense? Is there appetite by anyone else for something like that?
 Would there be insurmountable architectural hurdles to make this a built-in
 replication option?

 Slony can already do all of this.

OK, it can't rename schemas, but if you name them something unique in
both places it will work.



Re: [ADMIN] Postgres on NAS/NFS

2011-02-16 Thread Bruce Momjian
Bryan Keller wrote:
 I am considering running Postgres with the database hosted on a NAS
 via NFS. I have read a few things on the Web saying this is not
 recommended, as it will be slow and could potentially cause data
 corruption.
 
 My goal is to have the database on a shared filesystem so in case of
 server failure, I can start up a standby Postgres server and point it
 to the same database. I would rather not use a SAN as I have heard
 horror stories about managing them. Also they are extremely expensive.
 A DAS would be another option, but I'm not sure if a DAS can be
 connected to two servers for server failover purposes.
 
 Currently I am considering not using a shared filesystem and instead
 using replication between the two servers.
 
 I am wondering what solutions others have used for my active-passive
 Postgres failover scenario. Is a NAS still not a recommended approach?
 Will a DAS work? Or is replication the best approach?

The last section of this documentation page talks about NFS usage:

http://www.postgresql.org/docs/9.0/static/creating-cluster.html

--
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB  http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [ADMIN] Postgres on NAS/NFS

2011-02-16 Thread Greg Smith

Bryan Keller wrote:

It sounds like NFS is a viable solution nowadays. I am still going to shoot
for using iSCSI; given it is a block-level protocol rather than file-level,
it seems to me it would be better suited to database I/O.


Please digest carefully where Joe Conway pointed out that it took them 
major kernel-level work to get NFS working reliably on Linux.  On 
anything but Solaris, I consider NFS a major risk still; nothing has 
improved nowadays relative to when people used to report regular 
database corruption running it on other operating systems.  Make sure 
you read 
http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html 
and mull over the warnings in there before you assume it will work, too.


I don't think I've ever heard from someone happy with an iSCSI 
deployment, either.  The only way you could make an NFS+iSCSI storage 
solution worse is to also use RAID5 on the NAS.


I'd suggest taking a look at 
http://wiki.postgresql.org/wiki/Shared_Storage and consider how you're 
going to handle fencing issues as well here.  One of the reasons SANs 
tend to be preferred in this area is because fencing at the 
fiber-channel switch level is pretty straightforward.  DAS running over 
fiber-channel can offer the same basic features though, it's just not as 
common to use a switch in that environment.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [ADMIN] Postgres on NAS/NFS

2011-02-16 Thread Joshua D. Drake
On Wed, 2011-02-16 at 15:56 -0500, Greg Smith wrote:
 Bryan Keller wrote:
  It sounds like NFS is a viable solution nowadays. I am still going to shoot
  for using iSCSI; given it is a block-level protocol rather than file-level,
  it seems to me it would be better suited to database I/O.

 
 Please digest carefully where Joe Conway pointed out that it took them 
 major kernel-level work to get NFS working reliably on Linux.  On 
 anything but Solaris, I consider NFS a major risk still; nothing has 
 improved nowadays relative to when people used to report regular 
 database corruption running it on other operating systems.  Make sure 
 you read 
 http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html 
 and mull over the warnings in there before you assume it will work, too.
 
 I don't think I've ever heard from someone happy with an iSCSI 
 deployment, either.  The only way you could make an NFS+iSCSI storage 
 solution worse is to also use RAID5 on the NAS.
 
 I'd suggest taking a look at 
 http://wiki.postgresql.org/wiki/Shared_Storage and consider how you're 
 going to handle fencing issues as well here.  One of the reasons SANs 
 tend to be preferred in this area is because fencing at the 
 fiber-channel switch level is pretty straightforward.  DAS running over 
 fiber-channel can offer the same basic features though, it's just not as 
 common to use a switch in that environment.

In short, use DAS or a SAN. iSCSI suffers from all kinds of performance
issues, and NFS is just Michael Myers scary.

With DAS systems able to handle up to 192 drives over 6Gb/s links these
days, combined with a volume manager, you can solve a lot of problems
without breaking the bank.

JD

-- 
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt




Re: [ADMIN] LC_COLLATE and pg_upgrade

2011-02-16 Thread Bruce Momjian
Bernhard Schrader wrote:
 Does anyone know if I could override these settings?
 LC_COLLATE should only matter for sort order, and since both are utf8,
 it should work, or am I totally wrong?
 
 As far as I can tell, LC_COLLATE is a read-only variable that is set
 during initdb. But why doesn't the pg_upgrade script see that utf8 and
 UTF-8 are the same? Is it just a string compare?

Yes, just a string compare.
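A tiny Python illustration (my own sketch, not pg_upgrade's actual code) of why "utf8" and "UTF-8" fail a plain string compare even though a more lenient, normalized comparison would treat them as equal. The `normalize` helper is hypothetical, purely to show the idea:

```python
def normalize(locale_name):
    # Illustrative normalization: lowercase and drop non-alphanumerics,
    # so "en_US.utf8" and "en_US.UTF-8" compare equal.
    return "".join(ch for ch in locale_name.lower() if ch.isalnum())

old, new = "en_US.utf8", "en_US.UTF-8"
print(old == new)                        # False: the plain string compare
print(normalize(old) == normalize(new))  # True: the lenient comparison
```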

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB  http://enterprisedb.com

  + It's impossible for everything to be true. +



[ADMIN] Timeline Issue

2011-02-16 Thread Selva manickaraja
Dear All,

We have a primary running continuous archiving to a secondary. We managed to
test fail-over to the secondary by stopping the database on the primary.
Then some transactions were tested on the secondary. It was acting well as a
primary, accepting both reads and writes.

Now we want to revert this acting primary back to secondary and bring up the
actual primary. We know that the secondary had gone out of sync, ahead of the
primary. So we did a PITR on this secondary to a point before the initial
primary was brought down. Now the primary is up and the secondary is brought
up as a hot standby. The secondary complains that it cannot recover because
its timeline 2 does not match the primary's timeline 1.

How can this be resolved?

Thank you.

Regards,

Selvam


Re: [ADMIN] Timeline Issue

2011-02-16 Thread Scott Mead
On Wed, Feb 16, 2011 at 8:29 PM, Selva manickaraja mavle...@gmail.com wrote:

 Dear All,

 We have a primary running continuous archiving to a secondary. We managed to
 test fail-over to the secondary by stopping the database on the primary.
 Then some transactions were tested on the secondary. It was acting well as a
 primary, accepting both reads and writes.

 Now we want to revert this acting primary back to secondary and bring up
 the actual primary. We know that the secondary had gone out of sync, ahead
 of the primary. So we did a PITR on this secondary to a point before the
 initial primary was brought down. Now the primary is up and the secondary is
 brought up as a hot standby. The secondary complains that it cannot recover
 because its timeline 2 does not match the primary's timeline 1.


Once you open the standby for use, it cannot be put back into standby mode.
You will need to rebuild the standby server from the primary.

--Scott




 How can this be resolved?

 Thank you.

 Regards,

 Selvam



Re: [ADMIN] Timeline Issue

2011-02-16 Thread Scott Mead
(Added list back to keep me honest :-)

On Wed, Feb 16, 2011 at 8:41 PM, Selva manickaraja mavle...@gmail.com wrote:

 So what would be the best option from here on? Should the standby be
 converted to primary, and the initial primary now be nominated as standby?


Once you convert to a primary, you're stuck on it.  You'll have to create a
new standby from it; the old primary can't be used as a standby.

--Scott





 On Thu, Feb 17, 2011 at 9:36 AM, Scott Mead sco...@openscg.com wrote:

 On Wed, Feb 16, 2011 at 8:29 PM, Selva manickaraja mavle...@gmail.comwrote:

 Dear All,

 We have a primary running continuous archiving to a secondary. We managed
 to test fail-over to the secondary by stopping the database on the primary.
 Then some transactions were tested on the secondary. It was acting well as a
 primary, accepting both reads and writes.

 Now we want to revert this acting primary back to secondary and bring up
 the actual primary. We know that the secondary had gone out of sync, ahead
 of the primary. So we did a PITR on this secondary to a point before the
 initial primary was brought down. Now the primary is up and the secondary is
 brought up as a hot standby. The secondary complains that it cannot recover
 because its timeline 2 does not match the primary's timeline 1.


 Once you open the standby for use, it cannot be put back into standby
 mode.  You will need to rebuild the standby server from the primary.

 --Scott




 How can this be resolved?

 Thank you.

 Regards,

 Selvam






[ADMIN] Trigger File Behaviour

2011-02-16 Thread Selva manickaraja
Hi,

We tried setting the trigger file for fail-over purposes, but we just can't
understand how it works. Each time the secondary is started, the trigger file
is removed. How can we implement auto fail-over if this happens?

Thank you.

Regards,

Selvam
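For context, the trigger-file mechanism is just an existence check: the standby periodically looks for the file named by trigger_file in recovery.conf and leaves recovery when the file appears. A rough Python sketch of that polling logic (my own illustration, not PostgreSQL's code; the file path and poll parameters are made up for the demo):

```python
import os
import tempfile
import time

def wait_for_trigger(path, poll_seconds=5.0, max_polls=None):
    # Poll until the trigger file exists; True means "promote now".
    # max_polls bounds the demo so it cannot loop forever.
    polls = 0
    while max_polls is None or polls < max_polls:
        if os.path.exists(path):
            return True   # file present: leave recovery, become primary
        polls += 1
        time.sleep(poll_seconds)
    return False

# Demo with a temporary file standing in for the trigger file.
trigger = os.path.join(tempfile.mkdtemp(), "failover.trigger")
print(wait_for_trigger(trigger, poll_seconds=0.01, max_polls=1))  # False: no file yet
open(trigger, "w").close()  # an operator "touches" the trigger file
print(wait_for_trigger(trigger, poll_seconds=0.01, max_polls=1))  # True: promote
```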


[ADMIN] Fwd: Trigger File Behaviour

2011-02-16 Thread Selva manickaraja
Any assistance available on this topic?

-- Forwarded message --
From: Selva manickaraja mavle...@gmail.com
Date: Thu, Feb 17, 2011 at 10:10 AM
Subject: Trigger File Behaviour
To: pgsql-admin@postgresql.org


Hi,

We tried setting the trigger file for fail-over purposes, but we just can't
understand how it works. Each time the secondary is started, the trigger file
is removed. How can we implement auto fail-over if this happens?

Thank you.

Regards,

Selvam