Re: [GENERAL] CoC [Final]

2016-01-18 Thread Stéphane Schildknecht
On 18/01/2016 19:36, Joshua D. Drake wrote:
> On 01/18/2016 10:15 AM, Kevin Grittner wrote:
>> On Mon, Jan 18, 2016 at 12:02 PM, Joshua D. Drake <j...@commandprompt.com> 
>> wrote:
>>
>>> * Participants who disrupt the collaborative space, or participate in a
>>> pattern of behaviour which could be considered harassment will not be
>>> tolerated.
>>
>> Personally, I was comfortable with the rest of it, but this one
>> made me squirm a little.  Could we spin that to say that those
>> behaviors will not be tolerated, versus not tolerating the people?
>> Maybe:
>>
>> * Disruption of the collaborative space or any pattern of
>> behaviour which could be considered harassment will not be
>> tolerated.
> 
> No argument from me. I think they both convey the same gist.
> 
> Sincerely,
> 
> JD
> 
> 
> 

I would also vote in favour of not tolerating the behaviour. I guess it would
be less open to criticism than saying a participant is not tolerated...



-- 
Stéphane Schildknecht
Contact régional PostgreSQL pour l'Europe francophone
Loxodata - Conseil, expertise et formations
06.17.11.37.42


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Charging for PostgreSQL

2016-01-06 Thread Stéphane Schildknecht
On 06/01/2016 16:54, James Keener wrote:
> As Melvin mentioned, this belongs in a new thread.

And as such, it would have been really kind to actually start a new one.

(...)
-- 
Stéphane Schildknecht
Contact régional PostgreSQL pour l'Europe francophone
Loxodata - Conseil, expertise et formations
06.17.11.37.42




Re: [GENERAL] pg_xlog on a hot_standby slave

2015-06-29 Thread Stéphane Schildknecht
On 16/06/2015 10:55, Xavier 12 wrote:
 Hi everyone,
 
 Questions about pg_xlogs again...
 I have two Postgresql 9.1 servers in a master/slave stream replication
 (hot_standby).
 
 Psql01 (master) is backuped with Barman and pg_xlogs is correctly
 purged (archive_command is used).
 
 However, Psql02 (slave) has a huge pg_xlog (951 files, 15G for only
 7 days; it keeps growing until disk space is full). I have found
 documentation, tutorials and mailing-list threads, but I don't know
 what is suitable for a slave. Leads I've found:
 
 - checkpoints
 - archive_command
 - archive_cleanup
 
 Master postgresq.conf :
 
 [...]
 wal_level = 'hot_standby'
 archive_mode = on
 archive_command = 'rsync -az /var/lib/postgresql/9.1/main/pg_xlog/%f
 bar...@nas.lan:/data/pgbarman/psql01/incoming/%f'
 max_wal_senders = 5
 wal_keep_segments = 64
 autovacuum = on
 
 Slave postgresql.conf :
 
 [...]
 wal_level = minimal
 wal_keep_segments = 32
 hot_standby = on
 
 Slave recovery.conf :
 
 standby_mode = 'on'
 primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
 trigger_file = '/var/lib/postgresql/9.1/triggersql'
 restore_command='cp /var/lib/postgresql/9.1/wal_archive/%f %p'
 archive_cleanup_command =
 '/usr/lib/postgresql/9.1/bin/pg_archivecleanup
 /var/lib/postgresql/9.1/wal_archive/ %r'
 
 
 
 
 
 How can I reduce the number of WAL files on the hot_standby slave?
 
 Thanks
 
 Regards.
 
 Xavier C.
 
 


I wonder why you are doing cp in your recovery.conf on the slave.
A restore_command is only used when streaming can't get WAL from the
master, and cp is probably not the right tool for it anyway.

You also cp from the master's archive directory, and you are cleaning up
that same directory.

You don't clean up the standby's pg_xlog directory. And cp may copy
incomplete WAL files.

Streaming replication can take care of your pg_xlog cleanup, unless you
introduce WAL files by other means (a manual cp, for instance).
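A minimal standby recovery.conf along these lines, as a sketch assuming streaming alone is sufficient (connection values and paths taken from the quoted config):

```
# recovery.conf on the standby -- rely on streaming only
standby_mode = 'on'
primary_conninfo = 'host=10.0.0.1 port=5400 user=postgres'
trigger_file = '/var/lib/postgresql/9.1/triggersql'
# No restore_command / archive_cleanup_command here: with streaming
# replication, old WAL segments in the standby's pg_xlog are recycled
# automatically at restartpoints.
```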

S.

-- 
Stéphane Schildknecht
Contact régional PostgreSQL pour l'Europe francophone
Loxodata - Conseil, expertise et formations
06.17.11.37.42




Re: [GENERAL] Basic Question on Point In Time Recovery

2015-03-12 Thread Stéphane Schildknecht
Hello,

On 11/03/2015 11:54, Robert Inder wrote:
 We are developing a new software system which is now used by a number
 of independent clients for gathering and storing live data as part of
 their day to day work.
 
 We have a number of clients sharing a single server.  It is running
 one Postgres service, and each client is a separate user with access
 to their own database.  Each client's database will contain hundreds
 of thousands of records, and will be supporting occasional queries by
 a small number of users.   So the system is currently running on
 modest hardware.
 
 To guard against the server failing, we have a standby server being
 updated by WAL files, so if the worst comes to the worst we'll only
 lose a few minutes work.  No problems there.
 
 But, at least while the system is under rapid development, we also
 want to have a way to roll a particular client's database back to a
 (recent) known good state, but without affecting any other client.
 
 My understanding is that the WAL files mechanism is installation-wide
 -- it will affect all clients alike.
 
 So to allow us to restore data for an individual client, we're running
 pg_dump once an hour on each database in turn.  In the event of a
 problem with one client's system, we can restore just that one
 database, without affecting any other client.
 
 The problem is that we're finding that as the number of clients grows,
 and with it the amount of data, pg_dump is becoming more intrusive.
 Our perception is that when pg_dump is running for any database,
 performance on all databases is reduced.  I'm guessing this is because
 the dump is making heavy use of the disk.

One option is to have a server acting as a WAL archiver.

pg_basebackup your slave every day, and keep all WAL until the next
pg_basebackup is taken.

Whenever you have to restore a single customer, you can recover the whole
instance up to a time *before* the worst happened, pg_dump that customer's
database, and pg_restore it.

Doing that, you won't have to pg_dump all of your databases every hour
or so.
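As a sketch of that procedure (scratch directory path, spare port, and database name are all hypothetical):

```shell
# 1. Restore the latest pg_basebackup into a scratch data directory and
#    add a recovery.conf with a stop time just before the incident:
#      restore_command = 'cp /path/to/wal_archive/%f "%p"'
#      recovery_target_time = '2015-03-11 10:30:00'
# 2. Start the scratch instance on a spare port, then dump the one
#    affected database:
pg_dump -p 5433 -Fc -f client_a.dump client_a
# 3. Restore it into the production instance:
pg_restore -p 5432 -d client_a --clean client_a.dump
```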



 
 There is obviously scope for improving performance by using
 more, or more powerful, hardware.  That's obviously going to be
 necessary at some point, but it is obviously an expense that our
 client would like to defer as long as possible.
 
 So before we go down that route, I'd like to check that we're not
 doing something dopey.
 
 Is our current frequent pg_dump approach a sensible way to go about
 things.  Or are we missing something?  Is there some other way to
 restore one database without affecting the others?
 
 Thanks in advance.
 
 Robert.
 


-- 
Stéphane Schildknecht
Contact régional PostgreSQL pour l'Europe francophone
Loxodata - Conseil, expertise et formations
06.17.11.37.42





[GENERAL] Timestamp precision

2007-03-29 Thread Stéphane Schildknecht
Hi,

I'm reading date/time datatypes documentation, and I'm a little bit
surprised by this piece of documentation :

Note:  When timestamp values are stored as double precision
floating-point numbers (currently the default), the effective limit of
precision may be less than 6. timestamp values are stored as seconds
before or after midnight 2000-01-01. Microsecond precision is achieved
for dates within a few years of 2000-01-01, but the precision degrades
for dates further away. When timestamp values are stored as eight-byte
integers (a compile-time option), microsecond precision is available
over the full range of values. However eight-byte integer timestamps
have a more limited range of dates than shown above: from 4713 BC up to
294276 AD. (...)

In fact, I wonder why a date ranging from some 4000 BC to 3000 AD is
stored as an offset from 1 January 2000. Is it because that day is
close to the present date?

And so, what do you mean by within a few years? Is it in reference to
geological time (200 years on 30 is less than one on a thousand) or
to a human lifespan?

I still wonder who would want to store a date 100 years ago with
microsecond precision ;-)

Best regards,
-- 
Stéphane SCHILDKNECHT
Président de PostgreSQLFr
http://www.PostgreSQLFr.org



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Who is Slony Master/Slave + general questions.

2007-01-19 Thread Stéphane Schildknecht
Hello,

You should ask directly on the slony1 mailing list.

[EMAIL PROTECTED] wrote:
 (...) The Slony version I'm using is 1.1.2.
The current version of Slony1 is slony1-1.2.6.
 Take a scenario that
 you want to check the state of the system without prior knowledge of
 the node setup, how would you determine which machine is the prime and
 which one is the slave?
   
Without any knowledge of the replication setup? That will be difficult. You
should connect to one of the databases and have a look at the Slony schema
tables (sl_status and sl_listen, for instance)...

You may also have a look at that page :
http://linuxfinances.info/info/monitoring.html
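For instance, as a hedged starting point (Slony keeps its tables in a schema named after the cluster; _mycluster below is a placeholder):

```sql
-- On the origin node, sl_status shows replication lag per subscriber:
SELECT st_origin, st_received, st_lag_num_events, st_lag_time
  FROM _mycluster.sl_status;

-- The origin (master) node of each replication set:
SELECT set_id, set_origin
  FROM _mycluster.sl_set;
```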

 Also I'm having issues with the slonik script (below) that is supposed
 to handle the failover to the slave in case of master failure.  For
 some reason it hangs and I was wondering if there are known issues with
 it.  
As stated in the documentation, Slony-I does not provide any automatic
detection of failed systems.
First of all, you may want to upgrade to the latest stable Slony1 version.


Cheers,

SAS



[GENERAL] pg_dump without oids

2007-01-17 Thread Stéphane Schildknecht
Hi all,

pg_dump and pg_dumpall have the -o option that should tell them to
include OIDs in the dump. I didn't choose this option, and the dump doesn't
include WITH OIDS, but the tables are created with OIDs when restoring
this dump.

I'm dumping from 7.4.5 to 8.2.1.
I do have
#default_with_oids = off
in postgresql.conf for 8.2.

Is there a way to prevent creating tables with OIDs?
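One workaround, assuming the tables have already been restored with OIDs, is to drop the OIDs afterwards on the 8.2 side; a hedged sketch (mytable is a placeholder):

```sql
-- Drop OIDs from one restored table:
ALTER TABLE mytable SET WITHOUT OIDS;

-- Or generate one such statement per table that still has OIDs:
SELECT 'ALTER TABLE ' || n.nspname || '.' || c.relname || ' SET WITHOUT OIDS;'
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND c.relhasoids
   AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```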

A table created via the psql client is created without OIDs.

Thanks in advance

Stéphane Schildknecht




Re: [GENERAL] FK Constraint on index not PK

2007-01-14 Thread Stéphane Schildknecht
Tom Lane wrote:
 Stéphane Schildknecht [EMAIL PROTECTED] writes:
   
 My goal is to migrate to 8.2.1. definitely. But as you said it, I do not
 want to recreate unwanted index when migrating. I want to drop them BEFORE.
 But, I can't just do a drop index command. It fails.
 

 Right, because the FK constraints by chance seized on those indexes as
 being the matching ones for them to depend on.

 What you want to do is (1) update the relevant pg_depend entries to
 reference the desired PK indexes instead of the undesired ones; then
 (2) drop the undesired indexes.

 I don't have a script to do (1) but it should be relatively
 straightforward: in the rows with objid = OID of FK constraint
 and refobjid = OID of unwanted index, update refobjid to be the
 OID of the wanted index.  (To be truly correct, make sure that
 classid and refclassid are the right values; but the odds of a
 false match are probably pretty low.)

 Needless to say, test and debug your process for this in a scratch
 database ... and when you do it on the production DB, start with
 BEGIN so you can roll back if you realize you blew it.

   regards, tom lane
   
Hi Tom,

Thank you very much for this answer. I'll try that tomorrow morning.

regards,

SAS
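Tom's step (1), written out as SQL using the names from earlier in this thread (foo_pkey, i_foo_pk, fk_bar_foo); an untested sketch, so check the matched rows before committing:

```sql
BEGIN;

-- Repoint the FK constraint's dependency from the redundant index
-- to the primary key index:
UPDATE pg_depend
   SET refobjid = (SELECT oid FROM pg_class WHERE relname = 'foo_pkey')
 WHERE classid    = (SELECT oid FROM pg_class WHERE relname = 'pg_constraint')
   AND objid      = (SELECT oid FROM pg_constraint WHERE conname = 'fk_bar_foo')
   AND refclassid = (SELECT oid FROM pg_class WHERE relname = 'pg_class')
   AND refobjid   = (SELECT oid FROM pg_class WHERE relname = 'i_foo_pk');

-- Now the redundant index can be dropped:
DROP INDEX i_foo_pk;

COMMIT;
```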



[GENERAL] FK Constraint on index not PK

2007-01-12 Thread Stéphane Schildknecht
Dear community members,

I'm seeing some quite strange behaviour while trying to drop an index.

We have some tables with two indexes on a primary key. The first one was
automatically created by the primary key constraint. The second one was
manually created on the same column. I don't know why, but I would now
like to remove it.

The first index is : foo_pkey
The second one : i_foo_pk
The constraint on table bar is fk_bar_foo references foo(id)

But, when trying to drop the second index, I get the following message:

NOTICE:  constraint fk_bar_foo on table t_foo depends on index i_foo_pk

The database server is 7.4.5.

Having dumped the database and restored it on an 8.2 server, I could drop
the second index without any problem.

The fact is I could do that, as I indeed want to migrate all databases
from 7.4 to 8.2. But I would prefer not to recreate every index before
dropping the unnecessary ones. And duplicate indexes are surely
unnecessary...

I have read in some threads that these troubles are known and have been
corrected in versions later than 7.4.5. But dropping the indexes before
migrating is the option I'd prefer.

So I wonder if there is a way to tell my foreign key to use the right
primary key constraint and not an arbitrary index on that primary key.

(Almost 10 databases and 300 tables to migrate, with something like 130
badly created indexes.) So I'd also prefer not to drop every FK
constraint before dropping the index and recreating the constraint...

Thanks in advance

Stéphane Schildknecht



Re: [GENERAL] FK Constraint on index not PK

2007-01-12 Thread Stéphane Schildknecht
Scott Marlowe wrote:
 Being quite familiar with both of those issues from the past, I can't
 imagine either one causing a problem with an update prior to dumping so
 he can then upgrade to 8.2.

 Seriously.  Hungarian collation, plerl can no longer change locale and
 corrupt indexes, and a minor security update.  

 And none of them need to be applied to do the pg_dump and then import to
 8.2

 Now, if he were gonna keep the 7.4 machine up and running, then I'd
 definitely recommend he look into the points made in the release notes
 for those versions.  But all the OP seemed to be in search of was
 dropping those extra indexes before dumping / migrating to 8.2. 

   

My goal is definitely to migrate to 8.2.1. But, as you said, I do not
want to recreate the unwanted indexes when migrating. I want to drop them
BEFORE.

But I can't just run a DROP INDEX command. It fails.

That's why I asked for advice on dropping them or not recreating them. I
would really prefer not to parse the whole dump (several GB).

Thx

SAS



Re: [GENERAL] FK Constraint on index not PK

2007-01-12 Thread Stéphane Schildknecht
Joshua D. Drake wrote:
 On Fri, 2007-01-12 at 17:50 +0100, Stéphane Schildknecht wrote:
   
 Dear community members,

 I'm having a quite strange behaviour while trying to drop some index.

 We have some tables with two indexes on a primary key. The first one was
 automatically created by the primary constraint. The second one was
 manually created on the same column. Don't know why, but I would now
 want to suppress it.
 

 Drop the second index. It is redundant.
   

I know it. But I can't.

SAS



[GENERAL] Excluding schema from backup

2006-12-08 Thread Stéphane Schildknecht
Hi all,

I tried the newly introduced feature allowing one to exclude a schema
from a backup with pg_dump, but I got really strange behaviour:

pg_dump -U postgres MYDB -N _MYDB gives me a dump including that schema.

I then tried pg_dump -U postgres MYDB -n _MYDB, and then got pg_dump:
No matching schemas were found.

Dumping only the public schema works. But, by doing so, I miss some
other schemas I really need.

Is there a limitation I didn't catch?

Thanks in advance.

Stéphane Schildknecht




[GENERAL] Number format problem

2006-02-03 Thread Stéphane SCHILDKNECHT

Hi,

There seems to be some tricky behaviour with number formatting and the
French locale.

I tried the following query:
select to_char(1485.12, '9G999D99');

I was expecting to get: 1 485,12

But, surprisingly, I got 1,1485,12.

My PostgreSQL server is version 8.1.2. The same problem occurs under
Ubuntu Breezy and Debian Testing.

My current configuration is
[EMAIL PROTECTED]
client_encoding=LATIN9
server_encoding=LATIN9

I tried to reconfigure locales and restart the server, but I can't get 
the result I expect.


I really don't know what else I could do.
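One thing worth checking (a guess, since the G and D template characters take their symbols from the lc_numeric setting): make sure the session actually uses the French numeric locale.

```sql
-- G (group separator) and D (decimal point) follow lc_numeric:
SHOW lc_numeric;

-- With a French numeric locale the expected output is ' 1 485,12'
-- (the exact locale name varies by system):
SET lc_numeric = 'fr_FR@euro';
SELECT to_char(1485.12, '9G999D99');
```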

Sincerely,

--
Stéphane SCHILDKNECHT
Président de PostgreSQLFr
http://www.postgresqlfr.org




