Re: [GENERAL] Two entries with the same primary key

2012-04-12 Thread Ivan Evtuhovich
Hi,

our current version both on master and slave is
PostgreSQL 9.1.3 on x86_64-pc-linux-gnu, compiled by gcc-4.4.real (Debian
4.4.5-8) 4.4.5, 64-bit

But as I remember, we started streaming replication on 9.1.2 and then upgraded
to 9.1.3. My ops engineer is on vacation now; we will resync the standby
on Monday, and I'll check whether the problem is solved.

Thank you for the answer.

On Fri, Apr 13, 2012 at 00:48, Merlin Moncure  wrote:

> On Thu, Apr 12, 2012 at 9:20 AM, Ivan Evtuhovich 
> wrote:
> > Hello,
> >
> > More than a month ago we upgraded the DB from 9.0 to 9.1 with pg_upgrade.
> > Then we moved the DB to another server with standard PG streaming
> > replication.
> >
> > Now we have two entries with the same primary key, and I do not know what
> > to do.
> >
> > SELECT ctid, id from billing_invoices where id = 27362891;
> >
> >      ctid     |    id
> > --------------+----------
> >  (1112690,11) | 27362891
> >  (1112438,26) | 27362891
> >
>
> Per some off-list conversation with Ivan, this is only happening on
> the standby.  Ivan, what's the precise version of postgres you are
> using?  When you first went to hs/sr?  I bet your clog files are out
> of whack (and if so, probably upgrading to recent bugfix postgres and
> standby resync is the correct course of action).
>
> merlin
>


Re: [GENERAL] Issue of upgrading from 9.0.4 to 9.1.3

2012-04-12 Thread Raghavendra
On Fri, Apr 13, 2012 at 11:39 AM, Zhidong She  wrote:

> Hi All,
>
> We used 9.0.4 before and recently we plan to upgrade to 9.1.3. During
> testing, we found an issue related to the escape character, as below:
>
> In 9.0.4, this SQL is accepted:
> insert into test values('abc\'a');
>
> but 9.1.3 rejects the same SQL; it worked after I changed it to
> insert into test values('abc''a');
>
> How can I configure 9.1.3 so it also accepts \ as the escape character?
> Could someone help me out?
>
> thanks very much.
>
>

-bash-4.1$ ./psql
psql (9.1.3)
Type "help" for help.

postgres=# select version();
                                  version
---------------------------------------------------------------------------
 PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6
20110731 (Red Hat 4.4.6-3), 64-bit
(1 row)

postgres=# select E'Raghav\'s';
 ?column?
----------
 Raghav's
(1 row)

or

change the parameter standard_conforming_strings to off and retry your
example.
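If the old behavior is needed temporarily, the parameter can also be flipped per session; a minimal sketch, assuming the test table from the original post:

```sql
-- Revert to pre-9.1 behavior for this session only:
-- backslash is treated as an escape character again.
SET standard_conforming_strings = off;
INSERT INTO test VALUES ('abc\'a');

-- The 9.1 default; only the doubled quote works in plain strings.
SET standard_conforming_strings = on;
INSERT INTO test VALUES ('abc''a');
```

Note that turning the parameter off globally affects every client, so fixing the application's string literals is generally the safer long-term route.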

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/





> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Issue of upgrading from 9.0.4 to 9.1.3

2012-04-12 Thread John R Pierce

On 04/12/12 11:09 PM, Zhidong She wrote:

but 9.1.3 rejects the same SQL; it worked after I changed it to
insert into test values('abc''a');

How can I configure 9.1.3 so it also accepts \ as the escape character?
Could someone help me out?


use E'abc\'a'; the E'...' prefix is PostgreSQL's explicit escape-string
syntax, which honors backslash escapes regardless of the
standard_conforming_strings setting.

or even better, use $$ quoting, like $$abc'a$$
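For comparison, the three quoting forms side by side (a sketch assuming the test table from the original post; all three insert the same value, abc'a):

```sql
INSERT INTO test VALUES ('abc''a');    -- SQL-standard doubled quote
INSERT INTO test VALUES (E'abc\'a');   -- PostgreSQL escape-string syntax
INSERT INTO test VALUES ($$abc'a$$);   -- dollar quoting, no escaping needed
```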



--
john r pierceN 37, W 122
santa cruz ca mid-left coast




[GENERAL] Issue of upgrading from 9.0.4 to 9.1.3

2012-04-12 Thread Zhidong She
Hi All,

We used 9.0.4 before and recently we plan to upgrade to 9.1.3. During
testing, we found an issue related to the escape character, as below:

In 9.0.4, this SQL is accepted:
insert into test values('abc\'a');

but 9.1.3 rejects the same SQL; it worked after I changed it to
insert into test values('abc''a');

How can I configure 9.1.3 so it also accepts \ as the escape character?
Could someone help me out?

thanks very much.



Re: [GENERAL] PGBouncer help (how to get it working)

2012-04-12 Thread Raghavendra
>
> On Fri, Apr 13, 2012 at 11:00 AM, Phoenix Kiula 
> wrote:
>
>> On Fri, Apr 13, 2012 at 2:41 AM, Scott Marlowe 
>> wrote:
>> > On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula <
>> phoenix.ki...@gmail.com> wrote:
>> >>
>> >>   WARNING: password file "/root/.pgpass" has group or world access;
>> >>   permissions should be u=rw (0600) or less
>> >>   psql: ERROR:  No such user: MYSITE_pgbouncer
>> >
>> > Pretty sure the error is just the perms on that file.  Set them to
>> > 0600 and try again.
>>
>>
>> I had already done this. Doesn't do anything. Pgbouncer starts
>> (service pgbouncer restart) but when I try to connect, it tells me
>>
>>psql: ERROR:  No such user: MYSITE_pgbouncer
>>
>> Where should I create the "MYSITE_pgbouncer" user?
>>
>>
> Add it to the pgbouncer auth file named by the auth_file parameter in your .ini:
>
>auth_file = /var/lib/pgsql/pgbouncer.txt
>
>
It seems you already did this. I believe you are connecting as the postgres
user rather than as root; if so, check that user's .pgpass file too (it will
be in the postgres user's home directory).

--Raghav


Re: [GENERAL] PGBouncer help (how to get it working)

2012-04-12 Thread Raghavendra
On Fri, Apr 13, 2012 at 11:00 AM, Phoenix Kiula wrote:

> On Fri, Apr 13, 2012 at 2:41 AM, Scott Marlowe 
> wrote:
> > On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula 
> wrote:
> >>
> >>   WARNING: password file "/root/.pgpass" has group or world access;
> >>   permissions should be u=rw (0600) or less
> >>   psql: ERROR:  No such user: MYSITE_pgbouncer
> >
> > Pretty sure the error is just the perms on that file.  Set them to
> > 0600 and try again.
>
>
> I had already done this. Doesn't do anything. Pgbouncer starts
> (service pgbouncer restart) but when I try to connect, it tells me
>
>psql: ERROR:  No such user: MYSITE_pgbouncer
>
> Where should I create the "MYSITE_pgbouncer" user?
>
>
Add it to the pgbouncer auth file named by the auth_file parameter in your .ini:

   auth_file = /var/lib/pgsql/pgbouncer.txt
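For reference, the auth file holds one quoted user/password pair per line; the names and passwords below are placeholders:

```
"postgres" "secret"
"MYSITE_pgbouncer" "anotherpassword"
```

With auth_type = trust the password column may be left empty, but the user's line itself must be present for pgbouncer to accept the login.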

---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/






Re: [GENERAL] PGBouncer help (how to get it working)

2012-04-12 Thread Phoenix Kiula
On Fri, Apr 13, 2012 at 2:41 AM, Scott Marlowe  wrote:
> On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula  
> wrote:
>>
>>   WARNING: password file "/root/.pgpass" has group or world access;
>>   permissions should be u=rw (0600) or less
>>   psql: ERROR:  No such user: MYSITE_pgbouncer
>
> Pretty sure the error is just the perms on that file.  Set them to
> 0600 and try again.


I had already done this. Doesn't do anything. Pgbouncer starts
(service pgbouncer restart) but when I try to connect, it tells me

psql: ERROR:  No such user: MYSITE_pgbouncer

Where should I create the "MYSITE_pgbouncer" user?



Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Damian Carey
On Fri, Apr 13, 2012 at 12:57 AM, Welty, Richard wrote:

> **
>
> can anyone recommend an open source tool for diffing schemas?
>
> (it should go without saying that i'm looking for ddl to update production
> and QA DBs from development DBs, but i'll say it, just in case.)
>
> thanks,
>richard
>

Richard,

A Java command-line tool that we have used for years is apgdiff -
http://apgdiff.startnet.biz/

Usage is something like this:

c:\> java -jar apgdiff.jar --ignore-start-with pg_old.sql pg_new.sql > diff.sql

It is PostgreSQL-specific: it takes two SQL dump files and produces diff.sql,
which we use as the basis of our production upgrade artifacts.

A very useful and basic tool.

HTH
-Damian


Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread raghu ram
On Thu, Apr 12, 2012 at 9:35 PM, Thomas Kellerer  wrote:

> Welty, Richard wrote on 12.04.2012 16:57:
>
>  can anyone recommend an open source tool for diffing schemas?
>>
>> (it should go without saying that i'm looking for ddl to update
>> production and QA DBs from development DBs, but i'll say it, just in
>> case.)
>>
>
Take dumps of the two schemas in plain format like this:

pg_dump.exe -Fp -U edb -s -n schema1 edb > db.out

pg_dump.exe -Fp -U edb -s -n schema2 edb > db1.out

Now you can use a tool like Kompare
[http://www.kde.org/applications/development/kompare/] to get a diff between
the two schema files and see where the schemas differ.
-- 

Thanks & Regards,

Raghu Ram

EnterpriseDB: http://www.enterprisedb.com


Re: [GENERAL] Re: Hot Standby - ERROR: canceling statement due to conflict with recovery

2012-04-12 Thread raghu ram
On Fri, Apr 13, 2012 at 2:09 AM, Francois  wrote:

> One solution, for backups anyway, is to pause the replication on the slave
> machines with the command: pg_xlog_replay_pause() and
> pg_xlog_replay_resume().  We still don't have a solution for long-running
> queries; maybe have a mechanism to retry them when they fail with that
> specific error.
>


To resolve this issue, set the parameters below, in the standby server's
postgresql.conf, to values at least as large as the total execution time of
the long-running statements:

max_standby_archive_delay

max_standby_streaming_delay

and then RELOAD the PostgreSQL cluster.
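A sketch of the relevant postgresql.conf fragment on the standby; the 10-minute values are purely illustrative:

```
# Let standby queries delay WAL replay for up to 10 minutes each
# before being canceled; -1 would let them delay replay indefinitely.
max_standby_archive_delay = 600s
max_standby_streaming_delay = 600s
```

followed by a reload, e.g. pg_ctl reload or SELECT pg_reload_conf();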
-- 

Thanks & Regards,

Raghu Ram

EnterpriseDB: http://www.enterprisedb.com


Re: [GENERAL] Searchable chess positions in a Postgress DB

2012-04-12 Thread Michael Nolan
On Thu, Apr 12, 2012 at 4:50 PM, Gavin Flower  wrote:

>  On 11/04/12 21:24, Gavin Flower wrote:
>
> On 11/04/12 19:15, Sidney Cadot wrote:
>
> Dear all,
>
> As a hobby project, I am toying around with a database containing
> about 5 million chess games. On average, these games have about 80
> positions (~ 40 moves by both black and white), which means there are
> about 400 million chess positions in there.
>
>
>
If you haven't done so already, you should read through the literature on
chess and computers.  I'm quite a few years out of date, but there's been a
lot of research into efficient ways to store and search chess positions,
and some of it may have dealt with SQL database structures.
--
Mike Nolan


Re: [GENERAL] Searchable chess positions in a Postgress DB

2012-04-12 Thread Gavin Flower

On 11/04/12 21:24, Gavin Flower wrote:

On 11/04/12 19:15, Sidney Cadot wrote:

Dear all,

As a hobby project, I am toying around with a database containing
about 5 million chess games. On average, these games have about 80
positions (~ 40 moves by both black and white), which means there are
about 400 million chess positions in there.

I have written code to extract these positions, and now I want to put
them into a Postgres database. Specifically, I want to do this in a
way that allows *fast* lookups of positions, e.g. "give me all
positions that have a White King on c4 and either a Black Bishop or
White Knight on f7".

Currently, my "Positions" table looks like this:

       Column       |  Type   | Modifiers
 -------------------+---------+-----------
  gameindex         | integer | not null
  plyindex          | integer | not null
  pseudofenboard    | text    | not null
  fenside           | text    | not null
  fencastling       | text    | not null
  fenenpassant      | text    | not null
  possiblemovecount | integer | not null
  isincheck         | boolean | not null
Indexes:
 "positions_pkey" PRIMARY KEY, btree (gameindex, plyindex)
Foreign-key constraints:
 "positions_gameindex_fkey" FOREIGN KEY (gameindex) REFERENCES
games(gameindex)

The "PseudoFenBoard" field currently holds a string describing the
position. For example, the starting position of chess looks like this:

"rnbqkbnr///////RNBQKBNR"

This design allows me to formulate the kind of positional queries that
I want (by using regular expression matching), but executing them will
involve a slow, linear traversal of the 400M table rows, which is not
desirable.

I am toying around with the ugly idea to make a "Positions" table that
has a single field for each of the squares, e.g.

CREATE TABLE Position2 (
 GameIndex INTEGER NOT NULL,
 PlyIndex  INTEGER NOT NULL,
  a1 "char" NOT NULL,
  a2 "char" NOT NULL,
  -- (60 field defs omitted)
  h7 "char" NOT NULL,
  h8 "char" NOT NULL
);

This would allow the creation of indices on each of the 64 fields
separately, which should help to achieve near-instantaneous position
query performance, especially after gathering proper statistics for
all the field-specific indices.

I realize that this design is quite ugly, so I would be interested to
hear if there are nicer alternatives that can perform equally well.

Also, above I use the 1-byte "char" type. Is this the only type in
Postgres that is guaranteed to be just a single byte, or are there
better alternatives? A 13-state enum would be best (listing the 6
white pieces, 6 black pieces, and an 'empty' state for every square on
the board), but as I understand from the documentation, enums always
take up 4 bytes per entry.

Any ideas for improvement would be greatly appreciated.



How about something like the following (game and position would have
more fields in practice, like comments and where played)?



DROP TABLE IF EXISTS game CASCADE;

CREATE TABLE game
(
id int PRIMARY KEY,
name_white text,
name_black text,
played timestamptz
);

CREATE TABLE position
(
id int PRIMARY KEY,
game_id int REFERENCES game (id),
ply int
);

CREATE TABLE piece
(
id int PRIMARY KEY,
position_id int REFERENCES position (id),
rank char, -- 1...8 from white's perspective
file char, -- a...h
white boolean,
type char -- P,R,N,B,K,Q
);

CREATE UNIQUE INDEX square ON piece (rank, file, type, white);


SELECT
   p.position_id
FROM
piece p
WHERE
( p.white
AND p.type = 'K'
AND p.file = 'c'
AND p.rank = '4'
)
AND
(
((NOT p.white AND p.type = 'B') OR (p.white AND p.type = 'N'))
AND p.file = 'f'
AND p.rank = '7'
);


Cheers,
Gavin

There was a blatantly obvious flaw in the above query: the pieces
checked should belong to the same position!


I only discovered the flaw when I mentally checked the SQL on the
way to work the following day.


The following, hopefully, fixes the problem

SELECT
p1.position_id
FROM
piece AS p1 JOIN piece AS p2 USING (position_id)
WHERE
(
p1.white
AND p1.type = 'K'
AND p1.file = 'c'
AND p1.rank = '4'
)
AND
(
((NOT p2.white AND p2.type = 'B') OR (p2.white AND p2.type = 'N'))
AND p2.file = 'f'
AND p2.rank = '7'
);



[GENERAL] Installer Questions (NSIS)

2012-04-12 Thread Bret Stern
I'm starting to put together an install script using NSIS,
for our client application. 

For the Postgresql database, I've been using the installer from
postgresql.org, courtesy of Dave Page.


Being an anal programmer type, I may want to pursue
a little more control over how our application is
distributed.

Is there a list which discusses this, or should I test the waters
here when the time comes?

Bret Stern





Re: [GENERAL] Two entries with the same primary key

2012-04-12 Thread Merlin Moncure
On Thu, Apr 12, 2012 at 9:20 AM, Ivan Evtuhovich  wrote:
> Hello,
>
> More than a month ago we upgraded the DB from 9.0 to 9.1 with pg_upgrade.
> Then we moved the DB to another server with standard PG streaming
> replication.
>
> Now we have two entries with the same primary key, and I do not know what
> to do.
>
> SELECT ctid, id from billing_invoices where id = 27362891;
>
>      ctid     |    id
> --------------+----------
>  (1112690,11) | 27362891
>  (1112438,26) | 27362891
>

Per some off-list conversation with Ivan, this is only happening on
the standby.  Ivan, what's the precise version of postgres you are
using?  When you first went to hs/sr?  I bet your clog files are out
of whack (and if so, probably upgrading to recent bugfix postgres and
standby resync is the correct course of action).

merlin



[GENERAL] Re: Hot Standby - ERROR: canceling statement due to conflict with recovery

2012-04-12 Thread Francois
One solution, for backups anyway, is to pause the replication on the slave
machines with the command: pg_xlog_replay_pause() and
pg_xlog_replay_resume().  We still don't have a solution for long-running
queries; maybe have a mechanism to retry them when they fail with that
specific error.




[GENERAL] regexp operator for array element compares needed

2012-04-12 Thread Gauthier, Dave
Hi:

v8.3.4 on linux

In plpgsql, I have something like this...

if ((string_to_array(check_values_text,',') <@ string_to_array((select 
permitted_values from bi_constraints where bicolumn = 'fivr'),',')) = 'f')

It's just testing that all the values in the check_values_text CSV are in
the permitted_values CSV (through array operators, as you can see).

I need to do the same thing only for regexp.
So, instead of...

if ((string_to_array('aa,cc,dx',',') <@ string_to_array((select 
permitted_values from 'aa,bb,cc,dd,ee' where bicolumn = 'fivr'),',')) = 'f')
(which would fail the check because "dx" is not in 'aa,bb,cc,dd,ee')
I need to compare with regexp operator...

if ((string_to_array('aa,cc,dx',',') <@ string_to_array((select 
permitted_values from 'a.,b.,c*,d*,ee' where bicolumn = 'fivr'),',')) = 'f')
(which would compare OK because "dx" is matched by "d*")

I don't even know if this sort of thing is possible.  I didn't see any
operator like this in the docs.  I may have to split out each value and
compare in nested loops with the atomic regexp compare operator "~".

Thanks for any ideas and/or help
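One possible shape for this, sketched against the literal arrays from the example rather than the real bi_constraints table. Note that unnest() only arrived in 8.4, so on 8.3.4 the same logic would need a plpgsql loop over array indexes:

```sql
-- True only if every check value matches at least one permitted pattern.
-- Caveat: as regexps, 'c*' and 'd*' match *any* string (they mean zero or
-- more c's/d's), so glob-style patterns may need translating, e.g. '^d.*$'.
SELECT bool_and(
         EXISTS (SELECT 1
                 FROM unnest(string_to_array('a.,b.,c*,d*,ee', ',')) AS p(pat)
                 WHERE v.val ~ p.pat)
       ) AS all_match
FROM unnest(string_to_array('aa,cc,dx', ',')) AS v(val);
```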


Re: [GENERAL] PGBouncer help (how to get it working)

2012-04-12 Thread Scott Marlowe
On Thu, Apr 12, 2012 at 11:12 AM, Phoenix Kiula  wrote:
>
>   WARNING: password file "/root/.pgpass" has group or world access;
>   permissions should be u=rw (0600) or less
>   psql: ERROR:  No such user: MYSITE_pgbouncer

Pretty sure the error is just the perms on that file.  Set them to
0600 and try again.



[GENERAL] PGBouncer help (how to get it working)

2012-04-12 Thread Phoenix Kiula
I had pgbouncer working somehow, but we have switched servers recently
and now I cannot for the life of me figure out again how to set it up.

Online guides say things like "create a user ID". Well, where? Inside
the PG database, or in my CentOS system?

Here's my "/etc/pgbouncer.ini":

[databases]
* = port = 5432

[pgbouncer]
listen_port = 6543
listen_addr = 127.0.0.1
auth_type = trust
auth_file = /var/lib/pgsql/pgbouncer.txt
logfile = pgbouncer.log
pidfile = pgbouncer.pid
admin_users = postgres,MYSITE_pgbouncer
pool_mode = transaction
server_reset_query = DISCARD ALL;
server_check_query = select 1
server_check_delay = 10
max_client_conn = 1000
default_pool_size = 20
log_connections = 0
log_disconnections = 0
log_pooler_errors = 1


And here's my authfile, "/var/lib/pgsql/pgbouncer.txt" --

"MYSITE_pgbouncer" ""

Is there something else I need to do? What steps am I missing? When I
start pgbouncer at the command line, I see this error:

   WARNING: password file "/root/.pgpass" has group or world access;
   permissions should be u=rw (0600) or less
   psql: ERROR:  No such user: MYSITE_pgbouncer

Thanks for any tips!



Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3 streaming replication bug ?

2012-04-12 Thread Fujii Masao
On Thu, Apr 12, 2012 at 4:09 AM, Michael Nolan  wrote:
> -- Forwarded message --
> From: Michael Nolan 
> Date: Wed, 11 Apr 2012 14:48:18 -0400
> Subject: Re: [HACKERS] [GENERAL] [streaming replication] 9.1.3
> streaming replication bug ?
> To: Robert Haas 
>
> On Wed, Apr 11, 2012 at 2:14 PM, Robert Haas  wrote:
>
>>
>>
>> We've talked about teaching the master to keep track of how far back
>> all of its known standbys are, and retaining WAL back to that specific
>> point, rather than the shotgun approach that is wal_keep_segments.
>> It's not exactly clear what the interface to that should look like,
>> though.
>>
>>
> Moreover, how does the database decide when to drop a known standby from
> the queue because it has failed, and how does the DBA notify the database
> that a particular standby should no longer be included?

Probably the latter. So, as Robert pointed out, we need a neat API to
register and drop standbys. Though I have no good idea about this yet.

BTW, I have another idea about wal_keep_segments problem.
http://archives.postgresql.org/message-id/AANLkTinN=xspooaxzvfsp1okfmdab1f_d-f91xjez...@mail.gmail.com

Regards,

-- 
Fujii Masao



Re: [GENERAL] trigger when clause

2012-04-12 Thread Andy Chambers
On Tue, Apr 10, 2012 at 5:10 PM, Jeff Davis  wrote:

> On Tue, 2012-04-10 at 16:15 -0400, Andy Chambers wrote:
>
> > Does anyone know the time complexity of the algorithm used to handle
> > triggers with a when clause?
>
> It's done with a linear scan of all triggers, testing the WHEN clause
> for each.
>
> > To make this a little more concrete, what is likely to perform better
> >
> >
> > a) A single trigger with "n" if/else clauses
> > b) A set of "n" triggers each using a different when clause.
>
> Both are essentially linear.
>
> If you want to scale to a large number of conditions, I would recommend
> using one trigger in a fast procedural language, and searching for the
> matching conditions using something better than a linear search.
>
> To beat a linear search, you need something resembling an index, which
> is dependent on the types of conditions. For instance, if your
> conditions are:
>
>  00 <= x < 10
>  10 <= x < 20
>  20 <= x < 30
>  ...
>
> you can use a tree structure. But, obviously, postgres won't know enough
> about the conditions to know that a tree structure is appropriate from a
> given sequence of WHEN clauses. So, you should use one trigger and code
> the condition matching yourself.
>

Thanks Jeff.  That's very helpful.

--
Andy
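For illustration, Jeff's "one trigger with indexed dispatch" suggestion might be sketched like this; all names below are hypothetical and not from the thread:

```sql
-- Dispatch table: one row per range condition, found via the b-tree
-- primary key in O(log n) instead of testing n WHEN clauses linearly.
CREATE TABLE dispatch_rules (
    lo     int PRIMARY KEY,  -- inclusive lower bound
    hi     int NOT NULL,     -- exclusive upper bound
    action text NOT NULL
);

CREATE OR REPLACE FUNCTION route_row() RETURNS trigger AS $$
DECLARE
    r dispatch_rules%ROWTYPE;
BEGIN
    -- index scan locates the single candidate rule
    SELECT * INTO r
      FROM dispatch_rules
     WHERE lo <= NEW.x
     ORDER BY lo DESC
     LIMIT 1;
    IF FOUND AND NEW.x < r.hi THEN
        RAISE NOTICE 'matched rule %', r.action;  -- act on the match here
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```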


Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Thomas Kellerer

Welty, Richard wrote on 12.04.2012 16:57:

can anyone recommend an open source tool for diffing schemas?

(it should go without saying that i'm looking for ddl to update
production and QA DBs from development DBs, but i'll say it, just in
case.)


You might want to have a look at SQL Workbench/J.
It has its own "SQL command" for doing a diff at the schema level.

http://www.sql-workbench.net/manual/compare-commands.html

The output is a (DBMS-independent) XML file, but XSLT templates to transform
that into SQL are available:

http://www.sql-workbench.net/xslt.html

The XSLT might need some adjusting for your purposes though.

But you should put a controlled way of deploying schema changes into place.
Doing a diff of a developer DB isn't really the ideal approach.

We are quite happy using Liquibase for this purpose: http://liquibase.org/

Thomas




Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Chris Angelico
On Fri, Apr 13, 2012 at 2:02 AM, Thomas Kellerer  wrote:
> Sounds like you implement something very similar to Liquibase.

Never heard of it till today; just googled it. Looks cool. At first
glance, yes, does appear similar; but what we're doing is WAY less
sophisticated.

ChrisA



Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Thomas Kellerer

Chris Angelico wrote on 12.04.2012 17:10:

patchlevel = query("select patchlevel from config")
switch (patchlevel)
{
   default: print("Unknown patch level!"); break;
   case 1:
 print("20120216: Adding Foobar columns to Quux")
 query("ALTER TABLE Quux ADD foo smallint not null default 0, ADD
bar varchar")
   case 2:
 ... etc ...
   case 42:
 ...
 // Add new patch levels here
 query("update config set patchlevel=43"); query("commit");
   case 43: break;
}


Every change is thus assigned a number. The current patch level is a
safe no-op; any unrecognized number is a major error. The script is
thus safe to run on any database, and will always bring that database
up to the script's current patch level.


Sounds like you implement something very similar to Liquibase.






Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Chris Angelico
On Fri, Apr 13, 2012 at 1:28 AM, Merlin Moncure  wrote:
> (although
> they can be great from reporting or double checking standpoint).

Good point. Double-checking your patch script may well be worth doing.
It ought, in theory, to be possible to load up a copy of your existing
database, spin up a brand new one from the CREATE TABLE script, diff
them, and compare against the patch script. Of course, that does run
the risk of useless diff noise (for instance, I'll often add a new
field into the middle of a table where it most logically fits, but the
patch will simply ALTER TABLE to add it at the end), but it could
still be a useful check, especially if completely automated.

ChrisA



Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Merlin Moncure
On Thu, Apr 12, 2012 at 10:10 AM, Chris Angelico  wrote:
> On Fri, Apr 13, 2012 at 12:57 AM, Welty, Richard  wrote:
>> can anyone recommend an open source tool for diffing schemas?
>>
>> (it should go without saying that i'm looking for ddl to update production
>> and QA DBs from development DBs, but i'll say it, just in case.)
>
> We toyed with this exact issue at work. In the end, we went the other
> direction, and created two files, both managed in source control: a
> .sql file with everything necessary to initialize the database from
> scratch, and a patch script. Every change gets done (by hand) to the
> primary .sql file, and the SQL statements needed to effect the
> transition (eg ALTER TABLE to add a column) get added to the patch
> script. A field in our singleton configuration table records the
> current patch level, so only the necessary changes will be made.
>
> It requires some developer discipline, but it ensures that there's
> always an audit trail giving the _why_ of every change, which is
> something that a diff utility can never do. The patch script is quite
> simple, and looks broadly like this:
>
> patchlevel = query("select patchlevel from config")
> switch (patchlevel)
> {
>  default: print("Unknown patch level!"); break;
>  case 1:
>    print("20120216: Adding Foobar columns to Quux")
>    query("ALTER TABLE Quux ADD foo smallint not null default 0, ADD
> bar varchar")
>  case 2:
>    ... etc ...
>  case 42:
>    ...
>    // Add new patch levels here
>    query("update config set patchlevel=43"); query("commit");
>  case 43: break;
> }
>
>
> Every change is thus assigned a number. The current patch level is a
> safe no-op; any unrecognized number is a major error. The script is
> thus safe to run on any database, and will always bring that database
> up to the script's current patch level.
>
> This has worked out far safer than attempting an after-the-event diff.
> YMMV of course though.

+1

this, or some variation of the approach, is the correct path for doing
database updates in a team environment.  tool driven systems bring
enormous challenges that require enormous engineering to work
properly.  i've heard (although not personally observed) that some
incredibly pricey and complex commercial tools have solved the problem
of automating database updates but the button clicky stuff i've seen
in the open source and shovelware realms come up short in terms of
integrating fully into a proper change management system (although
they can be great from reporting or double checking standpoint).

merlin



Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Robert Gravsjö
(sorry for top posting but I'm using a less than sane email client)
 
I came across SQL Power Architect not long ago and it might be something you 
could use.
http://code.google.com/p/power-architect/
 
I haven't had much time to look at it though.
 
Regards,
roppert
 
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On behalf of Welty, Richard
Sent: 12 April 2012 16:58
To: pgsql-general@postgresql.org
Subject: [GENERAL] recommended schema diff tools?
 
can anyone recommend an open source tool for diffing schemas?

(it should go without saying that i'm looking for ddl to update production and 
QA DBs from development DBs, but i'll say it, just in case.)

thanks,
   richard


Re: [GENERAL] recommended schema diff tools?

2012-04-12 Thread Chris Angelico
On Fri, Apr 13, 2012 at 12:57 AM, Welty, Richard  wrote:
> can anyone recommend an open source tool for diffing schemas?
>
> (it should go without saying that i'm looking for ddl to update production
> and QA DBs from development DBs, but i'll say it, just in case.)

We toyed with this exact issue at work. In the end, we went the other
direction, and created two files, both managed in source control: a
.sql file with everything necessary to initialize the database from
scratch, and a patch script. Every change gets done (by hand) to the
primary .sql file, and the SQL statements needed to effect the
transition (eg ALTER TABLE to add a column) get added to the patch
script. A field in our singleton configuration table records the
current patch level, so only the necessary changes will be made.

It requires some developer discipline, but it ensures that there's
always an audit trail giving the _why_ of every change, which is
something that a diff utility can never do. The patch script is quite
simple, and looks broadly like this:

patchlevel = query("select patchlevel from config")
switch (patchlevel)
{
  default: print("Unknown patch level!"); break;
  case 1:
print("20120216: Adding Foobar columns to Quux")
query("ALTER TABLE Quux ADD foo smallint not null default 0, ADD
bar varchar")
  case 2:
... etc ...
  case 42:
...
// Add new patch levels here
query("update config set patchlevel=43"); query("commit");
  case 43: break;
}


Every change is thus assigned a number. The current patch level is a
safe no-op; any unrecognized number is a major error. The script is
thus safe to run on any database, and will always bring that database
up to the script's current patch level.

This has worked out far safer than attempting an after-the-event diff.
YMMV of course though.

Chris Angelico



Re: [GENERAL] Two entries with the same primary key

2012-04-12 Thread Merlin Moncure
On Thu, Apr 12, 2012 at 9:20 AM, Ivan Evtuhovich  wrote:
> Hello,
>
> More than a month ago we upgraded the DB from 9.0 to 9.1 with pg_upgrade.
> Then we moved the DB to another server with standard pg streaming
> replication.
>
> Now we have two entries with the same primary key, and I do not know what
> to do.
>
> SELECT ctid, id from billing_invoices where id = 27362891;
>
>      ctid     |    id
> --------------+----------
>  (1112690,11) | 27362891
>  (1112438,26) | 27362891
>
>
>
>  \d billing_invoices
>                     Table "public.billing_invoices"
>    Column   |            Type             |                   Modifiers
> ------------+-----------------------------+----------------------------------------------------------------
>  id         | integer                     | not null default nextval('billing_invoices_id_seq'::regclass)
> ...
>  created_at | timestamp without time zone |
>  updated_at | timestamp without time zone |
> Indexes:
>     "billing_invoices_pkey" PRIMARY KEY, btree (id)

well, the first step is to determine the extent of the damage. we need
to get the database to the point where it can load from a standard
backup, and we need to have the database loaded into a testbed where
we can stitch together the corrections you are going to apply to the
production system.  this is probably going to involve a schema level
dump, a custom format data dump, and some trial and error to see which
tables are busted (for each one, dropping the keys, restoring the
data, fixing the data, and restoring the keys, etc).  take notes of
everything you fix so that corrections can be back applied to your
production system.

a full file system level backup also couldn't hurt so that the
evidence trail pointing to how this happened isn't destroyed.

once the database is fixed and internally consistent, hopefully we can
figure out how this happened.   I don't see anything glaring in the
9.1 release notes that points to a fixed pg_upgrade bug that matches
your behavior, so it's not a given that pg_upgrade actually caused the
issue.  do you have a database dump from around the time you upgraded
(ideally, both before and after?)  did you preserve the pre-upgrade
database cluster?

merlin



[GENERAL] recommended schema diff tools?

2012-04-12 Thread Welty, Richard
can anyone recommend an open source tool for diffing schemas?

(it should go without saying that i'm looking for ddl to update production and 
QA DBs from development DBs, but i'll say it, just in case.)

thanks,
   richard


[GENERAL] Two entries with the same primary key

2012-04-12 Thread Ivan Evtuhovich
Hello,

More than a month ago we upgraded the DB from 9.0 to 9.1 with pg_upgrade.
Then we moved the DB to another server with standard pg streaming
replication.

Now we have two entries with the same primary key, and I do not know what
to do.

SELECT ctid, id from billing_invoices where id = 27362891;
     ctid     |    id
--------------+----------
 (1112690,11) | 27362891
 (1112438,26) | 27362891



 \d billing_invoices
                    Table "public.billing_invoices"
   Column   |            Type             |                   Modifiers
------------+-----------------------------+----------------------------------------------------------------
 id         | integer                     | not null default nextval('billing_invoices_id_seq'::regclass)
...
 created_at | timestamp without time zone |
 updated_at | timestamp without time zone |
Indexes:
    "billing_invoices_pkey" PRIMARY KEY, btree (id)


Re: [GENERAL] non-static LIKE patterns

2012-04-12 Thread Tom Lane
haman...@t-online.de writes:
> Tom Lane wrote:
> If you want it to be bulletproof, what I'd think about is something like
>   WHERE second.path LIKE quote_like(first.path)||'%'

> Just out of curiosity: wouldn't that (as well as using non-static like)
> be an enormous performance problem?

Well, it won't be free, but I think you've already doomed yourself to
a not-very-bright plan by using LIKE in this way at all.

In any case, as a wise man once said, you can make it run arbitrarily
fast if it doesn't have to give the right answer.  Correctness trumps
any micro-optimization questions, so if you have to have prefix matching
of this sort, it's gonna cost ya somehow.

Actually, if the only case you're worried about is prefix match, you
could do it in substring style:

WHERE second.path = substring(first.path, 1, length(second.path))

(better double-check the substring syntax, I'm too lazy to).  This is
still going to completely suck on a macro level: there's still no way to
perform the join except by tediously iterating through every combination
of rows.  But it'll likely outrun any LIKE-based solution by some
percentage.
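For reference, PostgreSQL has no built-in quote_like(); the escaping such a helper would need (assuming the default backslash escape character for LIKE) can be sketched in a few lines of Python:

```python
def quote_like(s: str) -> str:
    # Escape the LIKE metacharacters so s matches only itself literally.
    # The escape character itself must be doubled first.
    return (s.replace("\\", "\\\\")
             .replace("%", "\\%")
             .replace("_", "\\_"))

# first.path escaped, with '%' appended for the prefix match described above
# ("/usr/local_dir" is a made-up example path):
pattern = quote_like("/usr/local_dir") + "%"
```

Without the escaping, a `_` or `%` embedded in first.path would silently act as a wildcard instead of matching literally.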

regards, tom lane



Re: [GENERAL] Searchable chess positions in a Postgress DB

2012-04-12 Thread Merlin Moncure
2012/4/11 Ondrej Ivanič :
> Hi,
>
> On 11 April 2012 17:15, Sidney Cadot  wrote:
>> I have written code to extract these positions, and now I want to put
>> them into a Postgres database. Specifically, I want to do this in a
>> way that allows *fast* lookups of positions, e.g. "give me all
>> positions that have a White King on c4 and either a Black Bishop or
>> White Knight on f7".
>
> I would try to use single table with 16 columns like:
> white_pawn char(2)[] -- like {'c1', 'd3', ... }, max 8 elements
> white_rook char(2)[] -- max 2 elements
> white_bishop char(2)[] -- max 2 elements
> white_knight char(2)[] -- max 2 elements
> white_queen char(2)
> white_king char(2)
> black_pawn_1 char(2)[]
> ...
> black_king char(2)
>
> and each column: char(2) and char(2)[] should have btree and GiST
> index respectively. The query should look like this:
> select * from positions where white_king = 'c4' and (white_bishop &&
> ARRAY['f7'] or white_knight && ARRAY['f7'])
>
> Another alternative might be to use hstore (and GiST index):
> http://www.postgresql.org/docs/9.1/static/hstore.html

yeah -- if you want fast searching of positions (or even games) using
phrases you should immediately be thinking GiST.  GiST can optimize
quals such as 'A contains B' or 'A overlaps B'.  This is a
non-trivial but interesting project and I highly encourage you to give
it a go if you're so inclined.  Before banging on the schema, I'd
start thinking about how to organize the position into a type such that
you can engineer GiST operations.
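For intuition, the two GiST-indexable quals can be modeled with plain Python sets over (piece, square) pairs, using the example query from earlier in the thread. The piece codes ("WK" for white king, etc.) are made up for this sketch; in Postgres the analogous array operators would be @> (contains) and && (overlaps).

```python
# A position modeled as a frozenset of (piece, square) pairs.
Position = frozenset

def contains(pos, pieces):
    return pos >= frozenset(pieces)       # every requested piece is present

def overlaps(pos, pieces):
    return bool(pos & frozenset(pieces))  # at least one requested piece is present

positions = [
    Position({("WK", "c4"), ("BB", "f7"), ("WP", "e2")}),
    Position({("WK", "c4"), ("WN", "f7")}),
    Position({("WK", "e1"), ("BB", "f7")}),
]

# "White King on c4 and either a Black Bishop or White Knight on f7"
hits = [p for p in positions
        if contains(p, [("WK", "c4")])
        and overlaps(p, [("BB", "f7"), ("WN", "f7")])]
```

A GiST index lets the database answer such contains/overlaps quals without scanning every position, which is the whole point of organizing the data into an indexable type first.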

merlin



Fwd: Re: [GENERAL] Searchable chess positions in a Postgress DB

2012-04-12 Thread Gavin Flower



 Original Message 
Subject:Re: [GENERAL] Searchable chess positions in a Postgress DB
Date:   Fri, 13 Apr 2012 00:33:17 +1200
From:   Gavin Flower 
Organisation:   ArchiDevSys
To: Sidney Cadot 



On 12/04/12 01:14, Sidney Cadot wrote:

 Hi Gavin,

 I appreciate the neatness of your proposed solution, but I feel that
 having a separate "piece" table may blow up the storage requirements
 by too much. I'm looking at 400M Positions; a Position on average has
 about 23.8 pieces, so I'd be looking at a single 9.5 billion row
 table; and each Piece row will be about 20 bytes(?). I have no feeling
 about how Postgres will hold up on a table that big. Also it's about 6
 times the amount of storage compared to the direct approach of storing
 a 64-byte board in a Position row.


Hi Sidney,

Postgres can handle tables much bigger than you need...

http://www.postgresql.org/about
[...]
There are active PostgreSQL systems in production environments that
manage in excess of 4 terabytes of data. Some general PostgreSQL limits
are included in the table below.
Limit                         Value
Maximum Database Size         Unlimited
Maximum Table Size            32 TB
Maximum Row Size              1.6 TB
Maximum Field Size            1 GB
Maximum Rows per Table        Unlimited
Maximum Columns per Table     250 - 1600 depending on column types
Maximum Indexes per Table     Unlimited
[...]

Note the table def should use char(1), not char - too much Java
programming!  :-)

The way I defined the tables would facilitate quite a wide range of queries.

I used to play in the New Zealand national open, but never got near the
prize money!


Cheers,
Gavin




Re: [GENERAL] Writing data to a text file based on a trigger event...

2012-04-12 Thread François Beausoleil


Le jeudi 12 avril 2012 à 06:58, raghupradeep a écrit :

> What I need to know is: is it possible to write a trigger that will write
> the data of the row whose status column gets updated to a text file?

I would advise simply INSERTing the old values into a new table. This would be 
more flexible in the end, as you could export to a file, or run queries, or 
whatever.

The trigger would be an ON UPDATE, and very similar to the auditing trigger at 
http://www.postgresql.org/docs/9.1/static/plpgsql-trigger.html#PLPGSQL-TRIGGER-AUDIT-EXAMPLE
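The idea can be sketched self-containedly with SQLite from Python (the PL/pgSQL example in the PostgreSQL docs linked above is the real PostgreSQL equivalent; the `jobs`/`jobs_audit` table and column names here are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE jobs_audit (
        job_id     INTEGER,
        old_status TEXT,
        new_status TEXT,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    -- Fires whenever the status column is updated; records old and new values.
    CREATE TRIGGER jobs_status_audit AFTER UPDATE OF status ON jobs
    BEGIN
        INSERT INTO jobs_audit (job_id, old_status, new_status)
        VALUES (OLD.id, OLD.status, NEW.status);
    END;
""")
con.execute("INSERT INTO jobs (status) VALUES ('running')")
con.execute("UPDATE jobs SET status = 'failed' WHERE id = 1")
rows = con.execute(
    "SELECT job_id, old_status, new_status FROM jobs_audit").fetchall()
```

The monitoring system can then poll or query the audit table (e.g. for rows with new_status = 'failed') rather than tailing a text file.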

Welcome to PostgreSQL! Hope that helps!
François




[GENERAL] Writing data to a text file based on a trigger event...

2012-04-12 Thread raghupradeep
Hi ,

I am basically a MySQL DBA and have little experience with PostgreSQL. In our
environment we have an application which uses PostgreSQL as its back
end. The application logs the status of the jobs running in it to a table in
this database, i.e. when a job starts it inserts a new row into this table
and keeps updating the `status` column based on the status of the running
job. The requirement we have is that I need to capture certain status
values and, based on them, throw an alert to our centralized monitoring
system.

What I need to know is: is it possible to write a trigger that will write
the data of the row whose status column gets updated to a text file?

Thanks & Regards
Raghupradeep




[GENERAL] Value too long for type ....: Column name missing

2012-04-12 Thread Thomas Guettler

Hi,

I think it would be very good if PostgreSQL reported which column is too small:

   Value too long for type character varying(1024) (message translated from
German to English)

Is there a reason not to report the column name?

How can I report a feature request?

  Thomas Güttler



--
Thomas Guettler, http://www.thomas-guettler.de/
E-Mail: guettli (*) thomas-guettler + de
