Re: [GENERAL] Download States and Capitals Database

2011-09-28 Thread Johan De Meersman

- Original Message -
 From: Adarsh Sharma adarsh.sha...@orkash.com
 
 This Link gives the capitals of all countries but I need the states
 and their capitals in all these countries too..
 But I think this is not possible because very few countries are
 divided into states and some into cantons, county etc.

Yep. The only one I can think of off the top of my head is the USA, although I 
can't exactly claim accurate geopolitical knowledge about the whole world :)

Here in Belgium, we have a bunch of provinces. France has departements. It 
varies a lot. I suppose you could build a hierarchy of geographical 
subdivisions, but you'd still have to save the appropriate naming schemes 
per-country.
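
Such a hierarchy might be sketched roughly like this (an illustration only; the table and column names are invented, and real-world data would need many refinements):

```sql
-- One self-referencing table for all subdivisions, with the
-- country-specific naming scheme ('state', 'province', 'canton',
-- 'departement', ...) stored per row.
CREATE TABLE subdivision (
    id        serial PRIMARY KEY,
    country   char(2) NOT NULL,                     -- ISO 3166-1 alpha-2
    parent_id integer REFERENCES subdivision (id),  -- NULL at the top level
    kind      text NOT NULL,                        -- per-country naming scheme
    name      text NOT NULL,
    capital   text                                  -- NULL where not applicable
);
```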



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Identifying old/unused views and table

2011-09-28 Thread Guillaume Lelarge
On Wed, 2011-09-28 at 09:04 +0800, Craig Ringer wrote:
 On 09/28/2011 04:51 AM, Jason Long wrote:
  I have an application with a couple hundred views and a couple hundred
  tables.
 
  Is there some way I can find out which views have been accessed in the
  last 6 months or so?  Or some way to log this?
 
  I know there are views and tables that are no longer used by my
  application and I am looking for a way to identify them.
 
 Look at the pg_catalog.pg_stat* tables
 

I fail to see how that gives him any answer about the views and tables no
longer used. AFAICT, there's no way to know for views (apart from
logging all queries in the log). As for tables, again apart from the
log, pg_stat_user_tables could give an answer if he had been monitoring it
for at least the last six months.
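
For the table side, a query over pg_stat_user_tables along these lines could be snapshotted periodically (a sketch; the counters only cover the period since the statistics were last reset):

```sql
-- Tables with no recorded sequential or index scans since the last
-- stats reset; candidates for 'unused', given a long enough window.
SELECT schemaname, relname, seq_scan, idx_scan
  FROM pg_stat_user_tables
 WHERE coalesce(seq_scan, 0) = 0
   AND coalesce(idx_scan, 0) = 0
 ORDER BY schemaname, relname;
```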


-- 
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com




Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-28 Thread Vincent Veyron
On Tuesday, 27 September 2011 at 13:19 -0700, Rich Shepard wrote:
 On Tue, 27 Sep 2011, John R Pierce wrote:
 
  to use ODBC, you'd need to give it the DSN information, I don't know the
  exact format, but in general, its something like
 
  [PostgreSQL]
  Description = Postgres Database FRED
  Driver  = PostgreSQL

[...]

I'm not presented with an opportunity to offer any of this information
 anywhere.
 

These are the notes I took for an installation of unixODBC on Debian;
they are a few years old, because I came to the same conclusion as you
(see below):

apt-get install unixodbc

#install postgres driver
apt-get install odbc-postgresql
odbcinst -i -d -f /usr/share/psqlodbc/odbcinst.ini.template
#create data source
cat /usr/share/doc/odbc-postgresql/examples/odbc.ini.template >> /etc/odbc.ini
#edit pg_hba.conf 
# All IPv4 connections from localhost
#host  all  all  127.0.0.1  255.255.255.255  ident sameuser
host   all  all  127.0.0.1  255.255.255.255  trust
#test connectivity with isql
isql 'odbcname' username


Guess the most practical thing to do is give up trying to use LO as a
 front end. I'll just write INSTALL INTO ... statements in emacs then use
 psql to read them into the table.

I'm not sure what 'INSTALL INTO ... statements' are, but are you aware
of the very convenient 'M-x sql-postgres' in emacs? 

I got it to work with this in my .emacs file

(setq sql-user "yourusername")
(setq sql-database "yourdbname")
(setq sql-server "localhost")
(setq sql-postgres-options '("-Uyourusername" "-P" "pager=off"))


I use that and a mix of copy/paste/replace between emacs and a
spreadsheet to generate all the statements I need; 



-- 
Vincent Veyron
http://marica.fr/
Claims and litigation management software for legal departments




Re: [GENERAL] Download States and Capitals Database

2011-09-28 Thread Craig Ringer
On 28/09/11 13:43, Johan De Meersman wrote:
 
 - Original Message -
 From: Adarsh Sharma adarsh.sha...@orkash.com

 This Link gives the capitals of all countries but I need the states
 and their capitals in all these countries too..
 But I think this is not possible because very few countries are
 divided into states and some into cantons, county etc.
 
 Yep. The only one I can think of off the top of my head is the USA, although 
 I can't exactly claim accurate geopolitical knowledge about the whole world :)

Australia has states, as does India.

I won't be shocked to hear that there's somewhere with more than two
levels, e.g. country-state-province-provincial capital.

BTW, Adarsh, you seem to have cross-posted to the MySQL mailing list.
Cross posting is rude enough when you're cross-posting to several Pg
lists, but intermingling mysql and Pg mailing list replies is just going
to get messy. PLEASE DO NOT POST MESSAGES TO MORE THAN ONE MAILING LIST
IN THE SAME MESSAGE.

--
Craig Ringer



Re: [GENERAL] PostgreSQL recovery when lost some file in data\global

2011-09-28 Thread tuanhoanganh
I can start PostgreSQL, but there are errors:

2011-09-28 15:09:37 ICT LOG:  database system was interrupted; last known up
at 2011-09-26 08:03:39 ICT
2011-09-28 15:09:37 ICT LOG:  database system was not properly shut down;
automatic recovery in progress
2011-09-28 15:09:37 ICT LOG:  consistent recovery state reached at 0/7652700
2011-09-28 15:09:37 ICT LOG:  record with zero length at 0/7652700
2011-09-28 15:09:37 ICT LOG:  redo is not required
2011-09-28 15:09:37 ICT LOG:  database system is ready to accept connections
2011-09-28 15:09:38 ICT LOG:  autovacuum launcher started
2011-09-28 15:09:38 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:39 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:40 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:41 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:42 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:43 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:44 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:45 ICT ERROR:  could not open file global/11867: No such
file or directory
2011-09-28 15:09:46 ICT ERROR:  could not open file global/11867: No such
file or directory

Maybe all files in the base directory are OK; PostgreSQL only lost some
files in global.
Please help me. Sorry for my English.

Tuan Hoang Anh

On Tue, Sep 27, 2011 at 4:55 PM, Venkat Balaji venkat.bal...@verse.in wrote:

 Hi Tuan Hoang Anh,

 Are you able to bring up the cluster ??

 Please let us know what problem you are facing.

 Thanks
 Venkat


 On Tue, Sep 27, 2011 at 12:08 PM, tuanhoanganh hatua...@gmail.com wrote:

 I am running PostgreSQL 9.0.1 32bit on Windows 2003. Last night my disk
 had some problems and I lost some files in data\global.
 Is there any way to recover PostgreSQL?

 Thanks in advance. Sorry for my English.

 Tuan Hoang Anh





Re: [GENERAL] bytea columns and large values

2011-09-28 Thread David North

On 28/09/11 01:50, Craig Ringer wrote:

On 09/28/2011 01:01 AM, David North wrote:


testdb=# select * from problem_table;
ERROR: invalid memory alloc request size 2003676411


What Pg version are you using?

On which platform?

32-bit or 64-bit OS? If 64-bit, with a 32-bit or 64-bit build of 
PostgreSQL?


psql (PostgreSQL) 9.0.4

64 bit fedora:

Fedora release 14 (Laughlin)
Linux mybox 2.6.35.14-95.fc14.x86_64 #1 SMP Tue Aug 16 21:01:58 UTC 2011 
x86_64 x86_64 x86_64 GNU/Linux


I presume my build of PostgreSQL is x64 - ldd `which psql` spits out 
lots of references to lib64.


David


--
David North, Software Developer, CoreFiling Limited
http://www.corefiling.com
Phone: +44-1865-203192




Re: [GENERAL] Download States and Capitals Database

2011-09-28 Thread Adarsh Sharma

Craig Ringer wrote:

On 28/09/11 13:43, Johan De Meersman wrote:
  

- Original Message -


From: Adarsh Sharma adarsh.sha...@orkash.com

This Link gives the capitals of all countries but I need the states
and their capitals in all these countries too..
But I think this is not possible because very few countries are
divided into states and some into cantons, county etc.
  

Yep. The only one I can think of off the top of my head is the USA, although I 
can't exactly claim accurate geopolitical knowledge about the whole world :)



Australia has states, as does India.

I won't be shocked to hear that there's somewhere with more than two
levels, eg country-state-province-provincial capital .

BTW, Adarsh, you seem to have cross-posted to the MySQL mailing list.
Cross posting is rude enough when you're cross-posting to several Pg
lists, but intermingling mysql and Pg mailing list replies is just going
to get messy. PLEASE DO NOT POST MESSAGES TO MORE THAN ONE MAILING LIST
IN THE SAME MESSAGE.
  


Really, my apologies for the mistake; I feel sorry for that.
I will take care about this in the future.


Thanks
  




[GENERAL] tubles matching

2011-09-28 Thread salah jubeh


Hello,

I have two views that contain identical column names, order and types,
except for the primary keys. I want to match these two views - return the
pk pair of the rows which match between these views - by comparing all the
column values. I want to write a pl/pgsql function to do this job by
iterating through all the columns and comparing the values.

Is there another way to do that?

Thanks in advance 


Re: [GENERAL] tubles matching

2011-09-28 Thread Alban Hertroys
On 28 September 2011 13:19, salah jubeh s_ju...@yahoo.com wrote:

 Hello,

 I have two views both contain  identical column names , order and types
 except the primary keys. I want to match these two views - return the pk
 pair  of the rows which match from these views - by comparing all the column
 values. I want to write a pgplsql function to do this Job by iterating
 through all the coloumns and compare the values.

 Is there another way to do that ?

SELECT a.pk1, a.pk2 FROM view1 AS a INNER JOIN view2 AS b ON (a.pk1 =
b.pk1 AND a.pk2 = b.pk2 AND a.col1 = b.col1 AND a.col2 = b.col2);
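
One caveat worth noting: plain equality never matches NULLs, so rows where a compared column is NULL in both views would be silently dropped. A NULL-safe sketch (column names still illustrative) could use IS NOT DISTINCT FROM:

```sql
-- IS NOT DISTINCT FROM treats two NULLs as equal, unlike '='.
SELECT a.pk, b.pk
  FROM view1 AS a
  INNER JOIN view2 AS b
          ON  a.col1 IS NOT DISTINCT FROM b.col1
          AND a.col2 IS NOT DISTINCT FROM b.col2;
```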

-- 
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.



[GENERAL] Help needed in Search

2011-09-28 Thread Siva Palanisamy
Hi All,

I am trying to retrieve contact names based on the keyed search string. It
performs well for the English alphabets but behaves strangely for special
chars such as _, /, \, %

My query in the function is similar to

SELECT contact_name FROM contacts WHERE LOWER(contact_name) LIKE 
LOWER('_McDonald%') ORDER BY LOWER(contact_name) ASC LIMIT 1;

It looks like, during searching, it retrieves all the contact names instead
of the desired ones. The same bizarre behavior happens for the other special
chars mentioned above. I need to support these. How do I educate Postgres to
handle these chars? Please guide me.
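
One likely cause (an assumption, since the full query isn't shown): '_' and '%' are LIKE wildcards, so a leading '_' matches any single character rather than a literal underscore. Escaping them makes LIKE treat them literally:

```sql
-- ESCAPE declares the escape character; '\_' then matches a literal
-- underscore. Depending on standard_conforming_strings, the backslash
-- may need to be doubled inside the string literal.
SELECT contact_name
  FROM contacts
 WHERE LOWER(contact_name) LIKE LOWER('\_McDonald%') ESCAPE '\'
 ORDER BY LOWER(contact_name) ASC
 LIMIT 1;
```

User-supplied search strings would need the same escaping applied to every '_', '%' and '\' before being interpolated into the pattern.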

Thanks and Regards,
Siva.


::DISCLAIMER::
---

The contents of this e-mail and any attachment(s) are confidential and intended 
for the named recipient(s) only.
It shall not attach any liability on the originator or HCL or its affiliates. 
Any views or opinions presented in
this email are solely those of the author and may not necessarily reflect the 
opinions of HCL or its affiliates.
Any form of reproduction, dissemination, copying, disclosure, modification, 
distribution and / or publication of
this message without the prior written consent of the author of this e-mail is 
strictly prohibited. If you have
received this email in error please delete it and notify the sender 
immediately. Before opening any mail and
attachments please check them for viruses and defect.

---



Re: [GENERAL] Quick-and-Dirty Data Entry with LibreOffice3?

2011-09-28 Thread Rich Shepard

On Wed, 28 Sep 2011, Vincent Veyron wrote:


I'm not sure what 'INSTALL INTO ... statements' are, but are you aware
of the very convenient 'M-x sql-postgres' in emacs?


Vincent,

  I have a SQL major mode for emacs. Don't know that it's specific to
postgres but it is automatically invoked when I open a buffer with a .sql
file name. I get syntax-related colors; perhaps it does more of which I am
unaware.


I got it to work with this in my .emacs file

(setq sql-user "yourusername")
(setq sql-database "yourdbname")
(setq sql-server "localhost")
(setq sql-postgres-options '("-Uyourusername" "-P" "pager=off"))


  I'll look for sql-postgres if it will allow me to add rows to an existing
table.

Many thanks,

Rich



Re: [GENERAL] live metadata changes v8.3.4

2011-09-28 Thread David Fetter
On Tue, Sep 27, 2011 at 01:51:42PM -0700, Gauthier, Dave wrote:
 Hi:
 
 How does one make a metadata change to a DB that's actively being
 used.  Specifically, I want to drop a view, drop some columns from a
 table that's used in the view, recreate the view without those
 columns.
 
 In the past, I've resorted to connecting as a super user, running
 select procpid from pg_stat_activity... then pg_ctl kill ABRT
 procpid.  This would create a window where I could get in and make
 the change.  But it also created some angry users whose processes got
 killed.

You have the choice between taking those users offline and not doing
the change.
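
One middle ground (a sketch with made-up object names): since DDL is transactional in PostgreSQL, the whole change can go in a single transaction, which waits for the necessary locks instead of requiring killed sessions. Long-running transactions already holding locks on the table will still block it, though.

```sql
BEGIN;
DROP VIEW some_view;                           -- blocks until it can lock
ALTER TABLE some_table DROP COLUMN old_col_a;
ALTER TABLE some_table DROP COLUMN old_col_b;
CREATE VIEW some_view AS
    SELECT id, kept_col
      FROM some_table;
COMMIT;
```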

 V8.3.4 on linux.

Upgrade to 8.3.16 immediately, if not sooner.  Oh, and start planning
the 9.1 migration, too.  December of 2012 is closer than you think. :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-28 Thread David Fetter
On Tue, Sep 27, 2011 at 04:52:08PM -0300, Diego Augusto Molina wrote:
 2011/9/27, Diego Augusto Molina diegoaugustomol...@gmail.com:
  Honestly, I don't remember why I used triggers instead of rules in the
  audit and audet tables.
 
 I remember now, that's because in my case, operations over tuples are
 done very lightly (one or two in the same statement at a time). So, for
 a case such as this, rules end up being more expensive than triggers
 (right?).

There's an even better reason not to use rules: they're going away in
a not too distant version of PostgreSQL.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-28 Thread Diego Augusto Molina
2011/9/27, Merlin Moncure mmonc...@gmail.com:
 *) when posting schema definitions, particularly in email format, try
 not to use dumped definitions from pg_dump or pgadmin. This creates a
 lot of noise in the script that detracts from what you are trying to
 do.   Also an attached file would probably have been more appropriate.

Right! I'm sending it attached and from the source files instead of
the pgAdminIII dump. Well, that'll be the next mail, I don't have the
files right now.

 *) using smallint key for client_inet is really dubious.  why not just
 use the inet itself?

Sure, this has two reasons:

1) I wanted the audit table to be as narrow as possible. Believe me, I
migrated the tens of millions of tuples from the previous (slow,
inefficient, extremely difficult to maintain) logging system on a test
server just to try things out: the narrower the table, the better it
performs with searches. And I mean it! You wouldn't imagine. I don't
know what I did with the annotated results, but I will search for them
to share that.

2) I put many things outside the audit table (like the table, schema
and field names too); that makes it narrower but also makes it very
easy to see all values without querying the audit table: I can see in
a very small table all inets from clients (and you could gather more
info if you would like to). Note that for us most of the accesses to
the database come from the web server, which implements a new version
of a big application, so it would be a pity to allocate 5 extra bytes
(7 bytes for IPv4, 2 for smallint) just to see mostly the same IP
address. So, why bother logging the IP at all? Well, besides adding
completeness, it allows us to see whether they were using the new
application or the old one, which accessed the database server
directly from the client's computer.

Other fields, namely client_port and pid, aren't mapped out to other
tables because they do not increase the table width too much and
because I wouldn't analyze those contents very often: like never, but
some day there may be some kind of weird attack that needs to be
analyzed with this data.

 *) what is the audet table for?  Are you truly storing a record for
 every field of every audited table?  This will be incredibly
 inefficient, especially for large, wide tables.

See the answer about the criteria.

 *) surely, creating a table called 'table' is not a good idea.

Ok, name it tables then.

 *) this approach obviously is a lot more complicated than hstore.
 however, for 8.4 and down, hstore won't work. but, what about just
 storing the record as text?

Third time lucky! see the next...

 *) I can't quite follow the perl criterion steps -- what is happening
 there?  What are the loops doing?

We have some very wide tables (like 20 or more columns). We only log
the values of two kinds of columns: i) those which make up the primary
key of the table (which helps to track down the alteration); and ii)
those whose values change in the event. Note that columns in group i)
can also be in group ii).
This carries the complexity of the criteria, which are meant to
determine the primary key of the table at any cost. Each failing
criterion makes the following one take its place. These are the
criteria I could think of:
1) If we got parameters, _check_ them and consider each of them as one
column of the primary key. This is the cheapest and almost always the
way through. We really have to check, because if there is, e.g., some
typo, the whole transaction outside the trigger would fail
unconditionally, and we want this logging system to interfere as
little as possible. A little less performance in exchange for some
more stability.
2) Search the system catalogs for a primary key constraint.
3) Search the system catalogs for the unique constraint which has the
fewest columns (in fact, I think it should be the narrowest unique
constraint).
4) If the table has OIDs, use that and emit a warning (that's never a
real pk, unless you make an external unique index, which I don't have
intentions to check right now).
5) The else (or default) case is to log every column, emitting a
warning (Really guys, use primary keys! ;).
We wouldn't bear this complexity every time but only once: if
criterion 1) fails, after determining the primary key we should
execute a string which drops this same trigger and re-creates it,
passing it the names of the columns which were determined to be the pk
so that the next time we don't go further than 1). This works, I tried
it out some time ago but never made the change (oops!). I mean,
dropping the trigger from the trigger itself (the docs say that ALTER
TRIGGER can only rename it).
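
For reference, the catalog lookup in criterion 2) can be done with a query along these lines (a sketch, not necessarily the exact query used in the trigger):

```sql
-- Column names of a table's primary key, from the system catalogs.
SELECT a.attname
  FROM pg_index i
  JOIN pg_attribute a
    ON a.attrelid = i.indrelid
   AND a.attnum   = ANY (i.indkey)
 WHERE i.indrelid = 'public.audited_table'::regclass  -- illustrative name
   AND i.indisprimary;
```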

For my case, all this headache was needed: it was specified as one of
the requirements of the logging system that every tuple should be
trackable. Using a recursive WITH query, it is possible to track
down the changes to any single tuple in the audited tables (or make a
function, whatever). Fortunately, they never specified a 

Rules going away (was: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers)

2011-09-28 Thread Andrew Sullivan
On Wed, Sep 28, 2011 at 06:20:04AM -0700, David Fetter wrote:
 
 There's an even better reason not to use rules: they're going away in
 a not too distant version of PostgreSQL.

Really?  How?  I thought views were done using rules under the hood?

Also, it'd be awfully nice if, in case rules are going away, the
documentation actually said prominently in the rules section, "Rules
are deprecated and are planned to be removed at some point in the
future."  Right now, anyone coming to Postgres for the first time
could easily understand the manual to say that Postgres has this cool
feature on which they can rely.

Best,

A

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: Rules going away (was: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers)

2011-09-28 Thread Tom Lane
Andrew Sullivan a...@crankycanuck.ca writes:
 On Wed, Sep 28, 2011 at 06:20:04AM -0700, David Fetter wrote:
 There's an even better reason not to use rules: they're going away in
 a not too distant version of PostgreSQL.

 Really?  How?  I thought views were done using rules under the hood?

 Also, it'd be awfully nice if, in case rules are going away, the
 documentation actually said prominently in the rules section, Rules
 are deprecated and are planned to be removed at some point in the
 future.  Right now, anyone coming to Postgres for the first time
 could easily understand the manual to say that Postgres has this cool
 feature on which they can rely.

I think the true state of affairs is this: rules have a lot of
surprising behaviors, and if we could think of something that works more
straightforwardly, we'd love to replace them.  But I think we'd have to
have the something in place before we consider deprecating rules.
At the moment we don't even have a glimmer of a design, so David's
statement is many years premature.

regards, tom lane



Re: Rules going away (was: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers)

2011-09-28 Thread Andrew Sullivan
On Wed, Sep 28, 2011 at 10:34:32AM -0400, Tom Lane wrote:
 I think the true state of affairs is this: rules have a lot of
 surprising behaviors, and if we could think of something that works more
 straightforwardly, we'd love to replace them.

Oh.  Well, _that's_ not news :-)  

-- 
Andrew Sullivan
a...@crankycanuck.ca



Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-28 Thread Merlin Moncure
On Wed, Sep 28, 2011 at 8:20 AM, Diego Augusto Molina
diegoaugustomol...@gmail.com wrote:
 [...]

Re: [GENERAL] tubles matching

2011-09-28 Thread Chris Travers
Is something like this what you are trying to do?

somedb=# create table a (a int, b text, c date);
CREATE TABLE
somedb=# create table b (a int, b text, c date);
CREATE TABLE
somedb=# select * from a join b using (a, b, c);
 a | b | c
---+---+---
(0 rows)

somedb=# insert into a values (1, 'test', now());
INSERT 0 1
somedb=# insert into b values (1, 'test', now());
INSERT 0 1
somedb=# insert into b values (2, 'test', now());
INSERT 0 1
somedb=# select * from a join b using (a, b, c);
 a |  b   |     c
---+------+------------
 1 | test | 2011-09-28
(1 row)

Best wishes,
Chris Travers



Re: [GENERAL] bytea columns and large values

2011-09-28 Thread Merlin Moncure
On Wed, Sep 28, 2011 at 3:28 AM, David North d...@corefiling.co.uk wrote:
 On 28/09/11 01:50, Craig Ringer wrote:

 On 09/28/2011 01:01 AM, David North wrote:

 testdb=# select * from problem_table;
 ERROR: invalid memory alloc request size 2003676411

 What Pg version are you using?

 On which platform?

 32-bit or 64-bit OS? If 64-bit, with a 32-bit or 64-bit build of
 PostgreSQL?

 psql (PostgreSQL) 9.0.4

 64 bit fedora:

 Fedora release 14 (Laughlin)
 Linux mybox 2.6.35.14-95.fc14.x86_64 #1 SMP Tue Aug 16 21:01:58 UTC 2011
 x86_64 x86_64 x86_64 GNU/Linux

 I presume my build of PostgreSQL is x64 - ldd `which psql` spits out lots of
 references to lib64.

sending ~1GB bytea values is borderline crazy, and is completely crazy
unless you are absolutely sure the transmission is 100% binary.  I
don't know if JDBC sends/receives bytea as binary, but it may not.
If not, you might have better luck with the large object interface.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Chris Travers
On Wed, Sep 28, 2011 at 7:34 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 I think the true state of affairs is this: rules have a lot of
 surprising behaviors, and if we could think of something that works more
 straightforwardly, we'd love to replace them.  But I think we'd have to
 have the something in place before we consider deprecating rules.
 At the moment we don't even have a glimmer of a design, so David's
 statement is many years premature.


I guess my question is what it would take to eliminate some of the
very surprising behaviors.  It seems to me that the cases of
surprising behaviors are actually pretty limited (more so than table
inheritance, for example).  The only surprises I can think of come with
DO ALSO rules which involve expressions which should not be
re-executed (nextval being a good example).  DO INSTEAD rules work
extremely well, IMO, and I can't think of any surprises there which
are particularly easy to get bitten by.  It
seems to me that a lot of problems with DO ALSO rules are issues which
happen when lexical ambiguity hits implementation (do you mean DO ALSO
nextval() again or take the value from the previous nextval() and feed
it into this rule?).
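The multiple-evaluation surprise described above can be reproduced in a few lines; this is a minimal sketch with invented table and sequence names:

```sql
-- Hypothetical objects illustrating the DO ALSO nextval surprise.
CREATE SEQUENCE s;
CREATE TABLE t1 (id int);
CREATE TABLE t2 (id int);

-- Copy every insert on t1 into t2 as well.
CREATE RULE t1_copy AS ON INSERT TO t1
    DO ALSO INSERT INTO t2 VALUES (NEW.id);

-- NEW.id is macro-expanded into the rewritten query, so nextval('s')
-- is evaluated once for the insert into t1 and again for t2: the two
-- tables end up holding different ids.
INSERT INTO t1 VALUES (nextval('s'));
```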

Instead of considering deprecating and replacing rules with something
else, I guess I am wondering if the issues are fundamental or issues of
implementation.  It seems to me we might get more mileage out of
pondering a Rules 2.0 approach, discussing the specifications, etc.,
than we would in looking for an alternative.

I figure if we start this discussion then folks may have a basis for
considering alternatives down the line.  If we never discuss it
however, then folks who want something more predictable and
straightforward have nothing to consider.

So I guess I'd start by asking the community a couple of questions:
1)  Are there surprising behaviors in DO INSTEAD rules?
2)  Should we assume that there are cases dependent on existing behaviors?

If it were up to me I would change the rules spec in a couple of
clearly defined ways and then look at how to tweak what we have to
implement those changes.  In particular I would:

1)  Redefine NEW so that it is post-initial-expression evaluation and
therefore remains constant at the input of the rule's query plan.
2)  Add a new IN tuple which is pre-initial-expression evaluation and
therefore does not stay constant as a guarantee.

So for example, suppose I have the following tables which are used to
stage data coming into an accounting system.

CREATE TABLE my_transactions1 (id int not null unique default
nextval('my_transactions1_id_seq'),
post_on date not null,
credit_account int not null references credit_accounts(id),
source_id text not null unique,
total_amount numeric);

CREATE TABLE my_transactions2 (id int not null unique default
nextval('my_transactions1_id_seq'),
post_on date not null,
credit_account int not null references credit_accounts(id),
source_id text not null unique,
total_amount numeric);

If I:
CREATE RULE insert_deduction AS ON INSERT TO my_transactions1
DO ALSO INSERT INTO my_transactions2 (id, post_on, credit_account,
source_id, total_amount)
VALUES (IN.id, NEW.post_on, get_deduction_account(NEW.credit_account),
get_deduction_source(NEW.source_id), NEW.total_amount * -1);

Then I would expect nextval() to be executed twice, while NEW would
behave as it does in triggers.

I don't know how feasible it is to implement such a thing, and
certainly it would break backwards compatibility for at least some
users.  But I don't think it would be any worse than outright
replacing the rules system.

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] tubles matching

2011-09-28 Thread salah jubeh


Thanks Chris, this solution is one alternative, but it will not work in my app 
because the join condition in your example is defined using all the fields; in 
my case the join condition is unknown. If a row in the first view is a subset 
of a row in the second view, that means there is a match. 
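One possible workaround is to discover the shared columns at runtime and build the join dynamically. A sketch, with invented view and pk names (`view_a`/`a_id`, `view_b`/`b_id`); `format()` needs 9.1, on older versions concatenate with `quote_ident()` instead:

```sql
CREATE OR REPLACE FUNCTION match_rows()
RETURNS TABLE (a_pk int, b_pk int) AS $$
DECLARE
    cols text;
BEGIN
    -- Collect every column the two views share, except the pk column.
    SELECT string_agg(quote_ident(column_name), ', ') INTO cols
      FROM information_schema.columns
     WHERE table_name = 'view_a'
       AND column_name <> 'a_id';

    -- Join the views on all of those columns at once and return
    -- the pk pairs of the rows that match.
    RETURN QUERY EXECUTE format(
        'SELECT a.a_id, b.b_id FROM view_a a JOIN view_b b USING (%s)',
        cols);
END;
$$ LANGUAGE plpgsql;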

Regards


From: Chris Travers chris.trav...@gmail.com
To: 
Cc: salah jubeh s_ju...@yahoo.com; pgsql pgsql-general@postgresql.org
Sent: Wednesday, September 28, 2011 5:09 PM
Subject: Re: [GENERAL] tubles matching

Is something like this what you are trying to do?

somedb=# create table a (a int, b text, c date);
CREATE TABLE
somedb=# create table b (a int, b text, c date);
CREATE TABLE
somedb=# select * from a join b using (a, b, c);
 a | b | c
---+---+---
(0 rows)

somedb=# insert into a values (1, 'test', now());
INSERT 0 1
somedb=# insert into b values (1, 'test', now());
INSERT 0 1
somedb=# insert into b values (2, 'test', now());
INSERT 0 1
somedb=# select * from a join b using (a, b, c);
 a |  b   |     c
---+------+------------
 1 | test | 2011-09-28
(1 row)

Best wishes,
Chris Travers

Re: [GENERAL] What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Tom Lane
Chris Travers chris.trav...@gmail.com writes:
 On Wed, Sep 28, 2011 at 7:34 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 I think the true state of affairs is this: rules have a lot of
 surprising behaviors, and if we could think of something that works more
 straightforwardly, we'd love to replace them.

 I guess my question is what it would take to eliminate some of the
 very surprising behaviors.

You're certainly right that unexpected multiple evaluations of volatile
expressions is the first thing that bites people.  (I don't believe
that's restricted to DO ALSO vs INSTEAD though.)  I think there are
also some fairly serious performance issues for large tables, stemming
from the rule system's tendency to create large joins under-the-hood for
any rewritten UPDATE or DELETE.  Not sure what else.

 Instead of considering deprecating and replacing rules with something
 else, I guess I am wondering if the issues are fundamental or issues of
 implementation.  It seems to me we might get more mileage out of
 pondering a Rules 2.0 approach, discussing the specifications, etc.,
 than we would in looking for an alternative.

Maybe.  The advantage of something-thats-not-a-rule is that it would
dodge all issues of backwards compatibility, since we could just leave
the rule system behaving as-is until we were ready to remove it
altogether.  If we try to tweak the semantics of rules then we're likely
to break things for people who are using them now.  But having said
that, it's not unreasonable to think about it and try to scope out
exactly what we might tweak.

 [ proposal involving a constant NEW tuple ]

Not sure this specific proposal makes any sense at all.  IMO the only
real advantage that rules have over triggers is that they work on a
set-operation basis not a tuple-by-tuple basis.  Don't see how to
preserve that characteristic while redefining NEW as a static tuple.
(IOW, the example you gave is the least interesting possible case.
Think about how an ON INSERT rule would rewrite an INSERT ... SELECT,
or in general how you'd rewrite cases that process many tuples.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-28 Thread Diego Augusto Molina
2011/9/28, Merlin Moncure mmonc...@gmail.com:
 Yup -- I get it now.  Well, one point about this is that it seems
 directed towards your personal requirements. This is a very 'heavy'
 audit system that will not be suitable for high performance
 transactional systems.  That said, it looks well thought out.  Storing
 only the changed fields is pretty clever, but I can't help but wonder
 if you're better off using arrays for that purpose:

 create type audit_field_t as (field text, old_value text, new_value text);

 and inside the audit table itself having
   fields audit_field_t,

 and, if at all possible, constructing the array of audit fields in a
 single expression.  This will be much more compact than one record per
 field -- normally, arrays in table definitions tend to be bad mojo but
 this is one case they could be useful.  Audit records are WORM, 'Write
 Once Read Maybe', so compactness is important.   Obviously, for 9.0+,
 I would be rigging a solution around hstore for an 'all sql' solution
 which is usually better if you can get away with it.

 merlin


Well that sounds pretty fair to me. But that flow would not allow me
to make partial indexes on primary key fields. As you can see in the
audet table, there's a column named is_pk which tells if that
column was considered a primary key at the moment of the logging.
Normally there's no indexes, but when I have to make some audits I do
the following:
1) Dump the audits.
2) Restore somewhere else.
3) Generate some indexes on: timestamp, schema|table, field|is_pk and
id (I think, I've got the procedure annotated too, but not here hehe).
This indexing is sometimes a pain, but even when added to the time it
takes to run one query it is really cheap. Making the indexes gets far
more necessary if you run more than one query (which is probably the
case).
I had considered the solution you're posting, but it would be a
_real_ pain to run a query with 'unnest's and 'array_agg's. Also, note
that some of these may not be available in versions of PostgreSQL
prior to 8.4 (I think), so if you're planning to track the tuple you
won't be able to do it cleanly (maybe using temp tables).

But! all those arguments above get beaten by the one you asserted: that
WORM thing. You are definitely right about that. Logging in the
majority of cases should be meaningful, light to run,
compact/compressed, and rotated so that it doesn't take up all your
space over time.

Having said that, I'm going to take your advice for the next version,
which I hope that also checks some TODO's in the list. When I get home
I'll send the current code attached and when I get some fresh air at
work I'll make the changes and post the new version.

Any other ideas for the new version? (get some previews in the TODO
list at the top of the perl trigger function in the attachment of the
next mail).

-- 
Diego Augusto Molina
diegoaugustomol...@gmail.com

ES: Por favor, evite adjuntar documentos de Microsoft Office. Serán
desestimados.
EN: Please, avoid attaching Microsoft Office documents. They shall be discarded.
LINK: http://www.gnu.org/philosophy/no-word-attachments.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Harald Fuchs
In article 4116.1317226...@sss.pgh.pa.us,
Tom Lane t...@sss.pgh.pa.us writes:

 Not sure this specific proposal makes any sense at all.  IMO the only
 real advantage that rules have over triggers is that they work on a
 set-operation basis not a tuple-by-tuple basis.

Isn't that what statement-level triggers are for, at least in other DB
systems?  How about telling PostgreSQL's statement-level triggers
something about the set of rows they affect?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Merlin Moncure
On Wed, Sep 28, 2011 at 11:46 AM, Harald Fuchs hari.fu...@gmail.com wrote:
 In article 4116.1317226...@sss.pgh.pa.us,
 Tom Lane t...@sss.pgh.pa.us writes:

 Not sure this specific proposal makes any sense at all.  IMO the only
 real advantage that rules have over triggers is that they work on a
 set-operation basis not a tuple-by-tuple basis.

 Isn't that what statement-level triggers are for, at least in other DB
 systems?  How about telling PostgreSQL's statement-level triggers
 something about the set of rows they affect?

in theory that would be nice, but they just don't work that way.  you
don't even have access to the SQL statement firing the trigger IIRC.
that said, with some thought you could work an 'after' trigger into a
set level operation, say, by rigging something around now().

now that we have view triggers (not that rules ever really worked for
updating views anyways), even notwithstanding the minor limitations of
triggers vs rules, I personally find the RULE feature to
be useless and dangerous.  I'd vote for immediately deprecating it
without hesitation.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] : Looking for a PostgreSQL book

2011-09-28 Thread Venkat Balaji
Hello Everyone,

I have been working on PostgreSQL for quite a while (2 yrs) now.

I have got PostgreSQL 9.0 High Performance book and quite excited to go
through it.

Please let me know any source where I can get more books on PG. I am
especially looking for books on PG internals, architecture, Backup &
Recovery, and HA.

Looking forward to the information.

Regards,
VB


[GENERAL] Re: What about improving the rules system we have, was Re: Rules going away

2011-09-28 Thread Chris Travers
First, thanks for your thoughtful reply.

On Wed, Sep 28, 2011 at 9:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:


 You're certainly right that unexpected multiple evaluations of volatile
 expressions is the first thing that bites people.  (I don't believe
 that's restricted to DO ALSO vs INSTEAD though.)

I am having trouble thinking of practical uses where this would be a
problem.  I may simply lack imagination though.

I think there are
 also some fairly serious performance issues for large tables, stemming
 from the rule system's tendency to create large joins under-the-hood for
 any rewritten UPDATE or DELETE.  Not sure what else.

I have run into amusing order or operations conditions on
delete/update rules, but those were all DO INSTEAD instead of DO ALSO.

 Maybe.  The advantage of something-thats-not-a-rule is that it would
 dodge all issues of backwards compatibility, since we could just leave
 the rule system behaving as-is until we were ready to remove it
 altogether.  If we try to tweak the semantics of rules then we're likely
 to break things for people who are using them now.  But having said
 that, it's not unreasonable to think about it and try to scope out
 exactly what we might tweak.

One option for a replacement would be to allow triggers on views,
provided that no actual insert or update occurs to the underlying
pseudotable.  Then rules could be deprecated for writing data.

 [ proposal involving a constant NEW tuple ]

 Not sure this specific proposal makes any sense at all.  IMO the only
 real advantage that rules have over triggers is that they work on a
 set-operation basis not a tuple-by-tuple basis.  Don't see how to
 preserve that characteristic while redefining NEW as a static tuple.
 (IOW, the example you gave is the least interesting possible case.
 Think about how an ON INSERT rule would rewrite an INSERT ... SELECT,
 or in general how you'd rewrite cases that process many tuples.)

It's more of a question of order of operations.  I guess I was trying
to start with a simple example.  In a more complex example, like
INSERT SELECT (or better yet, writable common table expressions)
you'd basically have three stages logically (none of which is
necessarily guaranteed to be there if it is not applicable to simpler
cases)

1)  Initial selection
2)  Insert tuple calculation, if applicable
3)  Actual tuple insert plan

So, suppose we have a totally unrealistic explanatory example:

CREATE TABLE a (id int);
CREATE TABLE b (id int);
CREATE TABLE c (a_id int, b_id int);
CREATE RULE a1 AS ON INSERT TO a DO ALSO INSERT INTO b values (in.id);
CREATE RULE a2 AS ON INSERT TO b DO ALSO INSERT INTO c values (new.id, in.id);
INSERT INTO a(id) select round(random() * 1000) from generate_series(1, 10);

Now, currently, something kinda funny happens with using NEW in this
example instead of IN:
in every row in c, a_id = b_id, but these values do not match a or b
tables.  In other words, for every insert, random() gets executed
three times and three different values get inserted into four columns.
In other words, we get random values which are local to each record
of each subquery, but not local to each value.

So currently I think this rewrites to something like:
INSERT INTO a (id) select round(random() * 1000) as randnum from
generate_series(1, 10);
INSERT INTO b (id) select round(random() * 1000) as randnum from
generate_series(1, 10);
INSERT INTO c (a_id, b_id) SELECT randnum, randnum from (select
round(random() * 1000) as randnum from generate_series(1, 10)) x;

In Pseudocode (back to differentiating in vs new here), I am
suggesting something like:

WITH new (randnum) AS (select round(random() * 1000) as randnum from
generate_series(1, 10))
DO
(INSERT INTO a(id) select randnum from new;
INSERT INTO b(id) select round(random() * 1000) from new;
INSERT INTO c(a_id, b_id) select  randnum, round(random() * 1000) from new;
);

Does this make sense?

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rules going away

2011-09-28 Thread Rob Sargent


On 09/28/2011 08:34 AM, Tom Lane wrote:
 Andrew Sullivan a...@crankycanuck.ca writes:
 On Wed, Sep 28, 2011 at 06:20:04AM -0700, David Fetter wrote:
 There's an even better reason not to use rules: they're going away in
 a not too distant version of PostgreSQL.
 Really?  How?  I thought views were done using rules under the hood?
 Also, it'd be awfully nice if, in case rules are going away, the
 documentation actually said prominently in the rules section, Rules
 are deprecated and are planned to be removed at some point in the
 future.  Right now, anyone coming to Postgres for the first time
 could easily understand the manual to say that Postgres has this cool
 feature on which they can rely.
 I think the true state of affairs is this: rules have a lot of
 surprising behaviors, and if we could think of something that works more
 straightforwardly, we'd love to replace them.  But I think we'd have to
 have the something in place before we consider deprecating rules.
 At the moment we don't even have a glimmer of a design, so David's
 statement is many years premature.

   regards, tom lane

Yoda: Damned by faint praise, these rules are

Would this future something more likely be a totally different concept
or a re-implementation?

Is there a list of gotchas w.r.t. rules?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] One small annoyance

2011-09-28 Thread Chris Travers
I don't know if this has been changed in 9.1 but in 9.0.5 we were
trying to set up streaming replication and kept getting the warning
about wal_level=minimal and hence hot standby not being possible.  It
was the same message as this (found in one of the emails I found when
researching the problem):

WARNING:  WAL was generated with wal_level=minimal, data may be missing
HINT:  This happens if you temporarily set wal_level=minimal without
taking a new base backup.
FATAL:  hot standby is not possible because wal_level was not set to
hot_standby on the master server

After a bunch of troubleshooting I discovered the problem:
primary_conninfo was not set in recovery.conf.  Apparently this
error will be thrown not only when wal_level is inadequate but also
when the standby has not been told how to connect to the primary.

I know this won't get changed in a stable branch, but it might save a
bunch of people significant stress if this condition had its own error
message, if this hasn't been done already for 9.1.
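For anyone hitting the same wall, a minimal recovery.conf for a 9.0-era streaming standby looks roughly like this; the host and user are placeholders:

```
# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```

Without the primary_conninfo line the standby can only replay archived WAL, which is what triggers the misleading wal_level complaint described above.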

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Solved] Generic logging system for pre-hstore using plperl triggers

2011-09-28 Thread Merlin Moncure
On Wed, Sep 28, 2011 at 11:45 AM, Diego Augusto Molina
diegoaugustomol...@gmail.com wrote:
 2011/9/28, Merlin Moncure mmonc...@gmail.com:
 Yup -- I get it now.  Well, one point about this is that it seems
 directed towards your personal requirements. This is a very 'heavy'
 audit system that will not be suitable for high performance
 transactional systems.  That said, it looks well thought out.  Storing
 only the changed fields is pretty clever, but I can't help but wonder
 if you're better off using arrays for that purpose:

 create type audit_field_t as (field text, old_value text, new_value text);

 and inside the audit table itself having
   fields audit_field_t,

 and, if at all possible, constructing the array of audit fields in a
 single expression.  This will be much more compact than one record per
 field -- normally, arrays in table definitions tend to be bad mojo but
 this is one case they could be useful.  Audit records are WORM, 'Write
 Once Read Maybe', so compactness is important.   Obviously, for 9.0+,
 I would be rigging a solution around hstore for an 'all sql' solution
 which is usually better if you can get away with it.

 merlin


 Well that sounds pretty fair to me. But that flow would not allow me
 to make partial indexes on primary key fields. As you can see in the
 audet table, there's a column named is_pk which tells if that
 column was considered a primary key at the moment of the logging.
 Normally there's no indexes, but when I have to make some audits I do
 the following:
 1) Dump the audits.
 2) Restore somewhere else.
 3) Generate some indexes on: timestamp, schema|table, field|is_pk and
 id (I think, I've got the procedure annotated too, but not here hehe).
 This indexing is a pain sometimes but even adding it to the time it
 takes to run one query it is really cheap. Making the indexes gets far
 more necessary if you run more than one query (which is probably the
 case).
 I had considered the solution you're posting, but it would get a
 _real_ pain to run a query with 'unnest's and 'array_agg's. Also, note
 that some of these may not be available in versions of PostgreSQL
 prior to 8.4 (I think), so if you're planning to track the tuple you
 won't be able to do it cleanly (maybe using temp tables).

I disagree.  unnest() and array_agg() (or, even better, array()
constructor syntax) are an absolute joy to work with and thinking in a
more functional way, which is usually the key to making things run
quickly.  Also both functions are trivial to emulate in userland for
compatibility.  Arrays of composites IIRC only go back to 8.3 so  that
would be a true stopper for any solution in that vein.
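As a sketch of the compact array-based layout being discussed (names are invented; arrays of composites need 8.3+, unnest() needs 8.4+):

```sql
CREATE TYPE audit_field_t AS (field text, old_value text, new_value text);

CREATE TABLE audit_log (
    logged_at  timestamptz DEFAULT now(),
    table_name text,
    fields     audit_field_t[]   -- one compact row per audited change
);

-- Write: collapse the changed fields into a single array value.
INSERT INTO audit_log (table_name, fields)
VALUES ('contacts',
        ARRAY[ROW('name',  'old', 'new')::audit_field_t,
              ROW('phone', '1',   '2')::audit_field_t]);

-- Read: expand the array back into one row per field.
SELECT logged_at, (f).field, (f).old_value, (f).new_value
  FROM (SELECT logged_at, unnest(fields) AS f FROM audit_log) s;
```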

As for the rest of it, I'd be looking to try and come up with an all
sql implementation.  Also you should give an honest comparison between
what you've come up with vs. this:
http://pgfoundry.org/projects/tablelog/.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] : Looking for a PostgreSQL book

2011-09-28 Thread Adam Cornett
The same publisher (Packt) has a book, *PostgreSQL 9 Administration Cookbook* by
Simon Riggs and Hannu Krosing, that is as useful as Greg's *High
Performance* book.

On Wed, Sep 28, 2011 at 1:14 PM, Venkat Balaji venkat.bal...@verse.in wrote:

 Hello Everyone,

 I have been working on PostgreSQL for quite a while (2 yrs) now.

 I have got PostgreSQL 9.0 High Performance book and quite excited to go
 through it.

 Please let me know any source where I can get more books on PG. I am
 especially looking for books on PG internals, architecture, Backup &
 Recovery, and HA.
 
 Looking forward to the information.

 Regards,
 VB




-- 
Adam Cornett
adam.corn...@gmail.com
(678) 296-1150


[GENERAL] Create Extension search path

2011-09-28 Thread Roger Niederland

On my Windows install of Postgres 9.1.0,

I have a search_path set in the config file.  This search path has 
several schemas defined.
Some of the databases within the cluster do not have all of the 
schemas listed in the search_path defined within them.


Trying to add pgcrypto via:
CREATE EXTENSION pgcrypto;
or
CREATE EXTENSION pgcrypto SCHEMA public;
fails with the error: invalid value for parameter "search_path".

To add the extension required me to change the search_path.
Is it required that every schema added to the search_path exist in all 
databases?
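For reference, one session-level workaround (not an answer to the underlying question) is to point search_path at a schema that does exist in the target database before creating the extension:

```sql
-- Use a search_path valid in this database just for this session.
SET search_path = public;
CREATE EXTENSION pgcrypto;
```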


Thanks,
Roger




Re: [GENERAL] Rules going away

2011-09-28 Thread Merlin Moncure
On Wed, Sep 28, 2011 at 10:53 AM, Rob Sargent robjsarg...@gmail.com wrote:


 On 09/28/2011 08:34 AM, Tom Lane wrote:
 Andrew Sullivan a...@crankycanuck.ca writes:
 On Wed, Sep 28, 2011 at 06:20:04AM -0700, David Fetter wrote:
 There's an even better reason not to use rules: they're going away in
 a not too distant version of PostgreSQL.
 Really?  How?  I thought views were done using rules under the hood?
 Also, it'd be awfully nice if, in case rules are going away, the
 documentation actually said prominently in the rules section, Rules
 are deprecated and are planned to be removed at some point in the
 future.  Right now, anyone coming to Postgres for the first time
 could easily understand the manual to say that Postgres has this cool
 feature on which they can rely.
 I think the true state of affairs is this: rules have a lot of
 surprising behaviors, and if we could think of something that works more
 straightforwardly, we'd love to replace them.  But I think we'd have to
 have the something in place before we consider deprecating rules.
 At the moment we don't even have a glimmer of a design, so David's
 statement is many years premature.

                       regards, tom lane

 Yoda: Damned by faint praise, these rules are

 Would this future something more likely be a totally different concept
 or a re-implementation?

 Is there a list of gotchas w.r.t. rules?

yes, and it is huge -- see the archives. note that views are based on
rules (SELECT rules) and there is nothing wrong there -- so it's not
quite correct to say they will be going away completely.

view triggers removed the #1 thing that most people wanted to do with
rules, namely updatable views.  the basic problem with rules is that
by hacking the sql you send to the server, you lose control over
various aspects of the statement that normally are fairly predictable.
 they are almost impossible to get working properly -- I've tried many
times.  aside from the multiple evaluation thing, you have bizarre
interactions with many sql features that came in later (like
RETURNING).

folks, don't use RULES! use triggers -- and as much as possible, keep
triggers simple, short, and to the point (simple validation, custom
RI, auditing/logging, etc).

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Help needed in Search

2011-09-28 Thread planas
On Wed, 2011-09-28 at 12:33 +0530, Siva Palanisamy wrote: 

 Hi All,
 
  
 
 I am trying to retrieve contact names based on the keyed search
 string. It performs well for English alphabets but behaves
 strangely for special chars such as _, /, \, %
 
 

The % character is used by SQL as the wild card for searching. To search
for Mecklenburg county (in North Carolina) from a list of US counties
you might try meck% to find all the counties that start with meck
(%meck for those that end in meck, and %meck% for any that contain meck).
The use of % is in the SQL standard and is used by all the all the SQL
dialects I am familiar with.

 
 My query in the function is similar to 
 
  
 
 SELECT contact_name FROM contacts WHERE LOWER(contact_name) LIKE
 LOWER('_McDonald%') ORDER BY LOWER(contact_name) ASC LIMIT 1;
 

Note that in LIKE patterns the underscore (_) is itself a wildcard
matching any single character, so '_McDonald%' matches any value whose
characters after the first start with mcdonald, not 'james mcdonald'
('%mcdonald%' would match that). The underscore (_) is not the same as
a space. To search with a leading space try '% mcdonald%'.
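To match the wildcard characters literally they can be escaped. A sketch (depending on the standard_conforming_strings setting, the backslash inside the string literal may need to be doubled):

```sql
-- Match names that really start with an underscore character.
SELECT contact_name
  FROM contacts
 WHERE LOWER(contact_name) LIKE LOWER('\_mcdonald%') ESCAPE '\'
 ORDER BY LOWER(contact_name) ASC
 LIMIT 1;
```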

 
 It looks like, during searching, it retrieves all the contact names
 instead of the desired. The similar bizarre happens for the above
 mentioned special chars. I need to support these. How do I educate
 postgres to consider these chars? Please guide me.
 
  
 
 Thanks and Regards,
 
 Siva.
 
 
 
 



-- 
Jay Lozier
jsloz...@gmail.com


Re: [GENERAL] tubles matching

2011-09-28 Thread Michael Glaesemann

On Sep 28, 2011, at 7:19, salah jubeh wrote:

 
 
 Hello,
 
 I have two views that contain identical column names, order, and types, 
 except for the primary keys. I want to match these two views - return the 
 pk pairs of the rows which match - by comparing all the column values. I 
 want to write a plpgsql function to do this job by iterating through all 
 the columns and comparing the values.  
 
 Is there another way to do that ?

SELECT * FROM a NATURAL JOIN b;

Michael Glaesemann
grzm seespotcode net




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rules going away

2011-09-28 Thread Ondrej Ivanič
Hi,

 folks, don't use RULES! use triggers -- and as much as possible, keep
 triggers simple, short, and to the point (simple validation, custom
 RI, auditing/logging, etc).

I like them :). 'DO INSTEAD' rules are great for partitioning so you
can insert (or update) to parent table and 'DO INSTEAD' rule takes
care about the rest.
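The pattern described here looks roughly like the classic constraint-exclusion partitioning setup; a sketch with invented names:

```sql
CREATE TABLE measurements      (logdate date, value int);
CREATE TABLE measurements_2011 (CHECK (logdate >= '2011-01-01'
                                   AND logdate <  '2012-01-01'))
    INHERITS (measurements);

-- Route inserts on the parent into the matching partition.
CREATE RULE measurements_insert_2011 AS
    ON INSERT TO measurements
    WHERE (NEW.logdate >= '2011-01-01' AND NEW.logdate < '2012-01-01')
    DO INSTEAD INSERT INTO measurements_2011 VALUES (NEW.*);
```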

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] stored procs

2011-09-28 Thread J.V.
Is it possible within a stored procedure to read all the tables in a 
schema into a list?


 From that list, for each table, is it possible to find the foreign 
keys in that table?


 From that list of foreign keys, is it possible to find out which field 
in which table each FK refers to?


I need to extract this meta-data for a project.

Regards,


J.V.



[GENERAL] stored proc

2011-09-28 Thread J.V.
Is it possible to group a bunch of methods and functions into a single 
file (stored procedures & functions) and have a main method that can be 
called to orchestrate the methods, pass in params, get back results, and 
log to a file?
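Yes: a single .sql file can hold several PL/pgSQL functions plus an orchestrating entry point, loaded with e.g. `psql -f procs.sql mydb`. A minimal sketch (all names are illustrative; RAISE NOTICE goes to the client and, depending on settings, the server log rather than an arbitrary file):

```sql
-- Helper: count rows in a table whose name is passed as a parameter.
CREATE OR REPLACE FUNCTION helper_count(p_table text)
RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- %I safely quotes the identifier to prevent SQL injection.
    EXECUTE format('SELECT count(*) FROM %I', p_table) INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

-- Orchestrator: calls the helper, logs, and returns a result.
CREATE OR REPLACE FUNCTION main(p_table text)
RETURNS text AS $$
DECLARE
    n bigint;
BEGIN
    n := helper_count(p_table);
    RAISE NOTICE 'table % has % rows', p_table, n;
    RETURN format('%s: %s rows', p_table, n);
END;
$$ LANGUAGE plpgsql;

-- Usage: SELECT main('my_table');
```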

I know this can be done with Oracle PL/SQL but a simple google on this 
does not show any examples.


Also where can I find a list of stored proc data structures (hash maps, 
arrays, lists of arrays) or anything else that would be useful.


thanks

J.V.



Re: [GENERAL] stored procs

2011-09-28 Thread John R Pierce

On 09/28/11 6:33 PM, J.V. wrote:
Is it possible within a stored procedure to read all the tables in a 
schema into a list?


From that list and for each table is it possible to find the foreign 
keys in that table?


From that list of foreign keys, is it possible to find out which field 
in which table the FK corresponds to?


I need to extract this meta-data for a project.



that metadata should all be available in the information_schema.
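For example, something along these lines lists each foreign-key column and the column it references (a sketch; adjust the schema name, and note that the simple three-way join can produce extra rows for multi-column foreign keys):

```sql
SELECT tc.table_name,
       kcu.column_name,
       ccu.table_name  AS foreign_table,
       ccu.column_name AS foreign_column
FROM   information_schema.table_constraints tc
JOIN   information_schema.key_column_usage kcu
       ON  tc.constraint_name = kcu.constraint_name
       AND tc.table_schema    = kcu.table_schema
JOIN   information_schema.constraint_column_usage ccu
       ON  tc.constraint_name = ccu.constraint_name
       AND tc.table_schema    = ccu.table_schema
WHERE  tc.constraint_type = 'FOREIGN KEY'
  AND  tc.table_schema    = 'public';
```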





--
john r pierce                      N 37, W 122
santa cruz ca                      mid-left coast




Re: [GENERAL] : Looking for a PostgreSQL book

2011-09-28 Thread Venkat Balaji
Thanks Adam !

Regards,
VB

On Thu, Sep 29, 2011 at 12:03 AM, Adam Cornett adam.corn...@gmail.com wrote:

 The same publisher (Packt) has a book, *PostgreSQL 9 Administration
 Cookbook* by Simon Riggs and Hannu Krosing, that is as useful as Greg's
 *High Performance* book.


 On Wed, Sep 28, 2011 at 1:14 PM, Venkat Balaji venkat.bal...@verse.in wrote:

 Hello Everyone,

 I have been working on PostgreSQL for quite a while (2 yrs) now.

 I have got PostgreSQL 9.0 High Performance book and quite excited to go
 through it.

 Please let me know any source where I can get more books on PG. I am
 especially looking for books on PG internals, architecture, Backup &
 Recovery, and HA.

 Looking forward to the information.

 Regards,
 VB




 --
 Adam Cornett
 adam.corn...@gmail.com
 (678) 296-1150