Re: [GENERAL] shared folder in Hot Standby

2011-04-04 Thread Jaime Casanova
On Mon, Apr 4, 2011 at 12:35 AM, Shoaib Mir shoaib...@gmail.com wrote:
 From my limited knowledge, I think we need a shared location where the master
 node puts the WAL files, and the slave nodes look at the same folder to get
 new WAL logs to replay. Now if we can't have that shared location, i.e. a
 scenario where the slaves and the master can't all see one shared location,
 how do we approach this?

No. While you can do that, it's optional...

You can set up a hot standby just by setting:

on master:
=== postgresql.conf ===
wal_level=hot_standby
archive_mode=on
archive_command='exit 0'
max_wal_senders = number of slaves or greater

on slave:
=== postgresql.conf ===
hot_standby=on

=== recovery.conf ===
standby_mode='on'
primary_conninfo = 'host=master_ip'
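Once both nodes are up, a couple of built-in functions (names as of the 9.0 series; they were renamed in later releases) let you sanity-check that streaming is working:

```sql
-- On the master: where the server is currently writing WAL
SELECT pg_current_xlog_location();

-- On the standby: confirm recovery mode and see how far WAL has arrived
SELECT pg_is_in_recovery();
SELECT pg_last_xlog_receive_location();
```

If the standby's receive location keeps tracking the master's current location, replication is streaming.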


or you can use repmgr to make all this easier:
http://projects.2ndquadrant.com/repmgr

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: PostgreSQL support and training

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-04 Thread Harald Armin Massa
my google-fu led me to the following recipes:

- create HTML documentation as single file, use Calibre to convert
- use downloadable HTML-documentation, convert via Calibre (minor
problems are reported, as in wrong order of sections)
- download PDF and convert via Calibre
- download PDF and put on kindle

(Approaches 1-3 use different conversion tools.)

So my question: has anyone found a best-practice solution for converting
the PostgreSQL documentation into a Kindle-friendly format? Or does anyone
even have an .azw file downloadable somewhere?

Best wishes,

Harald



-- 
Harald Armin Massa     www.2ndQuadrant.com
PostgreSQL Training, Services & Support

2ndQuadrant Deutschland GmbH
GF: Harald Armin Massa
Amtsgericht Stuttgart, HRB 736399



Re: [GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-04 Thread John R Pierce

On 04/04/11 1:47 AM, Harald Armin Massa wrote:

 my google-fu led me to the following recipes:

- create HTML documentation as single file, use Calibre to convert
- use downloadable HTML-documentation, convert via Calibre (minor
problems are reported, as in wrong order of sections)
- download PDF and convert via Calibre
- download PDF and put on kindle

 (Approaches 1-3 use different conversion tools.)

So my question: has anyone found a best practice solution to convert
 the PostgreSQL documentation into a Kindle-friendly format? Or has
even an .azw file downloadable somewhere?


if the Kindle can view PDFs, I dunno why you wouldn't just use those, unless 
it's a page-size issue.




Re: [GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-04 Thread Thom Brown
On 4 April 2011 09:52, John R Pierce pie...@hogranch.com wrote:
 On 04/04/11 1:47 AM, Harald Armin Massa wrote:

 my google-fu led me to the following recipes:

 - create HTML documentation as single file, use Calibre to convert
 - use downloadable HTML-documentation, convert via Calibre (minor
 problems are reported, as in wrong order of sections)
 - download PDF and convert via Calibre
 - download PDF and put on kindle

 (Approaches 1-3 use different conversion tools.)

 So my question: has anyone found a best practice solution to convert
 the PostgreSQL documentation into a Kindle-friendly format? Or has
 even an .azw file downloadable somewhere?

 if kindle can view PDFs, I dunno why you wouldn't use that, unless its a

The problem with PDFs is that they are fixed-layout: you have to zoom
in and scroll around the page to read everything on a Kindle.
An e-book format re-flows the content to match the e-book reader and
allows the user to adjust the text size.

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-04 Thread Jayadevan M
 
 So my question: has anyone found a best practice solution to convert
 the PostgreSQL documentation into a Kindle-friendly format? Or has
 even an .azw file downloadable somewhere?
 
 Best wishes,
 
 Harald
You could always email the PDF file to Amazon and get it converted to Kindle 
format, free of cost; it is not a good idea to try to read PDF files directly 
on a Kindle. You have to send the PDF file to kindleusername@free.kindle.com.
Regards,
Jayadevan






[GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread salah jubeh
Hello,

I am cleaning up a database: I have a list of unused tables, views and 
columns, and I want to drop these entities.

I have the following constraints:

* The database is live, which means I need to be able to revoke the 
changes as quickly as possible.
* I need to be able to restore a certain set of the deprecated tables, 
but not the whole database.
* I need to automate this job, because the number of objects I need to 
drop is more than 200.

I want to use the CASCADE option when dropping a table, column or view, 
which will cause the deletion of foreign key constraints and views when you 
drop tables and columns, and the deletion of other views when you drop a view 
(am I right?). I had a quick look at pg_dump and pg_restore, and there is 
the -t option which I think I can use for this task. However, I am a little bit 
afraid of the approach I am going to implement, for the following reasons.

For views, I do not think I will have a problem, because a view is stateless, 
i.e. its definition will not change.

Tables and columns can run into different scenarios. For example, 

suppose that table b depends on a, and let us say that other tables depend 
on b.


What will happen if:

1. I drop table a,
2. insert data into b and the other relations, and then
3. restore table a and its dependent (table b)?

Is there a general drop-and-restore strategy for my case? Also, what is your 
advice?

Regards

Re: [GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread John R Pierce

On 04/04/11 3:47 AM, salah jubeh wrote:
I am cleaning up a database and I have a list of unused tables, views 
and column and I want to drop these entities.


suppose that  table b  depends on a, and let us say that other  tables 
depends on b.



doesn't this dependency cancel the 'unused' part?





Re: [GENERAL] Table lock while adding a column and clients are logged in

2011-04-04 Thread Sven Haag
 Original-Nachricht 
 Datum: Sun, 03 Apr 2011 16:25:35 +0200
 Von: Thomas Kellerer spam_ea...@gmx.net
 An: pgsql-general@postgresql.org
 Betreff: Re: [GENERAL] Table lock while adding a column and clients are 
 logged in

 Sven Haag wrote on 03.04.2011 16:13:
 
   Original-Nachricht 
  Datum: Sun, 03 Apr 2011 15:37:17 +0200
  Von: Thomas Kellererspam_ea...@gmx.net
  An: pgsql-general@postgresql.org
  Betreff: Re: [GENERAL] Table lock while adding a column and clients are
 logged in
 
  Alban Hertroys wrote on 03.04.2011 11:17:
  On 2 Apr 2011, at 12:44, Thomas Kellerer wrote:
 
  Even after a plain SELECT you should issue a COMMIT (or ROLLBACK)
  to end the transaction that was implicitely started with the
  SELECT.
 
  Sorry, but you're wrong about that. A statement that implicitly
  starts a transaction also implicitly COMMITs it. Otherwise single
  UPDATE and INSERT statements outside of transaction blocks would not
  COMMIT, and they do.
 
  AFAIK this is only true if you are running in auto commit mode.
 
  If you have auto commit turned off, a SELECT statement will leave the
  current transaction as IDLE in transaction not IDLE which means it
 *will*
  hold a lock on the tables involved that will prevent an ALTER TABLE.
 
 
  well, as we are using the default setting here (according to the manual
 this is ON) this shouldn't be the case?!
 
 The client defines the default behaviour, so it's your application that
 controls this.
 
 Did you check that you have sessions that are show as IDLE in
 transaction in pg_stat_activity?
 
 Regards
 Thomas


hi thomas,

there are indeed "IDLE in transaction" sessions that have been running for 
hours. So I guess I have to commit all transactions explicitly, even if they 
contain only SELECT statements.

thanks a lot for all the help!
Sven Haag
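
For anyone chasing the same symptom, a query along these lines (column names as in the 8.x/9.0 pg_stat_activity; they changed in later releases) shows the offending sessions and how long their transactions have been open:

```sql
-- Sessions sitting in an open transaction with no active query
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;
```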




Re: [GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread salah jubeh
Most probably it does. I have analysed the dependencies for one level only, and 
I am planning to do the restore for one level as well. I.e., suppose that 
c depends on b, which depends on a: if I drop a and it turns out that a is used 
for some reason, I will restore a and b only, without c.

I want to do that for the following reason: if I restore the whole dependency 
tree, I might end up restoring the whole database, and the newly inserted data 
would be lost. This situation is actually rare, because most of the entities 
are empty; I just want to make sure.

 



 





From: John R Pierce pie...@hogranch.com
To: pgsql-general@postgresql.org
Sent: Mon, April 4, 2011 12:55:41 PM
Subject: Re: [GENERAL] Pg_restore and dump -- General question

On 04/04/11 3:47 AM, salah jubeh wrote:
 I am cleaning up a database and I have a list of unused tables, views and 
column and I want to drop these entities.
 
 suppose that  table b  depends on a, and let us say that other  tables 
 depends 
on b.


doesn't this dependency cancel the 'unused' part?





Re: [GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread Howard Cole

On 04/04/2011 11:47 AM, salah jubeh wrote:


What will happen if

1. dropped table a
2. insert data on b and the other relations
3. restore table a and it's dependency (table b).

Simple advice would be to create a script on an offline system for 
testing - when you are happy with the results - do it on the online 
system - after making a backup of course! Anything else would be suicidal.


Re: [GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread Scott Marlowe
On Mon, Apr 4, 2011 at 5:40 AM, Howard Cole howardn...@selestial.com wrote:
 On 04/04/2011 11:47 AM, salah jubeh wrote:

 What will happen if

 1. dropped table a
 2. insert data on b and the other relations
 3. restore table a and it's dependency (table b).

 Simple advice would be to create a script on an offline system for testing -
 when you are happy with the results - do it on the online system - after
 making a backup of course! Anything else would be suicidal.

Agreed. AND on the production system first take a backup and THEN run
the drop cascade inside a transaction in case it does crazy things you
didn't foresee.

begin;
drop object yada cascade;

then rollback if things get too scary.
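
Because DDL is transactional in PostgreSQL, the whole cleanup can be attempted speculatively and undone. A sketch (the object names are placeholders, not from the thread):

```sql
BEGIN;
-- CASCADE also removes dependent views and foreign key constraints
DROP TABLE IF EXISTS some_unused_table CASCADE;
DROP VIEW IF EXISTS some_unused_view CASCADE;
-- inspect the NOTICE output and run test queries here, then either:
ROLLBACK;   -- undo everything if the cascade was too aggressive
-- or COMMIT;
```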



[GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Howard Cole
I have recently transferred data from 8.4 to 9.0 and now only my 
superuser has read access to the large objects.


(The transfer was done using pg_dump version 9.0).

The large objects have all transferred; the problem is just the 
permissions. But how do I set up the permissions for the large objects?


Thanks,

Howard Cole
www.selestial.com



Re: [GENERAL] How to get index columns/dir/ord informations?

2011-04-04 Thread Raghavendra
 A pseudo-code example demonstrates it:

 select * from pg_index_columns where index_name = 'x2'

 Ordinal  ColName  IsAsc
 1        a        False
 2        b        True

 Does PGSQL have the same information?


AFAIK, you can pull that information from the 'indexdef' column of pg_indexes.

 select * from pg_indexes where tablename='a';
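
If parsing the textual indexdef is awkward, the per-column details (ordinal position, column expression, sort direction, uniqueness) can also be assembled from pg_index itself. An untested sketch using pg_get_indexdef() and the indoption array (bit 0 set means DESC):

```sql
SELECT i.relname                               AS index_name,
       k                                       AS ordinal,
       pg_get_indexdef(ix.indexrelid, k, true) AS column_def,
       (ix.indoption[k - 1] & 1) = 0           AS is_asc,
       ix.indisunique
FROM pg_index ix
JOIN pg_class i ON i.oid = ix.indexrelid,
     generate_series(1, ix.indnatts) AS k
WHERE i.relname = 'x2'
ORDER BY k;
```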

Best Regards,
Raghavendra
EnterpriseDB Corporation



  Thanks:
 dd




 2011.04.01. 18:01 keltezéssel, Raghavendra írta:

 Hi,

 Query to list tables and their associated indexes:

 SELECT indexrelid::regclass AS index, relid::regclass AS table
 FROM pg_stat_user_indexes JOIN pg_index USING (indexrelid)
 WHERE idx_scan < 100 AND indisunique IS FALSE;

 This query will list the constraints:

  SELECT relname FROM pg_class WHERE oid IN ( SELECT indexrelid FROM
 pg_index, pg_class WHERE pg_class.oid=pg_index.indrelid AND ( indisunique =
 't' OR indisprimary = 't' ) );

  To get the column order number, use this query.

 SELECT a.attname, a.attnum FROM pg_class c, pg_attribute a, pg_type t
 WHERE c.relname = 'vacc' AND a.attnum > 0 AND a.attrelid = c.oid AND
 a.atttypid = t.oid;

 Note: this query is for a particular table, 'vacc'.


  Best Regards,
 Raghavendra
 EnterpriseDB Corporation

  On Fri, Apr 1, 2011 at 8:54 PM, Durumdara durumd...@gmail.com wrote:

 Hi!

 I want to migrate some database to PG.
 I want to make an intelligent migrator that produces the list of SQL
 statements needed to get the same table structure in PG as in the source DB.

 I can get everything I need from the views about tables, except the indices.

 These indices do not contain the constraints; those elements I can
 already analyze.

 I found an SQL query that gets the index columns:


 select
 t.relname as table_name,
 i.relname as index_name,
 a.attname as column_name
 from
 pg_class t,
 pg_class i,
 pg_index ix,
 pg_attribute a
 where
 t.oid = ix.indrelid
 and i.oid = ix.indexrelid
 and a.attrelid = t.oid
 and a.attnum = ANY(ix.indkey)
 and t.relkind = 'r'
 and t.relname = 'a'
 and ix.indisunique = 'f'
 and ix.indisprimary = 'f'
 order by
 t.relname,
 i.relname;

 This can list the columns. But, what a pity, it does not contain:
 - whether the index is unique
 - the direction of the sort for each column
 - the ordinal number of each column

 That is everything I need in order to determine whether the needed index
 already exists or not.


 Please help me: how can I get this information?
 I don't want to drop the tables every time, if possible.

  Thanks:
 dd






[GENERAL] CHAR(11) - Replication Conference

2011-04-04 Thread Simon Riggs
CHAR(11), the conference on Clustering, High Availability and Replication,
will be held in Cambridge, UK on 11-12 July 2011.
http://www.char11.org/

Early bird bookings are available now by card or paypal only.

Call for Speakers is now open: please send your talk proposals to
spea...@char11.org by April 21. Speakers will be announced by May 3.
Selected speakers receive free accommodation and meals.

Look forward to seeing you there.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Andy Colson

On 4/4/2011 6:12 AM, Howard Cole wrote:

I have recently transferred data from 8.4 to 9.0 and now only my
superuser has read access to the large objects.

(The transfer was done using pg_dump version 9.0).

The large objects have all transferred, the problem is just the
permissions, - but how do I set up the permissions for the large objects?

Thanks,

Howard Cole
www.selestial.com



I had the same problem, and there is no GRANT ALL for large objects... there 
is the new DO though. That's what I used: a simple DO block to iterate and 
update the permissions.


-Andy



Re: [GENERAL] Merged Model for libpq

2011-04-04 Thread Merlin Moncure
On Sun, Apr 3, 2011 at 11:43 PM, Annamalai Gurusami
annamalai.gurus...@gmail.com wrote:
 On 2 April 2011 11:17, John R Pierce pie...@hogranch.com wrote:

 what you describe is neither postgres nor SQL

 perhaps you should look at a storage engine like BerkeleyDB

 I hope that not everybody dismisses this mail thread because of the
 above response.  We are deriving our product from pgsql.  And since we
 are customizing pgsql to our proprietary telecom products, we are
 using things that are not designed for that purpose.  For example, we
 are using SPI to come up with an embedded client.  I was basically
 trying to find out if there are better alternatives.  Have the pgsql
 development team thought about embedded clients and is SPI the way to
 go?

 What we are trying to achieve is that a single application can work as
 an ordinary client or an embedded client.  For example, if we
 implement libpq using SPI interface then any libpq client can behave
 as an ordinary client (using current libpq library) or as an embedded
 client (by making use of libpq over SPI - which we are implementing).

 I have no clue as to why you have recommended BerkeleyDB in this
 context!  What I have described is pgsql and the applications all use
 SQL queries.  If somethings are not clear and requires further
 elaboration from me, kindly let me know.  Providing inputs to extend
 pgsql in a proper well-defined way will help us to contribute back the
 feature to pgsql (if my company decides so and if pgsql needs it.)
 Even if the feature is not contributed back, if the pgsql dev team
 finds it a useful feature, anybody can implement it.

I'm not sure you grasped the ramification of my message upthread.
There is a lot of use for libpq (or libpq-ish) api in the backend to
execute queries.  Unfortunately, that api can not wrap the SPI
interface as it exists today.  The SPI interface is for writing
backend functions, not application code.  Those functions *must* be
called from the application layer, and *must* terminate within a
reasonable amount of time (think seconds).  I think you are looking in
the wrong place -- if you want to embed a libpq api in the backend,
perhaps you might want to look at wrapping the backend in standalone
mode.  This has issues that will prevent general use in an
application, but it's a start, and should give you an idea of what you
are up against.

A more involved project would be to look at modifying the postgresql
internals so that you could usefully embed code and run it with
explicit transaction control.  This is a pretty big task and would
likely end up as a complete stored procedure implementation.  If done
though, you could run in a more or less clientless way.

PostgreSQL today can not usefully operate without participation from a
client (although that client can be quite thin if you want it to be).
Having 100% of your application in SPI layer is *not* going to work.

merlin



Re: [GENERAL] Merged Model for libpq

2011-04-04 Thread Tom Lane
Annamalai Gurusami annamalai.gurus...@gmail.com writes:
 On 2 April 2011 11:17, John R Pierce pie...@hogranch.com wrote:
 what you describe is neither postgres nor SQL
 perhaps you should look at a storage engine like BerkeleyDB

 I hope that not everybody dismisses this mail thread because of the
 above response.  We are deriving our product from pgsql.  And since we
 are customizing pgsql to our proprietary telecom products, we are
 using things that are not designed for that purpose.  For example, we
 are using SPI to come up with an embedded client.  I was basically
 trying to find out if there are better alternatives.  Have the pgsql
 development team thought about embedded clients and is SPI the way to
 go?

I don't think you've entirely grasped the seriousness of that response.
The PG development team *has* thought about embedded scenarios, and
explicitly rejected them.  There is no interest at all here in that line
of development, and we are unlikely to even consider patches that might
make it easier.  We don't like the reliability implications of having
random client code in the same address space as the database code.
Moreover, the general trend of recent development has been towards
making the database more, not less, dependent on auxiliary processes
such as autovacuum and bgwriter.  There's no way to manage that in an
embedded scenario ... at least not without resorting to threads, which
is another thing that we are unprepared to support.

So really you should be looking at some other DBMS if you want an
embedded implementation.  It'd be nice if PG could be all things to all
people, but it can't; and this is one of the things it can't be.

regards, tom lane



Re: [GENERAL] Pg_restore and dump -- General question

2011-04-04 Thread salah jubeh


 


Thank you all, I will take that into account.
 





From: Scott Marlowe scott.marl...@gmail.com
To: Howard Cole howardn...@selestial.com
Cc: salah jubeh s_ju...@yahoo.com; pgsql-general@postgresql.org
Sent: Mon, April 4, 2011 1:44:28 PM
Subject: Re: [GENERAL] Pg_restore and dump -- General question

On Mon, Apr 4, 2011 at 5:40 AM, Howard Cole howardn...@selestial.com wrote:
 On 04/04/2011 11:47 AM, salah jubeh wrote:

 What will happen if

 1. dropped table a
 2. insert data on b and the other relations
 3. restore table a and it's dependency (table b).

 Simple advice would be to create a script on an offline system for testing -
 when you are happy with the results - do it on the online system - after
 making a backup of course! Anything else would be suicidal.

Agreed. AND on the production system first take a backup and THEN run
the drop cascade inside a transaction in case it does crazy things you
didn't foresee.

begin;
drop object yada cascade;

then rollback if things get too scary.


Re: [GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Adrian Klaver
On Monday, April 04, 2011 6:47:44 am Andy Colson wrote:
 On 4/4/2011 6:12 AM, Howard Cole wrote:
  I have recently transferred data from 8.4 to 9.0 and now only my
  superuser has read access to the large objects.
  
  (The transfer was done using pg_dump version 9.0).
  
  The large objects have all transferred, the problem is just the
  permissions, - but how do I set up the permissions for the large objects?

Is it too late to redo the dump/restore? If not, you may want to take a look at:
http://www.postgresql.org/docs/9.0/interactive/runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES

  
  Thanks,
  
  Howard Cole
  www.selestial.com
 
 I had the same problem.  and there is no grant all... there is the new
 DO though.  That's what I used, a simple DO expression to iterate and
 update the permissions.
 
 -Andy

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Merged Model for libpq

2011-04-04 Thread Merlin Moncure
On Mon, Apr 4, 2011 at 9:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Annamalai Gurusami annamalai.gurus...@gmail.com writes:
 On 2 April 2011 11:17, John R Pierce pie...@hogranch.com wrote:
 what you describe is neither postgres nor SQL
 perhaps you should look at a storage engine like BerkeleyDB

 I hope that not everybody dismisses this mail thread because of the
 above response.  We are deriving our product from pgsql.  And since we
 are customizing pgsql to our proprietary telecom products, we are
 using things that are not designed for that purpose.  For example, we
 are using SPI to come up with an embedded client.  I was basically
 trying to find out if there are better alternatives.  Have the pgsql
 development team thought about embedded clients and is SPI the way to
 go?

 I don't think you've entirely grasped the seriousness of that response.
 The PG development team *has* thought about embedded scenarios, and
 explicitly rejected them.  There is no interest at all here in that line
 of development, and we are unlikely to even consider patches that might
 make it easier.  We don't like the reliability implications of having
 random client code in the same address space as the database code.
 Moreover, the general trend of recent development has been towards
 making the database more, not less, dependent on auxiliary processes
 such as autovacuum and bgwriter.  There's no way to manage that in an
 embedded scenario ... at least not without resorting to threads, which
 is another thing that we are unprepared to support.

 So really you should be looking at some other DBMS if you want an
 embedded implementation.  It'd be nice if PG could be all things to all
 people, but it can't; and this is one of the things it can't be.

That's a perhaps overly strong statement.  First of all, we already
support user provided code (in C no less) in the database.  It is raw
and problematic for most people but it's also pretty cool.

True embedding where the user application is in direct control of the
process is of course not practical, but that doesn't mean a tighter
coupling of user code and the database is not possible.  Stored
procedures (I know I'm way into broken record mode on this) would
likely cover what Annamalai is looking to do IMNSHO, even if they were
limited to a high level language like plpgsql, since you could still
dip into C appropriately using classic methods.  Getting there isn't
easy, of course.

In the current state of affairs you can kinda sorta emulate this by
having a client side 'ticker' that dials in every period of time and
executes a control function which kicks off your server side logic and
maintains this state.  That way the bulk of your code and data
manipulation is database side and you more or less completely bypass
the overhead of streaming information through the protocol to the
client unless you want to pay that cost.  There are a lot of reasons
not to do this...it's a 'wrong tool' type of thing, but people want to
do it and it's interesting to think about what doors you could open if
it could be taken further.

merlin



Re: [GENERAL] Merged Model for libpq

2011-04-04 Thread Tom Lane
Merlin Moncure mmonc...@gmail.com writes:
 On Mon, Apr 4, 2011 at 9:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 So really you should be looking at some other DBMS if you want an
 embedded implementation. It'd be nice if PG could be all things to all
 people, but it can't; and this is one of the things it can't be.

 That's a perhaps overly strong statement.  First of all, we already
 support user provided code (in C no less) in the database.  It is raw
 and problematic for most people but it's also pretty cool.

Sure.  The difference there is that it's understood by all parties that
user-supplied server-side C code has to conform to the expectations and
coding practices of the server.  The server code is in charge, not the
user-supplied code.  Generally people who ask for an embedded database
expect the opposite.  Certainly people who are accustomed to coding
against the libpq API expect that they are in charge, not libpq.  This
is not only a matter of who calls whom but who controls memory
management, error recovery practices, etc.

 True embedding where the user application is in direct control of the
 process is of course not practical, but that doesn't mean a tighter
 coupling of user code and the database is not possible.  Stored
 procedures (I know I'm way into broken record mode on this) would
 likely cover what Annamalai is looking to do IMSNO, even if they were
 limited to a high level language like plpgsql, since you could still
 dip into C appropriately using classic methods.

Possibly.  Annamalai's stated goal of driving a locally-implemented
database through a libpq-ish API, and having that be interchangeable
with a traditional client setup, doesn't seem to fit into this viewpoint
though.  I guess to get much further we'd have to ask why is that the
goal and what's the wider purpose?

regards, tom lane



[GENERAL] Trigger vs web service

2011-04-04 Thread Marc-André Goderre
I receive a long string (about one per second) that contains a lot of 
information, and at the moment it is inserted directly into the database 
(one column).
I have to parse the whole string every time I need any information from it.
Now I want to split the string and save the information in different fields.
I have two solutions and would like to have your opinion on them:

1- Program a trigger function that detects the original insert, splits the 
string and fills the other fields.
2- Program a web service that receives the string, splits it and inserts the 
information into the db.

Which one is faster (in terms of performance)?

Thanks

Marc-Andre Goderre
TI Analyst
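
Option 1- can be sketched as a BEFORE INSERT trigger, so the split happens in the same statement as the insert and no second UPDATE is needed. Everything here is hypothetical (the table, the column names and the ';' delimiter are assumptions, not from the original post):

```sql
-- Hypothetical table: the raw string plus the split-out fields
CREATE TABLE messages (
    raw     text,
    field_a text,
    field_b text,
    field_c text
);

CREATE OR REPLACE FUNCTION split_raw() RETURNS trigger AS $$
DECLARE
    parts text[];
BEGIN
    parts := string_to_array(NEW.raw, ';');  -- assumed delimiter
    NEW.field_a := parts[1];
    NEW.field_b := parts[2];
    NEW.field_c := parts[3];
    RETURN NEW;   -- the modified row is what actually gets inserted
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER split_raw_trg
    BEFORE INSERT ON messages
    FOR EACH ROW EXECUTE PROCEDURE split_raw();
```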



Re: [GENERAL] Trigger vs web service

2011-04-04 Thread John R Pierce

On 04/04/11 8:47 AM, Marc-André Goderre wrote:


1- Program a trigger function detecting the orginal insert, split the 
string and fill the other field.


2- Program a web service for receiving the string, split it and insert 
the informations in the db.


Witch is the fastest one (in performance).



I would expect that parsing and splitting your string into fields before 
handing it to SQL would be faster than handing it to SQL and then using a 
trigger to hack it into separate fields. This would, of course, at least 
partially depend on what sort of language the web service is written 
in; if it's in some hypothetical horribly inefficient interpreted 
language, all bets are off.


Does all your data go through a web service now? If not, what data 
path IS it coming from?









Re: [GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Andy Colson

On 4/4/2011 9:19 AM, Howard Cole wrote:

On 04/04/2011 2:47 PM, Andy Colson wrote:

permissions, - but how do I set up the permissions for the large
objects?

I had the same problem. and there is no grant all... there is the new
DO though. That's what I used, a simple DO expression to iterate and
update the permissions.

-Andy


Thanks Andy. As a temporary measure I am going to use the server
configuration variable lo_compat_privileges when I work out how to use
it. For the longer term, could I borrow your script for the DO :)



I don't seem to have it anymore... but here is a re-created, untested 
version.


do $$
declare
  r record;
begin
  for r in select loid from pg_catalog.pg_largeobject loop
    execute 'ALTER LARGE OBJECT ' || r.loid || ' OWNER TO andy';
  end loop;
end$$;


I wanted to change the owner... whereas you want grant... but it should 
give you the idea.


-Andy



Re: [GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Tom Lane
Andy Colson a...@squeakycode.net writes:
 On 4/4/2011 9:19 AM, Howard Cole wrote:
 Thanks Andy. As a temporary measure I am going to use the server
 configuration variable lo_compat_privileges when I work out how to use
 it. For the longer term, could I borrow your script for the DO :)

 I dont seem to have it anymore... but here is a re-created, untested 
 version.

 do $$
 declare r record;
 begin
 for r in select loid from pg_catalog.pg_largeobject loop
execute 'ALTER LARGE OBJECT ' || r.loid || ' OWNER TO andy';
 end loop;
 end$$;

Suggest select distinct loid to avoid a lot of duplicated work,
otherwise this should be fine.

regards, tom lane



[GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread Martin Gainty

..horribly documented, inefficient, user-hostile, impossible to maintain 
interpreted language..
To whom might you be alluding?

Martin 




 Date: Mon, 4 Apr 2011 09:57:11 -0700
 From: pie...@hogranch.com
 To: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Trigger vs web service
 
 On 04/04/11 8:47 AM, Marc-André Goderre wrote:
 
  1- Program a trigger function detecting the orginal insert, split the 
  string and fill the other field.
 
  2- Program a web service for receiving the string, split it and insert 
  the informations in the db.
 
  Witch is the fastest one (in performance).
 
 
 I would expect parsing and splitting your string into fields before 
 handing it to SQL would be faster than handing it into SQL, then using a 
 trigger to hack it into 2 fields.This would, of course, at least 
 partially depend on what sort of language that web service is written 
 in, if its in some hypothetical horribly inefficient interpreted 
 language, all bets are off.
 
 Does all your data go through a web service now?   if not, what data 
 path IS it coming from?
 
 
 
 
 -- 
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
  

Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread John R Pierce

On 04/04/11 12:07 PM, Martin Gainty wrote:
..horribly documented, inefficient, user-hostile, impossible to 
maintain interpreted language..

to whom might you be alluding to


I only used a few of those adjectives, and prefixed them by 
"hypothetical".  To be honest, I would expect most languages commonly 
used in web service environments to be more efficient at string 
processing than pl/pgsql, and I really can't think of a counterexample 
off the top of my head.






Re: [GENERAL] Table lock while adding a column and clients are logged in

2011-04-04 Thread juan pedro meriño



Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread Radosław Smogura
John R Pierce pie...@hogranch.com Monday 04 April 2011 21:20:51
 On 04/04/11 12:07 PM, Martin Gainty wrote:
  ..horribly documented, inefficient, user-hostile, impossible to
  maintain interpreted language..
  to whom might you be alluding to
 
 I only used a few of those adjectives, and prefixed them by
 hypothetical.   to be honest, I would expect most languages commonly
 used in web service environments to be more efficient at string
 processing than pl/pgsql, and I really can't think of a counterexample
 off the top of my head.

Java is a funny example: splitting even large strings is faster than in C, 
because a String is a wrapper around a char[], and a split string will be only a 
wrapper around the same array, but with an updated start and length. But other 
operations, like a manual search, or creating a string from an array, may be slower.

In any case, if you think the application will grow, then I suggest you use a 
higher-level language rather than triggers. You will get access to better 
libraries, the code is simpler to maintain, and the application is simpler to 
deploy. From the Java point of view, there is currently only one PG server, and 
if you put the processing there, even if you get a 10-20% boost, then with new 
users you may need to buy a new server and _replace_ the old one; in Java you 
may add a new server to the cluster. Same with PHP: just use the Apache load 
balancer. The choice is yours.

Regards,
Rdek




Re: [GENERAL] Autovacuum firing up during my manual vacuum on same table

2011-04-04 Thread Jens Wilke
On 3. April 2011, Joshua D. Drake wrote:
 On Sat, 2 Apr 2011 19:26:56 +0200, Henry C. he...@cityweb.co.za 
wrote:
  On Sat, April 2, 2011 14:17, Jens Wilke wrote:
  Nevertheless since at least 8.4 IMO there's no need to bother
  with manual vacuum any more.

 Uhh, this is entirely untrue. There are plenty of cases where 8.4
 autovacuum can't cut it.

Which cases?
Isn't it more likely that something else went suboptimal when one starts 
to think about manual vacuum? 
Maybe I'd better have written that since 8.4 there's the opportunity 
not to bother with manual vacuum any more.

Regards,
Jens



[GENERAL] scary xpath_table behaviour

2011-04-04 Thread Rob Sargent
When using pipe separated xpath statements to fill multiple columns of 
output,  there is no guarantee of correlation among the columns of the 
rows returned.


Using locally built 9.0.3 (--with-libxml)

I'm dealing with an element which has one fixed attribute (@page-layout) 
and either (@collection_id and @type) or (@default).


I was hoping for a result set along the lines of

+---+--------+---------------+------+---------+
|id | layout | collection_id | type | default |
+---+--------+---------------+------+---------+
|1  |layout1 | collection1   | t1   | null    |
|2  |layout1 | collection1   | t3   | null    |
|3  |layout2 | null          | null | true    |
+---+--------+---------------+------+---------+

but instead I get

+---+--------+---------------+------+---------+
|id | layout | collection_id | type | default |
+---+--------+---------------+------+---------+
|1  |layout1 | collection1   | t1   | true    |
|2  |layout1 | collection1   | t3   | null    |
|1  |layout2 | null          | null | null    |
+---+--------+---------------+------+---------+

where all the non-null values from the last xpath are at the top of the 
result set (as they are the only values returned for that xpath query).


Here's my actual query:
select x.* from
xpath_table(
  'doc_id',
  'xml_text',
  'static_docs',
  '//*[name(.) = "page-layout"]/@name
   | //*[name(.) = "page-layout"]/@collection-id
   | //*[name(.) = "page-layout"]/@type
   | //*[name(.) = "page-layout"]/@default',
  'doc_id = ''lookups.xml'''
)
as x(doc_id text, chapter_layout text,
     collection_id text, doc_type text, defaulted_type text)



NOTE:  There is a multiplicity of namespaces in the xml_text field, 
hence the //*[name(.) = "something"] trick.  Is that the real problem?  
And how does one namespace the xpath in xpath_table anyway?


Example data in the xml_text column:

<page-layout name="pi-chapter-layout" collection-id="pi-brain" type="dx"/>
<page-layout name="pi-chapter-layout" collection-id="pi-gu" type="dx"/>
<page-layout name="pi-chapter-layout" collection-id="pi-gi" type="dx"/>
<page-layout name="onc-page-layouts" collection-id="di-oncology" type="tsm"/>
<page-layout name="pain-management-procedure-chapter-layout" 
collection-id="pain-management" type="procedure"/>
<page-layout name="pain-management-procedure-chapter-layout" 
collection-id="pain-management" type="section-intro"/>
<page-layout name="procedure-chapter-layout" 
collection-id="procedures-book" type="procedure"/>
<page-layout name="procedure-chapter-layout" 
collection-id="procedures-book" type="section-intro"/>
<page-layout name="pathology-dx-page-layouts" default="pathology-dx"/>
<page-layout name="pathology-pcf-overview-page-layouts" default="pcf-overview"/>
<page-layout name="pathology-intro-page-layouts" default="path-intro"/>
<page-layout name="pathology-intro-page-layouts" default="specific-factor"/>






Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread Leif Biberg Kristensen
On Monday 04 April 2011 21:07:38 Martin Gainty wrote:
 ..horribly documented, inefficient, user-hostile, impossible to maintain
 interpreted language.. to whom might you be alluding to
 ???

Probably something starting with P.



Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread John R Pierce

On 04/04/11 12:12 PM, Leif Biberg Kristensen wrote:

Probably something starting with P.


Pascal?

Prolog??

PL/I ? ! ?


:)





[GENERAL] Foreign key and locking problem

2011-04-04 Thread Edoardo Serra
Hi guys,

I have a problem with the implicit FOR SHARE lock which postgres seems to get 
on the referenced record when there is a foreign key.
I'm using postgres 8.3.3 from debian packages.

Here is a sample database structure and commands to reproduce.

-- Test database structure

CREATE TABLE people (
id serial NOT NULL,
nickname character varying(255) NOT NULL,
status integer NOT NULL
);

ALTER TABLE people ADD PRIMARY KEY (id);

CREATE TABLE friendships (
id serial NOT NULL,
person1_id integer NOT NULL,
person2_id integer NOT NULL
);

ALTER TABLE friendships ADD PRIMARY KEY (id);

ALTER TABLE friendships ADD FOREIGN KEY (person1_id) REFERENCES people (id)
  ON UPDATE CASCADE ON DELETE CASCADE;
ALTER TABLE friendships ADD FOREIGN KEY (person2_id) REFERENCES people (id)
  ON UPDATE CASCADE ON DELETE CASCADE;

INSERT INTO people (id, nickname, status) VALUES (1, 'john.doe', 5);
INSERT INTO people (id, nickname, status) VALUES (2, 'jane.doe', 5);

-- now, in 2 different sessions I type:

client1> BEGIN; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
client2> BEGIN; SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
client2> UPDATE people SET status = 6 WHERE id = 1;
client1> INSERT INTO friendships (id, person1_id, person2_id) VALUES (default, 1, 2);

client1 hangs trying to acquire the implicit FOR SHARE lock.

client2> COMMIT;

At this point, client1 gives the following error:
ERROR:  could not serialize access due to concurrent update
CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."people" x WHERE "id" 
OPERATOR(pg_catalog.=) $1 FOR SHARE OF x"

Is there a way to work around that?

In my architecture I have a background task which is computing friendships and 
a web frontend which is updating the records in the people table.
So updates to the people table can occur while the background task is doing 
its job.

Any idea?

Tnx in advance

Regards

Edoardo


[GENERAL] schema access privs

2011-04-04 Thread Ray Stell
What do the results in the 'Access privileges' column of 'List of schemas'
indicate?  Are those three results split by the '/' char?  What
are the three sections?  And what is 'postgres+'?

I can't find this explained in the docs.

template1-# \dn+ pg_catalog
                       List of schemas
    Name    |  Owner   |  Access privileges   |      Description
------------+----------+----------------------+-----------------------
 pg_catalog | postgres | postgres=UC/postgres+| system catalog schema
            |          | =U/postgres          |
(1 row)

http://www.postgresql.org/docs/current/static/app-psql.html

\dn[+] [ pattern ]

Lists schemas (namespaces). If pattern is specified, only schemas
whose names match the pattern are listed. Non-local temporary schemas
are suppressed. If + is appended to the command name, each object
is listed with its associated permissions and description, if any.



Re: [GENERAL] schema access privs

2011-04-04 Thread Vibhor Kumar

On Apr 5, 2011, at 2:31 AM, Ray Stell wrote:

 What does the results in col 'List of schemas Access privileges'
 indicate?  Are those three results split by the '/' char?  What
 are the three sections?  What is 'postgres+' 
 
 Can't find this explained in the docs.
 
 template1-# \dn+ pg_catalog
   List of schemas
Name|  Owner   |  Access privileges   |  Description  
 +--+--+---
 pg_catalog | postgres | postgres=UC/postgres+| system catalog schema
|  | =U/postgres  | 
 (1 row)

Following link contains detail about Access privileges:
http://www.postgresql.org/docs/8.4/static/sql-grant.html
      r -- SELECT (read)
      w -- UPDATE (write)
      a -- INSERT (append)
      d -- DELETE
      D -- TRUNCATE
      x -- REFERENCES
      t -- TRIGGER
      X -- EXECUTE
      U -- USAGE
      C -- CREATE
      c -- CONNECT
      T -- TEMPORARY
arwdDxt -- ALL PRIVILEGES (for tables, varies for other objects)
      * -- grant option for preceding privilege

      / -- role that granted this privilege
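
That display string can be taken apart mechanically into grantee, privilege
letters, and grantor. A minimal Python sketch (the `parse_aclitem` helper and
sample role names are made up for illustration; note that the trailing `+` in
the `\dn+` output is just psql's line-continuation marker, not part of the
privilege string):

```python
# Map of PostgreSQL ACL privilege letters to their names.
PRIV_NAMES = {
    'r': 'SELECT', 'w': 'UPDATE', 'a': 'INSERT', 'd': 'DELETE',
    'D': 'TRUNCATE', 'x': 'REFERENCES', 't': 'TRIGGER', 'X': 'EXECUTE',
    'U': 'USAGE', 'C': 'CREATE', 'c': 'CONNECT', 'T': 'TEMPORARY',
}

def parse_aclitem(item):
    """Split 'grantee=letters/grantor' into (grantee, privileges, grantor).

    An empty grantee means the pseudo-role PUBLIC; '*' marks the grant
    option on the letter just before it.
    """
    grantee, _, rest = item.partition('=')
    letters, _, grantor = rest.partition('/')
    privs = []
    for ch in letters:
        if ch == '*':
            privs[-1] += ' WITH GRANT OPTION'
        else:
            privs.append(PRIV_NAMES[ch])
    return (grantee or 'PUBLIC', privs, grantor)

print(parse_aclitem('postgres=UC/postgres'))  # ('postgres', ['USAGE', 'CREATE'], 'postgres')
print(parse_aclitem('=U/postgres'))           # ('PUBLIC', ['USAGE'], 'postgres')
```

So the two lines shown for pg_catalog read as: postgres has USAGE and CREATE
(granted by postgres), and PUBLIC has USAGE (granted by postgres).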

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog: http://vibhork.blogspot.com




Re: [GENERAL] scary xpath_table behaviour

2011-04-04 Thread Tom Lane
Rob Sargent robjsarg...@gmail.com writes:
 When using pipe separated xpath statements to fill multiple columns of 
 output,  there is no guarantee of correlation among the columns of the 
 rows returned.

Yeah, this is a known problem mentioned in our TODO list.  Nobody has
any idea how to persuade libxml to do that, and the general opinion
seems to be that we shouldn't have designed xpath_table that way in the
first place ...

regards, tom lane



Re: [GENERAL] scary xpath_table behaviour

2011-04-04 Thread Rob Sargent



On 04/04/2011 03:12 PM, Tom Lane wrote:

Rob Sargentrobjsarg...@gmail.com  writes:

When using pipe separated xpath statements to fill multiple columns of
output,  there is no guarantee of correlation among the columns of the
rows returned.


Yeah, this is a known problem mentioned in our TODO list.  Nobody has
any idea how to persuade libxml to do that, and the general opinion
seems to be that we shouldn't have designed xpath_table that way in the
first place ...

regards, tom lane
Duly noted.  I think I can work around it now that I understand it a bit 
better.  Should I put a comment in the on-line docs?


Cheers,
rjs



[GENERAL] Some PostgreSQL enthusiast working in holland?

2011-04-04 Thread Andre Lopes
Hi,

I would like to get in contact with a PostgreSQL developer working in Holland. I
know that this is not the main purpose of this list.

Please contact me by e-mail.

Best Regards,

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-04 Thread Leif Biberg Kristensen
On Monday 04 April 2011 21:20:51 John R Pierce wrote:
 On 04/04/11 12:07 PM, Martin Gainty wrote:
  ..horribly documented, inefficient, user-hostile, impossible to
  maintain interpreted language..
  to whom might you be alluding to
 
 I only used a few of those adjectives, and prefixed them by
 hypothetical.   to be honest, I would expect most languages commonly
 used in web service environments to be more efficient at string
 processing than pl/pgsql, and I really can't think of a counterexample
 off the top of my head.

I had to move a piece of regexp/replace logic from PHP into pl/pgsql because 
PHP couldn't handle more than about 50 replacements in one text unit; instead 
it just dumped the text in the bit bucket. It was probably a memory allocation 
problem. pl/pgsql, on the other hand, has had no problem with the logic.

Documentation here:

http://solumslekt.org/blog/?p=23

regards, Leif



Re: [GENERAL] Foreign key and locking problem

2011-04-04 Thread Craig Ringer

On 04/05/2011 04:18 AM, Edoardo Serra wrote:

Hi guys,

I have a problem with the implicit FOR SHARE lock which postgres seems to get 
on the referenced record when there is a foreign key.
I'm using postgres 8.3.3 from debian packages.


[snip]


At this point, client1 gives the following error:
ERROR:  could not serialize access due to concurrent update
CONTEXT:  SQL statement SELECT 1 FROM ONLY public.people x WHERE id 
OPERATOR(pg_catalog.=) $1 FOR SHARE OF x

Is there a way to work around that?


Is your concern really the locking? Or is it the fact that your two 
transactions aren't successfully serialized?


If you're using ISOLATION LEVEL SERIALIZABLE you need to be prepared to 
re-try transactions after serialization failures. Your application code 
cannot just fire & forget transactions; it has to remember them and be 
able to reissue them on failure. If that is not acceptable to you, then 
you should look into whether READ COMMITTED isolation will offer you 
sufficient guarantees and see if you can use that.
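
The retry loop described above can live in a small wrapper around each
transaction. A hedged Python sketch (`SerializationFailure` is a stand-in for
whatever error your driver raises on SQLSTATE 40001, and `flaky_txn` is a
made-up stub standing in for a real database transaction):

```python
import time

class SerializationFailure(Exception):
    """Stand-in for the driver error raised on SQLSTATE 40001."""

def run_with_retry(txn, max_attempts=5):
    """Run txn(); on a serialization failure, back off and re-issue it.

    txn must be a callable that performs one whole transaction and is
    safe to run again from the start after a rollback.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            time.sleep(0.01 * attempt)  # brief, growing backoff

# Demo with a stub transaction that fails twice, then commits.
calls = {'n': 0}
def flaky_txn():
    calls['n'] += 1
    if calls['n'] < 3:
        raise SerializationFailure()
    return 'committed'

print(run_with_retry(flaky_txn))  # committed
```

The key design point is that the callable owns the whole transaction, so a
retry always restarts from BEGIN rather than resuming mid-flight.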


--
Craig Ringer



Re: [GENERAL] Large Object permissions lost in transfer

2011-04-04 Thread Howard Cole



configuration variable lo_compat_privileges when I work out how to use
it. For the longer term, could I borrow your script for the DO :)



I dont seem to have it anymore... but here is a re-created, untested 
version.


do $$
declare r record;
begin
for r in select loid from pg_catalog.pg_largeobject loop
  execute 'ALTER LARGE OBJECT ' || r.loid || ' OWNER TO andy';
end loop;
end$$;


I wanted to change the owner.. where-as you want grant... but it 
should get you the idea.


-Andy

Thanks all, especially Andy. I shall not bother thanking Tom because in 
his omniscient state - he knows! ;)


Assistance on this forum is so good. It puts most of the support I 
pay for (for non-postgres stuff) to shame.


Howard.




[GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread C. Bensend

Hey folks,

   So, I'm working on a little application to help me with my
budget.  Yeah, there are apps out there to do it, but I'm having
a good time learning some more too.  :)

   I get paid every other Friday.  I thought, for scheduling
purposes in this app, that I would take a stab at writing a plpgsql
function to determine if a given date is a payday.  Here is what I
have so far:


CREATE OR REPLACE FUNCTION is_payday( d DATE ) RETURNS BOOLEAN AS $$

DECLARE epoch DATE;
days_since_epoch INTEGER;
mult FLOAT8;
ret BOOLEAN := FALSE;

BEGIN

SELECT INTO epoch option_value
FROM options WHERE option_name = 'payroll_epoch';

SELECT INTO days_since_epoch ( SELECT CURRENT_DATE - d);

*** here's where I'm stuck ***

RETURN ret;

END;
$$ LANGUAGE plpgsql;


   OK.  So, I have a starting payday (payroll_epoch) in an options
table.  That is the first payday of the year.  I then calculate the
number of days between that value and the date I pass to the function.
Now I need to calculate whether this delta (how many days since
epoch) is an even multiple of 14 days (the two weeks).

   I have no idea how to do that in plpgsql.  Basically, I need to
figure out if the date I pass to the function is a payday, and if
it is, return TRUE.

   I would very much appreciate any help with this last bit of math
and syntax, as well as any advice on whether this is a reasonable
way to attack the problem.  And no - this isn't a homework
assignment.  :)

Thanks folks!

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights







Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread David Johnston
It is a very simplistic approach since you do not take into account
holidays.  But if it meets your needs, what you want is the modulo operator
( % ; mod(x,y) is the equivalent function ), which performs division but
returns only the remainder.

N % 14 = [a number between 0 and (14 - 1)]

N = 7; 7 % 14 = 7 (0, 7 remainder)
N = 14; 14 % 14 = 0 (1, 0 remainder)
N = 28; 28 % 14 = 0  (2, 0 remainder)
N = 31; 31 % 14 = 3 (2, 3 remainder)

If you KNOW the epoch date you are using is a Friday then you have no need
for CURRENT_DATE since you are passing in a date to check as a function
parameter.

I'll have to leave it to you or others to address the specific way to
integrate the modulo operator/function into the algorithm.
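
Expressed outside SQL, the check above is just a remainder test. A sketch in
Python (the epoch date below is hypothetical; substitute your stored
payroll_epoch):

```python
from datetime import date

# Hypothetical payroll epoch: the first payday of the year (a Friday).
PAYROLL_EPOCH = date(2011, 1, 7)

def is_payday(d):
    """True when d is a whole number of fortnights on or after the epoch."""
    delta = (d - PAYROLL_EPOCH).days
    return delta >= 0 and delta % 14 == 0

print(is_payday(date(2011, 1, 21)))  # True: exactly 14 days after the epoch
print(is_payday(date(2011, 1, 14)))  # False: the off week
```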

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of C. Bensend
Sent: Monday, April 04, 2011 8:12 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Plpgsql function to compute every other Friday


Hey folks,

   So, I'm working on a little application to help me with my budget.  Yeah,
there are apps out there to do it, but I'm having a good time learning some
more too.  :)

   I get paid every other Friday.  I thought, for scheduling purposes in
this app, that I would take a stab at writing a plpgsql function to
determine if a given date is a payday.  Here is what I have so far:


CREATE OR REPLACE FUNCTION is_payday( d DATE ) RETURNS BOOLEAN AS $$

DECLARE epoch DATE;
days_since_epoch INTEGER;
mult FLOAT8;
ret BOOLEAN := FALSE;

BEGIN

SELECT INTO epoch option_value
FROM options WHERE option_name = 'payroll_epoch';

SELECT INTO days_since_epoch ( SELECT CURRENT_DATE - d);

*** here's where I'm stuck ***

RETURN ret;

END;
$$ LANGUAGE plpgsql;


   OK.  So, I have a starting payday (payroll_epoch) in an options table.
That is the first payday of the year.  I then calculate the number of days
between that value and the date I pass to the function.
Now I need to calculate whether this delta (how many days since
epoch) is an even multiple of 14 days (the two weeks).

   I have no idea how to do that in plpgsql.  Basically, I need to figure
out if the date I pass to the function is a payday, and if it is, return
TRUE.

   I would very much appreciate any help with this last bit of math and
syntax, as well as any advice on whether this is a reasonable way to attack
the problem.  And no - this isn't a homework assignment.  :)

Thanks folks!

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread Andy Colson

On 04/04/2011 07:12 PM, C. Bensend wrote:


Hey folks,

So, I'm working on a little application to help me with my
budget.  Yeah, there are apps out there to do it, but I'm having
a good time learning some more too.  :)

I get paid every other Friday.  I thought, for scheduling
purposes in this app, that I would take a stab at writing a plpgsql
function to determine if a given date is a payday.  Here is what I
have so far:


CREATE OR REPLACE FUNCTION is_payday( d DATE ) RETURNS BOOLEAN AS $$

 DECLARE epoch DATE;
 days_since_epoch INTEGER;
 mult FLOAT8;
 ret BOOLEAN := FALSE;

BEGIN

 SELECT INTO epoch option_value
 FROM options WHERE option_name = 'payroll_epoch';

 SELECT INTO days_since_epoch ( SELECT CURRENT_DATE - d);

 *** here's where I'm stuck ***

 RETURN ret;

END;
$$ LANGUAGE plpgsql;


OK.  So, I have a starting payday (payroll_epoch) in an options
table.  That is the first payday of the year.  I then calculate the
number of days between that value and the date I pass to the function.
Now I need to calculate whether this delta (how many days since
epoch) is an even multiple of 14 days (the two weeks).

I have no idea how to do that in plpgsql.  Basically, I need to
figure out if the date I pass to the function is a payday, and if
it is, return TRUE.

I would very much appreciate any help with this last bit of math
and syntax, as well as any advice on whether this is a reasonable
way to attack the problem.  And no - this isn't a homework
assignment.  :)

Thanks folks!

Benny




Not sure if your needs are like mine, but here is the function I use.  It stores the date 
in a config table, and rolls it forward when needed.  It also calculates it from some 
known payroll date, which I'm guessing was near when I wrote it?  (I'm not 
sure why I chose Nov 16 2008.)  For me, this procedure is called a lot, and the things 
calling it expect it to roll into the next pay period.  Not sure if it'll work for you, 
but it might offer some ideas.



CREATE OR REPLACE FUNCTION startpayperiod()
 RETURNS date
 LANGUAGE plpgsql
AS $function$
declare
    st date;
    last date;
    needins boolean;
begin
    select avalue::date into st from config where akey = 'startPayPeriod';
    if (st is null) then
        st := '2008.11.16';
        needins := true;
    else
        needins := false;
    end if;
    -- find the end of the pp
    last := st + interval '13 days';
    if (current_date > last) then
        -- raise notice 'need update';
        loop
            last := st;
            st := st + interval '2 weeks';
            if current_date < st then
                exit;
            end if;
        end loop;
        st := last;
        if needins then
            insert into config(akey, avalue)
                values('startPayPeriod', st::text);
        else
            update config set avalue = st::text where akey = 'startPayPeriod';
        end if;
    end if;
    return st;
end; $function$


-Andy



Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread John R Pierce

generate_series(date '2001-01-05', date '2020-12-31', interval '2 weeks')


will return every payday from jan 5 2001 to the end of 2020 (assuming 
the 5th was payday, change the start to jan 12 if that was instead).
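
A client-side equivalent of that series, sketched in Python for illustration
(dates taken from the example above; the `paydays` helper is made up):

```python
from datetime import date, timedelta

def paydays(start, end, step=timedelta(weeks=2)):
    """Yield every date from start to end inclusive, two weeks apart."""
    d = start
    while d <= end:
        yield d
        d += step

# Same range as the example: every payday from 2001-01-05 through 2020.
all_paydays = set(paydays(date(2001, 1, 5), date(2020, 12, 31)))
print(date(2001, 1, 19) in all_paydays)  # True: the second payday
```

Materializing the set once and testing membership mirrors what a join against
the generate_series() result would do in SQL.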






Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread C. Bensend

 It is a very simplistic approach since you do not take into account
 holidays.  But if it meets your needs what you want is the modulo operator
 (
 %; mod(x,y) is the equivalent function ) which performs division but
 returns only the remainder.

 N % 14 = [a number between 0 and (14 - 1)]

 N = 7; 7 % 14 = 7 (0, 7 remainder)
 N = 14; 14 % 14 = 0 (1, 0 remainder)
 N = 28; 28 % 14 = 0  (2, 0 remainder)
 N = 31; 31 % 14 = 3 (2, 3 remainder)

Ah, thank you, David.  This gives me some good knowledge that I
was missing!  I know about %, but I was fumbling a bit with it in
plpgsql, and your examples helped.

Thanks!

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights





Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread C. Bensend

 Not sure if your needs are like mine, but here is the function I use.  It
 stores the date in a config table, and rolls it forward when needed.  It
 also calculates it from some know payroll date, which I'm guessing was
 near when I wrote it?  (I'm not sure why I choose Nov 16 2008.)  for me,
 this procedure is called a lot, and the things calling it expect it to
 roll into the next pay period.  Not sure if it'll work for you, but might
 offer some ideas.

Great stuff, Andy!  Thank you for this - this function gives me a
lot of great hints about functions in plpgsql.  Very useful
indeed.

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights





Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-04 Thread C. Bensend

 generate_series(date '2001-01-05', date '2020-12-31', interval '2 weeks')


 will return every payday from jan 5 2001 to the end of 2020 (assuming
 the 5th was payday, change the start to jan 12 if that was instead).

And THERE is the winner.  I feel like an idiot for not even
considering generate_series().

Thanks a bunch, John!  This will do nicely!
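
For anyone finding this in the archives, the next upcoming payday can be
pulled straight out of that series (same assumed 2001-01-05 anchor date as
in the quoted example):

```sql
-- Next payday on or after today, from the biweekly series.
SELECT d::date AS next_payday
FROM generate_series(date '2001-01-05',
                     date '2020-12-31',
                     interval '2 weeks') AS g(d)
WHERE d::date >= current_date
ORDER BY d
LIMIT 1;
```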

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights





[GENERAL] could not access status of transaction 1118722281

2011-04-04 Thread Gordon Shannon
Running 9.0.2 on CentOS.

I just discovered this in my production error log.  Starting about 45
minutes ago, I got 70 of these, within 2 seconds:

28652   2011-04-04 21:47:29 EDT [33]WARNING:  PD_ALL_VISIBLE flag was
incorrectly set in relation pg_toast_49338181 page 16820

These warnings were immediately followed by this, which has continuously
repeated every 15 seconds since then:

8895   2011-04-04 22:15:28 EDT [1]ERROR:  could not access status of
transaction 1118722281
8895   2011-04-04 22:15:28 EDT [2]DETAIL:  Could not open file
pg_clog/042A: No such file or directory.
8895   2011-04-04 22:15:28 EDT [3]CONTEXT:  automatic vacuum of table
mcore.pg_toast.pg_toast_48975830

I checked, and the pg_clog files start at 04BF and run through 0A57 (1,433
files).

Any help would be greatly appreciated.

Gordon




[GENERAL] postgres segfaulting on pg_restore

2011-04-04 Thread Chris Curvey
Thank goodness I'm still in development!  I'm trying to use pg_restore with
a dump created with pg_dump.  But it keeps failing with segmentation or
general protection faults.  My restore command is:

$ sudo -u postgres pg_restore -d mydatabase -j8 mydatabase.dmp

I've now tried five times, and gotten five failures.  (I've tried setting
--jobs to smaller values, including not specifying anything at all.)  Here's
the postgres error log from the last failure:

2011-04-04 23:07:00 EDT LOG:  server process (PID 7632) was terminated by
signal 11: Segmentation fault
2011-04-04 23:07:00 EDT LOG:  terminating any other active server processes
2011-04-04 23:07:00 EDT WARNING:  terminating connection because of crash of
another server process
2011-04-04 23:07:00 EDT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-04-04 23:07:00 EDT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-04-04 23:07:00 EDT WARNING:  terminating connection because of crash of
another server process
2011-04-04 23:07:00 EDT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-04-04 23:07:00 EDT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-04-04 23:07:00 EDT CONTEXT:  COPY transactions, line 10629187
2011-04-04 23:07:00 EDT WARNING:  terminating connection because of crash of
another server process
2011-04-04 23:07:00 EDT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-04-04 23:07:00 EDT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-04-04 23:07:00 EDT WARNING:  terminating connection because of crash of
another server process
2011-04-04 23:07:00 EDT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2011-04-04 23:07:00 EDT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2011-04-04 23:07:00 EDT FATAL:  the database system is in recovery mode
2011-04-04 23:07:00 EDT LOG:  all server processes terminated;
reinitializing
2011-04-04 23:07:00 EDT LOG:  database system was interrupted; last known up
at 2011-04-04 23:06:25 EDT
2011-04-04 23:07:00 EDT LOG:  database system was not properly shut down;
automatic recovery in progress
2011-04-04 23:07:00 EDT LOG:  consistent recovery state reached at
19/6E524410
2011-04-04 23:07:00 EDT LOG:  redo starts at 19/6E175B68
2011-04-04 23:07:00 EDT LOG:  unexpected pageaddr 18/425D6000 in log file
25, segment 110, offset 6119424
2011-04-04 23:07:00 EDT LOG:  redo done at 19/6E5D5F90
2011-04-04 23:07:00 EDT LOG:  last completed transaction was at log time
2011-04-04 23:06:16.684615-04
2011-04-04 23:07:01 EDT LOG:  database system is ready to accept connections
2011-04-04 23:07:01 EDT LOG:  autovacuum launcher started

And in /var/log/messages, I have the following (last line is for this crash,
previous lines for previous crashes):

Apr  4 21:27:27 mu kernel: [ 1964.787667] do_general_protection: 24
callbacks suppressed
Apr  4 21:27:27 mu kernel: [ 1964.787671] postgres[3439] general protection
ip:7f8372c8d2f7 sp:7fff61b72b38 error:0 in postgres[7f8372973000+4a8000]
Apr  4 21:32:21 mu kernel: [ 2258.266116] postgres[4307]: segfault at 30 ip
7f8372c8d2fb sp 7fff61b72bd8 error 4 in
postgres[7f8372973000+4a8000]
Apr  4 21:47:27 mu kernel: [ 3164.740812] postgres[5040]: segfault at 40 ip
7f8372c8d2fb sp 7fff61b72b38 error 4 in
postgres[7f8372973000+4a8000]
Apr  4 22:03:12 mu kernel: [ 4108.987420] postgres[5077]: segfault at 40 ip
7f8372c8d2fb sp 7fff61b72978 error 4 in
postgres[7f8372973000+4a8000]
Apr  4 22:56:13 mu kernel: [ 7288.639099] postgres[5308] general protection
ip:7f8372c8d2f7 sp:7fff61b72748 error:0 in postgres[7f8372973000+4a8000]
Apr  4 23:07:00 mu kernel: [ 7935.670820] postgres[7632]: segfault at 7365
ip 7f8372c8c94e sp 7fff61b72ad0 error 4 in
postgres[7f8372973000+4a8000]

This is on Ubuntu 10.10 server, 64-bit on Intel.  I'm running Postgres
9.0.3, 64-bit from the ppa repository.

What can I do to help the developers find the root cause of this?  I'm happy
to run a debug version, or find a corefile, or whatever I can, but I might
need some guidance, as I'm pretty new to postgres.

-Chris


-- 
Ignoring that little voice in my head since 1966!


Re: [GENERAL] postgres segfaulting on pg_restore

2011-04-04 Thread Craig Ringer
On 05/04/11 11:18, Chris Curvey wrote:

 This is on Ubuntu 10.10 server, 64-bit on Intel.  I'm running Postgres
 9.0.3, 64-bit from the ppa repository.
 
 What can I do to help the developers find the root cause of this?  I'm
 happy to run a debug version, or find a corefile, or whatever I can, but
 I might need some guidance, as I'm pretty new to postgres.

A backtrace would be a good start. On Ubuntu you will need to install
debuginfo packages first. For details, see:

http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
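
Roughly, the steps on that page come down to the following (a sketch only;
the exact debug package name for the PPA build is an assumption, so check
what your repository actually provides):

```shell
# Install debug symbols and gdb (package name may differ for the PPA build):
sudo apt-get install postgresql-9.0-dbg gdb

# Make sure the postmaster's environment allows core dumps, then restart it:
ulimit -c unlimited

# After the next crash, load the core file against the postgres binary:
gdb /usr/lib/postgresql/9.0/bin/postgres /path/to/core
# and at the (gdb) prompt run:  bt full
```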

--
Craig Ringer
