[GENERAL] Why security-definer functions are executable by public by default?

2011-04-05 Thread hubert depesz lubaczewski
hi

was pointed to the fact that security definer functions have the same
default privileges as normal functions in the same language - i.e. if
the language is trusted - public has the right to execute them.

maybe i'm missing something important, but given the fact that security
definer functions are used to get access to things that you usually
don't have access to - shouldn't the privilege be revoked by default,
and grants left for dba to decide?

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why security-definer functions are executable by public by default?

2011-04-05 Thread Sim Zacks

On 04/05/2011 09:41 AM, hubert depesz lubaczewski wrote:


hi

was pointed to the fact that security definer functions have the same
default privileges as normal functions in the same language - i.e. if
the language is trusted - public has the right to execute them.

maybe i'm missing something important, but given the fact that security
definer functions are used to get access to things that you usually
don't have access to - shouldn't the privilege be revoked by default,
and grants left for dba to decide?

depesz
That is exactly the point of security definer. It means that even though 
you do not have rights to the data, I have a special function that will 
grant you access in a very specific way.


For example, I give my users no rights on any tables. The only way they 
can access data is through views and security definer functions. The 
functions are built in such a way that they only allow access in the 
manner that I want.


So while my user cannot insert into the table, he can pass the correct 
parameters into the function and, if everything checks out right, it will 
insert the row.
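As a concrete sketch of that pattern (the table, function, and role names here are invented for illustration), combining it with depesz's point about revoking the default EXECUTE grant:

```sql
-- Hypothetical example: PUBLIC gets no rights on the table itself;
-- the only write path is a vetted SECURITY DEFINER function.
CREATE TABLE payroll (emp_id int, amount numeric);
REVOKE ALL ON payroll FROM PUBLIC;

CREATE OR REPLACE FUNCTION add_payroll_entry(p_emp int, p_amount numeric)
RETURNS void AS $$
BEGIN
    -- validate before touching the protected table
    IF p_amount <= 0 THEN
        RAISE EXCEPTION 'amount must be positive';
    END IF;
    INSERT INTO payroll (emp_id, amount) VALUES (p_emp, p_amount);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

-- By default PUBLIC can execute the function; tighten that explicitly:
REVOKE EXECUTE ON FUNCTION add_payroll_entry(int, numeric) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION add_payroll_entry(int, numeric) TO app_role;
```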


Sim

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Integrating New Data Type

2011-04-05 Thread Nick Raj
Hi all,
I have defined a new data type, and I have defined the in and out functions
for that data type.
But I want to know how to integrate this data type with Postgres (how does
Postgres compile my code and learn about my data type)?

Thanks,
Nirmesh


Re: [GENERAL] Integrating New Data Type

2011-04-05 Thread Pavel Stehule
Hello

2011/4/5 Nick Raj nickrajj...@gmail.com:
 Hi all,
 I have defined a new data type. I have defined in and out function for that
 data type.
 But i want to know how to integrate this data type with postgres (how
 postgres compile my code or know my datatype) ?

You have to register the in/out functions, and then register the new type.

Look at the PostgreSQL sources - in contrib there is a very simple type, citext:

http://doxygen.postgresql.org/dir_23dd6926f287bddac2c9dcb7db5b1712.html

http://doxygen.postgresql.org/citext_8c-source.html

Every contrib module has an SQL file with the registrations.
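For what it's worth, the registration boils down to SQL along these lines (`mytype`, the library path, and the C symbol names are placeholders; the in/out functions must already be compiled into a shared library):

```sql
-- Declaring the input function first creates a "shell" type named mytype
-- (Postgres emits a notice); CREATE TYPE then fills in the real definition.
CREATE FUNCTION mytype_in(cstring) RETURNS mytype
    AS '$libdir/mytype', 'mytype_in' LANGUAGE C IMMUTABLE STRICT;

CREATE FUNCTION mytype_out(mytype) RETURNS cstring
    AS '$libdir/mytype', 'mytype_out' LANGUAGE C IMMUTABLE STRICT;

CREATE TYPE mytype (
    INPUT  = mytype_in,
    OUTPUT = mytype_out,
    INTERNALLENGTH = VARIABLE
);
```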

Regards

Pavel Stehule



 Thanks,
 Nirmesh


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Database gnu make equivalent

2011-04-05 Thread pasman pasmański
Hello.


I am looking for a tool that sends queries to the database when
specific rows contain NULL or do not exist.

What can be used for this purpose?



pasman

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database gnu make equivalent

2011-04-05 Thread Gabriele Bartolini

Hi Pasman,

On Tue, 5 Apr 2011 11:14:16 +0200, pasman pasmański 
pasma...@gmail.com wrote:

I am looking for a tool that sends queries to the database when
specific rows contain NULL or do not exist.

What can be used for this purpose?


Could you please elaborate this and provide us with more information? I 
am afraid I have not understood your request. Thanks.


Cheers,
Gabriele

--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it - www.2ndQuadrant.it

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Named advisory locks

2011-04-05 Thread rihad
Hi, all. I'm looking for a way to lock on an arbitrary string, just as 
MySQL allows with GET_LOCK('foo.123') / RELEASE_LOCK('foo.123'). I 
know that at least Postgres 8.3 has pg_advisory_lock() / 
pg_advisory_unlock(), but they seem to accept integer values only, and 
we're already using integer values elsewhere.


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Database gnu make equivalent

2011-04-05 Thread John R Pierce

On 04/05/11 2:14 AM, pasman pasmański wrote:

Hello.


I am looking for a tool that sends queries to the database when
specific rows contain NULL or do not exist.

What can be used for this purpose?



A Perl script, perhaps? You would of course have to run a query to 
determine that the specific row you are looking for is either NULL or 
nonexistent, then send your additional queries.
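Depending on what triggers the follow-up queries, the check-then-act step can often be folded into the SQL itself; a sketch with invented table and column names:

```sql
-- Fill in a value only where it is currently NULL
UPDATE items SET price = 0 WHERE price IS NULL;

-- Insert a row only if it does not already exist
INSERT INTO items (id, price)
SELECT 42, 0
WHERE NOT EXISTS (SELECT 1 FROM items WHERE id = 42);
```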



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi,

I've been having repeated trouble trying to get a PostgreSQL app to play 
nicely on Ubuntu. I recently posted a message on this list about an out of 
memory error and got a resolution by reducing the work_mem setting. However I'm 
now getting further out of memory issues during the same stage of the plpgsql 
function mentioned before.

The function itself is run as part of a larger transaction which does the 
following:

1/ Maintains 104 tables (15 PostGIS tables), by loading and applying 
incremental table changes. A typical incremental load will maintain about 
10,000 rows.

2/ When each one of these tables is updated, an AFTER trigger is fired that 
maintains an associated revision table.

3/ After all of the tables are maintained a plpgsql function is called to 
build/maintain a set of de-normalised tables. These tables total about 20GB. 
Each one of these tables is compared against the previous table revision to 
determine its row changes. It's in this function that the out of memory 
exception is occurring.

The server log error message I'm getting in the function is here 
http://pastebin.com/346zi2sS. It's very long and contains the top transaction 
memory debug info.

My initial observation about this error is that maybe PostgreSQL is 
encountering a memory corruption error because the amount of OS memory does not 
seem to run out. The plpgsql function uses functions from both PostGIS and 
pgc_checksum (http://pgfoundry.org/projects/pg-comparator) - so maybe they are 
the cause of the problem. Or maybe I have configured something wrong...

I did some memory logging during the execution of the function. It shows 
for the majority of the transaction execution that the actual memory used is 
about 1GB (grows from the initial 600MB) with about 6.5GB cached for the OS:

             total   used   free  shared  buffers  cached
Mem:          8004   7839    165       0        0    6802
-/+ buffers/cache:   1037   6967
Swap:          397      0    397

But just before the out of memory error occurs there is a spike to 2.5GB of 
used memory, but there is still 4.5GB cached by the OS:

             total   used   free  shared  buffers  cached
Mem:          8004   7702    301       0        0    4854
-/+ buffers/cache:   2848   5156
Swap:          397      0    397

Then after the error the memory slowly returns to this state:

             total   used   free  shared  buffers  cached
Mem:          8004   1478   6526       0        0    1133
-/+ buffers/cache:    345   7659
Swap:          397      0    397

The OS I'm running is:
 
Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC 2011 
x86_64 GNU/Linux.
 
It's running on VMware and has 2 CPUs and 8GB of RAM. This VM is dedicated 
to PostgreSQL; not much else is running other than cacti, ssh and ftp server 
daemons. The main OS parameters I have tuned are:
 
vm.swappiness=0
vm.overcommit_memory=2
kernel.shmmax = 4196769792
kernel.shmall = 1024602
 
And the PostgreSQL is:
 
PostgreSQL 9.0.3 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 
4.4.3-4ubuntu5) 4.4.3, 64-bit.
 
The main changed postgresql.conf parameters I've tuned are:
 
shared_buffers = 512MB
maintenance_work_mem = 512MB
temp_buffers = 256MB
work_mem = 1MB
wal_buffers = 16MB
effective_cache_size = 4094MB

The size of the database is 350GB. The typical number of users connected to the 
database is 1 or 2. This database is used for loading external data, managing 
revision table information and generating and outputting de-normalised 
datasets, so it does not have a high number of transactions running. Typically 
1 large one per day.

Two questions:

1) Have I set the OS and PostgreSQL parameters to sensible values given the 
hardware and database utilization?
2) Can anyone help me make sense of the top transaction memory error to help 
track down the issue?

Any other suggestions would be greatly appreciated.

Thanks
Jeremy


From: Jeremy Palmer
Sent: Saturday, 26 March 2011 9:57 p.m.
To: Scott Marlowe
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] Out of memory

Hi Scott,

It was the work_mem that was set too high. I reduced it to 32MB and the 
function executed.

Just so I understand this: every time a sort is performed within a function, 
the sort memory is allocated, and it is not released until the function 
completes? Rather than deallocating the memory after each sort operation has 
completed.

Thanks,
Jeremy


From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Friday, 25 March 2011 5:04 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory

On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer jpal...@linz.govt.nz wrote:
 I’ve been getting database out of memory failures with some 

Re: [GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-05 Thread ray
On Apr 4, 4:00 am, jayadevan.maym...@ibsplc.com (Jayadevan M) wrote:
  So my question: has anyone found a best practice solution to convert
  the PostgreSQL documentaiton into a kindle-friendly format? Or has
  even an .azw file downloadable somewhere?

  Best wishes,

  Harald

 You could always send the pdf file and get it converted to kindle format,
 free of cost. It is not a good idea to try and read pdf files in Kindle.
 You have to send the pdf file to kindleusername@free.kindle.com
 Regards,
 Jayadevan

 DISCLAIMER:

 The information in this e-mail and any attachment is intended only for
 the person to whom it is addressed and may contain confidential and/or
 privileged material. If you have received this e-mail in error, kindly
 contact the sender and destroy all copies of the original communication.
 IBS makes no warranty, express or implied, nor guarantees the accuracy,
 adequacy or completeness of the information contained in this email or any
 attachment and is not liable for any errors, defects, omissions, viruses
 or for resultant loss or damage, if any, direct or indirect.

It looks like a tagged PDF will reflow on supporting devices such as
Kindle.  EPUB seems to be an alternative.
http://www.google.com/search?q=pdf+reflow&hl=en&num=10&lr=&ft=i&cr=&safe=images&tbs=

ray

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema access privs

2011-04-05 Thread Vibhor Kumar
[ Please don't overpost the list. Adding PG General List]

On Apr 5, 2011, at 3:30 AM, Ray Stell wrote:

 On Tue, Apr 05, 2011 at 02:42:30AM +0530, Vibhor Kumar wrote:
 
 On Apr 5, 2011, at 2:31 AM, Ray Stell wrote:
 
 What does the results in col 'List of schemas Access privileges'
 indicate?  Are those three results split by the '/' char?  What
 are the three sections?  What is 'postgres+' 
 
 Following link contains detail about Access privileges:
 http://www.postgresql.org/docs/8.4/static/sql-grant.html
 
 yeah, I saw that, but that's not what I asked about.  there seems
 to be 3 sections of the result.  What are the sections?  and what is
 postgres+ ?

The details are as follows:

 postgres=UC/postgres+
 [user]=[privs]/[ROLE who granted the privs]

postgres is a user which has been granted the USAGE and CREATE privileges 
by the role postgres.

The trailing + is just psql's line-wrap marker, indicating that the value 
continues on the next line.

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Trigger vs web service

2011-04-05 Thread Marc-André Goderre

I receive a long string (about 1 per second) that contains many pieces of 
information. For the moment it is inserted directly into the database (1 column).
I have to parse the whole string every time I need information from it.

Now I want to split the string and save the information in different fields.
I have 2 solutions and would like to have your opinion on them.

1- Program a trigger function that detects the original insert, splits the 
string and fills the other fields.
2- Program a web service that receives the string, splits it and inserts the 
information into the db.

Which is the fastest one (in performance)?

Thanks

Marc-Andre Goderre
TI Analyst


--
This message has been checked by the LastSpam e-mail security service 
(http://www.lastspam.com).


Re: [GENERAL] schema access privs

2011-04-05 Thread Ray Stell
On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
 
 Following are details:
  postgres=UC/postgres+
  [user]=[privs]/[ROLE who granted the privs]

What's the logic for reporting the U priv twice?  

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema access privs

2011-04-05 Thread Vibhor Kumar

On Apr 5, 2011, at 6:07 PM, Ray Stell wrote:

 On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
 
 Following are details:
 postgres=UC/postgres+
 [user]=[privs]/[ROLE who granted the privs]
 
 What's the logic for reporting the U priv twice?  


If you are talking about the following:
 =U/postgres
 [public]=[access]/[ROLE who granted the privs]

For PUBLIC there is no username; the entry always starts with = (an equals 
sign) followed by the access privileges, then /[role who granted the privs].


Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Named advisory locks

2011-04-05 Thread Craig Ringer

On 5/04/2011 5:42 PM, rihad wrote:

Hi, all. I'm looking for a way to lock on an arbitrary string, just how
MySQL allows doing with GET_LOCK('foo.123') / RELEASE_LOCK('foo.123'). I
know that at least Postgres 8.3 has pg_advisory_lock() /
pg_advisory_unlock() but they seem to accept integer values only, and
we're already using integer values elsewhere.


Already using _string_ values elsewhere?

Alas, I don't know of any way to use string based advisory locks directly.

You could store a mapping of lock strings to allocated ints in your app 
or in the DB.


Alternately, you could maybe use the full 64 bits of the single-argument 
lock form to pack in the initial chars of the lock ID strings, if 
they're short. If you can cheat and require that lock identifiers 
contain only base-64 characters - or even fewer - you can pack 10 
or more characters into the 64 bits, rather than the 8 chars you'd get 
with one byte per char. Of course, you can't do that if your strings are 
in any way user-supplied or user-visible, because you can't support 
non-ASCII charsets when doing ugly things like that.
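One more possibility worth mentioning, with the caveat that hashtext() is an internal, undocumented Postgres function whose output can change between major releases: hash the string down to an int yourself and accept the small collision risk.

```sql
-- Map an arbitrary string to a 32-bit advisory lock key.
-- Two different strings can collide, and hashtext() is not a
-- stable public API, so treat this as a pragmatic hack.
SELECT pg_advisory_lock(hashtext('foo.123'));
-- ... critical section ...
SELECT pg_advisory_unlock(hashtext('foo.123'));
```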


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Memory leak in SPI_finish call

2011-04-05 Thread Jorge Arévalo
Hello,

I'm having problems with a PostgreSQL server-side C function. It's not
an aggregate function (it operates over only one row of data). When the
function is called over tables with ~4000 rows, it causes the postgres
backend to crash with SEGFAULT. I know the error is somehow
cumulative, because with 3460 rows it works fine, but from 3461 on it fails
with SEGFAULT.

Debugging, I've found the problem is a SPI_finish call. If I comment out
the call, the function ends without errors. The only problem is that a
warning message is raised, because I skipped the SPI_finish call.

I'm working with postgres 8.4.7 on an Ubuntu 9.10 machine. Same problem
on a Windows machine, anyway. Things I've tried:

- Removing the SPI_finish call, obviously. But that's not a solution.

- Modifying the log configuration: log_min_messages=debug5,
log_error_verbosity=verbose, log_min_error_statement=debug5,
log_min_duration_statement=0, log_connections=on,
log_disconnections=on, log_statement=all. I can't draw any conclusion.
Here is a log example:
http://dl.dropbox.com/u/6599273/postgresql-2011-04-04_195420.log. The
function that crashes is MapAlgebra.

- Attaching the postgres process to GDB (gdb --pid=...). When I connect to
PostgreSQL via psql/pgadmin, the backend creates 2 new processes in the
idle state, until I execute a query. One is connected to the postgres
database (I'm using the postgres user) and another one is connected to my
testing database. I've tried to attach a gdb instance to both
processes.

When I attach gdb to the process connected to my testing database, I
get: Program exited with code 02. And no more. No core dumped. I've
looked for that error, and it looks like it depends on the software that
caused the signal, not gdb.

When I attach gdb to the process connected to the postgres database, I
get: Program received signal SIGQUIT, Quit.
0x00651422 in __kernel_vsyscall (). No more information. No core dumped.

So, what can I do to find the error, apart from those things? Any clue 
from the information posted above?

Thanks in advance,

-- 
Jorge Arévalo
Internet & Mobility Division, DEIMOS
jorge.arev...@deimos-space.com
http://es.linkedin.com/in/jorgearevalo80
http://mobility.grupodeimos.com/
http://gis4free.wordpress.com
http://geohash.org/ezjqgrgzz0g

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Foreign key and locking problem

2011-04-05 Thread Vick Khera
On Mon, Apr 4, 2011 at 4:18 PM, Edoardo Serra edoa...@serra.to.it wrote:

 At this point, client1 gives the following error:
 ERROR:  could not serialize access due to concurrent update
 CONTEXT:  SQL statement SELECT 1 FROM ONLY public.people x WHERE id
 OPERATOR(pg_catalog.=) $1 FOR SHARE OF x

 Is there a way to work around that?

 In my architecture I have a background task which is computing friendships
 and a web frontend which is updating the records in the people table.
 So updates to the people table can occurr while the background task is
 doing his job.

 Any idea?


Do you really need SERIALIZABLE transactions?  You have to more or less
expect transaction failures when you use that mode, and handle them.

I also ran into this issue when running 8.3. We have statistics tables we
update via triggers, and I was getting such locks blocking progress of
competing processes.  The solution was to make the updates via a queue and
have a single thread apply them to the table.  Thus, there are no competing
locks, and the main processes can fly along as fast as possible since all
they do is a single insert requesting the update into a table with no FKs or
other indexes that will slow it down.  The only issue is that the thread
that applies the changes must always be running, and must be fast enough for
your workload.

Also, try 9.0. The FK locks are lighter now.  Not sure if it will help your
serializable case though.


Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-05 Thread Merlin Moncure
On Mon, Apr 4, 2011 at 7:12 PM, C. Bensend be...@bennyvision.com wrote:

 Hey folks,

   So, I'm working on a little application to help me with my
 budget.  Yeah, there are apps out there to do it, but I'm having
 a good time learning some more too.  :)

   I get paid every other Friday.  I thought, for scheduling
 purposes in this app, that I would take a stab at writing a plpgsql
 function to determine if a given date is a payday.  Here is what I
 have so far:


 CREATE OR REPLACE FUNCTION is_payday( d DATE ) RETURNS BOOLEAN AS $$

        DECLARE epoch DATE;
                days_since_epoch INTEGER;
                mult FLOAT8;
                ret BOOLEAN := FALSE;

 BEGIN

        SELECT INTO epoch option_value
                FROM options WHERE option_name = 'payroll_epoch';

        SELECT INTO days_since_epoch ( SELECT CURRENT_DATE - d);

        *** here's where I'm stuck ***

        RETURN ret;

 END;
 $$ LANGUAGE plpgsql;


   OK.  So, I have a starting payday (payroll_epoch) in an options
 table.  That is the first payday of the year.  I then calculate the
 number of days between that value and the date I pass to the function.
 Now I need to calculate whether this delta (how many days since
 epoch) is an even multiple of 14 days (the two weeks).

   I have no idea how to do that in plpgsql.  Basically, I need to
 figure out if the date I pass to the function is a payday, and if
 it is, return TRUE.

   I would very much appreciate any help with this last bit of math
 and syntax, as well as any advice on whether this is a reasonable
 way to attack the problem.  And no - this isn't a homework
 assignment.  :)

 Thanks folks!

 Benny

first, let's fix your function definition.  I would advise you to take
in both the base pay date (so we know which 'every other' to use) and
the epoch so you don't have to read it from the database in the
function.  Why do that? you can make your function immutable.

CREATE OR REPLACE FUNCTION is_payday(
  d DATE,
  base_date DATE,
  payroll_epoch INT) RETURNS BOOLEAN AS $$
  SELECT (extract('j' from $1)::int - extract('j' from $2)::int) % $3 = 0;
$$ LANGUAGE sql IMMUTABLE;

By making this function sql and immutable, you give the database more
ability to inline it into queries which can make a tremendous
performance difference in some cases.  You can also index based on it
which can be useful.

By pulling out julian days, we can do a simple calculation based on days
(julian day, not to be confused with the julian calendar, is kinda sorta
like an epoch for days).  While it doesn't really apply to this toy
example, a key thing to remember if trying to write high
performance pl/pgsql is to separate stable/immutable and volatile
elements.  Also, use sql, not plpgsql, in trivial functions.

If you don't want to select out your option in every query, I'd advise
making an option() function which wraps the trivial select:

select is_payday(some_date, option('base_date'), option('payroll_epoch'));

The 'option' function should be stable.
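A minimal sketch of that option() wrapper, assuming the options table (option_name, option_value) from the original post:

```sql
-- STABLE: the result won't change within a single statement,
-- which lets the planner avoid re-running the lookup per row.
CREATE OR REPLACE FUNCTION option(text) RETURNS text AS $$
    SELECT option_value FROM options WHERE option_name = $1;
$$ LANGUAGE sql STABLE;
```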

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trigger vs web service

2011-04-05 Thread Jorge Godoy
If I was Yoda, I would say The answer you seek is... it depends

If I were you, I would test both solutions and check which one performs
better and impacts the least on your environment.

For example, if you have no logic at all on the database then I would code
that in the frontend.  If you have code in the database, then checking how
this specific function works there would also be worth a shot.

--
Jorge Godoy jgo...@gmail.com


On Tue, Apr 5, 2011 at 09:16, Marc-André Goderre magode...@cgq.qc.cawrote:



 I receive a long string (about 1 per second) that contains many pieces of
 information. For the moment it is inserted directly into the database (1 column).

 I have to parse the whole string every time I need information from it.



 Now I want to split the string and save the information in different fields.

 I have 2 solutions and would like to have your opinion on them.



 1- Program a trigger function that detects the original insert, splits the
 string and fills the other fields.

 2- Program a web service that receives the string, splits it and inserts the
 information into the db.



 Which is the fastest one (in performance)?



 Thanks



 Marc-Andre Goderre

 TI Analyst







Re: [GENERAL] Named advisory locks

2011-04-05 Thread rihad

On 5/04/2011 5:42 PM, rihad wrote:


Hi, all. I'm looking for a way to lock on an arbitrary string, just how
MySQL allows doing with GET_LOCK('foo.123') / RELEASE_LOCK('foo.123'). I
know that at least Postgres 8.3 has pg_advisory_lock() /
pg_advisory_unlock() but they seem to accept integer values only, and
we're already using integer values elsewhere.



Already using _string_ values elsewhere?

No, what I meant was that we're already using ints for a different 
purpose in another app on the same server, so I cannot safely reuse 
them. Aren't advisory lock ID's unique across the whole server? The sole 
purpose of the string ID is to be able to supply an initial namespace 
prefix (foo.NNN) so NNN wouldn't clash in different subsystems of the 
app. MySQL is pretty convenient in this regard. Now I think it would be 
easier for me to work around this Postgres limitation by simply LOCKing 
on some table (maybe one created specifically as something to lock on 
to) instead of using pg_advisory_lock explicitly.



Alas, I don't know of any way to use string based advisory locks directly.


You could store a mapping of lock strings to allocated ints in your app or in 
the DB.

Alternately, you could maybe use the full 64 bits of the single-argument form locks to 
pack in the initial chars of the lock ID strings if they're short. If you can cheat and 
require that lock identifiers contain only the base 64 characters - or even 
less - you can pack 10 or more characters into the 64 bits rather than the 8 chars you'd 
get with one byte per char. Of course, you can't do that if your strings are in any way 
user-supplied or user-visible because you can't support non-ascii charsets when doing 
ugly things like that.



--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why security-definer functions are executable by public by default?

2011-04-05 Thread Tom Lane
hubert depesz lubaczewski dep...@depesz.com writes:
 was pointed to the fact that security definer functions have the same
 default privileges as normal functions in the same language - i.e. if
 the language is trusted - public has the right to execute them.

 maybe i'm missing something important, but given the fact that security
 definer functions are used to get access to things that you usually
 don't have access to - shouldn't the privilege be revoked by default,
 and grants left for dba to decide?

I don't see that that follows, at all.  The entire point of a security
definer function is to provide access to some restricted resource to
users who couldn't get at it with their own privileges.  Having it start
with no privileges would be quite useless.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] schema access privs

2011-04-05 Thread Ray Stell
On Tue, Apr 05, 2011 at 06:33:46PM +0530, Vibhor Kumar wrote:
 
 On Apr 5, 2011, at 6:07 PM, Ray Stell wrote:
 
  On Tue, Apr 05, 2011 at 03:58:46PM +0530, Vibhor Kumar wrote:
  
  Following are details:
  postgres=UC/postgres+
  [user]=[privs]/[ROLE who granted the privs]
  
  What's the logic for reporting the U priv twice?  
 
 [public]=[access]/[ROLE who granted the privs]
 
 For PUBLIC there is no username; it's always = (an equals sign) followed by 
 access/[role who granted the privs].


template1=# \pset expanded
Expanded display is on.
template1=# \dn+ information_schema
List of schemas
-[ RECORD 1 ]-+-
Name  | information_schema
Owner | postgres
Access privileges | postgres=UC/postgres
  | =U/postgres
Description   | 


From this:

template1=# \pset expanded
Expanded display is off.
template1=# \dn+ information_schema
  List of schemas
Name|  Owner   |  Access privileges   | Description 
+--+--+-
 information_schema | postgres | postgres=UC/postgres+| 
|  | =U/postgres  | 
(1 row)

I was reading 3 fields:

1. postgres=UC
2. postgres=U
3. postgres

which made no sense at all.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Named advisory locks

2011-04-05 Thread Ben Chobot

On Apr 5, 2011, at 7:35 AM, rihad wrote:

 No, what I meant was that we're already using ints for a different purpose in 
 another app on the same server, so I cannot safely reuse them. Aren't 
 advisory lock ID's unique across the whole server? The sole purpose of the 
 string ID is to be able to supply an initial namespace prefix (foo.NNN) so 
 NNN wouldn't clash in different subsystems of the app. MySQL is pretty 
 convenient in this regard. Now I think it would be easier for me to work 
 around this Postgres limitation by simply LOCKing on some table (maybe one 
 created specifically as something to lock on to) instead of using 
 pg_advisory_lock explicitly.

Simply locking tables might be easy, but probably won't be optimal. Why are you 
using advisory locks at all? They certainly have their place, but they can also 
be an overused crutch, especially for people less familiar with MVCC.
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Named advisory locks

2011-04-05 Thread rihad

On 04/05/2011 08:29 PM, Ben Chobot wrote:


On Apr 5, 2011, at 7:35 AM, rihad wrote:


No, what I meant was that we're already using ints for a different
purpose in another app on the same server, so I cannot safely reuse
them. Aren't advisory lock ID's unique across the whole server? The
sole purpose of the string ID is to be able to supply an initial
namespace prefix (foo.NNN) so NNN wouldn't clash in different
subsystems of the app. MySQL is pretty convenient in this regard.
Now I think it would be easier for me to work around this Postgres
limitation by simply LOCKing on some table (maybe one created
specifically as something to lock on to) instead of using
pg_advisory_lock explicitly.


Simply locking tables might be easy, but probably won't be optimal.
Why are you using advisory locks at all? They certainly have their
place, but they can also be an overused crutch, especially for people
less familiar with MVCC.



We're using advisory locks to limit access to an external shared resource.



Re: [GENERAL] Named advisory locks

2011-04-05 Thread Vick Khera
On Tue, Apr 5, 2011 at 10:35 AM, rihad ri...@mail.ru wrote:

 No, what I meant was that we're already using ints for a different purpose
 in another app on the same server, so I cannot safely reuse them. Aren't
 advisory lock ID's unique across the whole server? The sole purpose of the
 string ID is to be able to supply an initial namespace prefix (foo.NNN) so
 NNN wouldn't clash in different subsystems of the app. MySQL is pretty
 convenient in this regard. Now I think it would be easier for me to work
 around this Postgres limitation by simply LOCKing on some table (maybe one
 created specifically as something to lock on to) instead of using
 pg_advisory_lock explicitly.


so if you have a namespace problem, solve that. the range of integers is
quite large. just assign a range to each application so they don't clash.
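Since pg_advisory_lock also has a two-argument (int4, int4) form, one way to emulate MySQL-style named locks while carving up the integer space is to hash the namespace string into the first key and keep the per-object id in the second. A minimal sketch, assuming a CRC is an acceptable hash (the function name and CRC choice are illustrative, any stable 32-bit hash works):

```python
import zlib

def advisory_lock_keys(namespace, object_id):
    """Map a ('foo', NNN)-style named lock onto the two signed 32-bit
    keys accepted by pg_advisory_lock(int4, int4)."""
    def to_int4(n):
        n &= 0xFFFFFFFF                      # keep the low 32 bits
        return n - 2**32 if n >= 2**31 else n
    # the CRC of the namespace becomes key1, so different subsystems
    # cannot collide as long as their namespace CRCs differ
    return to_int4(zlib.crc32(namespace.encode())), to_int4(object_id)

# The resulting pair is passed straight through, e.g.:
#   SELECT pg_advisory_lock(%s, %s)
```

This keeps the full int4 range available per namespace instead of partitioning one shared space by hand.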


[GENERAL] Seeking Postgres users, DBAs and developers in areas where we don't have conferences or user groups

2011-04-05 Thread Selena Deckelmann
Hi!

I'm trying to find people who use, administrate or develop PostgreSQL
and live in regions that our community doesn't currently serve.

By "doesn't currently serve", I mean that:

* You don't know many other people that use PostgreSQL in your town,
nearby city or country,
* You've never been to a Postgres conference,
* We don't have a Postgres conference on your continent or within a
10-hour flight.

Here's the survey:

http://chesnok.com/u/1g

I'll use this information to: contact you about possible sponsorship
for your attendance at an event, connect you with PostgreSQL people
who live in your region, and, if you are interested, help start up a
user group in your region.

This survey will not be used for commercial purposes.

Thanks,
-selena
User Group Liaison for PostgreSQL Global Development Group

-- 
http://chesnok.com/daily - me



[GENERAL] Is index rebuilt upon updating table with the same values as already existing in the table?

2011-04-05 Thread Zeev Ben-Sender
Hi,

Having the update statement like this:
UPDATE my_table SET COL1 = '05cf5219-38e6-46b6-a6ac-5bbc3887d16a', COL2 = 28 
WHERE COL3 = 35;

Will this statement result in an index rebuild if COL1 and COL2 already equal 
'05cf5219-38e6-46b6-a6ac-5bbc3887d16a' and 28?

Thank you



Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-05 Thread Merlin Moncure
On Mon, Apr 4, 2011 at 2:20 PM, John R Pierce pie...@hogranch.com wrote:
 On 04/04/11 12:07 PM, Martin Gainty wrote:

 ..horribly documented, inefficient, user-hostile, impossible to maintain
 interpreted language..
 to whom might you be alluding to

 I only used a few of those adjectives, and prefixed them by hypothetical.
 to be honest, I would expect most languages commonly used in web service
 environments to be more efficient at string processing than pl/pgsql, and I
 really can't think of a counterexample off the top of my head.

most languages *are* more efficient at string processing but that's not
the whole story, since to get at that benefit you typically have to:

1. application makes query to get the data
2. database searches for data, converts it to wire format and sends it
through protocol to libpq
3. libpq wrapper converts it to language native string (unless you are in C)
4. language string processing takes place
5. data is re-stacked into queries and sent back to the database over
wire format via protocol
6. database writes it out

Now, if your data is not meant for consumption by the database then
the case for application side coding is stronger.  But if you are just
manhandling data only to send it right back the database you should
think twice about introducing all those steps to access the benefits.
Not to mention, by introducing a client side procedural language you
are introducing a whole new set of data types, conditions, constraint
checking etc.  Procedural languages are also defect factories (this
includes pl/pgsql if written in more procedural fashion, so you should
keep it to sql, or at least in relational style if you can).

pl/pgsql is perfectly fine for string processing as long as your
problem is such that you can avoid heavy iteration (string
concatenation in a loop is especially problematic, but work around
that using arrays is trivial and effective) and manage the strings
relationally and with the built in functions.

The better you are with sql, the less iteration you tend to need.  The
server backend string api is fairly rich and can get you through most
light to moderate string processing tasks.
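For example, the loop-concatenation pitfall Merlin mentions can usually be replaced by a single aggregate (table and column names here are hypothetical):

```sql
-- Quadratic in pl/pgsql: each iteration re-copies the whole result
--   FOR r IN SELECT word FROM words LOOP
--     result := result || ',' || r.word;
--   END LOOP;
-- Linear: collect the pieces as an array and join once
SELECT array_to_string(array_agg(word), ',') FROM words;
-- or, on 9.0 and later, simply:
SELECT string_agg(word, ',') FROM words;
```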

merlin



[GENERAL] Problem with multiple action rule on modifiable view

2011-04-05 Thread Chris Oldfield
Hi,
   I'm trying to implement a modifiable view and have run into a road
block.  A DELETE rule attached to my view refuses to execute any commands
after the first delete on one of the view's component tables. Examining the
output of EXPLAIN, it seems that the view is constructed for every action 
in the rule, whether the action references the view (via OLD) or not.

Is this expected behavior? Is there a work around? 

I realize that I'm probably missing something about the way rules work, 
but nonetheless I'm confused. Naively, it seems to me that the view 
should only be evaluated for an action if OLD is referenced.  Otherwise, 
some strange behavior happens. Such is the example case below, the action 
DELETE FROM parent_child_view WHERE id=1; results in only the first 
action executing, but DELETE FROM parent_child_view; executes both 
actions.

I'm using Postgres 9.0.3. Thanks for any help.  Below is the example case:

CREATE TABLE parent(
   id serial PRIMARY KEY,
   p_data integer NOT NULL UNIQUE
);
CREATE TABLE child(
   id serial PRIMARY KEY,
   parent_id integer NOT NULL REFERENCES parent(id), 
   c_data integer NOT NULL
);
CREATE TABLE parent_child_view(
   id integer,
   p_data integer,
   c_data integer
);
CREATE RULE "_RETURN" AS ON SELECT TO parent_child_view DO INSTEAD
   SELECT child.id, p_data, c_data
   FROM parent JOIN child ON (parent_id=parent.id);
CREATE RULE parent_child_view_delete AS ON DELETE TO parent_child_view DO INSTEAD(
   DELETE FROM child WHERE id=OLD.id returning id; 
   DELETE FROM parent WHERE id NOT IN (SELECT parent_id FROM child);
);

 EXPLAIN DELETE FROM parent_child_view WHERE id=1;

                           QUERY PLAN
---------------------------------------------------------------
 Delete
   ->  Nested Loop
         ->  Nested Loop
               ->  Index Scan using child_pkey on child
                     Index Cond: (id = 1)
               ->  Index Scan using child_pkey on child
                     Index Cond: (public.child.id = 1)
         ->  Index Scan using parent_pkey on parent
               Index Cond: (parent.id = public.child.parent_id)

 Delete
   ->  Nested Loop
         ->  Nested Loop
               ->  Index Scan using child_pkey on child
                     Index Cond: (id = 1)
               ->  Index Scan using parent_pkey on parent
                     Index Cond: (public.parent.id = public.child.parent_id)
         ->  Seq Scan on parent
               Filter: (NOT (hashed SubPlan 1))
               SubPlan 1
                 ->  Seq Scan on child
(21 rows)
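An untested sketch of one possible workaround, given that INSTEAD OF triggers on views only arrive in 9.1: wrap both deletes in a single function, so the rule contains only one action and the view is expanded only once.

```sql
CREATE OR REPLACE FUNCTION delete_parent_child(child_id integer)
RETURNS void AS $$
BEGIN
    DELETE FROM child WHERE id = child_id;
    -- remove parents that no longer have any children
    DELETE FROM parent WHERE id NOT IN (SELECT parent_id FROM child);
END;
$$ LANGUAGE plpgsql;

CREATE RULE parent_child_view_del AS ON DELETE TO parent_child_view
    DO INSTEAD SELECT delete_parent_child(OLD.id);
```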





Re: [GENERAL] Is index rebuilt upon updating table with the same values as already existing in the table?

2011-04-05 Thread Vick Khera
On Tue, Apr 5, 2011 at 11:22 AM, Zeev Ben-Sender ze...@checkpoint.comwrote:

  Hi,



 Having the update statement like this:

 UPDATE my_table SET COL1 = '05cf5219-38e6-46b6-a6ac-5bbc3887d16a', COL2 =
 28 WHERE COL3 = 35;



 Will this statement result in an index rebuild if COL1 and COL2 already equal
 '05cf5219-38e6-46b6-a6ac-5bbc3887d16a' and 28?



 Thank you




Easy to test.  If the row moved to another page, then yes, else no.

To see if the row moved, select the ctid and compare before and after:

select ctid from my_table where COL3=35;
UPDATE ...
select ctid from my_table where COL3=35;

The ctid is returned as a tuple indicating the page number and position
within that page.
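If the intent is to avoid the write (and the index maintenance) entirely when the values are unchanged, PostgreSQL 8.4 and later ship a built-in trigger for exactly this; a sketch against the table from the question:

```sql
-- suppress_redundant_updates_trigger() silently skips any UPDATE whose
-- new row is identical to the old one, so no new row version is
-- written and no index entries are touched.
CREATE TRIGGER my_table_skip_redundant
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE PROCEDURE suppress_redundant_updates_trigger();
```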


Re: [GENERAL] Plpgsql function to compute every other Friday

2011-04-05 Thread C. Bensend

 By making this function sql and immutable, you give the database more
 ability to inline it into queries which can make a tremendous
 performance difference in some cases.  You can also index based on it
 which can be useful.

Very nice, Merlin.  These aren't really a concern in my case as
I'm the only one accessing the app (and hence, performance isn't
an issue), but it's good to see a better way to do things.

The PostgreSQL community really is top notch.

Benny


-- 
Hairy ape nads.-- Colleen, playing Neverwinter Nights





Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-05 Thread John R Pierce

On 04/05/11 9:40 AM, Merlin Moncure wrote:

On Mon, Apr 4, 2011 at 2:20 PM, John R Piercepie...@hogranch.com  wrote:

I only used a few of those adjectives, and prefixed them by hypothetical.
to be honest, I would expect most languages commonly used in web service
environments to be more efficient at string processing than pl/pgsql, and I
really can't think of a counterexample off the top of my head.

most languages *are* more efficient at string processing but that's not
the whole story, since to get at that benefit you typically have to:

1. application makes query to get the data
2. database searches for data, converts it to wire format and sends it
through protocol to libpq
3. libpq wrapper converts it to language native string (unless you are in C)
4. language string processing takes place
5. data is re-stacked into queries and sent back to the database over
wire format via protocol
6. database writes it out


in the OP's case, he was asking about strings he was inserting into 
postgres, currently he was inserting them as a single long field, but he 
wanted to break them up into multiple fields.  So, he could send the 
long string to a pgsql function that did the dicing up, or he could dice 
up the string first then send the pieces to fields of a database.  I was 
expressing the opinion that it's highly likely the 2nd solution would 
work better, and I guess my bit of misplaced humor clouded that message.






Re: [GENERAL] ..horribly documented, inefficient, user-hostile, impossible to maintain interpreted language..

2011-04-05 Thread Merlin Moncure
On Tue, Apr 5, 2011 at 1:04 PM, John R Pierce pie...@hogranch.com wrote:
 On 04/05/11 9:40 AM, Merlin Moncure wrote:

 On Mon, Apr 4, 2011 at 2:20 PM, John R Piercepie...@hogranch.com  wrote:

 I only used a few of those adjectives, and prefixed them by hypothetical.
 to be honest, I would expect most languages commonly used in web service
 environments to be more efficient at string processing than pl/pgsql, and
 I
 really can't think of a counterexample off the top of my head.

 most languages *are* more efficient at string processing but that's not
 the whole story, since to get at that benefit you typically have to:

 1. application makes query to get the data
 2. database searches for data, converts it to wire format and sends it
 through protocol to libpq
 3. libpq wrapper converts it to language native string (unless you are in
 C)
 4. language string processing takes place
 5. data is re-stacked into queries and sent back to the database over
 wire format via protocol
 6. database writes it out

 in the OP's case, he was asking about strings he was inserting into
 postgres, currently he was inserting them as a single long field, but he
 wanted to break them up into multiple fields.  So, he could send the long
 string to a pgsql function that did the dicing up, or he could dice up the
 string first then send the pieces to fields of a database.    I was
 expressing the opinion that it's highly likely the 2nd solution would work
 better, and I guess my bit of misplaced humor clouded that message.

right -- it follows from my mantra to 'use built in functions when you
can' that string_to_array or regexp_split_to_array would probably work
for this case (maybe with some escaping, maybe not).
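A quick illustration of those two built-ins (the input strings are made up):

```sql
-- fixed delimiter:
SELECT string_to_array('one|two|three', '|');             -- {one,two,three}
-- individual fields can be indexed directly:
SELECT (string_to_array('one|two|three', '|'))[2];        -- 'two'
-- pattern-based split, for sloppy delimiters:
SELECT regexp_split_to_array('one, two,three', ',[ ]*');
```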

merlin



Re: [GENERAL] Named advisory locks

2011-04-05 Thread rihad

On Tue, Apr 5, 2011 at 10:35 AM, rihad rihad(at)mail(dot)ru wrote:


No, what I meant was that we're already using ints for a different purpose
in another app on the same server, so I cannot safely reuse them. Aren't
advisory lock ID's unique across the whole server? The sole purpose of the
string ID is to be able to supply an initial namespace prefix (foo.NNN) so
NNN wouldn't clash in different subsystems of the app. MySQL is pretty
convenient in this regard. Now I think it would be easier for me to work
around this Postgres limitation by simply LOCKing on some table (maybe one
created specifically as something to lock on to) instead of using
pg_advisory_lock explicitly.



so if you have a namespace problem, solve that. the range of integers is
quite large. just assign a range to each application so they don't clash.


Can't do that, because I'm simply using some table's serial value as the 
lock ID, which is itself a bigint.


The workaround of LOCKing on a table looks fine to me.



Re: [GENERAL] Out of memory

2011-04-05 Thread Jeff Davis
On Tue, 2011-04-05 at 21:50 +1200, Jeremy Palmer wrote:
 Hi,
 
 I've been having repeated troubles trying to get a PostgreSQL app to play 
 nicely on Ubuntu. I recently posted a message on this list about an out of 
 memory error and got a resolution by reducing the work_mem setting. However 
 I'm now getting further out of memory issues during the same stage of plpgsql 
 function as mentioned before.
 
 The function itself is run as part of larger transaction which does the 
 following:

Where is the source to the function?

Regards,
Jeff Davis




Re: [GENERAL] Named advisory locks

2011-04-05 Thread Vick Khera
On Tue, Apr 5, 2011 at 2:49 PM, rihad ri...@mail.ru wrote:

 Can't do that, because I'm simply using some table's serial value as the
 lock ID, which is itself a bigint.


So "you assigned the entire namespace to the other purpose" seems to be
the programmer's bad planning :(


Re: [GENERAL] Named advisory locks

2011-04-05 Thread rihad

On 04/06/2011 12:20 AM, Vick Khera wrote:



On Tue, Apr 5, 2011 at 2:49 PM, rihad ri...@mail.ru
mailto:ri...@mail.ru wrote:

Can't do that, because I'm simply using some table's serial value as
the lock ID, which is itself a bigint.


So "you assigned the entire namespace to the other purpose" seems to
be the programmer's bad planning :(


Better programmers have invented refactoring ;-)



Re: [GENERAL] Out of memory

2011-04-05 Thread John R Pierce

On 04/05/11 2:50 AM, Jeremy Palmer wrote:

I've been having repeated troubles trying to get a PostgreSQL app to play 
nicely on Ubuntu. I recently posted a message on this list about an out of 
memory error and got a resolution by reducing the work_mem setting. However I'm 
now getting further out of memory issues during the same stage of plpgsql 
function as mentioned before.

The function itself is run as part of larger transaction which does the 
following:

1/ Maintains 104 tables (15 PostGIS tables), by loading and applying 
incremental table changes. A typical incremental load will maintain about 
10,000 rows.

2/ When each one of these tables is updated an after trigger is fired that 
maintains an associated table revision table.

3/ After all of the tables are maintained a plpgsql function is called to 
build/maintain a set of de-normalised tables. These tables total about 20GB. 
Each one of these tables is compared against the previous table revision to 
determine its row changes. It's in this function that the out of memory 
exception is occurring.


a few random questions...

Does that all really have to be a single transaction?

Do you really need to use triggers for your revision tracking, and can't 
rely on your daily update cycle to manually set the revision information?


Is it really necessary to generate massive denormalized tables, rather 
than using views to join the data?




shared_buffers = 512MB
maintenance_work_mem = 512MB
temp_buffers = 256MB
work_mem = 1MB
wal_buffers = 16MB
effective_cache_size = 4094MB

The size of the database is 350GB. The typical number of users connected to the 
database is 1 or 2. This database is used for loading external data, managing 
revision table information and generating and outputting de-normalised 
datasets, so it does not have a high number of transactions running. Typically 
1 large one per day.



with only 1-2 connections, you certainly could increase the work_mem. 
Alternately, this single giant transaction could manually set a larger 
work_mem which would only apply to it.   Personally, given your 8gb 
system and what you've described, I think I'd set the tuning parameters 
something like...


shared_buffers = 1GB
maintenance_work_mem = 128MB
temp_buffers = 64MB
work_mem = 16MB
wal_buffers = 16MB
effective_cache_size = 4094MB


adjust effective_cache_size to somewhat less than the 'cached' value 
shown in `free -m` after your system has been running for awhile.
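The per-transaction override John mentions could look like this (the value and the function name are illustrative):

```sql
BEGIN;
-- SET LOCAL lasts only until COMMIT/ROLLBACK, so other sessions keep
-- the conservative server-wide work_mem
SET LOCAL work_mem = '256MB';
SELECT my_large_maintenance_function();   -- hypothetical function
COMMIT;
```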




[GENERAL] unique amount more than one table

2011-04-05 Thread Perry Smith
I have five tables each with a name field.  Due to limitations in my user 
interface, I want a name to be unique among these five tables.

I thought I could first create a view with something like:

SELECT name, 'table1' as type from table1
  UNION ALL
SELECT name, 'table2' as type from table2
  UNION ALL
SELECT name, 'table3' as type from table3
 ...

I called this view xxx (I'm just experimenting right now).

I then created a function:

CREATE OR REPLACE FUNCTION unique_xxx ( ) RETURNS boolean AS $$
   SELECT ( SELECT max(cnt) FROM ( SELECT count(*) AS cnt FROM xxx GROUP BY 
name ) AS foo ) = 1;
$$ LANGUAGE SQL;

Next I added a check constraint with:

ALTER TABLE table1 ADD CHECK ( unique_xxx() );

A test shows:

select unique_xxx();
 unique_xxx 
------------
 t
(1 row)

After I insert a row that I want to be rejected, I can do:

select unique_xxx();
 unique_xxx 
------------
 f
(1 row)

but the insert was not rejected.  I'm guessing because the check constraint 
runs before the insert?  So, I could change my approach and have my unique_xxx 
function see if the name to be added is already in the xxx view but it is at 
that point that I stopped and thought I would ask for advice.  Am I close or am 
I going down the wrong road?

Thank you for your time,
pedz




Re: [GENERAL] unique amount more than one table

2011-04-05 Thread Jeff Davis
On Tue, 2011-04-05 at 17:02 -0500, Perry Smith wrote:
 CREATE OR REPLACE FUNCTION unique_xxx ( ) RETURNS boolean AS $$
SELECT ( SELECT max(cnt) FROM ( SELECT count(*) AS cnt FROM xxx GROUP 
 BY name ) AS foo ) = 1;
 $$ LANGUAGE SQL;
 
 Next I added a check constraint with:
 
 ALTER TABLE table1 ADD CHECK ( unique_xxx() );

...

 After I insert a row that I want to be rejected, I can do:
 
 select unique_xxx();
  unique_xxx 
 
  f
 (1 row)
 
 but the insert was not rejected.  I'm guessing because the check constraint 
 runs before the insert?

Yes. But even if it ran afterward, there is still a potential race
condition, because the query in the CHECK constraint doesn't see the
results of concurrent transactions.

To make this work, you should be using LOCK TABLE inside of a trigger
(probably a BEFORE trigger that locks the table, then looks to see if
the value exists in the view already, and if so, throws an exception).
CHECK is not the right place for this kind of thing.

Keep in mind that the performance will not be very good, however. There
is not a good way to make this kind of constraint perform well,
unfortunately. But that may not be a problem in your case -- try it and
see if the performance is acceptable.
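An untested sketch of that trigger approach, reusing the view xxx from the original message and assuming the five tables are named table1..table5 (repeat the trigger on each table):

```sql
CREATE OR REPLACE FUNCTION enforce_unique_name() RETURNS trigger AS $$
BEGIN
    -- serialize inserts across the tables so the existence check
    -- below cannot race with a concurrent transaction
    LOCK TABLE table1, table2, table3, table4, table5
        IN SHARE ROW EXCLUSIVE MODE;
    IF EXISTS (SELECT 1 FROM xxx WHERE name = NEW.name) THEN
        RAISE EXCEPTION 'name "%" is already in use', NEW.name;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_unique_name
    BEFORE INSERT ON table1
    FOR EACH ROW EXECUTE PROCEDURE enforce_unique_name();
```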

Regards,
Jeff Davis




Re: [GENERAL] unique amount more than one table

2011-04-05 Thread David Johnston
You can try restricting all name insertions (on any of the tables) to go 
through one or more functions that serialize amongst themselves.  Basically 
lock a common table and check the view for the new name before inserting.

On Apr 5, 2011, at 18:02, Perry Smith pedz...@gmail.com wrote:

 I have five tables each with a name field.  Due to limitations in my user 
 interface, I want a name to be unique among these five tables.
 
 I thought I could first create a view with something like:
 
 SELECT name, 'table1' as type from table1
  UNION ALL
 SELECT name, 'table2' as type from table2
  UNION ALL
 SELECT name, 'table3' as type from table3
 ...
 
 I called this view xxx (I'm just experimenting right now).
 
 I then created a function:
 
 CREATE OR REPLACE FUNCTION unique_xxx ( ) RETURNS boolean AS $$
   SELECT ( SELECT max(cnt) FROM ( SELECT count(*) AS cnt FROM xxx GROUP 
 BY name ) AS foo ) = 1;
 $$ LANGUAGE SQL;
 
 Next I added a check constraint with:
 
 ALTER TABLE table1 ADD CHECK ( unique_xxx() );
 
 A test shows:
 
 select unique_xxx();
 unique_xxx 
 
 t
 (1 row)
 
 After I insert a row that I want to be rejected, I can do:
 
 select unique_xxx();
 unique_xxx 
 
 f
 (1 row)
 
 but the insert was not rejected.  I'm guessing because the check constraint 
 runs before the insert?  So, I could change my approach and have my 
 unique_xxx function see if the name to be added is already in the xxx view 
 but it is at that point that I stopped and thought I would ask for advice.  
 Am I close or am I going down the wrong road?
 
 Thank you for your time,
 pedz
 
 


Re: [GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi John,

 Does that all really have to be a single transaction?

Yes - I need to ensure that of the changesets and denormalised tables are 
created in the same transaction, so that if an error occurs the database is 
rolled back to the last successfully applied changeset. I don't want to get 
into the business of the splitting it into separate transactions and then 
having to revert changes that were applied in a previous transaction step.

 Do you really need to use triggers for your revision tracking, and can't 
 rely on your daily update cycle to manually set the revision information?

They are not necessary, but it has the cleanest code implementation and makes 
the revision maintenance to the tables almost transparent. If they are causing 
the problem I could change the logic...

 Is it really necessary to generate massive denormalized tables, rather 
 than using view's to join the data?

Yes - to create the tables is complex, and often involves complex functions and 
multiple temp tables. The overall time to create these tables is somewhere in 
the area of 3hours on this server. I'm also unloading these tables multiple 
times for separate purposes, so they would need to be materialised anyway.

 with only 1-2 connections, you certainly could increase the work_mem. 

I can't increase this value at the moment on this server because I was getting 
out of memory errors with the initial population of the database (which builds 
the denormalized tables, but does not determine the changeset to the previous 
table revision). 

I tried values from 256MB down to 2MB and could only get the query to run with 1MB. I 
suspect even this was pushing the boundary, so when I got to the next stage in 
my testing - to apply incremental updates - the memory issue raised its head 
again.

Regards,
Jeremy




Re: [GENERAL] unique amount more than one table

2011-04-05 Thread Rob Sargent



On 04/05/2011 04:02 PM, Perry Smith wrote:

I have five tables each with a name field.  Due to limitations in my user 
interface, I want a name to be unique among these five tables.

I thought I could first create a view with something like:

SELECT name, 'table1' as type from table1
   UNION ALL
SELECT name, 'table2' as type from table2
   UNION ALL
SELECT name, 'table3' as type from table3
  ...

I called this view xxx (I'm just experimenting right now).

I then created a function:

CREATE OR REPLACE FUNCTION unique_xxx ( ) RETURNS boolean AS $$
SELECT ( SELECT max(cnt) FROM ( SELECT count(*) AS cnt FROM xxx GROUP 
BY name ) AS foo ) = 1;
$$ LANGUAGE SQL;

Next I added a check constraint with:

ALTER TABLE table1 ADD CHECK ( unique_xxx() );

A test shows:

select unique_xxx();
  unique_xxx

  t
(1 row)

After I insert a row that I want to be rejected, I can do:

select unique_xxx();
  unique_xxx

  f
(1 row)

but the insert was not rejected.  I'm guessing because the check constraint 
runs before the insert?  So, I could change my approach and have my unique_xxx 
function see if the name to be added is already in the xxx view but it is at 
that point that I stopped and thought I would ask for advice.  Am I close or am 
I going down the wrong road?

Thank you for your time,
pedz




You might try making a separate name table and having a unique index 
there and make the other users of name refer to the new table's name 
field.  (I would stick an id on the new name table...)
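A minimal sketch of that layout (all names hypothetical):

```sql
-- one table owns all names; the UNIQUE constraint does the real work
CREATE TABLE names (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE,
    type text NOT NULL            -- which entity table owns this name
);

-- each of the five tables then references names instead of storing
-- its own name column
CREATE TABLE table1 (
    id      serial PRIMARY KEY,
    name_id integer NOT NULL UNIQUE REFERENCES names(id)
    -- ... table1-specific columns ...
);
```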




[GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Greg Corradini
Hello,
Is it possible yet in 8.4 to pg_dump specific functions without having to do
the whole pg_restore thing?

If it is possible, what is the syntax to dump a specific function?

If not possible, then how does one use pg_restore to target a specific
function?

thx


Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Raymond O'Donnell

On 06/04/2011 00:15, Greg Corradini wrote:

Hello,
Is it possible yet in 8.4 to pg_dump specific functions without having
to do the whole pg_restore thing?


If I understand correctly what you're trying to do, a handy alternative 
is to use pgAdmin, right click on the function in the tree view, and 
select Scripts -> Create.


Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Greg Corradini
Thanks for the reply Raymond!

This is all through remote terminal so I can't use pg_admin ;(

Maybe some more quick context: I don't want to dump the whole database b/c
the thing is 12GB and for the application we're building we only access
certain tables in the DB. There's one table that has two triggers associated
to it, each of which in turn references its own procedure.

So my pg_dump syntax has a lot of -t table_name -ttable_nameII in it to
target only the tables I want. But the procedures tied to the triggers
aren't coming with it unless I do the whole dump.

thx

On Tue, Apr 5, 2011 at 4:19 PM, Raymond O'Donnell r...@iol.ie wrote:

 On 06/04/2011 00:15, Greg Corradini wrote:

 Hello,
 Is it possible yet in 8.4 to pg_dump specific functions without having
 to do the whole pg_restore thing?


 If I understand correctly what you're trying to do, a handy alternative is
 to use pgAdmin, right click on the function in the tree view, and select
  Scripts -> Create.

 Ray.

 --
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie




-- 
Greg


Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Jerry Sievers
Greg Corradini gregcorrad...@gmail.com writes:

 Hello,
 Is it possible yet in 8.4 to pg_dump specific functions without having to do
 the whole pg_restore thing?

 If it is possible, what is the syntax to dump a specific function?

 If not possible, then how does one use pg_restore to target a specific
 function?

Just do;

pg_dump --schema-only

Go find the function definition in the output script, snip it out and
load with psql.  Remember to load it into the correct schema and
whatever other details.

HTH

-- 
Jerry Sievers
e: gsiever...@comcast.net
p: 305.321.1144



Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Adrian Klaver
On Tuesday, April 05, 2011 4:19:56 pm Raymond O'Donnell wrote:
 On 06/04/2011 00:15, Greg Corradini wrote:
  Hello,
  Is it possible yet in 8.4 to pg_dump specific functions without having
  to do the whole pg_restore thing?
 
 If I understand correctly what you're trying to do, a handy alternative
 is to use pgAdmin, right click on the function in the tree view, and
  select Scripts -> Create.
 
 Ray.

In addition, from inside psql do \ef function_name. This opens the function in 
an editor from which you can save it elsewhere. 
If you are talking a lot of functions then you can use the -l and -L options to 
pg_restore to create a TOC list that can be edited to contain only the functions 
you want. These can then either be restored to a database or to a file.
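The TOC workflow could look roughly like this (database and file names are made up, and it requires a custom-format dump):

```shell
pg_dump -Fc mydb > mydb.dump           # custom-format dump
pg_restore -l mydb.dump > toc.txt      # list every object in the dump
# edit toc.txt: comment out (leading ';') everything except the
# FUNCTION entries you want, then emit just those as SQL:
pg_restore -L toc.txt -f functions.sql mydb.dump
```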

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] unique amount more than one table

2011-04-05 Thread Perry Smith

On Apr 5, 2011, at 5:50 PM, Rob Sargent wrote:

 
 
 On 04/05/2011 04:02 PM, Perry Smith wrote:
 I have five tables each with a name field.  Due to limitations in my user 
 interface, I want a name to be unique among these five tables.
 
 I thought I could first create a view with something like:
 
 SELECT name, 'table1' as type from table1
   UNION ALL
snip
  f
 (1 row)
 
 but the insert was not rejected.  I'm guessing because the check constraint 
 runs before the insert?  So, I could change my approach and have my 
 unique_xxx function see if the name to be added is already in the xxx view 
 but it is at that point that I stopped and thought I would ask for advice.  
 Am I close or am I going down the wrong road?
 
 Thank you for your time,
 pedz
 
 
 
 You might try making a separate name table and having a unique index there 
 and make the other users of name refer to the new table's name field.  (I 
 would stick an id on the new name table...)

Thanks to all.

I think this is the way I'm going to go.  I'll have an id, name, and type to 
tell me which of the other tables owns it.  Most of the other tables don't even 
need to exist.
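A minimal sketch of that design (table and column names here are illustrative,
not taken from the original schema):

```sql
-- Central registry: every name lives here exactly once, with a type
-- column recording which logical table owns it.
CREATE TABLE names (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE,
    type text NOT NULL CHECK (type IN ('table1', 'table2', 'table3',
                                       'table4', 'table5'))
);

-- Tables that still need rows of their own reference the registry
-- instead of carrying their own name column.
CREATE TABLE table1 (
    name_id integer PRIMARY KEY REFERENCES names (id)
    -- ... table1-specific columns ...
);

-- Inserting a duplicate name, even under a different type, now fails
-- with a unique-violation error on names.name.
```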

Thank you again,
pedz




Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Greg Corradini
On Tue, Apr 5, 2011 at 4:30 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 On Tuesday, April 05, 2011 4:19:56 pm Raymond O'Donnell wrote:
  On 06/04/2011 00:15, Greg Corradini wrote:
   Hello,
   Is it possible yet in 8.4 to pg_dump specific functions without having
   to do the whole pg_restore thing?
 
  If I understand correctly what you're trying to do, a handy alternative
  is to use pgAdmin, right click on the function in the tree view, and
  select Scripts -> Create.
 
  Ray.

 In addition, from inside psql do \ef function_name. This opens the function
 in
 an editor from which you can save it elsewhere.
 If you are talking a lot of functions then you can use the -l and -L
 options to
 pg_restore to create a TOC list that can be edited to contain only the
 functions
 you want. These then can either be restored to a database or to a file.

 --
 Adrian Klaver
 adrian.kla...@gmail.com



Thx for the replies Adrian and Jerry,

Those are both options. Jerry, your suggestion is the workaround I've
already used. Adrian, I did not know you could do that. Still...I was
looking for something that worked inline with pg_dump...and it looks like
pg_restore is still the major game in town.

It would be nice if pg_dump got some option flags to do this sort of thing
(though I'm naive as to why this doesn't exist in the first place).


Re: [GENERAL] Dumping functions with pg_dump

2011-04-05 Thread Adrian Klaver
On Tuesday, April 05, 2011 5:24:13 pm Greg Corradini wrote:
 On Tue, Apr 5, 2011 at 4:30 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 
 Thx for the replies Adrian and Jerry,
 
  Those are both options. Jerry, your suggestion is the workaround I've
  already used. Adrian, I did not know you could do that. Still...I was
 looking for something that worked inline with pg_dump...and it looks like
 pg_restore is still the major game in town.

Yea, the other common method is to develop from the outside in, instead of 
inside out. To explain: outside in would be to keep the schema object creation 
scripts in files external to the database and feed them to the database as 
needed; initial object creation and revisions are done on the external files. 
Inside out would be what you are doing, pulling the schema files from inside 
the database. One is not necessarily better than the other; each has its 
strengths and weaknesses, as you are finding :)

 
 It would be nice if pg_dump got some option flags to do this sort of thing
 (though I'm naive on why this doesn't exist in the first place)

pg_dump/pg_restore has become more flexible over the years, but there are still 
dependency issues between schema objects that make what you want difficult. The 
dependency tracking really only fully works for a complete dump/restore.
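That said, pg_restore does have a -P/--function option that extracts a single
named function from a custom-format dump, which covers the simple case. A
sketch (the function name and signature are placeholders; the argument types
must be spelled exactly as they appear in pg_restore -l output):

```shell
# Dump in custom format, then pull out just one function definition
# as a standalone SQL script.
pg_dump -Fc mydb > mydb.dump
pg_restore -P 'my_func(integer, text)' -f my_func.sql mydb.dump
```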

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Out of memory

2011-04-05 Thread Jeremy Palmer
Hi Jeff,

 Where is the source to the function? 

The source is located here: https://github.com/linz/linz_bde_uploader

The main function being called, LDS_MaintainSimplifiedLayers, is on line 37 of 
https://github.com/linz/linz_bde_uploader/blob/master/sql/lds_layer_functions.sql.

The actual out-of-memory exception was caught in the bde_GetTableDifferences 
function, at line 3263 of 
https://github.com/linz/linz_bde_uploader/blob/master/sql/bde_control_functions.sql.

When I was actually getting an out-of-memory error while creating the tables 
(not maintaining them), the query that seemed to kill the transaction was the 
one located at line 1463 of 
https://github.com/linz/linz_bde_uploader/blob/master/sql/lds_layer_functions.sql.
 After I dropped work_mem to 1MB it got past that and completed OK. But during 
maintenance of the table, the row differences need to be calculated and then 
applied to the table; see the LDS_ApplyTableDifferences function at line 353.
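For reference, that work_mem change can be scoped to a session or a single
transaction rather than set globally in postgresql.conf (the 1MB value is the
one mentioned above):

```sql
-- Per-session override; reverts when the session ends.
SET work_mem = '1MB';

-- Or scope it to a single transaction with SET LOCAL:
BEGIN;
SET LOCAL work_mem = '1MB';
-- run the expensive query here
COMMIT;
```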

Regards,
Jeremy
