[HACKERS] pgAgent job limit

2008-02-26 Thread Roberts, Jon
In pgAgent.cpp, I would like to add LIMIT as shown below:

LogMessage(_("Checking for jobs to run"), LOG_DEBUG);
DBresult *res=serviceConn->Execute(
 wxT("SELECT J.jobid ")
 wxT("  FROM pgagent.pga_job J ")
 wxT(" WHERE jobenabled ")
 wxT("   AND jobagentid IS NULL ")
 wxT("   AND jobnextrun <= now() ")
 wxT("   AND (jobhostagent = '' OR jobhostagent = '") + hostname +
wxT("') ")
 wxT(" ORDER BY jobnextrun ")
 wxT(" LIMIT pgagent.pga_job_limit('") + hostname + wxT("')"));


This requires two new objects:
create table pgagent.pga_job_throttle (jobmax int);

insert into pgagent.pga_job_throttle values (2);

create or replace function pgagent.pga_job_limit(p_hostname varchar)
returns int as
$$
declare
  v_limit int;
begin

  select jobmax
    into v_limit
    from pgagent.pga_job_throttle;

  if v_limit < 0 or v_limit is null then
    select count(*)
      into v_limit
      from pgagent.pga_job j
     where jobenabled
       and jobagentid is null
       and jobnextrun <= now()
       and (jobhostagent = '' or jobhostagent = p_hostname);
  end if;

  return v_limit;

end;
$$
language 'plpgsql';


This function allows pgAgent to be throttled dynamically by managing the
pgagent.pga_job_throttle table.  If you want to disable all jobs from
running, set the value to 0.  If you want to let as many jobs as are due
run at once (the current default behavior), either delete the record from
the table or set the value to a negative number.
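
For example, the throttle can then be adjusted on the fly with plain SQL
against the objects defined above (a sketch; the limit applies to each
polling pass pgAgent makes):

-- disable all new job starts
update pgagent.pga_job_throttle set jobmax = 0;
-- allow at most 5 jobs to be picked up per poll
update pgagent.pga_job_throttle set jobmax = 5;
-- back to unlimited (current default behavior)
delete from pgagent.pga_job_throttle;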

With this one-line change to the C++ code, pgAgent scales much better
because it no longer opens an excessive number of connections to the
database.


What do you guys think?


Jon



Re: [HACKERS] pgAgent job limit

2008-02-26 Thread Roberts, Jon
 Roberts, Jon wrote:
  In pgAgent.cpp, I would like to add LIMIT as shown below:
 
 
 
 
 [snip]
 
  What do you guys think?
 
 
 
 
 What has this to do with -hackers?
 
 I don't even know what project this refers to - it certainly doesn't
 refer to core postgres, which is what -hackers is about.
 

pgAgent is the db job scheduler and I thought it was part of the core db
project.  The daemon for it is packaged with pgAdmin.  


Jon



Re: [HACKERS] pgAgent job limit

2008-02-26 Thread Roberts, Jon

 -Original Message-
 From: Magnus Hagander [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, February 26, 2008 8:17 AM
 To: Roberts, Jon
 Cc: Andrew Dunstan; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] pgAgent job limit
 
 On Tue, Feb 26, 2008 at 08:10:09AM -0600, Roberts, Jon wrote:
   Roberts, Jon wrote:
In pgAgent.cpp, I would like to add LIMIT as shown below:
   
   
   
  
   [snip]
  
What do you guys think?
   
   
   
  
   What has this to do with -hackers?
  
   I don't even know what project this refers to - it certainly
doesn't
   refer to core postgres, which is what -hackers is about.
  
 
  pgAgent is the db job scheduler and I thought it was part of the
core db
  project.  The daemon for it is packaged with pgAdmin.
 
 Yeah, it's a part of the pgAdmin project. You'll want to direct your
mails
 to the [EMAIL PROTECTED] list.
 
 //Magnus

Thanks guys.  Sorry about that.  I guess I'm still thinking of jobs the
way Oracle does them, where the scheduler is part of the core database.



Jon



[HACKERS] Tuning 8.3

2008-02-25 Thread Roberts, Jon
I need to run about 1000 PostgreSQL connections on a server where I can
use about 4 GB of the 16 GB of total RAM.  Each session creates a process
that uses about 15 MB of RAM just for connecting, so I'm running out of
RAM rather quickly.

 

I have these non-default settings:

shared_buffers = 30MB

max_connections = 1000

 

I tried decreasing work_mem, but then the db wouldn't start.

 

I'm running version 8.3 on Windows 2003 Server.

 

Any tips for reducing the memory footprint per session?  There is
pgBouncer but is there anything I can do in the configuration before I
go with a connection pooler?

 

 

Jon



Re: [HACKERS] Tuning 8.3

2008-02-25 Thread Roberts, Jon
  I need to run about 1000 PostgreSQL connections on a server that I
can
  use about 4 GB of the total 16 GB of total RAM.  It seems that each
  session creates a process that uses about 15 MB of RAM just for
  connecting so I'm running out of RAM rather quickly.
 
 I think you're being bitten by a different problem than it appears.
 Windows
 has a fixed size per-session shared memory pool which runs out rather
 quickly.
 You can raise that parameter though. (The 125 mentioned there is
raised to
 about 300 with Pg 8.3.)
 
 See:
 
 http://www.postgresql.org/docs/faqs.FAQ_windows.html#4.4
 
 

Thanks for the tip and I'll be moving this to the performance forum.
Although, with 8.3, it seems that the FAQ is out of date?



Jon



[HACKERS] pgAgent job throttling

2008-02-25 Thread Roberts, Jon
I posted earlier about how to tune my server and I think the real
problem is how many connections pgAgent creates for my job needs.  

I basically need to run hundreds of jobs daily all to be executed at
4:00 AM.  To keep the jobs from killing the other systems, I am
throttling this with a queue table.  

For each job, pgAgent creates 2 connections (one to the maintenance db and
one to the target db), and then my queue throttling makes a third
connection every 10 seconds to check the job queue for an available queue
entry to execute.

A better solution would be to incorporate job throttling in pgAgent.
Currently, pgAgent will spawn as many jobs as required and it creates a
minimum of two database connections per job.  I think a solution would
be for pgAgent to not create the connection and execute my job steps
unless the current number of jobs running is less than a result from a
function.  

Sort of like this:

select count(*) into v_count from queue where status = 'Processing';

while v_count >= fn_get_max_jobs() loop

  perform pg_sleep(fn_get_sleep_time());

  select count(*) into v_count from queue where status = 'Processing';

end loop;


I'm doing this now, but inside a function being executed by pgAgent.
This means I have two connections open for each job.  Plus, I use a
function that uses dblink to lock the queue table and then update the
status, so that is a third connection that lasts just a millisecond.
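
Here is a hypothetical sketch of that dblink-based status update; the
table, column, and connection string are illustrative, not taken from my
actual system:

create or replace function fn_claim_queue_slot(p_job_id int) returns void as
$$
begin
  perform dblink_connect('queue_conn', 'dbname=warehouse');
  -- the update itself takes the row lock, so this extra connection only
  -- lives for the duration of one statement
  perform dblink_exec('queue_conn',
      'update queue set status = ''Processing'' where job_id = '
      || p_job_id);
  perform dblink_disconnect('queue_conn');
end;
$$
language 'plpgsql';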


So if 200 jobs are queued to run at 4:00 AM, then I have 400 connections
open and then it will spike a little bit as each queued job checks to
see if it can run.  

Do you guys think it is a good idea to add job throttling to pgAgent to
limit the number of connections?  Setting the value to -1 could be the
default, allowing an unlimited number of jobs to run at a time (as it is
now), while a value greater than -1 would be the maximum number of jobs
that can run concurrently.


Jon



Re: [HACKERS] Including PL/PgSQL by default

2008-02-22 Thread Roberts, Jon


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED] On Behalf Of Andrew Dunstan
 Sent: Friday, February 22, 2008 9:28 AM
 To: Tom Lane
 Cc: Joshua D. Drake; Greg Sabino Mullane; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Including PL/PgSQL by default
 
 
 
 Tom Lane wrote:
  Certainly you can cause massive DOS-type problems in plain SQL without
  any access to plpgsql, but that type of juvenile delinquency isn't what
  concerns me.  What I'm worried about is whether plpgsql isn't a useful
  tool for the sort of professional who would much rather you never knew
  he was there.  It's perhaps true that with generate_series() for looping
  and CASE for conditionals, plain SQL is Turing-complete and therefore
  could do anything, but it'd be awfully unpleasant and inefficient to use
  as a procedural language.  The pro who doesn't want you to know he's
  there is never going to try to do password cracking that way; the
  resource consumption would be large enough to be noticed.  plpgsql on
  the other hand is fast enough to be a *practical* tool for nefarious
  purposes.
 
 As a matter of interest, are there any other databases that have
 procedural languages that don't have them turned on by default? In fact,
 are there any that allow you to turn them off?
 
 It certainly looks like MySQL's PL is always on, unless I'm missing
 something, and ISTR PL/SQL is always on in Oracle, although it's now
 quite some years since I touched it in anger.
 
 
PL/SQL is there by default and so are Java Stored Procedures.  Neither
can be removed.  

However, you cannot create anything in Oracle without being given
permission to create it.  The notion that you can create a function
because you have connect rights to the database is foreign to me.
Connect should mean connect, not connect AND create.

Include the language by default and remove CREATE on the public schema.
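
For reference, locking down the public schema is a one-liner (the language
itself can stay installed):

revoke create on schema public from public;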


Jon



Re: [HACKERS] Including PL/PgSQL by default

2008-02-21 Thread Roberts, Jon

 
 Joshua D. Drake wrote:
 
  Notice that user foo is not a super user. Now I log into
  PostgreSQL and connect to the postgres database (the super users
  database) as the non privileged user foo. The user foo in theory
   has *zero* rights here except that he can connect.
 
 
 
 
 That's not true. The public schema has public UC privs, and always has
 had.
 

Is it safe to remove UC privs on the public schema?  Having rights to
connect should mean connect, not connect and create.


Jon





Re: [HACKERS] Permanent settings

2008-02-19 Thread Roberts, Jon
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED] On Behalf Of Magnus Hagander
 Sent: Tuesday, February 19, 2008 8:36 AM
 To: pgsql-hackers
 Subject: [HACKERS] Permanent settings
 
 What I'd really like to see is something like a new keyword on the SET
 command, so you could do SET PERMANENT foo=bar, which would write the
 configuration back into postgresql.conf.


How about putting an indicator in the postgresql.conf file, say dynamic=1,
so the db could manage the file; otherwise the dynamic change wouldn't
stick on a restart?  You wouldn't need to add a new keyword this way, and
a DBA would be less likely to mess up the syntax.


Jon



Re: [HACKERS] Permanent settings

2008-02-19 Thread Roberts, Jon
 
 Gregory Stark wrote:
 
  The alternative is to have two files and read them both. Then if you
  change a variable which is overridden by the other source you can warn
  that the change is ineffective.
 
  I think on balance the include file method is so much simpler that I
  prefer it.
 
 I think this is a good idea.  I would suggest being able to query
 exactly which config file a setting came from -- so you can see whether
 it's the stock postgresql.conf, or the locally-modified
 postgresql.local.conf.
 

So a junior DBA goes to manage the db, makes a change to the
postgresql.conf file, and bounces the db.  The change doesn't stick.
That doesn't sound like fun, and it also sounds like Oracle's spfile and
pfile.



Jon



Re: [HACKERS] subquery in limit

2008-02-15 Thread Roberts, Jon
I have no idea why you can't do a subquery in the limit but you can
reference a function:

create table test as select * from pg_tables;

create or replace function fn_count(p_sql varchar) returns int as
$$
declare
  v_count int;
begin
  execute p_sql into v_count;
  return v_count;
end;
$$
language 'plpgsql' security definer;


select * from test limit fn_count('select round(count(*)*0.9) from
test');

And I'm sure someone will point out a more efficient way to write my
function without using pl/pgsql.  :)
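
For the archives: on later PostgreSQL releases (8.4 and up), window
functions make a "top 90 percent" possible without a helper function; a
rough sketch:

-- roughly the first 90% of rows, ordered by tablename
select tablename
  from (select tablename,
               row_number() over (order by tablename) as rn,
               count(*) over () as total
          from test) s
 where rn <= ceil(total * 0.9)
 order by tablename;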


Jon


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED] On Behalf Of Grzegorz Jaskiewicz
 Sent: Friday, February 15, 2008 5:35 AM
 To: pgsql-hackers@postgresql.org
 Subject: [HACKERS] subquery in limit
 
 (just as an example):
 select * from test order by a limit (select count(*)*0.9 from test);
 
 is not doable in postgresql. Someone recently asked on IRC about,
 SELECT TOP 90 PERCENT type of query in m$sql.
 Any ideas how should this be approach in psql. I ask here, because you
 guys probably can tell why the first query won't work (subquery is not
 allowed as limit's argument, why?).
 
 cheers.
 
 --
 Grzegorz Jaskiewicz
 [EMAIL PROTECTED]
 
 
 
 


Re: [HACKERS] subquery in limit

2008-02-15 Thread Roberts, Jon

 
 Roberts, Jon [EMAIL PROTECTED] writes:
  I have no idea why you can't do a subquery in the limit
 
 It hasn't seemed worth putting any effort into --- AFAIR this is the
 first time anyone's even inquired about it.  As you say, you can
always
 use a function.
 
  And I'm sure someone will point out a more efficient way to write my
  function without using pl/pgsql.  :)
 
 Only that it doesn't seem a particularly bright idea to use SECURITY
 DEFINER for a function that will execute any arbitrary caller-provided
 SQL ...
 

LOL!  I knew something in my code would trigger a response.  :)



Jon



Re: [HACKERS] Merge condition in postgresql

2008-02-04 Thread Roberts, Jon

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-hackers-
 [EMAIL PROTECTED] On Behalf Of Stephen Frost
 Sent: Monday, February 04, 2008 8:28 AM
 To: Amit jain
 Cc: pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Merge condition in postgresql
 
 * Amit jain ([EMAIL PROTECTED]) wrote:
  I am currently migrating database from ORACLE to postgresql but i am
  stucked up at one point while creating procedures.
  There is a query which has used oracle MERGE condition so how can i
  change this query as per posgresql. kindly suggest me its very urgent.
 
 If you're talking about what I think you're talking about, then
 basically you need to break up the MERGE into seperate insert/update
 steps.  You just have to write the queries such that if the record
 doesn't exist, it gets inserted, and if it does exist, then it gets
 updated.  MERGE just allows you to do this in a nicer, somewhat more
 efficient, way.  If you've got alot of transactions happening around the
 same time with the table in question then you may also have to write
 your logic to be able to handle a rollback and to try again.
 

Oracle's merge statement isn't all that great either.  It looks good on
paper when you are building a data warehouse and have a type-1 dimension.

However, if you have duplicates in the source table (which is extremely
common) and the target has a unique constraint on the natural key
(also extremely common), the merge statement will fail.

Oracle decides between insert and update at the beginning of the statement,
so when it reaches the second row with the same key value, it fails.

Example:

SQL> create table customer (id number primary key not null,
  2  natural_key number not null,
  3  name varchar2(100));

Table created.

SQL> create sequence customer_id_seq;

Sequence created.

SQL> create or replace trigger t_customer_bi before insert on customer
  2  for each row when (new.id is null)
  3  begin
  4    select customer_id_seq.nextval into :new.id from dual;
  5  end;
  6  /

Trigger created.

SQL> create table stg_customer (natural_key number not null,
  2  name varchar2(100));

Table created.

SQL> insert into stg_customer values (1, 'jon');

1 row created.

SQL> insert into stg_customer values (1, 'jon');

1 row created.

SQL> alter table customer add unique (natural_key);

Table altered.

SQL> merge into customer a using stg_customer b on
  2  (a.natural_key = b.natural_key)
  3  when matched then update set a.name = b.name
  4  when not matched then
  5  insert (a.natural_key, a.name) values (b.natural_key, b.name);
merge into customer a using stg_customer b on
*
ERROR at line 1:
ORA-00001: unique constraint (JON.SYS_C004125) violated


When I worked with Oracle a lot, I never could use the merge statement
because it really didn't work well.  

If you guys develop Merge for PostgreSQL, I highly suggest putting an
order by statement in the syntax so if the source has duplicates, it
will insert the first one and then do subsequent updates.
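
One way to sidestep the duplicate-key failure on the PostgreSQL side,
sketched with the table names from the Oracle example above, is to
deduplicate the source before the insert/update pass:

-- keep one deterministic row per natural_key before merging
select distinct on (natural_key) natural_key, name
  from stg_customer
 order by natural_key, name;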



Jon



[HACKERS] timestamp format bug

2008-01-31 Thread Roberts, Jon
select to_char(date, 'yyyy-mm-dd hh24:mi:ss.ms') as char, 
   date 
  from (select timestamp'2008-01-30 15:06:21.560' as date) sub


2008-01-30 15:06:21.560;2008-01-30 15:06:21.56

Why does the timestamp field truncate the 0 but when I show the
timestamp as a character in the default timestamp format, it does not
truncate the trailing zero?  

These two fields should be consistent because they should be formatted
the same way.  I'm using versions 8.2.4 and 8.2.5 and both versions gave
me the same results.


Jon




Re: [HACKERS] timestamp format bug

2008-01-31 Thread Roberts, Jon
 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 31, 2008 9:48 AM
 To: Roberts, Jon; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] timestamp format bug
 
  On Thu, Jan 31, 2008 at  9:34 AM, in message
 [EMAIL PROTECTED],
 Roberts,
 Jon [EMAIL PROTECTED] wrote:
  select to_char(date, 'yyyy-mm-dd hh24:mi:ss.ms') as char,
 date
from (select timestamp'2008-01-30 15:06:21.560' as date) sub
 
  2008-01-30 15:06:21.560;2008-01-30 15:06:21.56
 
  These two fields should be consistent because they should be formatted
  the same way.
 
 Why would you think that?
 
 I would expect the timestamp to be presented with one to nine
 digits to the right of the decimal point, depending on the value.

I expect the query to return either:
2008-01-30 15:06:21.560;2008-01-30 15:06:21.560

or:
2008-01-30 15:06:21.56;2008-01-30 15:06:21.56

The default timestamp format appears to be yyyy-mm-dd hh24:mi:ss.ms, but it
doesn't follow this for milliseconds.  It truncates the trailing zero for
timestamps, yet it does not truncate the trailing zero when the value is
cast as a character.

I don't care which standard should be adopted but it should be the same.

 
 I can think of a couple database products which only go to three
 decimal positions, and always show three, but that's hardly a
 standard.


Oracle and MS SQL Server are consistent in this.



Jon



Re: [HACKERS] timestamp format bug

2008-01-31 Thread Roberts, Jon

 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 31, 2008 10:48 AM
 To: Kevin Grittner
 Cc: Roberts, Jon; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] timestamp format bug
 
 Kevin Grittner [EMAIL PROTECTED] writes:
  On Thu, Jan 31, 2008 at  9:34 AM, in message
 
[EMAIL PROTECTED],
 Roberts,
  Jon [EMAIL PROTECTED] wrote:
  These two fields should be consistent because they should be
formatted
  the same way.
 
  Why would you think that?
 
 Indeed the whole *point* of to_char() is to display the value in a
 different format than the type's standard output converter would use.
 
 I think it'd be a reasonable complaint that to_char() offers no way
 to control how many fractional-second digits you get in its output;
 but that's a missing feature not a bug.
 
  I can think of a couple database products which only go to three
  decimal positions, and always show three, but that's hardly a
  standard.
 
 Considering that to_char() is intended to be compatible with *r*cl*e,
 if that's what they do then we may be stuck with doing the same.
 


No, Larry's company doesn't trim the zeros off for timestamp or date data
types while leaving them in for character conversions.  That vendor keeps
the trailing zeros for both.

If not to_char, what is the preferred method to convert a timestamp to a
string?
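
For what it's worth, a plain cast goes through the type's own output
routine, so it at least matches what the timestamp column itself displays
(a small sketch):

select (timestamp '2008-01-30 15:06:21.560')::text;
-- returns '2008-01-30 15:06:21.56', same as the column display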



Jon



Re: [HACKERS] timestamp format bug

2008-01-31 Thread Roberts, Jon
 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 31, 2008 12:33 PM
 To: Roberts, Jon; pgsql-hackers@postgresql.org
 Subject: RE: [HACKERS] timestamp format bug
 
  On Thu, Jan 31, 2008 at 12:28 PM, in message
 [EMAIL PROTECTED],
 Roberts,
 Jon [EMAIL PROTECTED] wrote:
  The default timestamp format appears to be yyyy-mm-dd hh24:mi:ss.ms
 
 Not to me:
 
 select now();
               now
 -------------------------------
  2008-01-31 12:31:40.568746-06
 (1 row)
 

I'm guessing that is a server setting on how to format a timestamp.
Yours appears to be yyyy-mm-dd hh24:mi:ss.us.


So on your db, run this query:
select sub.t1, to_char(t1, 'yyyy-mm-dd hh24:mi:ss.us') as char_t1
from 
(
select timestamp'2008-01-31 12:31:40.50' as t1
) sub


I bet you get this:
2008-01-31 12:31:40.50;2008-01-31 12:31:40.50

Don't you think it should have two identical columns?

Secondly, this link shows that ms should be 000-999 and us should be
000000-999999.

http://www.postgresql.org/docs/8.2/static/functions-formatting.html


All of the other fields (month, day, year, hour, minute, and second) are
padded and consistent.  The ms and us formats should be consistent too.



Jon



Re: [HACKERS] timestamp format bug

2008-01-31 Thread Roberts, Jon
 -Original Message-
 From: Kevin Grittner [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 31, 2008 1:47 PM
 To: Roberts, Jon; pgsql-hackers@postgresql.org
 Subject: RE: [HACKERS] timestamp format bug
 
  On Thu, Jan 31, 2008 at 12:45 PM, in message
 [EMAIL PROTECTED],
 Roberts,
 Jon [EMAIL PROTECTED] wrote:
 
  So on your db, run this query:
  select sub.t1, to_char(t1, 'yyyy-mm-dd hh24:mi:ss.us') as char_t1
  from
  (
  select timestamp'2008-01-31 12:31:40.50' as t1
  ) sub
 
 
  I bet you get this:
  2008-01-31 12:31:40.50;2008-01-31 12:31:40.50
 
             t1           |        char_t1
 -------------------------+------------------------
  2008-01-31 12:31:40.50  | 2008-01-31 12:31:40.50
 (1 row)
 
  Don't you think it should have two identical columns?
 
 No.  Why should the return value of a function influence the input?
 


This is clearly a bug.  Don't fix it.  I don't care.


Jon




Re: [HACKERS] autonomous transactions

2008-01-28 Thread Roberts, Jon
 On Wed, Jan 23, 2008 at 05:50:02PM -0500, Tom Lane wrote:
  Simon Riggs [EMAIL PROTECTED] writes:
   From looking at how Oracle does them, autonomous transactions are
   completely independent of the transaction that originates them -- they
   take a new database snapshot. This means that uncommitted changes in
   the originating transaction are not visible to the autonomous
   transaction.
 
   Oh! Recursion depth would need to be tested for as well. Nasty.
 
  Seems like the cloning-a-session idea would be a possible implementation
  path for these too.
 
 Oracle has a feature where you can effectively save a session and return
 to it. For example, if filling out a multi-page web form, you could save
 state in the database between those calls. I'm assuming that they use
 that capability for their autonomous transactions; save the current
 session to the stack, clone it, run the autonomous transaction, then
 restore the saved one.
 --

You are describing an uncommitted transaction and not an autonomous
transaction.  Transactions in Oracle are not automatically committed
like they are in PostgreSQL.

Here is a basic example of an autonomous transaction: 

create or replace procedure pr_log_error (p_error_message
errorlog.message%type) is 
  pragma autonomous_transaction;
begin
  insert
into errorlog
 (log_user,
  log_time,
  error_message)
  values (user,
  sysdate(),
  p_error_message);
  commit;
exception
  when others then
rollback;
raise;
end;
  

And then you can call it from a procedure like this:

create or replace procedure pr_example is
begin
  null;    --do some work
  commit;  --commit the work
exception
  when others then
    pr_log_error(p_error_message => sqlerrm);
    rollback;
    raise;
end;

The autonomous transaction allows me to insert and commit a record in a
different transaction than the calling procedure, so the calling
procedure can roll back or commit.

You can also remove the commit/rollback from pr_example and instead do
it from the anonymous block that calls it.  I just added it to make it
clear that it is a different transaction than the error logging
transaction.



Jon



Re: [LIKELY_SPAM][HACKERS] Thoughts about bug #3883

2008-01-22 Thread Roberts, Jon
I suggest making a distinction between DDL and DML locks.  A DDL lock would
be required for TRUNCATE, CREATE, ALTER, DROP, REPLACE, etc., while DML is
just insert, update, and delete.

A TRUNCATE (or any DDL activity) should wait until all DML activity is
committed before it can acquire an exclusive lock, perform the task, and
then release the lock.   

This would ensure a consistent view of the database structure.  

Of course, I'm speaking in terms of how I would like to see it and not
knowing the guts of postgres so feel free to bash my ideas.


Jon
 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED]
 Sent: Monday, January 21, 2008 5:01 PM
 To: pgsql-hackers@postgreSQL.org
 Subject: [LIKELY_SPAM][HACKERS] Thoughts about bug #3883
 
 Steven Flatt's report in this thread:
 http://archives.postgresql.org/pgsql-bugs/2008-01/msg00138.php
 exposes two more-or-less-independent flaws.
 
 One problem is that we allow operations like TRUNCATE on tables that are
 open in the current backend.  This poses a risk of strange behavior,
 such as
 
 regression=# create table foo as select x from generate_series(1,1000) x;
 SELECT
 regression=# begin;
 BEGIN
 regression=# declare c cursor for select * from foo;
 DECLARE CURSOR
 regression=# fetch 10 from c;
  x
 
   1
   2
   3
   4
   5
   6
   7
   8
   9
  10
 (10 rows)
 
 regression=# truncate foo;
 TRUNCATE TABLE
 regression=# fetch 10 from c;
  x
 
  11
  12
  13
  14
  15
  16
  17
  18
  19
  20
 (10 rows)
 
 regression=# fetch all from c;
 ERROR:  could not read block 1 of relation 1663/133283/156727: read only 0
 of 8192 bytes
 
 It's not too consistent that we could still read rows from c until we
 needed to fetch the next page of the table.  For more complex queries
 involving indexscans, I'm afraid the behavior could be even more
 bizarre.
 
 What I propose we do about this is put the same check into TRUNCATE,
 CLUSTER, and REINDEX that is already in ALTER TABLE, namely that we
 reject the command if the current transaction is already holding
 the table open.
 
 
 The issue Steven directly complained of is a potential for undetected
 deadlock via LockBufferForCleanup.  Ordinarily, buffer-level locks don't
 pose a deadlock risk because we don't hold one while trying to acquire
 another (except in UPDATE, which uses an ordering rule to avoid the
 risk).  The problem with LockBufferForCleanup is that it can be blocked
 by a mere pin, which another backend could well hold while trying to
 acquire a lock that will be blocked by VACUUM.
 
 There are a couple of migitating factors: first, patching TRUNCATE et al
 as suggested above will prevent the immediate case, and second, as of
 8.3 this isn't a problem for autovacuum because of the facility for
 kicking autovacuum off a table if it's blocking someone else's lock
 request.  Still, undetected deadlocks are unpleasant, so it'd be nice
 to have some way to recognize the situation if we do get into it.
 I have no idea about a reasonable way to do that though.  Getting the
 heavyweight lock manager involved in buffer accesses seems right out on
 performance grounds.
 
 Comments, ideas?
 
   regards, tom lane
 


[HACKERS] autonomous transactions

2008-01-22 Thread Roberts, Jon
I really needed this functionality in PostgreSQL.  A common use for
autonomous transactions is error logging.  I want to log sqlerrm in a
function and raise an exception so the calling application knows there is an
error and I have it logged to a table.  

 

I figured out a way to hack an autonomous transaction by using a dblink in
a function and here is a simple example:

 

create or replace function fn_log_error(p_function varchar, p_location int,
p_error varchar) returns void as
$$
declare
  v_sql varchar;
  v_return varchar;
  v_error varchar;
begin
  perform dblink_connect('connection_name', 'dbname=...');

  v_sql := 'insert into error_log (function_name, location, error_message, ' ||
           'error_time) values (''' || p_function || ''', ' ||
           p_location || ', ''' || p_error || ''', clock_timestamp())';

  select * from dblink_exec('connection_name', v_sql, false) into v_return;

  --get the error message
  select * from dblink_error_message('connection_name') into v_error;

  if position('ERROR' in v_error) > 0 or position('WARNING' in v_error) > 0 then
    raise exception '%', v_error;
  end if;

  perform dblink_disconnect('connection_name');

exception
  when others then
    perform dblink_disconnect('connection_name');
    raise exception '(%)', sqlerrm;
end;
$$
language 'plpgsql' security definer;

 

I thought I would share and it works rather well.  Maybe someone could
enhance this concept to include it with the core database to provide
autonomous transactions.
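
A hedged usage sketch (the caller's name and the location code are made
up) showing how the logged row survives the caller's own rollback because
it goes out over the separate dblink connection:

create or replace function fn_do_work() returns void as
$$
begin
  -- ... real work here ...
  raise exception 'something went wrong';
exception
  when others then
    perform fn_log_error('fn_do_work', 1, sqlerrm);
    raise;  -- re-raise so the calling application still sees the error
end;
$$
language 'plpgsql';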

 

 

Jon



Re: [HACKERS] autonomous transactions

2008-01-22 Thread Roberts, Jon
 On Tue, 2008-01-22 at 10:02 -0600, Roberts, Jon wrote:
  Maybe someone could enhance this concept to include it with the core
  database to provide autonomous transactions.
 
 I agree that autonomous transactions would be useful, but doing them via
 dblink is a kludge. 

Kludge or hack but I agree!

 If we're going to include anything in the core
 database, it should be done properly (i.e. as an extension to the
 existing transaction system).

I agree!  That is why I said someone could enhance this concept to include
it with the core database.  




Jon



Re: [HACKERS] Password policy

2008-01-16 Thread Roberts, Jon

 -Original Message-
 From: D'Arcy J.M. Cain [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, January 16, 2008 9:39 AM
 To: Andrew Dunstan
 Cc: Roberts, Jon; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Password policy
 
 On Wed, 16 Jan 2008 08:32:12 -0500
 Andrew Dunstan [EMAIL PROTECTED] wrote:
   I need to set a basic password policy for accounts but I don't see
 any
   Look at my chkpass type in contrib.  There is a function to verify the
   password.  It is just a placeholder now but you can modify it to do
 all
   your checking.
 
  I assumed he was asking about Postgres level passwords rather than
  passwords maintained by an application. chkpass is only for the latter.
 
 Could be.  I saw accounts and thought Unix shell or ISP accounts.
 

I was referring to PostgreSQL accounts.


Jon



[HACKERS] Password policy

2008-01-15 Thread Roberts, Jon
I need to set a basic password policy for accounts but I don't see any
documentation on how to do it.  I'm assuming there is a way to do this,
maybe even with a trigger.

The policy would be something like this:
1.  Must contain letters and numbers
2.  Must be at least 8 characters long
3.  Must contain one special character (#,@,$,%,!, etc)
4.  Password (not the account) must expire after 90 days
5.  Must warn users 10 days before expiration to change the password
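
A minimal sketch of rules 1-3 as an application-side check; the function
name is made up, and PostgreSQL offers no SQL-level hook to enforce this
on role passwords, so it only helps if password changes are funneled
through it.  Rule 4 can be approximated with ALTER ROLE ... VALID UNTIL,
which expires the password (not the account) after the given date.

create or replace function fn_check_password_policy(p_password text)
returns boolean as
$$
begin
  return length(p_password) >= 8        -- rule 2
     and p_password ~ '[A-Za-z]'        -- rule 1: letters
     and p_password ~ '[0-9]'           -- rule 1: numbers
     and p_password ~ '[#@$%!]';        -- rule 3: special character
end;
$$
language 'plpgsql' immutable;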


Jon



Re: [HACKERS] Postgresql Materialized views

2008-01-14 Thread Roberts, Jon


 -Original Message-
 From: Tom Lane [mailto:[EMAIL PROTECTED]
 Sent: Sunday, January 13, 2008 8:18 PM
 To: Sean Utt
 Cc: Andrew Dunstan; Joshua D. Drake; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] Postgresql Materialized views
 
 Sean Utt [EMAIL PROTECTED] writes:
  My point is simply this: The lack of a clear formal process for feature
  requests leads to this degradation in the conversation.
 
 Two comments:
 
 1) The existing informal process has served us very well for more than
 ten years now.  I'm disinclined to consider replacing it, because that
 would risk altering the community's dynamics for the worse.
 
 2) In the end, this is an open source *community*; no amount of formal
 feature requesting will have any material impact on what actually gets
 implemented, because there isn't any central control.  

Wow.  Being new to Open Source, this amazes me.

 What gets
 implemented is whatever individual contributors choose to work on,
 either because they find it interesting or (in some cases) because
 someone pays them to do something specific.  Certainly, some
 contributors pay attention to what's being requested, but I see no
 reason to think that increasing the level of formality will help them.

What happens when a person adds a feature or changes the architecture of the
database that is perceived by some as incorrect or going in the wrong
direction?  



Jon



Re: [HACKERS] to_char incompatibility

2008-01-14 Thread Roberts, Jon
 
  You'll have to explain to Oracle and their customers that Oracle's
  security model is not a great idea then.
 
 I'd love to, and in fact *do* whenever I'm given the chance.
 
 In fact, Oracle's security model is pretty bad; the reason why Oracle
 advertises Unbreakable so hard is that they have a terrible record of
 security exploits, making them nearly as bad as MySQL. Heck, these days
 you're better off using MSSQL than Oracle to protect your data.

LOL!  I'm not going to trade jabs with you on which product has more
exploits because that is just stupid.

I'm stating that the *model* for Oracle security is very similar to the
non-default behavior of PostgreSQL of using security definer.  I prefer
this model.  I think it is a great idea and I mention Oracle because it is
a highly reputable database company that uses this model.

For instance, if I want to allow a user to insert data, I most likely want
them to ONLY do it through my method.  That means creating a function with
security definer set and granting the user execute on the function.  I don't
want the user to select my sequence or insert data directly into the table.
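
A minimal sketch of that pattern; the table, function, and role names are
illustrative:

create table customer (id serial primary key, name text);

create or replace function fn_add_customer(p_name text) returns void as
$$
begin
  insert into customer (name) values (p_name);
end;
$$
language 'plpgsql' security definer;

-- no privileges on the table or its sequence are granted to the user;
-- only execute on the wrapper function
grant execute on function fn_add_customer(text) to app_user;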


Also, there is no need to argue this because we can have it both ways.
Security definer is an option, and I recommend always using it over the
default.  If you don't want to use it, don't.



Jon



[HACKERS] could not open relation: Invalid argument

2008-01-11 Thread Roberts, Jon
Version: PostgreSQL 8.2.5 on i686-pc-mingw32

I recently started getting this error message randomly, could not open
relation 42904/42906/42985: Invalid argument.  I also got it for a couple
of other files.  All three files are related to tables that have just a
single row each.

I googled the error message and found that this looks like a problem on
Windows (which I'm running until we get access to our Linux server) and
probably related to anti-virus scanning.  Having run Oracle on Windows
before, I have experienced the problem with an anti-virus scanner locking a
file and causing the database to error.  I thought I understood and fixed
the problem.

Today, I used pgAdmin to Vacuum Analyze the entire database.  It flew
through this task as my database is fairly small.  However, when it got to
the three tables that exhibited this error message, it hung.  It took maybe
2-3 seconds to vacuum analyze each one of these tables.

Now I don't get it.  Was there something corrupt in the file and it wasn't
related to the anti-virus scanner?  


Jon



Re: [HACKERS] to_char incompatibility

2008-01-10 Thread Roberts, Jon
 -Original Message-

 On Oracle:
 
  SQL> select to_date('31-DEC-200700:00:00', 'dd-mon-yyyy hh24:mi:ss') from
  dual;
 
 TO_DATE('
 -
 31-DEC-07
 
 On PostgreSQL:
 
  select to_date('31-DEC-200700:00:00', 'dd-mon-yyyy hh24:mi:ss');
to_date
 --
  200700-12-31
 
  Now the input value is probably a mistake.  But according to the theory
  described in the PostgreSQL documentation that to_char more or less
  ignores whitespace unless FX is used, I think the Oracle behavior is more
  correct.  In addition, even if it wants to take 6 digits for the year in
  spite of only 4 Y's, the rest of the format wouldn't match anymore.
 
 Is anyone an Oracle format code expert who can comment on this?
 

Oracle removes all white spaces in the date you pass in and the date format.

SQL> select to_date('31  - DEC - 2007', 'dd-mon-yyyy') from dual;

TO_DATE('
-
31-DEC-07

SQL> select to_date('31-DEC-2007', 'dd  -  mon  -  yyyy') from dual;

TO_DATE('
-
31-DEC-07

And then in PostgreSQL with to_timestamp or to_date:

# select to_date('31-dec-2007', 'dd   -mon  -  yyyy');
ERROR:  invalid value for MON/Mon/mon

# select to_date('31  -dec-2007', 'dd-mon-yyyy');
ERROR:  invalid value for MON/Mon/mon

I've used Oracle for years but I think PostgreSQL is actually more accurate.

I put together this function very quickly that will make it behave like
Oracle:

create or replace function fn_to_date(p_date varchar, p_format varchar)
returns timestamp as
$$
declare
  v_date varchar;
  v_format varchar;
  v_timestamp timestamp;
begin
  v_date := replace(p_date, ' ', '');
  v_format := replace(p_format, ' ', '');
  v_timestamp := to_timestamp(v_date, v_format);
  return v_timestamp;
exception
  when others then
raise exception '%', sqlerrm;
end;
$$
language 'plpgsql' security definer;


# select fn_to_date('31  -dec-2007', 'dd-mon-yyyy');
 fn_to_date
-
 2007-12-31 00:00:00
(1 row)

# select fn_to_date('31-dec-2007', 'dd-mon-yyyy');
 fn_to_date
-
 2007-12-31 00:00:00
(1 row)


Or with your exact example:

# select fn_to_date('31-DEC-200700:00:00', 'dd-mon-yyyy hh24:mi:ss');

 fn_to_date
-
 2007-12-31 00:00:00
(1 row)

Jon




Re: [HACKERS] to_char incompatibility

2008-01-10 Thread Roberts, Jon
I always put security definer as I really think that should be the default
behavior.  Anyway, your function should run faster.


Jon

 -Original Message-
 From: Pavel Stehule [mailto:[EMAIL PROTECTED]
 Sent: Thursday, January 10, 2008 8:47 AM
 To: Roberts, Jon
 Cc: Peter Eisentraut; pgsql-hackers@postgresql.org
 Subject: Re: [HACKERS] to_char incompatibility
 
 small non important note: your function is very expensive
 
 exactly same but faster is:
 
 CREATE OR REPLACE FUNCTION fn_to_date(p_date varchar, p_format varchar)
 RETURNS timestamp AS $$
  SELECT to_timestamp(replace($1, ' ', ''), replace($2, ' ', ''));
 $$ LANGUAGE SQL STRICT IMMUTABLE;
 
 or
 
 CREATE OR REPLACE FUNCTION fn_to_date(p_date varchar, p_format varchar)
 RETURNS timestamp AS $$
 BEGIN
  RETURN  to_timestamp(replace(p_date, ' ', ''), replace(p_format, ' ',
 ''));
  END$$ LANGUAGE plpgsql STRICT IMMUTABLE;
 
 there isn't any reason for using security definer and you forgot
 IMMUTABLE,
 
 Regards
 Pavel Stehule



Re: [HACKERS] to_char incompatibility

2008-01-10 Thread Roberts, Jon
 
 Jon,
 
  I always put security definer as I really think that should be the
  default behavior.  Anyway, your function should run faster.
 
 That's not a real good idea.  A security definer function is like an SUID
 shell script; only to be used with great care.
 

You'll have to explain to Oracle and their customers that Oracle's security
model is not a great idea then.  

<soapbox>
Executing a function should never require privileges on the underlying
objects referenced in it.  The function should always run with the rights of
the owner of the function, not the user executing it.
</soapbox>


Jon



Re: [HACKERS] Psql command-line completion bug

2008-01-08 Thread Roberts, Jon
Option 5 would be to deprecate the ability to use a \ in an object name.


Jon

 -Original Message-
 From: Gregory Stark [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, January 08, 2008 8:14 AM
 To: pgsql-hackers list
 Subject: [HACKERS] Psql command-line completion bug
 
 
 If you hit tab on a table name containing a \ you get spammed with a
 series of
 WARNINGS and HINTS about nonstandard use of \\ in a string literal:
 
  postgres=# select * from bar\baz<TAB>
 
 WARNING:  nonstandard use of \\ in a string literal
 LINE 1: ... substring(pg_catalog.quote_ident(c.relname),1,7)='bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal
 LINE 3: ...ing(pg_catalog.quote_ident(n.nspname) || '.',1,7)='bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal
 LINE 3: ...alog.quote_ident(nspname) || '.',1,7) = substring('bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal
 LINE 5: ... || '.' || pg_catalog.quote_ident(c.relname),1,7)='bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal
 LINE 5: ...og.quote_ident(n.nspname) || '.',1,7) = substring('bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 WARNING:  nonstandard use of \\ in a string literal
 LINE 5: ...alog.quote_ident(nspname) || '.',1,7) = substring('bar\\baz'...
  ^
 HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 
 
 There are a few options here:
 
 1) Use E'' in all the psql completion queries. This means they won't work
 on
 older versions of postgres (but they don't in general do so anyways). It
 would
 also break anybody who set standard_conforming_string = 'on'. Ideally we
 would
 want to use E'' and then pass false directly to PQEscapeStringInternal but
 that's a static function.
 
 2) Use $$%s$$ style quoting. Then we don't need to escape the strings at
 all.
 We would probably have to move all the quoting outside the C strings and
 borrow the function from pg_dump to generate the quoting as part of
 sprintf
 parameter substitution.
 
 3) set standards_conforming_strings=on for psql tab-completion queries and
 then reset it afterwards. That way we can just use plain standard-
 conforming
 '' and not get any warnings.
 
 4) Replace PQExec with PQExecParam in tab-complete.c
 
 Personally I think (4) is the best long-term option but at this point that
 doesn't seem feasible. (3) or (2) seems the next best option.
 
 --
   Gregory Stark
   EnterpriseDB  http://www.enterprisedb.com
   Ask me about EnterpriseDB's On-Demand Production Tuning
 


[HACKERS] viewing source code

2007-12-22 Thread Roberts, Jon
Tom Lane indicated this thread should be moved here.  Instead of asking for
what I consider the solution, let me propose a real business case and you
guys tell me how best to handle it.  

I am building an Enterprise Data Warehouse with PostgreSQL.  BTW, I love
this database.  I will have data from all over the company housed in it and
a variety of business users from all over the company as well.

User Communities
  Call Center
  Finance
  Sales
  IT
  Marketing
  HR
  Security

Major Feature 1: These users will be using common dimensions and fact tables
that are unique to each department.  PostgreSQL security will handle this
just fine.

Major Feature 2: Users will be able to load their own data into the data
warehouse and secure this to their department.  PostgreSQL security will
handle this just fine.

Major Feature 3: Users will build their own functions to manipulate their
own data and share the output with their department.  PostgreSQL security
currently does not secure the functions they write so the feature is not
fully met.

The user community is not highly technical and they are not looking for an
obfuscation solution.  They are looking for simple security to their code
they write in pl/pgsql that is similar if not the same as the security for
protecting their data.

How best can I achieve Major Feature 3?

Examples of what would be in the functions that need to be secured:
1.  HR could have pay and performance calculations that they don't want
shared with other departments.
2.  Finance could have earnings calculations they don't want to share with
marketing.
3.  Security could have functions to identify network abusers.
4.  Finance could have fraud and abuse calculations they don't want to share
with the call center.

Building a database per department isn't feasible.  It would require a vast
amount of data replication and duplication.  We want an Enterprise solution.

One solution already proposed is to create a view in place of the pg_proc
table.  The source code column would be protected in a similar fashion to
pg_stat_activity.current_query.  This seems like a great solution to me and
it would meet our Major Feature 3.
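
A rough sketch of what that view-based approach might look like; this is
only an illustration of the idea, not an actual patch, and the view name
is made up:

create view pg_proc_filtered as
select p.oid, p.proname, p.pronamespace, p.proowner, p.prolang,
       case when pg_has_role(p.proowner, 'usage')
            then p.prosrc
            else '<source hidden>'
       end as prosrc
  from pg_catalog.pg_proc p;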


Jon
