Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Tom Lane
Craig Ringer  writes:
> These reports seem to come up a bit, with disk full issues resulting in
> the need to pg_resetxlog, dump, and re-initdb, but I wouldn't be too
> shocked if they all turned out to be on xfs or something like that.

Well, there are cases where that might actually be the best strategy,
but PG by itself will not lose data on a disk-full condition.  What it
will do is shut down and refuse to play until you find it some more disk
space.  If you have no way to clear some space on the partition ...

Also, over the years we have found various bugs that contributed to bad
behavior in extreme circumstances.  For instance 7.x had a problem that
a very long index build prevented checkpoints from completing, which
would cause pg_xlog to bloat because WAL segments couldn't be recycled:
http://archives.postgresql.org//pgsql-general/2004-05/msg00414.php
which led to cases like this one:
http://archives.postgresql.org//pgsql-general/2005-03/msg01373.php
I think I actually recommended pg_resetxlog in that case because the
alternative was to wait for it to churn through 100GB of uncheckpointed
WAL.

Another interesting example is here:
http://archives.postgresql.org//pgsql-hackers/2004-01/msg00530.php
explanation here:
http://archives.postgresql.org/pgsql-hackers/2004-01/msg00606.php
in which Postgres survived an out-of-space condition for quite some time
without data loss.  It did finally panic because of a rather minor
bug in the pg_clog logic ... but still did not lose any committed
transactions.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Craig Ringer
Adam B wrote:
> Hello all,
> 
> Is it possible to prevent Postgre from aborting the transaction upon a 
> constraint violation?

Not without use of savepoints.

What I like to do is bulk-insert the suspect data into a temp table
without constraints, then INSERT INTO ... SELECT it into the target
table with appropriate WHERE constraints to prevent attempts to insert
invalid values.

Another alternative is to constrain each INSERT statement with an
appropriate WHERE clause after rephrasing it in INSERT INTO ... SELECT
form .
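
A minimal sketch of the temp-table approach (the target table, its
UNIQUE constraint, and the column name are made-up examples, not from
the original post):

    -- Stage the suspect rows without constraints, so the bulk load
    -- itself cannot fail.
    CREATE TEMP TABLE staging (name text);
    -- ... bulk-insert or COPY the suspect data into staging ...

    -- Copy across only rows that would not violate the target's
    -- hypothetical UNIQUE(name) constraint.
    INSERT INTO target (name)
    SELECT DISTINCT s.name
    FROM staging s
    WHERE s.name IS NOT NULL
      AND NOT EXISTS (SELECT 1 FROM target t WHERE t.name = s.name);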

--
Craig Ringer



Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Craig Ringer
Tom Lane wrote:
> Craig Ringer  writes:
>> I've been wondering about this for a while. Why does Pg end up with the 
>> database in an unusable, unrecoverable state after a disk-full error?
> 
> It doesn't.  There must have been some other filesystem misfeasance
> involved in the OP's problem.

Cool - so it's definitely meant to handle disk-full fine. That's what  I
was concerned about.

These reports seem to come up a bit, with disk full issues resulting in
the need to pg_resetxlog, dump, and re-initdb, but I wouldn't be too
shocked if they all turned out to be on xfs or something like that.

--
Craig Ringer



[GENERAL] function in pgAdmin

2009-05-01 Thread rwade
How do I view the result set of a function that returns a refcursor in
pgAdmin?

I am trying to test it in pgAdmin by calling it like this, but I can't see
the result set; it only says:

Query result with 1 rows discarded.
Query result with 328 rows discarded.

Query returned successfully with no result in 32 ms.

How I'm calling in pgAdmin:

begin;

select select_movies_by_web_search('Test', 2008, '', null, null, null);
fetch all in moviecursor;
commit;

Here is my function:

CREATE OR REPLACE FUNCTION select_movies_by_web_search(
    title character varying(100),
    movieyear integer,
    director_rest_of_name character varying(50),
    director_last_name character varying(50),
    star_first_name character varying(50),
    star_last_name character varying(50))
  RETURNS refcursor AS
$BODY$
DECLARE
    rc refcursor default 'moviecursor';
    sql character varying(2000);
BEGIN
    sql = 'SELECT id, title, "year", director_rest_of_name,
                  director_last_name, banner_url, trailer_url
           FROM movies';
    open rc for EXECUTE sql;

    RETURN rc;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
ALTER FUNCTION select_movies_by_web_search(character varying, integer,
    character varying, character varying, character varying,
    character varying) OWNER TO testuser;

Thanks

Ryan




Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Adam B
Perhaps I'm doing something wrong.  I'm consistently taking over 20s for 
the following test case.  (Without savepoints it takes under 10s)

CREATE TABLE lots2
(
  lid serial NOT NULL,
  "name" character varying(64),
  CONSTRAINT lots2pk PRIMARY KEY (lid),
  CONSTRAINT lots2_unique_name UNIQUE (name)
);

Java code:

Connection con = DriverManager.getConnection(
        "jdbc:postgresql://localhost/driver_test", "postgres", "*");

Statement st = con.createStatement();
st.executeUpdate("DELETE FROM lots2");
st.close();

con.setAutoCommit(false);

PreparedStatement ps = con.prepareStatement(
        "INSERT INTO lots2 (name) VALUES (?)");

long start = System.currentTimeMillis();
for (int i = 0; i < 10; i++)
{
    ps.setString(1, "number " + i);
    Savepoint saved = con.setSavepoint();
    ps.executeUpdate();
    con.releaseSavepoint(saved);
}

con.setAutoCommit(true);
long stop = System.currentTimeMillis();
System.out.println((stop - start) + "ms");



Thomas Kellerer wrote:
> Adam B wrote on 01.05.2009 19:50:
>> I realize that I could set a save-point before every INSERT but that 
>> nearly doubles the processing time. 
>
> That's interesting.
>
> I did a quick test with JDBC inserting 500,000 rows and the time when 
> using a savepoint for each INSERT was not really different to the one 
> when not using a savepoint (less than a second which could well be 
> caused by other things in the system).
>
> I tested this locally so no real network traffic involved, which might 
> change the timing as more stuff is sent over to the server when using 
> the savepoint.
>
> Thomas
>
>



Videx Inc. 1105 N. E. Circle Blvd. Corvallis OR 97330 (541) 758-0521
CONFIDENTIAL COMMUNICATION: The email message and any attachments are intended 
only for the addressee.  They may be privileged, confidential, and protected 
from disclosure. If you are not the intended recipient, any dissemination, 
distribution, or copying is expressly prohibited.  If you received this email 
message in error, please notify the sender immediately by replying to this 
e-mail message or by telephone




Re: [GENERAL] 08P01: unexpected EOF on client connection

2009-05-01 Thread Tomas Vondra

Hi,


Tomas Vondra wrote:
>> $conn = pg_connect(...);
>> $res = pg_query("SELECT mime, thumbnail_data FROM images WHERE filename
>> = ");
>> $row = pg_fetch_assoc($res);
>> header('Content-Type: ' . $row['mime']);
>> echo pg_unescape_bytea($row['thumbnail_data']);
>
> PHP?
>
> Try running your script in a PHP debugger (from the command line or, if
> your server supports it, while hosted) and see at what point exactly
> things go pear shaped. Check your web server error log for any client
> library or PHP errors, too.


Running in a PHP debugger would be quite difficult in this case, as the 
script is executed concurrently - once for each image displayed on the 
page and requested by the browser.



> Even just adding a bunch of logging statements into your script then
> looking at your web server error log might help you a little.


I've added some logging into the script - basically there are 4 
interesting places, and for a page with 69 images, the results are these 
(number of executions that reach the point):


1) before connect : 69 cases
2) after connect / before select : 32 cases
3) after select : 24 cases
4) at the end : 24 cases

So only about 30% of the images are actually drawn. About 50% of the 
executions fail right at the connection, and another 8 fail when 
fetching the data :-(


Anyway it seems there's something wrong with my development box. I've 
tried to run this app on a production server and everything works fine 
there (all the images displayed, no 'unexpected EOF' errors etc).


The production box is not exactly the same, but in general the versions 
of PHP, PostgreSQL and Apache do match. OK, this is probably the 
punishment for living on the edge with gentoo (instead of the debian 
installed on the server). But how do I find the rotten piece?


regards
Tomas



Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Adam B




I'm intrigued by this solution, Johan.  It might be just the ticket! 
I'll do some benchmarks when I have time in a week or so.

Johan Nel wrote:
> Adam B wrote:
>> Hello all,
>>
>> Is it possible to prevent Postgre from aborting the transaction upon a
>> constraint violation?
>
> From the help files maybe the following could get you on the right
> track:
>
> This example uses exception handling to perform either UPDATE or
> INSERT, as appropriate:
>
> CREATE TABLE db (a INT PRIMARY KEY, b TEXT);
>
> CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
> $$
> BEGIN
>     LOOP
>         -- first try to update the key
>         UPDATE db SET b = data WHERE a = key;
>         IF found THEN
>             RETURN;
>         END IF;
>         -- not there, so try to insert the key
>         -- if someone else inserts the same key concurrently,
>         -- we could get a unique-key failure
>         BEGIN
>             INSERT INTO db(a,b) VALUES (key, data);
>             RETURN;
>         EXCEPTION WHEN unique_violation THEN
>             -- do nothing, and loop to try the UPDATE again
>         END;
>     END LOOP;
> END;
> $$
> LANGUAGE plpgsql;
>
> HTH,
>
> Johan Nel
> Pretoria, South Africa.











Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Adam B




Strange indeed.  Perhaps there's some background stuff happening that
messes with the results (auto VACUUM?).

In my mind, however, it makes sense that it would take longer: 2 extra
operations against the server (save&release).

Thomas Kellerer wrote:
> Adam B wrote on 01.05.2009 22:59:
>> Perhaps I'm doing something wrong.  I'm consistently taking over 20s
>> for the following test case.  (Without savepoints it takes under 10s)
>
> That's really strange. I can reproduce your results on my computer (25
> vs. 65 seconds).
>
> When running my import program against your table, I don't see a big
> difference between the savepoint solution and the one without (I added
> a row to the import file that would fail to make sure I was really
> using savepoints)
>
> My import program is doing more or less the same thing as your code, so
> I have no idea what's going on here.
>
> There was one strange thing though: I had one run where it took a lot
> longer with the savepoint than without. But I could not reproduce that;
> all other tests were approx. the same runtime with or without
> savepoints.
>
> Very strange.
>
> Might be worth posting to the JDBC list, to see if this is a driver
> issue.
>
> Thomas











Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread Tom Lane
Steve Atkins  writes:
> On May 1, 2009, at 2:42 PM, David Wall wrote:
>> Does anybody know if PG will perform better with the table join  
>> instead of evaluating the series of OR/IN?  The OR/IN has to be  
>> parsed, but the comparisons may be faster than the table join.

> It used to be that populating and then joining with a temporary table  
> was faster than using IN (1, 3, 5, 7, 9, 11, 13) for all but the  
> smallest sets. That's no longer true, and IN() is pretty good.

> I'd still use a temporary table myself, though. It's cleaner and  
> easier to populate one than to cleanly produce a statement with a  
> variable number of identifiers in it. And you can reuse it for  
> multiple reports, join against it different ways and so on. Also you  
> can populate it either from your UI or by selecting from the  
> relationships table suggested above (create temporary table foo as  
> select peon from reports where overlord in ('bob', 'ben', 'jerry')),  
> and still run the same reports against it.

Possibly worth noting: if you're depending on the quality of join plans
involving such a table, it's worth doing an ANALYZE against it after
you populate it.  (Autovacuum won't do that for you, because it can't
access temp tables.)
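
For instance (a sketch using the table and column names from the examples
upthread, not from the original mail), the full sequence would be:

    CREATE TEMPORARY TABLE selected_users (user_id text PRIMARY KEY);
    INSERT INTO selected_users VALUES ('U1'), ('U2'), ('U99');
    ANALYZE selected_users;  -- give the planner row counts and stats;
                             -- autovacuum cannot do this for temp tables

    SELECT t.field1, t.field2
    FROM table1 t
    JOIN selected_users u ON t.creator_user_id = u.user_id;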

regards, tom lane



Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Thomas Kellerer

Adam B wrote on 01.05.2009 22:59:
> Perhaps I'm doing something wrong.  I'm consistently taking over 20s for
> the following test case.  (Without savepoints it takes under 10s)

That's really strange. I can reproduce your results on my computer (25 vs. 65 
seconds).


When running my import program against your table, I don't see a big difference 
between the savepoint solution and the one without (I added a row to the import 
file that would fail to make sure I was really using savepoints)


My import program is doing more or less the same thing as your code, so I have
no idea what's going on here.

There was one strange thing though: I had one run where it took a lot longer 
with the savepoint than without. But I could not reproduce that; all other 
tests were approx. the same runtime with or without savepoints.


Very strange.

Might be worth posting to the JDBC list, to see if this is a driver issue.

Thomas




Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread Steve Atkins


On May 1, 2009, at 2:42 PM, David Wall wrote:

> (quoted from Chris)
> Select field1,field2 FROM table1 inner join relationships on
> table1.creator_user_id = relationships.employee WHERE
> relationships.manager = ?
>
> (quoted from Steve)
> select table1.field1, table2.field2 from table1, reports where
> table1.creator_user_id = reports.peon and reports.overlord = 'bob'
>
> Thanks, Steve and Chris, who both suggested similar things.
>
> I believe that will work for me in some situations, but currently
> the report allows the manager to select any combination (from one to
> all) of users that report to him.  So one solution would be to run
> the report as you have it and then have the application filter it,
> but that could be painful if the users "de-selected" from his full
> team happen to have a large percentage of the matching rows.  Of
> course, I may see if this is something they really do much of.  I
> mean, perhaps if they select individual users (just a few), I can
> just use the OR/IN style, and if they select "all my users" I could
> rely on the table joins.
>
> Does anybody know if PG will perform better with the table join
> instead of evaluating the series of OR/IN?  The OR/IN has to be
> parsed, but the comparisons may be faster than the table join.


It used to be that populating and then joining with a temporary table  
was faster than using IN (1, 3, 5, 7, 9, 11, 13) for all but the  
smallest sets. That's no longer true, and IN() is pretty good.


I'd still use a temporary table myself, though. It's cleaner and  
easier to populate one than to cleanly produce a statement with a  
variable number of identifiers in it. And you can reuse it for  
multiple reports, join against it different ways and so on. Also you  
can populate it either from your UI or by selecting from the  
relationships table suggested above (create temporary table foo as  
select peon from reports where overlord in ('bob', 'ben', 'jerry')),  
and still run the same reports against it.


Cheers,
  Steve




Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Adam Ruth

Cygwin comes with rsync on Windows.

On 02/05/2009, at 4:06 AM, John R Pierce wrote:

> Joshua D. Drake wrote:
>> Well that's just it. Out of the box it doesn't actually work. PostgreSQL
>> only gives you the facilities to roll your own PITR solution. You can
>> look at PITR Tools:
>>
>> https://projects.commandprompt.com/public/pitrtools
>>
>> It doesn't quite work on Windows due to lack of rsync and signaling
>> differences but could give you an idea of how to move forward with your
>> own implementation.
>
> Quite possibly 'robocopy' from Microsoft somewhere (doesn't come
> with windows, but was part of an admin kit or something) would be a
> workable replacement for the rsync part.






Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread David Wall
(quoted from Chris)
> Select field1,field2 FROM table1 inner join relationships on
> table1.creator_user_id = relationships.employee WHERE
> relationships.manager = ?

(quoted from Steve)
> select table1.field1, table2.field2 from table1, reports where
> table1.creator_user_id = reports.peon and reports.overlord = 'bob'


Thanks, Steve and Chris, who both suggested similar things. 

I believe that will work for me in some situations, but currently the 
report allows the manager to select any combination (from one to all) of 
users that report to him.  So one solution would be to run the report as 
you have it and then have the application filter it, but that could be 
painful if the users "de-selected" from his full team happen to have a 
large percentage of the matching rows.  Of course, I may see if this is 
something they really do much of.  I mean, perhaps if they select 
individual users (just a few), I can just use the OR/IN style, and if 
they select "all my users" I could rely on the table joins.


Does anybody know if PG will perform better with the table join instead 
of evaluating the series of OR/IN?  The OR/IN has to be parsed, but the 
comparisons may be faster than the table join.


Many thanks for your help

David


Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Thomas Kellerer

Adam B wrote on 01.05.2009 19:50:
> I realize that I could set a save-point before every INSERT but that
> nearly doubles the processing time.


That's interesting.

I did a quick test with JDBC inserting 500,000 rows and the time when using a 
savepoint for each INSERT was not really different to the one when not using a 
savepoint (less than a second which could well be caused by other things in the 
system).


I tested this locally so no real network traffic involved, which might change 
the timing as more stuff is sent over to the server when using the savepoint.


Thomas




Re: [GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Johan Nel

Adam B wrote:
> Hello all,
>
> Is it possible to prevent Postgre from aborting the transaction upon a
> constraint violation?

From the help files maybe the following could get you on the right track:

This example uses exception handling to perform either UPDATE or INSERT, 
as appropriate:

CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a,b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

HTH,

Johan Nel
Pretoria, South Africa.



Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Scott Marlowe
On Fri, May 1, 2009 at 12:06 PM, John R Pierce  wrote:
> Joshua D. Drake wrote:
>>
>> Well that's just it. Out of the box it doesn't actually work. PostgreSQL
>> only gives you the facilities to roll your own PITR solution. You can
>> look at PITR Tools:
>>
>> https://projects.commandprompt.com/public/pitrtools
>>
>> It doesn't quite work on Windows due to lack of rsync and signaling
>> differences but could give you an idea of how to move forward with your
>> own implementation.
>>
>
> Quite possibly 'robocopy' from Microsoft somewhere (doesn't come with
> windows, but was part of an admin kit or something) would be a workable
> replacement for the rsync part.

There is an rsync for windows, called delta copy:

http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp



Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread Chris Spotts
A separate table for managing the relationships.  One column for the manager
and one for employee.

Then you end up with a query like this.

 

Select field1,field2 FROM table1 inner join relationships on
table1.creator_user_id = relationships.employee WHERE relationships.manager
= ?
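
The relationships table itself can be as simple as this (an illustrative
definition, not from the original mail):

    CREATE TABLE relationships (
        manager  text NOT NULL,
        employee text NOT NULL,
        PRIMARY KEY (manager, employee)
    );

    -- An employee who reports to two managers simply gets two rows,
    -- which covers the overlapping-group case in the original question.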


From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David Wall
Sent: Friday, May 01, 2009 12:49 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Handling large number of OR/IN conditions

 

We have a database report function that seemed clean when the number of
users was small, but as the number of users grow, I was wondering if anybody
had any good ideas about how to handle OR or IN for SELECTs.  

The general scenario is that a manager runs reports that list all records
that were created by users under his/her oversight.  So, when the number of
users is small, we had simple queries like:

SELECT field1, field2 FROM table1 WHERE creator_user_id = 'U1' OR
creator_user_id = 'U2';

But when there are thousands of users, and a manager has oversight of 100 of
them, the OR construct seems out of whack when you read the query:

WHERE creator_user_id = 'U1' OR creator_user_id = 'U2' ... OR
creator_user_id = 'U99' OR creator_user_id = 'U100'

I know it can be shortened with IN using something like, but don't know if
it's any more/less efficient or a concern:

WHERE creator_user_id IN ('U1', 'U2', ..., 'U99', 'U100')

How do people tend to handle this sort of thing?  I suspect manager reports
against their people must be pretty common.  Are there any good tricks on
how to group users like this?  Unfortunately, group membership changes over
time, and users may report to more than one manager and thus belong to more
than one group, so we can't just have a 'creator_group_id' attribute that is
set and then query against that. 

Thanks,
David



Re: [GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread Steve Atkins


On May 1, 2009, at 10:49 AM, David Wall wrote:

> We have a database report function that seemed clean when the number
> of users was small, but as the number of users grow, I was wondering
> if anybody had any good ideas about how to handle OR or IN for
> SELECTs.
>
> The general scenario is that a manager runs reports that list all
> records that were created by users under his/her oversight.  So,
> when the number of users is small, we had simple queries like:
>
> SELECT field1, field2 FROM table1 WHERE creator_user_id = 'U1' OR
> creator_user_id = 'U2';
>
> But when there are thousands of users, and a manager has oversight
> of 100 of them, the OR construct seems out of whack when you read
> the query:
>
> WHERE creator_user_id = 'U1' OR creator_user_id = 'U2' ... OR
> creator_user_id = 'U99' OR creator_user_id = 'U100'
>
> I know it can be shortened with IN using something like, but don't
> know if it's any more/less efficient or a concern:
>
> WHERE creator_user_id IN ('U1', 'U2', ..., 'U99', 'U100')
>
> How do people tend to handle this sort of thing?  I suspect manager
> reports against their people must be pretty common.  Are there any
> good tricks on how to group users like this?  Unfortunately, group
> membership changes over time, and users may report to more than one
> manager and thus belong to more than one group, so we can't just
> have a 'creator_group_id' attribute that is set and then query
> against that.



Sounds like a job for a two column table that lists manager and report.

select table1.field1, table2.field2 from table1, reports where  
table1.creator_user_id = reports.peon and reports.overlord = 'bob'


Cheers,
  Steve




Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Adrian Klaver

"Joshua D. Drake"  wrote:

> On Fri, 2009-05-01 at 09:22 -0700, PostGre Newbie wrote:
> > Hi everyone,
> >
> > I searched the web and I know I should get a snapshot of the
> > filesystem and then backup the WAL archive logs but this is the part
> > where I am confused. I do not know of any open source backup utilities
> > that can take snapshots of the filesystem. I get the overall concept
> > of online backups but I am still unclear EXACTLY how the system works.
> > I would be grateful if anyone could explain it to me. I have already
> > gone through the postgresql document on online backups.
>
> Well that's just it. Out of the box it doesn't actually work. PostgreSQL
> only gives you the facilities to roll your own PITR solution. You can
> look at PITR Tools:
>
> https://projects.commandprompt.com/public/pitrtools
>
> It doesn't quite work on Windows due to lack of rsync and signaling
> differences but could give you an idea of how to move forward with your
> own implementation.
>
> I would also note that Win32 8.1 is deprecated and unsupported. You need
> to update to at least 8.2.
>
> Sincerely,
>
> Joshua D. Drake

For the rsync requirement you want to take a look at:
DeltaCopy
http://www.aboutmyip.com/AboutMyXApp/DeltaCopy.jsp

Adrian Klaver
akla...@comcast.net






[GENERAL] Handling large number of OR/IN conditions

2009-05-01 Thread David Wall
We have a database report function that seemed clean when the number of 
users was small, but as the number of users grow, I was wondering if 
anybody had any good ideas about how to handle OR or IN for SELECTs. 

The general scenario is that a manager runs reports that list all 
records that were created by users under his/her oversight.  So, when 
the number of users is small, we had simple queries like:


SELECT field1, field2 FROM table1 WHERE creator_user_id = 'U1' OR 
creator_user_id = 'U2';


But when there are thousands of users, and a manager has oversight of 
100 of them, the OR construct seems out of whack when you read the query:


WHERE creator_user_id = 'U1' OR creator_user_id = 'U2' ... OR 
creator_user_id = 'U99' OR creator_user_id = 'U100'


I know it can be shortened with IN using something like, but don't know 
if it's any more/less efficient or a concern:


WHERE creator_user_id IN ('U1', 'U2', ..., 'U99', 'U100')

How do people tend to handle this sort of thing?  I suspect manager 
reports against their people must be pretty common.  Are there any good 
tricks on how to group users like this?  Unfortunately, group membership 
changes over time, and users may report to more than one manager and 
thus belong to more than one group, so we can't just have a 
'creator_group_id' attribute that is set and then query against that.


Thanks,
David


[GENERAL] Possible to prevent transaction abort?

2009-05-01 Thread Adam B
Hello all,

Is it possible to prevent Postgre from aborting the transaction upon a 
constraint violation?

Using JDBC if I catch the constraint violation and try another statement 
I get:

   ERROR: current transaction is aborted, commands ignored until end of 
transaction block

I realize that I could set a save-point before every INSERT but that 
nearly doubles the processing time.  Since our application INSERTS many 
thousands of rows at a time we need maximum efficiency.  On Mysql (where 
this limitation doesn't exist) it's already only barely fast enough.  If 
we have to use savepoints with Postgre it might prevent us from making 
the switch.

Is there some mode flag I could set, either database or server wide?  
I've found Postgre to be wonderfully configurable so I'm crossing my 
fingers...



Much Thanks.
- Adam







Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread John R Pierce

Joshua D. Drake wrote:

Well that's just it. Out of the box it doesn't actually work. PostgreSQL
only gives you the facilities to roll your own PITR solution. You can
look at PITR Tools:

https://projects.commandprompt.com/public/pitrtools

It doesn't quite work on Windows due to lack of rsync and signaling
differences but could give you an idea of how to move forward with your
own implementation.
  


Quite possibly 'robocopy' from Microsoft (it doesn't come with Windows, 
but was part of an admin resource kit) would be a workable replacement 
for the rsync part.






Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Alan Hodgson
On Friday 01 May 2009, PostGre Newbie  wrote:
>I do not know of any open source backup utilities
> that can take snapshots of the filesystem. I get the overall concept
> of online backups but I am still unclear EXACTLY how the system works.
> I would be grateful if anyone could explain it to me. I have already
> gone through the postgresql document on online backups.

rsync or tar. One of the really nice things about PITR is that the base 
backup source can change while it's being backed up and it still works.

PITR is really just as simple as it sounds. Call pg_start_backup(). Copy the 
PostgreSQL data directory. Call pg_stop_backup(). Keep all the WAL logs 
generated during and after the base backup.

-- 
Even a sixth-grader can figure out that you can’t borrow money to pay off 
your debt



Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Joshua D. Drake
On Fri, 2009-05-01 at 09:47 -0700, Ben Chobot wrote:
> On Fri, 1 May 2009, PostGre Newbie wrote:
> 
> > I know that taking the backup of the whole database every 10 minutes
> > is a very bad way to go about it but until I find a better way to do
> > it, it will have to do for now. Any suggestions/tips/articles on how
> > to do the backup would be appreciated very much.
> 
> The postgres manual tends to be a great source of information. See, for 
> example:
> 
> http://www.postgresql.org/docs/8.1/interactive/backup.html

Not for Windows it isn't.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Joshua D. Drake
On Fri, 2009-05-01 at 09:22 -0700, PostGre Newbie wrote:
> Hi everyone,

> I searched the web and I know I should get a snapshot of the
> filesystem and then backup the WAL archive logs but this is the part
> where I am confused. I do not know of any open source backup utilities
> that can take snapshots of the filesystem. I get the overall concept
> of online backups but I am still unclear EXACTLY how the system works.
> I would be grateful if anyone could explain it to me. I have already
> gone through the postgresql document on online backups.

Well that's just it. Out of the box it doesn't actually work. PostgreSQL
only gives you the facilities to roll your own PITR solution. You can
look at PITR Tools:

https://projects.commandprompt.com/public/pitrtools

It doesn't quite work on Windows due to lack of rsync and signaling
differences but could give you an idea of how to move forward with your
own implementation.

I would also note that Win32 8.1 is deprecated and unsupported. You need
to update to at least 8.2.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997




Re: [GENERAL] Online Backups PostGre

2009-05-01 Thread Ben Chobot

On Fri, 1 May 2009, PostGre Newbie wrote:


I know that taking the backup of the whole database every 10 minutes
is a very bad way to go about it but until I find a better way to do
it, it will have to do for now. Any suggestions/tips/articles on how
to do the backup would be appreciated very much.


The postgres manual tends to be a great source of information. See, for 
example:


http://www.postgresql.org/docs/8.1/interactive/backup.html





[GENERAL] Online Backups PostGre

2009-05-01 Thread PostGre Newbie
Hi everyone,

I am new to PostgreSQL but I have to use it at work. I have to take
backups of the database every 10 mins so that work isn't lost in case
of an accident. I use PostgreSQL v8.1 on Windows XP.

I have already set up a system that automatically takes the backup of
the database every 10 mins. I did this using a batch file script set
up in scheduled tasks to run every 10 mins. The command I used inside
the batch file is:
pg_dump dbname -U postgres -o > backup_file_name

I tested the system and it works correctly, but I know that on the live
server the pg_dump command will take a long time to execute. I know
that I should use a 'point in time recovery' backup but I am confused
about how exactly to go about it as I am new to Postgres.

I searched the web and I know I should get a snapshot of the
filesystem and then backup the WAL archive logs but this is the part
where I am confused. I do not know of any open source backup utilities
that can take snapshots of the filesystem. I get the overall concept
of online backups but I am still unclear EXACTLY how the system works.
I would be grateful if anyone could explain it to me. I have already
gone through the postgresql document on online backups.

I know that taking the backup of the whole database every 10 minutes
is a very bad way to go about it but until I find a better way to do
it, it will have to do for now. Any suggestions/tips/articles on how
to do the backup would be appreciated very much.


Thanks in advance,

PostGre Newbie.



Re: [GENERAL] Understand this error

2009-05-01 Thread Tom Lane
Craig Ringer  writes:
> Note that it's not very likely that PostgreSQL was the process that used
> up all your memory. It was just unlucky enough to be picked as the one
> to be killed, because the OOM killer is terrible at estimating which
> process is using the most memory when programs like PostgreSQL have
> allocated large blocks of shared memory.

It's worse than that: the OOM killer is broken by design, because it
intentionally picks on processes that have a lot of large children
--- without reference to the fact that a lot of the "largeness" might
be the same shared memory block.  So the postmaster process very often
looks like a good target to it, even though killing the postmaster will
in fact free a negligible amount of memory.
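The double-counting Tom describes can be seen with a toy calculation (all numbers invented; a real shared_buffers segment is mapped once but shows up in every backend's size):

```python
SHARED_MB = 512  # one shared memory segment mapped by every backend
# (process name, private MB): hypothetical postmaster plus three backends
procs = [("postmaster", 10), ("backend1", 40), ("backend2", 60), ("backend3", 30)]

# What a naive per-process accounting sees: each process "owns" the segment.
naive = sum(private + SHARED_MB for _, private in procs)

# What the machine actually uses: the segment exists only once.
actual = sum(private for _, private in procs) + SHARED_MB

print(naive, actual)  # 2188 vs 652 -- the process tree looks ~3x bigger than it is
```

A killer that charges the parent for its children's "largeness" thus sees the postmaster as enormous, even though killing it frees almost nothing.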

regards, tom lane



Re: [GENERAL] Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Peter Geoghegan
Wow, a response from the famous Tom Lane to my lame problem :-) .

> What I'd try is a "ping" to the database server, and not initiate any
> libpq operation unless the server is answering pings.  If you get
> a failure due to connectivity loss midway through an operation,
> PQreset is appropriate then --- but don't do it to recover from
> a momentary network outage.

That's interesting. In general, it's hard to ping from a windows
program, unless you want to repeatedly invoke ping.exe. I'm not sure
that I can even retrieve the result of that. That strikes me as fairly
kludgey - acceptable for some utility shellscript, but probably not
for what is supposed to be a responsive program.

Perhaps it would be preferable to call the function after the
operation, but before the application reports success. I'll look into
it.

Regards,
Peter Geoghegan



Re: [GENERAL] Do TEMP Tables have an OID? Can this be a problem if used too frequently?

2009-05-01 Thread Tom Lane
"Raymond O'Donnell"  writes:
> On 30/04/2009 10:01, Phil Couling wrote:
>> If so am I right to assume that, if the function is used too frequently,
>> it could cause the database to crash by wraping OIDs?

> I'd imagine that this depends on how often the database is VACUUMed.

Wrapping around the OID counter does not create any problem in any
reasonably modern version of Postgres.

regards, tom lane



Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Tom Lane
Craig Ringer  writes:
> I've been wondering about this for a while. Why does Pg end up with the 
> database in an unusable, unrecoverable state after a disk-full error?

It doesn't.  There must have been some other filesystem misfeasance
involved in the OP's problem.

regards, tom lane



Fwd: [GENERAL] triggers and execute...

2009-05-01 Thread Dimitri Fontaine

Hi, it seems it didn't make it the first time.

Begin forwarded message:

From: Dimitri Fontaine 
Date: 30 April 2009 12:03:10 CEST
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] triggers and execute...

On Monday 27 April 2009 22:32:22 Scott Marlowe wrote:
> OK, I'm hitting a wall here.  I've written this trigger for partitioning:
>
> create or replace function page_access_insert_trigger ()
> returns trigger as $$
> DECLARE
> part text;
> q text;
> BEGIN
> part = to_char(new."timestamp",'MMDD');
> q = 'insert into page_access_'||part||' values (new.*)';

What you want looks like this (thanks RhodiumToad):

'INSERT INTO page_access_' || part ||
' SELECT (' || quote_literal(textin(record_out(NEW))) || '::page_access).*;'

That's supposing you have a parent table named page_access, of course. And
casting this way has drawbacks too (which I can't recall at this moment), but
I've been using this live for maybe more than a year now without any problem.

> It works.  So, how am I supposed to run it with dynamic table names?

Hack your way around, partitioning is not yet there "for real"...

--
dim





Re: [GENERAL] Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Tom Lane
Peter Geoghegan  writes:
> I'm developing a PostgreSQL application for Windows CE 5 on a PDT/ PDA
> in C++/Qt, using Hiroshi Saito's libpq port for that platform. Since
> the connection is established over wi-fi, and wi-fi connectivity is
> often flaky, I feel that I have to "fail gracefully" to as great an
> extent as possible. The following utility function,
> VerifyDbConnection(), is called before every piece of database work:

> void MainInterface::VerifyDbConnection()
> {
> if (PQstatus(conn) != CONNECTION_OK)
> {
>   ... try to recover ...
> }
> }

PQstatus does *not* probe to see if there's a live connection to the
database; it just returns the state left behind by the last operation.
So this code will just fall through and not do anything useful until
after you've already had a failure.  The forcible PQreset is probably
not the most graceful way of recovering, either.

What I'd try is a "ping" to the database server, and not initiate any
libpq operation unless the server is answering pings.  If you get
a failure due to connectivity loss midway through an operation,
PQreset is appropriate then --- but don't do it to recover from
a momentary network outage.
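One way to approximate the "ping" idea without raw ICMP sockets (awkward from a Windows program, as Peter notes downthread) is a cheap TCP connect probe to the server's port. A sketch, not libpq API, and only a liveness hint: the server can still vanish right after the probe succeeds:

```python
import socket

def server_reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) opens within `timeout`.

    This only proves the port is reachable and something is listening; it
    does not prove PostgreSQL will accept a session.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Before each piece of database work, one would call e.g. `server_reachable(db_host, 5432)` and skip the libpq operation (or show an "offline" state) while it returns False.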

regards, tom lane



Re: [GENERAL] ERROR: syntax error at or near "IF"... why?

2009-05-01 Thread Sam Mason
On Wed, Apr 29, 2009 at 07:54:20AM -0700, DaNieL wrote:
> ERROR:  syntax error at or near "IF"
> 
> Where am i mistaken?
> 
> p.s: dont focus on the example functionality, its just a trial for me
> to understand the transactions.. and now, the IF clause...

As others have said, IF statements are only valid in plpgsql functions,
not in plain SQL.  Probably not relevant, but in this example you can
work around it by doing something like:

  CREATE FUNCTION failif(BOOL,TEXT) RETURNS VOID AS $$
BEGIN IF $1 THEN RAISE EXCEPTION '%', $2; END IF; END $$ LANGUAGE plpgsql;

  BEGIN;
  INSERT INTO ...
  UPDATE ...
  UPDATE ...
  SELECT failif((SELECT credit FROM users WHERE name = 'mary') < 0,
'error, credit can''t be less than zero');
  COMMIT;

In general, you're probably better off writing the whole thing in
plpgsql.

Hope that helps.

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] Re: Mapping output from a SEQUENCE into something non-repeating/colliding but random-looking?

2009-05-01 Thread Bill Moran
In response to Craig Ringer :

> Bill Moran wrote:
> 
> > Sounds like you're reinventing message digests ...

[snip your comments about why I was wrong about MDs working]

> So long as I don't call it "xor encryption" ... sigh.
> 
> > Most of the systems I've seen like this do one of a few things:
> > * Start with an arbitrary # like 1000
> > * Prepend the date (pretty common for invoice #s) like 20090501001
> > * Just start with #1 ... I mean, what's the big deal?
> 
> I'm not the one who cares. Alas, I've been given requirements to
> satisfy, and one of the major ones is that customer numbers in
> particular must be non-sequential (as close to random-looking as
> possible) and allocated across a large range.

Why not just grab random values for that one, then?  Just generate
a random value, check to ensure it doesn't already exist ... rinse
and repeat if it's a duplicate.  I mean, how often is this system
adding new customers?

Another trick with the invoice #s is to prepend them with a subset
of the client ID.  You could also use the same technique ... generate
a random value, repeat if it already exists ...
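The draw-and-retry scheme Bill describes is only a few lines. A sketch (in the database you would attempt the INSERT and retry on a unique-key violation rather than checking a set; range and names invented):

```python
import secrets

def new_customer_id(taken: set, lo: int = 10_000_000, hi: int = 99_999_999) -> int:
    """Pick a random 8-digit id not already taken.

    Retries are rare as long as the id space stays sparsely used; with
    ~100k customers in a 90M-value range, collisions are ~0.1% per draw.
    """
    while True:
        cid = lo + secrets.randbelow(hi - lo + 1)
        if cid not in taken:
            taken.add(cid)
            return cid
```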

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Re: Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Peter Geoghegan
It now appears that I was presumptuous in blaming libpq for my
application's shutdown. It probably was down to a problem with the way
I was remote debugging the application - a fluke.

I can now get the message to appear, and then re-establish a
connection by either disconnecting and reconnecting the ethernet cable
that connects the postgreSQL server to its switch, or by walking out
of and into range of the wi-fi access point with the handheld. Both
work consistently. I was able to consistently lose and regain a
connection as desired.

I guess I'm happy so. Can my approach be improved upon?

Regards,
Peter Geoghegan



Re: [GENERAL] Re: Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Craig Ringer
Peter Geoghegan wrote:

> though, but it does. My guess is that libpq is calling abort() or
> something similar, directly or indirectly. I know that would cause an
> error message with windows XP, but windows CE is funny.

Maybe this is a stupid question (I don't really do WinCE) but... can't
you just run your app in a debugger, with breakpoints set in the major
runtime library exit points, so you can get a backtrace at exit time?

Since you're dropping the network, you can't do remote debugging over
wifi, but WinCE must surely offer serial or USB remote debugging...

--
Craig Ringer



Re: [GENERAL] Pgsql errors, DBI and CGI::Carp

2009-05-01 Thread Daniel Verite

Toomas Vendelin wrote:

> I'm writing CGI scripts in Perl using PostgreSQL via the DBI interface.
> RAISE_ERROR is on.
>
> For some reason (unlike with MySQL), when a Perl script dies from a
> PostgreSQL error, the line number of the Perl script where the error
> occurred is not reported; just the SQL statement line number is given.
> In a longer script it may become a tedious task to guess the line that
> caused the problem. Is that expected behavior or am I missing something?


It works for me:

$ cat dberr.pl
use CGI::Carp;
use DBI;
my $dbh=DBI->connect("dbi:Pg:dbname=test") or die;
$dbh->do("bogus SQL");

$ perl dberr.pl
[Fri May  1 15:05:08 2009] dberr.pl: DBD::Pg::db do failed: ERROR:  syntax error at or near "bogus"
[Fri May  1 15:05:08 2009] dberr.pl: LINE 1: bogus SQL
[Fri May  1 15:05:08 2009] dberr.pl: ^ at dberr.pl line 4.

If you run that trivial program in your environment, what output do you 
get?


Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage:
http://www.manitou-mail.org



Re: [GENERAL] Re: Mapping output from a SEQUENCE into something non-repeating/colliding but random-looking?

2009-05-01 Thread Craig Ringer
Bill Moran wrote:

> Sounds like you're reinventing message digests ...

Message digests do not guarantee non-colliding output for a given input.

They provide a result UNLIKELY to collide for any reasonable set of
inputs. They need to produce quite large output values to do this
effectively. Truncating the digest to fit the desired data type doesn't
help, if you have to do this.

A message digest is intended for use where the inputs are arbitrarily
large and must be condensed down to a generally fixed-length small-ish
value that should be very unlikely to collide for any two given inputs.

What I'm looking for is a function that, given an input within a
constrained range (say, a 32 bit integer) produces a different output
within the same range. For any given input, the output should be the
same each time, and for any given output there should only be one input
that results in that output.

So far, picking a suitable value to xor the input with seems like it'll
be better than nothing, and good enough for the casual examination
that's all I'm required to care about.

So long as I don't call it "xor encryption" ... sigh.
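For reference, the xor mapping described above is only a few lines, and it is trivially 1:1 because xor with a fixed key is its own inverse (the key value here is invented; any fixed 32-bit constant works):

```python
MASK = 0xFFFFFFFF          # stay within 32 bits
KEY = 0x5DEECE66 & MASK    # arbitrary fixed key

def obscure(n: int) -> int:
    """Map a sequence value to a different-looking 32-bit value, reversibly."""
    return (n ^ KEY) & MASK

# obscure is an involution: applying it twice returns the input,
# so distinct inputs always map to distinct outputs.
```

Consecutive inputs still differ only in their low bits after the xor, so this survives only casual inspection, which is all that is being asked for here.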

> Most of the systems I've seen like this do one of a few things:
> * Start with an arbitrary # like 1000
> * Prepend the date (pretty common for invoice #s) like 20090501001
> * Just start with #1 ... I mean, what's the big deal?

I'm not the one who cares. Alas, I've been given requirements to
satisfy, and one of the major ones is that customer numbers in
particular must be non-sequential (as close to random-looking as
possible) and allocated across a large range.

--
Craig Ringer



Re: [GENERAL] Re: Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Peter Geoghegan
I apologise for the duplicate posts - my initial post wasn't appearing
on the mailing list archives hours later, and someone in #postgresql
said that there was a problem with the service provider, so I thought
I'd resend.

> such an approach is doomed to failure, you have just implemented a
> race condition. what happens if the server becomes unavailable between
> the request and the response, or during the response.

It had occurred to me that this was the case. However, even though the
suggested approach is sub-optimal, it still significantly ameliorates
the problem - the server may be shut down intentionally by someone
who is not aware that the handset is in use much of the time, and
that is unlikely to occur in the instant between checking the connection
is all right and using it. If you can suggest an alternative approach
that doesn't have a race condition, I'm all ears. Also, maybe you
should give the hyperbole a rest.

> when you shut the server down it announces this to the clients, it may
> even ask their permission before proceeding.

Yes, that too had occurred to me. I recall that Slony-I really hates
it when you cut the connection, but is fine when you shut down
PostgreSQL correctly. From the Slony-I docs : "Any problems with that
connection can kill the connection whilst leaving "zombied" database
connections on the node that (typically) will not die off for around
two hours." It doesn't necessarily follow that my program should die
though, but it does. My guess is that libpq is calling abort() or
something similar, directly or indirectly. I know that would cause an
error message with windows XP, but windows CE is funny.

> you are not checking the return values from some of your libpq calls.

The only thing that I don't check the return value of is PQreset,
which returns void. What do you mean?

Regards,
Peter Geoghegan



[GENERAL] xml not enabled by default on rhel4 packages from commandprompt

2009-05-01 Thread Grzegorz Jaśkiewicz
Any idea why xml support is off ?
The libxml2, version 2.6.23 is there on centos4.7 (which is what I am
using), is there any known problem with xml that it is off, or is just
because they wanted to make sure that the package is going to work in
versions prior to 4.7 as well ??

any ideas ?

-- 
GJ



Re: [GENERAL] Re: Mapping output from a SEQUENCE into something non-repeating/colliding but random-looking?

2009-05-01 Thread Bill Moran
In response to Jasen Betts :

> On 2009-04-30, Craig Ringer  wrote:
> > Hi
> >
> > This must be a fairly common requirement, but either I don't know how to
> > ask Google about it or there's not as much out there as I would've expected.
> >
> > I'm looking for a way to map the output from a monotonically increasing
> > sequence (not necessarily gapless - ie a normal Pg SEQUENCE) into a
> > fairly random different value in the availible space with a 1:1
> > input->output relationship. In other words, for the input "27" the
> > output will always be the same (say 32 bit) number, and no other input
> > will produce that output.
> 
> so you want 
> 
>   DEFAULT magic_func( nextval('foo_id_seq'::regclass) )
> 
> where magic_func is the 1:1 mapping and foo_id_seq is the sequence
> that feeds it.

Sounds like you're reinventing message digests ...

> > If I find something good and there aren't any existing Pl/PgSQL
> > implementations I'll post one for others' use, since I'm pretty sure it
> > must come up a lot. You don't want your database to send out "invoice
> > #1" or "customer #1" after all.

Most of the systems I've seen like this do one of a few things:
* Start with an arbitrary # like 1000
* Prepend the date (pretty common for invoice #s) like 20090501001
* Just start with #1 ... I mean, what's the big deal?

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Craig Ringer

> On all our servers we have a cron job that runs daily and reports disk
> usage stats.
> Maybe you need something similar.

Of course. I have Cacti running to monitor disk usage on all my servers.

That doesn't help if a user creates several duplicates of a huge table,
or otherwise gobbles disk space. There's always the *potential* to run
out of disk space, and I'm concerned that Pg doesn't handle that
gracefully. I agree it shouldn't happen, but Pg shouldn't mangle the DB
when it does, either.

--
Craig Ringer



Re: [GENERAL] Any way to execute ad-hoc pl/pgsql?

2009-05-01 Thread Keaton Adams
You can wrap a temporary function in a script and call it this way:

keaton:811:~$ more my_shell_script.sh
#!/bin/bash

OS=`uname -s`
PSQL="/usr/bin/psql"

USERNAME="postgres"
export PGPASSWORD="${PASSWORD}"
DATABASE="mydatabase"


${PSQL} "${DATABASE}" -U "${USERNAME}" << EOF
BEGIN;
CREATE OR REPLACE FUNCTION tmp_mxl_db_convert1 (VARCHAR) RETURNS INT AS '
DECLARE
s_table ALIAS FOR \$1;
tday    VARCHAR(128);
tmonth  VARCHAR(128);
tqtr    VARCHAR(128);
tbegin  TIMESTAMP WITH TIME ZONE;
tend    TIMESTAMP WITH TIME ZONE;
n_ret   INTEGER;
BEGIN
-- 2 quarters ago
tqtr := to_char(now() - interval ''6 months'', ''"q"Q'');
tbegin := date_trunc(''quarter'', now() - ''6 months''::interval);
tend := date_trunc(''quarter'', now() - ''3 months''::interval);
n_ret := tmp_mxl_threat_convert2(s_table, tqtr, tbegin, tend);

-- last quarter
tqtr := to_char(now() - interval ''3 months'', ''"q"Q'');
tbegin := tend;
tend := date_trunc(''quarter'', now());
n_ret := tmp_mxl_threat_convert2(s_table, tqtr, tbegin, tend);

.
.
More StoredProc / Function Code here
.
.


RETURN 0;
END;
' LANGUAGE 'plpgsql';

drop function tmp_mxl_db_convert1 (VARCHAR);

END;
EOF


-Keaton



On 4/30/09 11:25 PM, "Carlo Stonebanks"  wrote:

One of our developers asked me, "is there any way to execute arbitrary
plpgsql"? By that I beleive he means: is there some way to execute ad-hoc
pl/pgsql code without creating a stored procedure or a function?

I believe MS SQL Server can do this - has any one heard of some sort of
command shell to do this for PG?

(I suppose one possibility would be something that created a temporary
stored proc to execute the code, then cleaned up after itself.)

Carlo





Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Steve Clark

Craig Ringer wrote:
> Peter Eisentraut wrote:
>> On Thursday 23 April 2009 18:30:27 Dan Armbrust wrote:
>>> I had a test system (read as not backed up, sigh) which had the disk
>>> go full while PostgreSQL was loaded, consequently, PostgreSQL will no
>>> longer start.
>>>
>>> It is logging an error about detecting an invalid shutdown, trying to
>>> replay something, and then an error about not being able to open a
>>> file it is looking for.
>> Knowing what file would help analyze this.  In general, pg_resetxlog
>> would be the tool to try here.  Don't panic yet. ;-)
>
> I've been wondering about this for a while. Why does Pg end up with the
> database in an unusable, unrecoverable state after a disk-full error? Is
> there no way it can efficiently defend against issues writing to the
> WAL? Is it, in fact, issues with appending to the current WAL segment
> that're the problem anyway?
>
> This may come up even on fairly well managed databases if users have
> direct access. To me, with a largely user-and-admin perspective, it
> seems like something that really should be handled a bit more cleanly.

Hmm...

On all our servers we have a cron job that runs daily and reports disk
usage stats. Maybe you need something similar.



Re: [GENERAL] ERROR: syntax error at or near "IF"... why?

2009-05-01 Thread Chris Spotts
Could it be referencing the second IF - the one in your "END IF" that
doesn't have a semicolon after it...?

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of DaNieL
Sent: Wednesday, April 29, 2009 9:54 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] ERROR: syntax error at or near "IF"... why?

Hi guys, I'm new with PostgreSQL, and already got my first problem..

Well, I wrote some code to understand how transactions work,
following the manual step by step.

To make it short, I've created 2 tables, users and movements: in the
first one there are the name, email and credit columns, in the second
the columns from, to, import.

So, I was trying this way:

BEGIN;
INSERT INTO movements (from, to, import) VALUES ('mary', 'steve',
600);
UPDATE users SET credit = credit - 600 WHERE name = 'mary';
UPDATE users SET credit = credit + 600 WHERE name = 'steve';
--here comes the problem!
IF (SELECT credit FROM users WHERE name = 'mary') < 0 THEN
 ROLLBACK;
END IF
COMMIT;

i always get the error
ERROR:  syntax error at or near "IF"

Where am i mistaken?

p.s: dont focus on the example functionality, its just a trial for me
to understand the transactions.. and now, the IF clause...





[GENERAL] Re: Mapping output from a SEQUENCE into something non-repeating/colliding but random-looking?

2009-05-01 Thread Jasen Betts
On 2009-04-30, Craig Ringer  wrote:
> Hi
>
> This must be a fairly common requirement, but either I don't know how to
> ask Google about it or there's not as much out there as I would've expected.
>
> I'm looking for a way to map the output from a monotonically increasing
> sequence (not necessarily gapless - ie a normal Pg SEQUENCE) into a
> fairly random different value in the availible space with a 1:1
> input->output relationship. In other words, for the input "27" the
> output will always be the same (say 32 bit) number, and no other input
> will produce that output.

so you want 

  DEFAULT magic_func( nextval('foo_id_seq'::regclass) )

where magic_func is the 1:1 mapping and foo_id_seq is the sequence
that feeds it.

> Note that I'm *NOT* looking for a PRNG that takes the previous output as
> its input. That'd force me to use the same techniques as for a gapless
> sequence in Pg, with all the associated horror with locking and
> deadlocks, the performance issues, etc.

any good PRNG will have the 1:1 mapping you want, but fed sequential
values they tend to produce predictable output.

I suggest for magic_func you use a collection of bit-shifts, adds, and
XORs, then mask out the bits above 31 and use what's left.

test and adjust if needed,
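A convenient way to package the shift/xor suggestion is a small Feistel network: no matter what the round function does, the construction is guaranteed 1:1 on 32-bit inputs, and sequential inputs come out well scattered. A sketch with invented round constants (this is one possible magic_func, not a vetted design):

```python
def magic_func(n: int, rounds: int = 4) -> int:
    """Bijectively scramble a 32-bit integer via a tiny Feistel network."""
    l, r = (n >> 16) & 0xFFFF, n & 0xFFFF
    for k in (0xA5A5, 0x3C3C, 0x0F0F, 0x9696)[:rounds]:
        # The round function may be any deterministic 16-bit mix;
        # bijectivity comes from the Feistel structure, not from f.
        f = ((r * 0x85EB) ^ k ^ (r >> 7)) & 0xFFFF
        l, r = r, l ^ f
    return (l << 16) | r
```

Inverting it (should one ever need the original sequence value back) is the same loop with the round keys applied in reverse order.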

> If I find something good and there aren't any existing Pl/PgSQL
> implementations I'll post one for others' use, since I'm pretty sure it
> must come up a lot. You don't want your database to send out "invoice
> #1" or "customer #1" after all.

to this end was pg_catalog.setval(sequence, value, TRUE) invented.

> (I'm also going to be looking for efficient ways to calculate effective
> check digits for arbitrary numbers within a certain range, too, and will
> post something for that, but that comes later).


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Export Data from one DB and Import into a new DB

2009-05-01 Thread Jasen Betts
On 2009-04-29, Stefan Sturm  wrote:
> Hello,
>
> we are changing the structure of our database from a new release.
> Now we need to export large amounts of data and import it into the new
> db( with a new structure).
>
> Are there any tools( for osx ) to support me doing this?

last time I had to convert a database I loaded it into postgres and
used "SQL" and plpgsql functions.

Then I dropped the functions, columns and tables I didn't want to keep around.

things like 

 INSERT INTO somenewtable SELECT somequery...
 
are very powerful tools.

The hardest bit was dealing with the inconsistencies in the input data.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any way to execute ad-hoc pl/pgsql?

2009-05-01 Thread Craig Ringer
Carlo Stonebanks wrote:

> (I suppose one possibility would be something that created a temporary
> stored proc to execute the code, then cleaned up after itself.)

Yep, that's what I do -

CREATE FUNCTION fred() RETURNS blah AS $$
BEGIN
    -- ad-hoc code goes here
END;
$$ LANGUAGE 'plpgsql';

SELECT fred();

DROP FUNCTION fred();

I've only needed it a couple of times in really complex data conversion
and import routines, though.

It'd be really quite nice to have CREATE TEMPORARY FUNCTION even if Pg
didn't have true anonymous blocks (which IMO encourage unnecessary
procedural style coding).

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any way to execute ad-hoc pl/pgsql?

2009-05-01 Thread Jasen Betts
On 2009-05-01, Carlo Stonebanks  wrote:

> One of our developers asked me, "is there any way to execute arbitrary 
> plpgsql"? By that I believe he means: is there some way to execute ad-hoc
> pl/pgsql code without creating a stored procedure or a function?

no.  arbitrary SQL is no problem, 
arbitrary plpgsql must be declared as a function so that it can be
compiled and run. (compiling is automatic)


Inside a function EXECUTE can be used, but not all plpgsql can be
executed (but, again, you can use EXECUTE to define and run a new function).

> I believe MS SQL Server can do this - has anyone heard of some sort of
> command shell to do this for PG?

> (I suppose one possibility would be something that created a temporary 
> stored proc to execute the code, then cleaned up after itself.)

You seem to be wanting something like "anonymous functions". postgres 
doesn't do them yet. It may in the future, or they may be incompatible
with the design.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: Connecting to a postgreSQL database with windows CE over wi-fi; failing gracefully

2009-05-01 Thread Jasen Betts
On 2009-04-29, Peter Geoghegan  wrote:
> Hello,
>
> I'm developing a PostgreSQL application for Windows CE 5 on a PDT/ PDA
> in C++/Qt, using Hiroshi Saito's libpq port for that platform. Since
> the connection is established over wi-fi, and wi-fi connectivity is
> often flaky, I feel that I have to "fail gracefully" to as great an
> extent as possible. The following utility function,
> VerifyDbConnection(), is called before every piece of database work:

Such an approach is doomed to failure: you have just implemented a
race condition. What happens if the server becomes unavailable between
the request and the response, or during the response?

Also, test by pulling out the ethernet cable between the server and the switch.

when you shut the server down it announces this to the clients, it may
even ask their permission before proceeding.

> This seemed to work fine initially; I'd abruptly stop the database
> server, and I would see a messagebox informing me of a connectivity
> problem. Then, I'd start the server, click the retry button, and have
> connectivity restored. However, when I walk out of range of the wi-fi
> access point, which is the probable cause of losing connectivity in
> the real world, my program crashes without displaying an error message
> (I would expect to see a visual C++ runtime error message).

> Can someone suggest a reason for this, or a workaround?

you are not checking the return values from some of your libpq calls.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Importing large objects from the client side programatically.

2009-05-01 Thread Daniel Verite

Andrew Maclean wrote:


I am using C++ and trying to programatically import a large
object from the client side into a server.


In that context, the simplest way is to use libpq's C functions:
http://www.postgresql.org/docs/8.3/static/lo-interfaces.html

Best regards,
--
Daniel
PostgreSQL-powered mail user agent and storage:
http://www.manitou-mail.org

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ERROR: syntax error at or near "IF"... why?

2009-05-01 Thread Johan Nel

Daniel,


IF (SELECT credit FROM users WHERE name = 'mary') < 0 THEN
 ROLLBACK;
END IF
COMMIT;

i always get the error
ERROR:  syntax error at or near "IF"

Where am i mistaken?


SELECT returns, in essence, a record or a set of records.

DECLARE _credit int;
...
SELECT credit FROM users WHERE name = 'mary' INTO _credit;
IF _credit < 0 THEN
  ROLLBACK;
END IF;

If there is a chance that the select returns more than one record you 
can do something similar to:

DECLARE rec record;
...
FOR rec IN (SELECT credit FROM users WHERE name = 'mary')
LOOP
  IF rec.credit < 0 THEN
...
  ELSE
...
  END IF;
END LOOP;

HTH,

Johan Nel
Pretoria, South Africa.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Do TEMP Tables have an OID? Can this be a problem if used too frequently?

2009-05-01 Thread Raymond O'Donnell
On 30/04/2009 10:01, Phil Couling wrote:

> I've just written a search function which creates a temp table, preforms
> some reasoning on it returning results then drops it again.
> I'm using temp tables in an attempt to gain efficiency (not repeating
> work between one section of the function and another).
> 
> However I'm worried that there may be some pit falls in doing this. I'm
> especially worried about OIDs.

Yes, a temp table does get an OID.

You haven't said what version of PostgreSQL you're on, but one pitfall
in earlier versions (pre-8.3 I think) is because execution plans for
functions are cached, the first call to the function will work fine, but
subsequent calls will attempt to reference the temp table using the old
OID - boom!

The work-around to this is to construct dynamically any queries that
touch the temp table, and then use EXECUTE to run them.

There's a FAQ entry about it here:

http://wiki.postgresql.org/wiki/FAQ#Why_do_I_get_.22relation_with_OID_.23.23.23.23.23_does_not_exist.22_errors_when_accessing_temporary_tables_in_PL.2FPgSQL_functions.3F
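A minimal sketch of that work-around (table and column names made up for illustration):

```sql
CREATE OR REPLACE FUNCTION search_demo() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    -- Everything touching the temp table goes through EXECUTE, so the
    -- statement is re-planned on each call and never caches a stale OID.
    EXECUTE 'CREATE TEMP TABLE tmp_results AS
             SELECT g FROM generate_series(1, 10) AS g';
    EXECUTE 'SELECT count(*) FROM tmp_results' INTO n;
    EXECUTE 'DROP TABLE tmp_results';
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```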

> If so am I right to assume that, if the function is used too frequently,
> it could cause the database to crash by wrapping OIDs?

I'd imagine that this depends on how often the database is VACUUMed.

HTH,

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] ERROR: syntax error at or near "IF"... why?

2009-05-01 Thread DaNieL
Hi guys, im new with postgresql, and already got my first problem..

Well, I wrote some code to understand how transactions work,
following the manual step by step.

To make it short, I've created 2 tables, users and movements: in the
first one there are the name, email and credit columns, in the second
the columns from, to, import.

So, i was triyng that way:

BEGIN;
INSERT INTO movements (from, to, import) VALUES ('mary', 'steve',
600);
UPDATE users SET credit = credit - 600 WHERE name = 'mary';
UPDATE users SET credit = credit + 600 WHERE name = 'steve';
--here comes the problem!
IF (SELECT credit FROM users WHERE name = 'mary') < 0 THEN
 ROLLBACK;
END IF
COMMIT;

i always get the error
ERROR:  syntax error at or near "IF"

Where am i mistaken?

p.s: dont focus on the example functionality, its just a trial for me
to understand the transactions.. and now, the IF clause...

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Any way to execute ad-hoc pl/pgsql?

2009-05-01 Thread Dave Page
On Fri, May 1, 2009 at 6:25 AM, Carlo Stonebanks
 wrote:
> One of our developers asked me, "is there any way to execute arbitrary
> plpgsql"? By that I believe he means: is there some way to execute ad-hoc
> pl/pgsql code without creating a stored procedure or a function?
>
> I believe MS SQL Server can do this - has anyone heard of some sort of
> command shell to do this for PG?
>
> (I suppose one possibility would be something that created a temporary
> stored proc to execute the code, then cleaned up after itself.)

Sounds like you need anonymous blocks, which no, PostgreSQL doesn't support.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understand this error

2009-05-01 Thread Dennis Brakhane
On Thu, Apr 30, 2009 at 3:00 PM, paulo matadr  wrote:
> Hi all,
> my database entered recovery mode,
> analyzing my pg_log I see this:
> system logger process (PID 6517) was terminated by signal 9
> background writer process (PID 6519) was terminated by signal 9
> terminating any other active server processes

You are bitten by the OOM-killer. It can lead to severe data loss if it decides
to kill the postmaster. To avoid this, you should always set vm.overcommit_memory
to 2 (which turns overcommit off). See Section 17.4.3 here:

http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html

You should *never* run a production database server with memory overcommit enabled!
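On Linux that setting looks like this (run as root; see the linked docs for the caveats):

```
sysctl -w vm.overcommit_memory=2
echo 'vm.overcommit_memory = 2' >> /etc/sysctl.conf   # persist across reboots
```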

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 08P01: unexpected EOF on client connection

2009-05-01 Thread Craig Ringer
Tomas Vondra wrote:

> $conn = pg_connect(...);
> $res = pg_query("SELECT mime, thumbnail_data FROM images WHERE filename
> = ");
> $row = pg_fetch_assoc($res);
> header('Content-Type: ' . $row['mime']);
> echo pg_unescape_bytea($row['thumbnail_data']);

PHP?

Try running your script in a PHP debugger (from the command line or, if
your server supports it, while hosted) and see at what point exactly
things go pear shaped. Check your web server error log for any client
library or PHP errors, too.

Even just adding a bunch of logging statements into your script then
looking at your web server error log might help you a little.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not bind IPv4 socket

2009-05-01 Thread Craig Ringer
Greg Smith wrote:

> Normal practice here is to set:
> 
> listen_address='*'
> 
> So that the server is remotely accessible from all of its interfaces,
> and then you can do all filtering of who can connect just via
> pg_hba.conf instead.

Just to expand on that:

listen_addresses is usually used if you want to do things like:

- Have the database running on a gateway host that's directly connected
to the Internet, but only permit connections to the database on the LAN
side, so even pre-auth exploits can't attack the database server from
the Internet side;

- Lock a cluster down to only be accessible from the local host, so
remote hosts can't even see it's running and can't attempt to talk to it; or

- Run multiple clusters on one host, each one bound to a different IP
address on the usual PostgreSQL port, so the machine hosting the
clusters looks like it's really multiple separate machines to outside users.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Importing large objects from the client side programatically.

2009-05-01 Thread Andrew Maclean
I am using C++ and trying to programatically import a large object
from the client side into a server. I am using QT and, for the server
side I can pass a command like:
insert into x values('x1',lo_import('c:/temp/x1.txt'));
Which works.

However this will not work from the client side.
For the client side there is a psql function \lo_import that works if
I run psql and do something like:
\lo_import 'c:/temp/x1.txt'
insert into x values('x1',:LASTOID);

I understand the function \lo_import first sends the data to the
server and then creates the blob returning the oid of it.

How can I do this in an SQL query?

Do I have to write a function on the server in psql using commands like
\lo_import 'c:/temp/x1.txt'
insert into x values('x1',:LASTOID);

and then call this in an sql statement?

Will this work or what do people usually do?

Thanks for any help


Andrew



-- 
___
Andrew J. P. Maclean
Centre for Autonomous Systems
The Rose Street Building J04
The University of Sydney  2006  NSW
AUSTRALIA
Ph: +61 2 9351 3283
Fax: +61 2 9351 7474
URL: http://www.acfr.usyd.edu.au/
___

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump and pg_restore problem

2009-05-01 Thread Michele Petrazzo - Unipex

Hi all,
I had a big problem that drove me crazy... I want to simply back up some
tables from my db and restore them to another.

My db:
table_two(
id serial PRIMARY KEY,
...
)
table_one (
id serial PRIMARY KEY,
real_name text NOT NULL,
username text NOT NULL,
id_table_two integer REFERENCES table_two(id),
...
)
table_three (
id serial PRIMARY KEY,
id_user integer REFERENCES table_one (id) NOT NULL,
)...

To do this, I use pg_dump in two passes:
pg_dump --format=c --schema-only my_db > file_schema
pg_dump --format=c -a -t table_one -t table_two -t table_three my_db > file_data

when I, on another host (same 8.3.7 version), run pg_restore I
receive a surprising reply:

(drop db && create db && pg_restore -d my_db file_schema &&
pg_restore -d my_db file_data)

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1877; 0 16395 TABLE
DATA table_three my_db
pg_restore: [archiver (db)] COPY failed: ERROR:  insert or update on
table "table_three" violates foreign key constraint
"table_three_id_user_fkey"
DETAIL:  Key (id_user)=(644) is not present in table "table_one".
WARNING: errors ignored on restore: 1

On the 2nd host, with a "select id from table_one where id=644" I see
the data is there! And in table_three (of course?) there is no data...

Switching pg_dump to the standard sql format, inside the sql file I see
that the table_three rows with id_user=644 are among the first of the
exported records.

The only solution that I found is to export with pg_dump all my
tables except table_three and, afterwards, export only that one and
import it alone on the other host...

Is this normal behavior, a "missing feature", or...?
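A work-around documented in the pg_restore manual for data-only restores is --disable-triggers, which suppresses foreign key checking while the data loads (superuser required) — something like:

```
pg_restore -d my_db --disable-triggers file_data
```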

Thanks,
Michele


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Understand this error

2009-05-01 Thread Craig Ringer
paulo matadr wrote:
> Hi all,
> my database entered recovery mode,
> analyzing my pg_log I see this:
> 
> system logger process (PID 6517) was terminated by signal 9
> background writer process (PID 6519) was terminated by signal 9
> terminating any other active server processes

You haven't told us what OS you are on. Based on the log below, though,
it looks like Linux.

`kill -l' on Linux tells us that signal 9 is SIGKILL, a hard kill. That
should only happen if (a) you send it with `kill -9' or `kill -KILL' or
(b) the machine runs out of memory while in overcommit mode (the
default) and the OOM killer picks PostgreSQL as the process to terminate
to free memory.

You should NOT have your server in overcommit mode if you are running
PostgreSQL. See, in the PostgreSQL manual:

http://www.postgresql.org/docs/current/static/kernel-resources.html#AEN22235

>  kernel:  [] out_of_memory+0x53/0x267
[snip]
> kernel: Out of memory: Killed process 6519 (postmaster).

> How do I prevent postgres using all the memory of the system? Why does this happen?

Read the link in the PostgreSQL manual, above.

Note that it's not very likely that PostgreSQL was the process that used
up all your memory. It was just unlucky enough to be picked as the one
to be killed, because the OOM killer is terrible at estimating which
process is using the most memory when programs like PostgreSQL have
allocated large blocks of shared memory.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Any way to execute ad-hoc pl/pgsql?

2009-05-01 Thread Carlo Stonebanks
One of our developers asked me, "is there any way to execute arbitrary 
plpgsql"? By that I believe he means: is there some way to execute ad-hoc
pl/pgsql code without creating a stored procedure or a function?


I believe MS SQL Server can do this - has anyone heard of some sort of
command shell to do this for PG?


(I suppose one possibility would be something that created a temporary 
stored proc to execute the code, then cleaned up after itself.)


Carlo 



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] retrieving primary key for row with MIN function

2009-05-01 Thread Erick Papadakis
Could you end the query with a "LIMIT 1"?


SELECT h.id AS host_id, MIN(r.start_date) AS reservation_start_date, r.id AS
reservation_id
FROM hosts h
LEFT OUTER JOIN reservation_hosts rh ON rh.host_id = h.id
LEFT OUTER JOIN reservation r ON r.id = rh.reservation_id AND
(r.start_date, r.end_date) OVERLAPS ('2009-04-29'::date,
'2010-04-29'::date)
GROUP BY h.id, r.id
ORDER BY reservation_start_date ASC
LIMIT 1
;
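Another idiom worth a look here is PostgreSQL's DISTINCT ON, which keeps exactly one row per host — the first one in the ORDER BY — so the reservation id comes from the same row as the minimum date (a sketch, untested against your schema):

```sql
SELECT DISTINCT ON (h.id)
       h.id AS host_id,
       r.start_date AS reservation_start_date,
       r.id AS reservation_id
FROM hosts h
LEFT OUTER JOIN reservation_hosts rh ON rh.host_id = h.id
LEFT OUTER JOIN reservation r ON r.id = rh.reservation_id
       AND (r.start_date, r.end_date) OVERLAPS ('2009-04-29'::date,
                                                '2010-04-29'::date)
ORDER BY h.id, r.start_date ASC;
```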



On Wed, Apr 29, 2009 at 8:30 PM, Marcin Krol  wrote:
> Hello everyone,
>
> I need to retrieve PK (r.id in the query) for row with MIN(r.start_date),
> but with a twist: I need to select only one record, the one with minimum
> date.
>
> Doing it like this does not solve the problem:
>
> SELECT h.id AS host_id, MIN(r.start_date) AS reservation_start_date, r.id AS
> reservation_id
> FROM hosts h
> LEFT OUTER JOIN reservation_hosts rh ON rh.host_id = h.id
> LEFT OUTER JOIN reservation r ON r.id = rh.reservation_id AND (r.start_date,
> r.end_date) OVERLAPS ('2009-04-29'::date, '2010-04-29'::date)
> GROUP BY h.id, r.id
> ORDER BY reservation_start_date ASC
>
> I have to use either GROUP BY r.id or use MIN(r.id). MIN(r.id) doesn't
> select the id from the row with corresponding MIN(r.start_date), so it's
> useless, while GROUP BY r.id produces more than one row:
>
> host_id reservation_start_date  reservation_id
> 361     2009-05-11              38
> 361     2009-05-17              21
>
> I need to select only row with reservation_id = 38.
>
> I would rather not do subquery for every 'host' record, since there can be a
> lot of them...
>
> Regards,
> mk
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Time zone HADT timestamp syntax error in trigger

2009-05-01 Thread Tom Lane
John Smithus  writes:
> [ 'HADT' is not recognized as a timezone abbreviation ]
> The server is running PostgreSQL 8.3.7 on an AMD64 Gentoo Linux
> machine. The system time zone is set to 'America/Adak' and datestyle
> is set to 'sql, mdy' in postgresql.conf.

Not every timezone abbreviation in the world is recognized by default.
You'll want to add HADT, as well as whatever the standard-time abbreviation
for that zone is, to Postgres's list of recognized abbreviations.  See
http://www.postgresql.org/docs/8.3/static/datetime-config-files.html

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] recover corrupt DB?

2009-05-01 Thread Craig Ringer

Peter Eisentraut wrote:

On Thursday 23 April 2009 18:30:27 Dan Armbrust wrote:

I had a test system (read as not backed up, sigh) which had the disk
go full while PostgreSQL was loaded, consequently, PostgreSQL will no
longer start.

It is logging an error about detecting an invalid shutdown, trying to
replay something, and then an error about not being able to open a
file it is looking for.


Knowing what file would help analyze this.  In general, pg_resetxlog would be 
the tool to try here.  Don't panic yet. ;-)


I've been wondering about this for a while. Why does Pg end up with the 
database in an unusable, unrecoverable state after a disk-full error? Is 
there no way it can efficiently defend against issues writing to the 
WAL? Is it, in fact, issues with appending to the current WAL segment
that are the problem anyway?


This may come up even on fairly well managed databases if users have 
direct access. To me, with a largely user-and-admin perspective, it 
seems like something that really should be handled a bit more cleanly.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] triggers and execute...

2009-05-01 Thread Alban Hertroys

On Apr 29, 2009, at 4:20 AM, Scott Marlowe wrote:


Oh man, it just gets worse.  I really need a simple elegant solution
here, because if I try to build the query by hand null inputs make
life a nightmare.  I had built something like this:

q = 'insert into '||schem||'.page_access_'||part||' values (
   '||new.paid||',
   '''||new.timestamp||''',
   '||new.total_time||',
   '''||new.http_host||''',
   '''||new.php_self||''',
   '''||new.query_string||''',
   '''||new.remote_addr||''',
   '''||new.logged_in||''',
   '||new.uid||',
   '''||new.http_user_agent||''',
   '''||new.server_addr||''',
   '''||new.notes||'''
   )';
   execute q;

But if any of the fields referenced are null, the whole query string
is now null.  So the next step is to use coalesce to build a query
string?  That gets insane very quickly.  There's got to be some quoting
trick or something to let me use new.*, please someone see this and
know what that trick is.



I think you could do this if you were using a PL language that
supported reflection (on the NEW object's type in this case). I can't
say I know which one does though; I've only been using PL/pgsql so
far, but I'd guess PL/Python, PL/Perl or PL/Java should be able to do
the trick. Or plain C.

AFAIK there's no way to dynamically list column names from a
table-type variable like NEW in PL/pgsql, which is why the above
probably can't be done any easier using PL/pgsql. It would be nice to
be able to LOOP over a variable like that or some similar method (I
guess a more relational approach where the columns would be available
as a result set would be preferred), especially if it'd be similarly
easy to inspect the name and type of each column.


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Tracking down a deadlock

2009-05-01 Thread Bill Moseley

I need a bit of help understanding what might be causing a deadlock.

To duplicate the problem I'm running a test script that forks two
child processes.  Each child runs the same transaction and thus the
order of execution is exactly the same.  (i.e. not like the typical
deadlock where the order of updates might be reversed between two
sessions.)

The transaction inserts a new document into a document management
system.  The transaction does a number of selects and inserts.  At the
end of the transaction they both try and update the same row in the
"account" table.


It does not happen every time I run my test script -- but if I run it
enough I get a deadlock.  If I fork more child process I can make it
happen more often.  So, it does seem like a timing issue.


No explicit LOCK or SELECT FOR UPDATE is used in the transaction.
I'm running in the default "read committed" isolation level.
The initial problem was reported on PostgreSQL 8.3.5, but
I'm now testing on PostgreSQL 8.2.9.



I've set my deadlock_timeout high so I can review the locks.
I see these entries:


select * from pg_locks where not granted;

   locktype    | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |     mode      | granted
---------------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+---------------+---------
 transactionid |          |          |      |       |      18410123 |         |       |          |    18410135 | 13420 | ShareLock     | f
 tuple         |  2474484 |  2474485 |   30 |    11 |               |         |       |          |    18410123 | 13419 | ExclusiveLock | f
(2 rows)

select * from pg_locks where locktype='tuple';

 locktype | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |     mode      | granted
----------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+---------------+---------
 tuple    |  2474484 |  2474485 |   30 |    11 |               |         |       |          |    18410135 | 13420 | ExclusiveLock | t
 tuple    |  2474484 |  2474485 |   30 |    11 |               |         |       |          |    18410123 | 13419 | ExclusiveLock | f
(2 rows)


And pg_stat_activity shows two of the exact same queries in "waiting"
state.  The "current_query" is just:

UPDATE account set foo = 123 where id = $1

and $1 is indeed the same for both.


If I comment out that update to the "account" table from the
transaction I never get a deadlock.



Maybe I'm missing something, but that by itself doesn't seem like a
deadlock situation.

The "account" table does have a number of constraints, and one looks
like:

CHECK( ( foo + bar ) <= 0 );

Could those be responsible?  For a test I dropped all the constraints
(except foreign keys) and I'm still getting a deadlock.

In general, do the constraints need to be deferrable and then defer
constraints at the start of the transaction?

What else can I do to debug?


Thanks,


-- 
Bill Moseley
mose...@hank.org
Sent from my iMutt


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] triggers and execute...

2009-05-01 Thread Scott Marlowe
On Tue, Apr 28, 2009 at 10:46 PM, David Fetter  wrote:
> On Tue, Apr 28, 2009 at 08:20:34PM -0600, Scott Marlowe wrote:
>> On Mon, Apr 27, 2009 at 3:24 PM, Richard Broersma
>>  wrote:
>> > On Mon, Apr 27, 2009 at 1:32 PM, Scott Marlowe  
>> > wrote:
>> >> OK, I'm hitting a wall here.  I've written this trigger for partitioning:
>> >>
>> >> create or replace function page_access_insert_trigger ()
>> >> returns trigger as $$
>> >> DECLARE
>> >>        part text;
>> >>        q text;
>> >> BEGIN
>> >>        part = to_char(new."timestamp",'YYYYMMDD');
>> >>        q = 'insert into page_access_'||part||' values (new.*)';
>> >> ...
>> >>
>> >> When I create it and try to use it I get this error:
>> >> ERROR:  NEW used in query that is not in a rule
>> >> CONTEXT:  SQL statement "insert into page_access_20090427 values (new.*)"
>> >
>> > At this point I don't think that there is a way for this function to
>> > know the correct table type of new.* since page_access_... is still
>> > only a concatenated string.  There there a way to cast new.* to the
>> > correct table type as part of this insert statement?
>>
>> Oh man, it just gets worse.  I really need a simple elegant solution
>> here, because if I try to build the query by hand null inputs make
>> life a nightmare.  I had built something like this:
>>
>> q = 'insert into '||schem||'.page_access_'||part||' values (
>>                 '||new.paid||',
>>                 '''||new.timestamp||''',
>>                 '||new.total_time||',
>>                 '''||new.http_host||''',
>>                 '''||new.php_self||''',
>>                 '''||new.query_string||''',
>>                 '''||new.remote_addr||''',
>>                 '''||new.logged_in||''',
>>                 '||new.uid||',
>>                 '''||new.http_user_agent||''',
>>                 '''||new.server_addr||''',
>>                 '''||new.notes||'''
>>         )';
>>         execute q;
>>
>> But if any of the fields referenced are null, the whole query string
>> is now null.  So the next step is to use coalesce to build a query
>> string?  That gets insane very quickly.  There's got to be some
>> quoting trick or something to let me use new.*, please someone see
>> this and know what that trick is.
>
> Well, you can add in piles of COALESCE, but that way madness lies.
>
> Instead, use dollar quoting, the appropriate quote_*() functions, and
> this:
>
> http://wiki.postgresql.org/wiki/PL/pgSQL_Dynamic_Triggers

Thanks so much!  I'm off to read up on it.  Dollar quoting, quote()
and the wiki.  Thanks again.
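For the archives: from 8.4 on, the NULL-quoting problem disappears entirely if the row is passed as a parameter with EXECUTE ... USING instead of being spliced into the query string — a sketch against the same (assumed) page_access partitioning scheme:

```sql
CREATE OR REPLACE FUNCTION page_access_insert_trigger()
RETURNS trigger AS $$
DECLARE
    part text := to_char(NEW."timestamp", 'YYYYMMDD');
BEGIN
    -- $1 is bound to the whole NEW row; NULL fields pass through untouched
    EXECUTE 'INSERT INTO page_access_' || part || ' SELECT ($1).*' USING NEW;
    RETURN NULL;  -- the row has gone to the child table, skip the parent
END;
$$ LANGUAGE plpgsql;
```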

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Do TEMP Tables have an OID? Can this be a problem if used too frequently?

2009-05-01 Thread Phil Couling

Hi
I've just written a search function which creates a temp table, performs
some reasoning on it returning results, then drops it again.
I'm using temp tables in an attempt to gain efficiency (not repeating
work between one section of the function and another).


However I'm worried that there may be some pitfalls in doing this. I'm
especially worried about OIDs.


Does creating a temp table assign an OID to the table?
If so am I right to assume that, if the function is used too frequently,
it could cause the database to crash by wrapping OIDs?


Thanks very much for your time
Phil

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Two Questions Re: Warm Backup

2009-05-01 Thread Terry Lee Tucker
Greetings:

We are researching implementing a warm backup solution for our existing 
databases. We have a two node cluster running RH which are connected to a 
SAN. There is a total of 11 database clusters with the two node linux cluster 
balancing the load. At the moment, we are not doing any WAL archiving.
exp=# select version();
                                        version
--------------------------------------------------------------------------------
 PostgreSQL 8.3.3 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 
20070626 (Red Hat 4.1.2-14)
(1 row)

Q1: Can we set up a scenario where there is more than one warm standby? I want
a warm standby locally and one that is hundreds of miles away connected with 
a T4 data circuit.

Q2: Am I correct in assuming that ALL changes to any of the production schema 
will be written to the warm standby? For example, if I drop a constraint in 
production I assume the same will occur on the warm standby. If I create and 
drop a table in production, I assume it will occur on the warm standby.

TIA
-- 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Export Data from one DB and Import into a new DB

2009-05-01 Thread Stefan Sturm
Hello,

we are changing the structure of our database for a new release.
Now we need to export large amounts of data and import it into the new
db (with a new structure).

Are there any tools (for OS X) to support me in doing this?

Thanks for your help,
Stefan Sturm

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general