Re: [GENERAL] Out of memory error on pg_restore

2006-03-10 Thread Guillaume Lelarge
On 8 Mar 2006 07:31:19 -0800, Nik [EMAIL PROTECTED] wrote:
 [...]
 psql: ERROR: out of memory
 DETAIL: Failed on request of size 32.


I also get this kind of error (out of memory) while restoring objects
into my database. I used an 8.1.2 pg_dump against a 7.1.1 PostgreSQL
server; the dump is approximately 20 GB. I restore it using the 8.1.2
pg_restore on an 8.1.2 PostgreSQL server, and I finally get an out of
memory error. I don't think there is a trigger on pg_largeobject.

What could be the cause of this? I will try with the 8.1.3 release,
but I don't see a related fix in the release notes.

Thanks.


--
Guillaume.



Re: [GENERAL] PL/pgSQL question

2006-03-10 Thread ycrux
Hi all!
First of all, a big thanks: your suggestions work fine.

I hope to improve my understanding of the SETOF return type a little.
I now have two questions.

1) I would like to return some columns from one table in a PL/pgSQL function.
What is the correct return type of the PL/pgSQL function in this case? This is
pseudo-code for my first problem:


CREATE FUNCTION function_name (sometypes) RETURNS return_type AS $$
FOR some_type IN SELECT column1, column3 FROM tablename WHERE some_conditions 
LOOP
  RETURN NEXT some_type;
END LOOP;
RETURN;
$$ LANGUAGE 'plpgsql' STABLE;

What should return_type and some_type be in this case?

2) The next problem is almost the same as above, but now I would like to return
different columns from different tables.
What is the correct return type of the PL/pgSQL function in this case?
This is pseudo-code for my second problem:


CREATE FUNCTION function_name (sometypes) RETURNS return_type AS $$
FOR some_type IN SELECT column1_table1, column17_table2 FROM table1, table2 
WHERE some_conditions 
LOOP
  RETURN NEXT some_type;
END LOOP;
RETURN;
$$ LANGUAGE 'plpgsql' STABLE;


Thanks in advance
Younes



Original message
To: Ycrux [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PL/pgSQL question
Date: Thu, 09 Mar 2006 19:25:52 -0500
From: Tom Lane [EMAIL PROTECTED]

Ycrux [EMAIL PROTECTED] writes:
 # SELECT grantAccess('sara', 'sarapass');
 ERROR:  set-valued function called in context that cannot accept a set

You need to do SELECT * FROM grantAccess(...).  This is a plpgsql
implementation restriction that we'll probably try to fix someday,
although there's also a school of thought that says that set-returning
functions in the SELECT targetlist are a bad idea and should be phased
out.
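
For instance, with a hypothetical set-returning function (list_logins below is
invented purely to illustrate the calling pattern; it is not the poster's
grantAccess):

-- Hypothetical SETOF function; the real grantAccess() body is not shown here.
CREATE OR REPLACE FUNCTION list_logins(p_user text)
RETURNS SETOF text AS $$
BEGIN
    RETURN NEXT p_user || '@host1';
    RETURN NEXT p_user || '@host2';
    RETURN;
END;
$$ LANGUAGE plpgsql STABLE;

-- Works: call the set-returning function in the FROM clause.
SELECT * FROM list_logins('sara');

-- Fails for a plpgsql set-returning function with the error quoted above:
-- SELECT list_logins('sara');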

   regards, tom lane







[GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Clive Page
I have a table cov3 of about 3 million rows, with a B-tree index on an 
integer column called hpix.  If I do a simple select on this column it 
works in milliseconds, using the index naturally:


select * from cov3 where hpixint = 482787587;
    hpix    |  expos  |  hpixint
------------+---------+-----------
  482787587 | 30529.6 | 482787587
(1 row)

The problem is that I want to use a user-defined function called healpix 
which returns a single integer value in my queries; the function details 
are unlikely to be relevant (it selects a pixel from a celestial 
position), but its definition is:


 \df healpix
                              List of functions
 Schema |  Name   | Result data type |        Argument data types
--------+---------+------------------+-------------------------------------
 public | healpix | integer          | double precision, double precision

So I would like to use this function to find rows, and I try for example:

select * from cov3 where hpix = healpix(2.85,-11.48);

but it takes ages.  An EXPLAIN shows why, it insists upon a sequential 
scan:


explain select * from cov3 where hpix = healpix(2.85,-11.48);
                                      QUERY PLAN
------------------------------------------------------------------------------------------
 Seq Scan on cov3  (cost=0.00..93046.81 rows=1 width=20)
   Filter: (hpix = (healpix(2.85::double precision, -11.48::double precision))::text)

Does anyone have any idea why, or know how I can restore adequate 
performance?


I am using Postgres 8.1.0 on Linux.

--
Clive Page
Dept of Physics  Astronomy,
University of Leicester, 
Leicester, LE1 7RH,  U.K.





Re: [GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Martijn van Oosterhout
On Fri, Mar 10, 2006 at 09:14:27AM +, Clive Page wrote:
 I have a table cov3 of about 3 million rows, with a B-tree index on an 
 integer column called hpix.  If I do a simple select on this column it 
 works in milliseconds, using the index naturally:

snip

 So I would like to use this function to find rows, and I try for example:
 
 select * from cov3 where hpix = healpix(2.85,-11.48);
 
 but it takes ages.  An EXPLAIN shows why, it insists upon a sequential 
 scan:

snip

You don't describe the exact structure of your table nor the exact
declaration of your function, but is it possible your function is marked
VOLATILE rather than STABLE or IMMUTABLE?
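
If so, declaring the volatility when you (re)create the function should let
the planner use the index. A minimal sketch, with a placeholder body since the
real healpix code isn't shown here:

-- Placeholder body: the real healpix() computes a pixel number from a
-- celestial position; this stand-in only illustrates the IMMUTABLE marking.
CREATE OR REPLACE FUNCTION healpix_demo(ra_deg double precision,
                                        dec_deg double precision)
RETURNS integer AS $$
BEGIN
    RETURN 482787587;
END;
$$ LANGUAGE plpgsql IMMUTABLE;  -- same inputs always give the same output

-- With an IMMUTABLE (or STABLE) function the planner can evaluate the call
-- once and then use the B-tree index on the integer column:
-- EXPLAIN SELECT * FROM cov3 WHERE hpix = healpix_demo(2.85, -11.48);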

 I am using Postgres 8.1.0 on Linux.

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Michael Fuhr
On Fri, Mar 10, 2006 at 09:14:27AM +, Clive Page wrote:
 I have a table cov3 of about 3 million rows, with a B-tree index on an 
 integer column called hpix.  If I do a simple select on this column it 
 works in milliseconds, using the index naturally:
 
 select * from cov3 where hpixint = 482787587;
                          ^^^^^^^
That's not the column you said you were using and that your other
example uses.  Have you verified that hpix has an index?

 So I would like to use this function to find rows, and I try for example:
 
 select * from cov3 where hpix = healpix(2.85,-11.48);
 
 but it takes ages.  An EXPLAIN shows why, it insists upon a sequential 
 scan:

If healpix() always returns the same output for given input then
define it to be IMMUTABLE.  For more information see Function
Volatility Categories in the documentation.

http://www.postgresql.org/docs/8.1/interactive/xfunc-volatility.html

-- 
Michael Fuhr



Re: [GENERAL] Storage Estimates

2006-03-10 Thread Richard Huxton

Nik wrote:

Is there any documentation or literature on storage estimation for
PostgreSQL 8.1 on Windows?


At times like this I always start with the manuals, then follow with 
mailing-list searches.


The section on Database Physical Storage is probably a good place to 
start.

  http://www.postgresql.org/docs/8.1/static/storage.html

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Richard Huxton

Clive Page wrote:
I have a table cov3 of about 3 million rows, with a B-tree index on an 
integer column called hpix.  If I do a simple select on this column it 
works in milliseconds, using the index naturally:


select * from cov3 where hpixint = 482787587;
    hpix    |  expos  |  hpixint
------------+---------+-----------
  482787587 | 30529.6 | 482787587
(1 row)



This doesn't show any index being used. EXPLAIN ANALYSE would have.

The problem is that I want to use a user-defined function called healpix 
which returns a single integer value in my queries; the function details 
are unlikely to be relevant (it selects a pixel from a celestial 
position), but its definition is:


 \df healpix
                              List of functions
 Schema |  Name   | Result data type |        Argument data types
--------+---------+------------------+-------------------------------------
 public | healpix | integer          | double precision, double precision



select * from cov3 where hpix = healpix(2.85,-11.48);

but it takes ages.  An EXPLAIN shows why, it insists upon a sequential 
scan:


explain select * from cov3 where hpix = healpix(2.85,-11.48);
                                      QUERY PLAN
------------------------------------------------------------------------------------------
 Seq Scan on cov3  (cost=0.00..93046.81 rows=1 width=20)
   Filter: (hpix = (healpix(2.85::double precision, -11.48::double precision))::text)


Does anyone have any idea why, or know how I can restore adequate 
performance?


Do you understand the difference between the IMMUTABLE, STABLE, and VOLATILE
attributes for functions?


http://www.postgresql.org/docs/8.1/static/sql-createfunction.html

However, in the example above the real problem is that the query using 
an index tests against hpixint whereas your function compares against 
hpix. Make sure you're testing against the same column, then post back.


--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] PL/pgSQL question

2006-03-10 Thread Richard Huxton

[EMAIL PROTECTED] wrote:

Hi All!
First of all, a great Thanks, your suggestions works fine.

I'll hope to enhance a little bit my understanding of SETOF return type.
I have now two problems.

1) I would like to return some columns from one table in PL/pgSQL function. 
What's in this case the correct return type of the PL/pgSQL function. This is a 
pseudo-code for my first problem:


CREATE FUNCTION function_name (sometypes) RETURNS return_type AS $$
FOR some_type IN SELECT column1, column3 FROM tablename WHERE some_conditions 
LOOP

  RETURN NEXT some_type;
END LOOP;
RETURN;
$$ LANGUAGE 'plpgsql' STABLE;

What should return_type and some_type be in this case?


Depends on what column1 and column3 are. See the manuals for CREATE TYPE.
If column1 were int4 and column3 a date, you'd do something like:
  CREATE TYPE return_type AS (
a int4,
b date
  );

some_type is a variable, not a type definition, although you'd probably
declare it to be of type return_type.


Oh, and it should be ... RETURNS SETOF return_type
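
Putting that together, a minimal self-contained sketch (the table, its columns,
and the p_limit parameter are invented just for illustration):

-- Invented example table, composite type, and function for illustration.
CREATE TABLE tablename (
    column1 int4,
    column2 text,
    column3 date
);

CREATE TYPE return_type AS (
    a int4,
    b date
);

CREATE OR REPLACE FUNCTION function_name(p_limit int4)
RETURNS SETOF return_type AS $$
DECLARE
    some_type return_type;   -- row variable of the composite type
BEGIN
    FOR some_type IN
        SELECT column1, column3 FROM tablename WHERE column1 < p_limit
    LOOP
        RETURN NEXT some_type;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql STABLE;

-- Call it in the FROM clause:
-- SELECT * FROM function_name(100);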


2) The next problem is almost the same as above, but now I would like to return
different columns from different tables.
What is the correct return type of the PL/pgSQL function in this case?
This is pseudo-code for my second problem:


CREATE FUNCTION function_name (sometypes) RETURNS return_type AS $$
FOR some_type IN SELECT column1_table1, column17_table2 FROM table1, table2 WHERE some_conditions 
LOOP

  RETURN NEXT some_type;
END LOOP;
RETURN;
$$ LANGUAGE 'plpgsql' STABLE;


Same difference, but you would change your type definition.

--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Clive Page

On Fri, 10 Mar 2006, Martijn van Oosterhout wrote:


You don't describe the exact structure of your table nor the exact
declaration of your function, but is it possible your function is marked
VOLATILE rather than STABLE or IMMUTABLE?


Thanks for that hint - my function was not marked in any way, so I guess it
defaulted to VOLATILE.  I have just marked it as IMMUTABLE and it now uses
the index as expected, with a huge performance gain.  I confess that I was
totally ignorant of the differences between these three types of function.


Sorry, I slightly messed up the cut/paste of my posting: I had been
experimenting with various versions of the same table and didn't quite keep
the details consistent in what I posted.


Thanks also to Richard Huxton and Martijn van Oosterhout, who gave me the
same hint.


What an excellent support group this is.

--
Clive Page
Dept of Physics  Astronomy,
University of Leicester, 
Leicester, LE1 7RH,  U.K.





[GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Emi Lu

Good morning,

In a PL/pgSQL function, I am trying to insert 900,000 records into
several tables. I remember people mentioning before that it is better
and more efficient to commit, let's say, every 5000 inserted records.

Could I get more input on why and how this kind of commit can speed up
the operation, please?


Thanks a lot,
Ying




[GENERAL] panic during pgsql startup

2006-03-10 Thread igor

Hi All,

I can't start postgresql. I'm getting:

PANIC:  failed to re-find parent key in 17497

in serverlog.

It's PostgreSQL v8.1.3 on Fedora Core 4, with a ~15 GB database. Is there any
way to dump my data and/or fix it? I do have a backup, and this is probably a
problem with my hard disk or memory, but it would still be interesting to know
what options I have in such a situation.

--
Best Regards,
Igor Shevchenko





Re: [GENERAL] panic during pgsql startup

2006-03-10 Thread Tom Lane
[EMAIL PROTECTED] writes:
 I can't start postgresql. I'm getting:
 PANIC:  failed to re-find parent key in 17497
 in serverlog.

Is this happening during WAL replay?  If so, you could probably get
the database to start by doing pg_resetxlog.  No guarantees about how
consistent your data will be afterwards :-( ... but with a little luck
you'll be able to run pg_dump and compare with your backup.  I'd
definitely recommend an initdb and reload after you get a dump that
seems sane.

regards, tom lane



[GENERAL] About updates

2006-03-10 Thread Emi Lu

Hello,

PostgreSQL 8.0.1, in a PL/pgSQL function.

To update column values in a table (without OIDs), I can run either:
1. update table1 set col1 = ..., col2 = ..., ..., col100 = ...

or
2.
update table1 set col1 = ...
...
update table1 set col100 = ...

Way 1 incurs only one round of disk I/O, right? While way 2 is more time
consuming, since there is disk I/O each time an update is run.


Thanks a lot,
Ying





Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Bruno Wolff III
On Fri, Mar 10, 2006 at 09:36:16 -0500,
  Emi Lu [EMAIL PROTECTED] wrote:
 Good morning,
 
 In a plpgsql function, I am trying to insert 900, 000 records into 
 several tables. I remembered people mentioned before that it is better 
 and more efficient to commit actions for let's say every 5000 records' 
 insertion.

You can't do commits inside of a function. I think you are misremembering
advice about not doing inserts with one transaction per row, which has a lot
of overhead for all of the commits.

You can do savepoints inside of a function, but those are going to slow things
down, not speed them up.
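
The usual pattern, if the inserts are issued from the client rather than from
inside a function, is one explicit transaction per batch instead of one per
row; a rough sketch (table name invented):

-- Invented target table for illustration.
CREATE TABLE target_table (id int, note text);

-- One transaction per batch of rows, issued by the client (psql, JDBC, ...),
-- rather than one implicit transaction per INSERT.
BEGIN;
INSERT INTO target_table VALUES (1, 'row 1');
INSERT INTO target_table VALUES (2, 'row 2');
-- ... the rest of the batch ...
COMMIT;

BEGIN;
INSERT INTO target_table VALUES (5001, 'row 5001');
-- ... next batch ...
COMMIT;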



Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Emi Lu

Hi Bruno,

You can't do commits inside of a function. 



The example I have is:

CREATE OR REPLACE function test() returns boolean AS $$
DECLARE
... ...
  counterINTEGER := 0;
BEGIN
... ...
  query_value := ' .' ;
  OPEN curs1 FOR EXECUTE query_value;
  LOOP
 FETCH curs1 INTO studid;
 EXIT WHEN NOT FOUND;

 query_value := ' INSERT INTO ... ...';

 EXECUTE query_value  ;


 counter := counter + 1 ;
 IF counter%5000 = 0 THEN
counter := 0;
COMMIT;
 END IF;

  END LOOP;


  CLOSE curs1;  
...

END;

... ...

The above function works OK.

By "can't do commits inside of a function", do you mean that although the
function compiled OK and ran successfully, it did not really commit the
insertions at every 5000 records?



I think you are misremembering advice about not do inserts with a transaction 
per row which will have
a lot of overhead for all of the commits.







Re: [GENERAL] About updates

2006-03-10 Thread A. Kretschmer
On 2006-03-10 at 10:46:39 -0500, Emi Lu wrote the following:
 Hello,
 
 postgresql 8.0.1, in a plpgsql function
 
 To update columns' values in a table (without OID), if I ran:
 1. update table1 set col1 = ..., col2 = ... ... col100 =
 
 or
 2.
 update table1 set col1 = 
 ...
 update table1 set col100 = 
 
 way 1 only has one disk I/O, right? While way 2 is more time consuming 
 since there is disk I/O when a update is ran.

Because of MVCC, every UPDATE is in practice a DELETE + INSERT.
With way 1 you have only one DELETE+INSERT; with way 2 you have one hundred,
and you are left with 100 dead row versions until the next VACUUM.
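
A small sketch of the difference (table and values invented):

-- Invented three-column table for illustration.
CREATE TABLE demo_t (col1 int, col2 int, col3 int);
INSERT INTO demo_t VALUES (1, 1, 1);

-- Way 1: one UPDATE, so only one new row version is written.
UPDATE demo_t SET col1 = 10, col2 = 20, col3 = 30;

-- Way 2: three UPDATEs, so three new row versions; each one leaves the
-- previous version behind as a dead row until VACUUM.
UPDATE demo_t SET col1 = 10;
UPDATE demo_t SET col2 = 20;
UPDATE demo_t SET col3 = 30;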


HTH, Andreas
-- 
Andreas Kretschmer(Kontakt: siehe Header)
Heynitz:  035242/47215,  D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
 ===Schollglas Unternehmensgruppe=== 



Re: [GENERAL] About updates

2006-03-10 Thread Emi Lu
Thanks Andreas. That was a quick response. 
So way 1 must be quicker.





On 2006-03-10 at 10:46:39 -0500, Emi Lu wrote the following:
 


Hello,

postgresql 8.0.1, in a plpgsql function

To update columns' values in a table (without OID), if I ran:
1. update table1 set col1 = ..., col2 = ... ... col100 =

or
2.
update table1 set col1 = 
...
update table1 set col100 = 

way 1 only has one disk I/O, right? While way 2 is more time consuming 
since there is disk I/O when a update is ran.
   



Because of MVCC, every UPDATE is in practice a DELETE + INSERT.
With way 1 you have only one DELETE+INSERT; with way 2 you have one hundred,
and you are left with 100 dead row versions until the next VACUUM.


HTH, Andreas
 






[GENERAL] programatic database dump

2006-03-10 Thread Tomi NA
I'd like to dump a postgresql database from my (java) app and copy the dump
file to the client machine. This backup strategy proved invaluable in the past
(given enough room on the hard drives, which I have) and I'd like to implement
it now with postgresql.
Is there something like a system stored procedure that does something like that
which I can use? Calling pg_dump seems like a bad hack: I'd like to keep
communication at the java-sql level if possible. I'll probably bear it, but I'd
like to check whether I've missed something first.
TIA,
Tomislav


Re: [GENERAL] programatic database dump

2006-03-10 Thread Reid Thompson

Tomi NA wrote:
I'd like to dump a postgresql database from my (java) app and copy the 
dump file to the client machine.
This backup strategy proved invaluable in the past (given enough
room on the hard drives, which I have) and I'd like to implement it now
with postgresql.
Is there something like a system stored procedure that does something
like that I can use? Calling pg_dump seems like a bad hack: I'd like
to keep communication at the java-sql level if possible. I'll
probably bear it, but I'd like to check if I've missed something, first.


TIA,
Tomislav

might help...
test=# \h copy
Command: COPY
Description: copy data between a file and a table
Syntax:
COPY tablename [ ( column [, ...] ) ]
   FROM { 'filename' | STDIN }
   [ [ WITH ]
 [ BINARY ]
 [ OIDS ]
 [ DELIMITER [ AS ] 'delimiter' ]
 [ NULL [ AS ] 'null string' ]
 [ CSV [ QUOTE [ AS ] 'quote' ]
   [ ESCAPE [ AS ] 'escape' ]
   [ FORCE NOT NULL column [, ...] ]

COPY tablename [ ( column [, ...] ) ]
   TO { 'filename' | STDOUT }
   [ [ WITH ]
 [ BINARY ]
 [ OIDS ]
 [ DELIMITER [ AS ] 'delimiter' ]
 [ NULL [ AS ] 'null string' ]
 [ CSV [ QUOTE [ AS ] 'quote' ]
   [ ESCAPE [ AS ] 'escape' ]
   [ FORCE QUOTE column [, ...] ]
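
For example (table and path are placeholders; note that COPY ... TO 'filename'
writes the file on the server and needs superuser rights, so for a true
client-side file you'd use COPY ... TO STDOUT or psql's \copy instead):

-- Placeholder table and path, just to show the shape of the commands.
CREATE TABLE mytable (id int, payload text);

-- Server-side export of one table to a CSV file:
COPY mytable TO '/tmp/mytable.csv' CSV;

-- ...and the matching reload:
COPY mytable FROM '/tmp/mytable.csv' CSV;

-- From psql, a client-side file can be written with:
--   \copy mytable TO 'mytable.csv' CSV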




Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Emi Lu
I got the answer. Although the function compiled, when the counter reaches
5000 the COMMIT command raises a SPI_ERROR_TRANSACTION error.

Thank you all for your hints.



You can't do commits inside of a function.



The example I have is:

CREATE OR REPLACE function test() returns boolean AS $$
DECLARE
... ...
  counterINTEGER := 0;
BEGIN
... ...
  query_value := ' .' ;
  OPEN curs1 FOR EXECUTE query_value;
  LOOP
 FETCH curs1 INTO studid;
 EXIT WHEN NOT FOUND;

 query_value := ' INSERT INTO ... ...';
 EXECUTE query_value  ;

 counter := counter + 1 ;
 IF counter%5000 = 0 THEN
counter := 0;
COMMIT;
 END IF;

  END LOOP;


  CLOSE curs1;  ...
END;

... ...

The above function works ok.
can't do commits inside of a function  , do you mean although the 
function complied ok and run successfully, but it did not really 
commit insertion actions at every 5000 records?


I think you are misremembering advice about not do inserts with a 
transaction per row which will have

a lot of overhead for all of the commits






Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Florian G. Pflug

Emi Lu wrote:

The example I have is:

CREATE OR REPLACE function test() returns boolean AS $$
DECLARE
... ...
  counterINTEGER := 0;
BEGIN
... ...
  query_value := ' .' ;
  OPEN curs1 FOR EXECUTE query_value;
  LOOP
 FETCH curs1 INTO studid;
 EXIT WHEN NOT FOUND;

 query_value := ' INSERT INTO ... ...';
 EXECUTE query_value  ;

 counter := counter + 1 ;
 IF counter%5000 = 0 THEN
counter := 0;
COMMIT;
 END IF;

  END LOOP;


  CLOSE curs1;  ...
END;

Are you aware of the INSERT INTO table (field1, ..., fieldn) SELECT val1, ...,
valn FROM ... command? It'd be much faster to use that if it's possible...
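
A minimal sketch of that pattern (source and target tables are invented):

-- Invented source and target tables for illustration.
CREATE TABLE source_tbl (studid int, name text);
CREATE TABLE target_tbl (studid int, name text);

-- One set-based statement replaces the whole open/fetch/insert loop:
INSERT INTO target_tbl (studid, name)
SELECT studid, name
FROM source_tbl
WHERE studid IS NOT NULL;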

greetings, Florian Pflug




Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Emi Lu

Florian G. Pflug wrote:


Emi Lu wrote:


The example I have is:

CREATE OR REPLACE function test() returns boolean AS $$
DECLARE
... ...
  counterINTEGER := 0;
BEGIN
... ...
  query_value := ' .' ;
  OPEN curs1 FOR EXECUTE query_value;
  LOOP
 FETCH curs1 INTO studid;
 EXIT WHEN NOT FOUND;

 query_value := ' INSERT INTO ... ...';
 EXECUTE query_value  ;

 counter := counter + 1 ;
 IF counter%5000 = 0 THEN
counter := 0;
COMMIT;
 END IF;

  END LOOP;


  CLOSE curs1;  ...
END;


Are you aware of the INSERT INTO table (field1, ..., fieldn)
SELECT val1, ..., valn FROM ...

command? It'd be much faster to use that if it's possible...

greetings, Florian Pflug


It is indeed faster. Thank you, Florian. Could you give me a hint as to why
INSERT INTO ... SELECT is faster than a cursor loop, please?


How about update?

Way 1:
update tableA
set col1 = X.col1, col2 = X.col2, ... coln = X.coln
from (select ... from ... where ...) AS X
where tableA.pk = X.pk;

should be faster than

Way 2:
open cursor:
fetch (select ... from ... where ...) into xCol1, xCol2, ... xColn
   update tableA
   set col1 = xCol1, col2 = xCol2, ..., coln = xColn
   where tableA.pkCols = xPkCols

right?



  





[GENERAL] Schema is Missing

2006-03-10 Thread Ron St-Pierre
We received the following error on our development server this morning 
(postgresql 7.4.1, debian woody):
   org.postgresql.util.PSQLException: ERROR: schema customer does not exist
When I log in to postgres it looks as if the other schemas are okay, but the
customer schema is gone. I have a backup from midnight last night which I can
restore, but I want to find out the cause of the problem first.

.psql_history doesn't display anything useful, just some queries that I ran
today and yesterday. I looked at /var/log/messages and /var/log/syslog, and
there aren't any zipped backups in the directory, which makes me suspicious.
The files' contents are:
[EMAIL PROTECTED]:/var/log# head messages
Feb 16 10:21:43 ** Starting Arno's IPTABLES firewall v1.8.2 **
Feb 16 10:21:45 ** All firewall rules applied **
Feb 17 10:23:20 ** Starting Arno's IPTABLES firewall v1.8.2 **
Feb 17 10:23:21 ** All firewall rules applied **
Feb 19  9:59:15 ** Starting Arno's IPTABLES firewall v1.8.2 **
Feb 19  9:59:17 ** All firewall rules applied **
Feb 22  9:58:10 ** Starting Arno's IPTABLES firewall v1.8.2 **
Feb 22  9:58:13 ** All firewall rules applied **
Mar 10 06:25:52 imperial syslogd 1.4.1#10: restart.
Mar 10 06:30:13 imperial postgres[6330]: [9-1] ERROR:  schema customer does 
not exist
and
[EMAIL PROTECTED]:/var/log# head syslog
Mar 10 06:25:52 imperial syslogd 1.4.1#10: restart.
Mar 10 06:30:13 imperial postgres[6330]: [9-1] ERROR:  schema customer does 
not exist
Mar 10 06:36:03 imperial postgres[9058]: [9-1] ERROR:  schema customer does 
not exist

Where should I look to see if data corruption was the problem? I am holding off 
restoring the customer schema for now.

Thanks
Ron St.Pierre






Re: [GENERAL] Schema is Missing

2006-03-10 Thread Ron St-Pierre

 - Original Message -
 From: Michael Fuhr [EMAIL PROTECTED]
 To: Ron St-Pierre [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Schema is Missing
 Date: Fri, 10 Mar 2006 11:27:54 -0700
 
 
 On Fri, Mar 10, 2006 at 12:57:17PM -0500, Ron St-Pierre wrote:
  We received the following error on our development server this
  morning (postgresql 7.4.1, debian woody):
 
 That's pretty old.  If you must run 7.4 then at least consider
 upgrading to the latest minor release, currently 7.4.12.  Lots of
 bugs have been fixed since 7.4.1.
We will be upgrading our servers soon, most likely within a month or two, and 
are planning on upgrading the database at the same time.

 
 org.postgresql.util.PSQLException: ERROR: schema customer does not 
  exist
 
 When was the last time you know the schema existed?  Have you been
 doing database-wide vacuums?  What's the output of the following
 command?

Vacuums have not been run on this database for a while now :(  I know that the 
customer schema was there yesterday, I added data to a few tables, logins to 
the web pages which it supports worked, etc. It also looks as if the backup 
from midnight (last night) is okay.

 
 SELECT datname, age(datvacuumxid), age(datfrozenxid) FROM pg_database;
imperial=# SELECT datname, age(datvacuumxid), age(datfrozenxid) FROM pg_database;
   datname    |    age    |    age
--------------+-----------+------------
 impimp       | 298777961 | 1372519784
 imp TEST     | 332548272 |  332548272
 testdb       | 332548272 |  332548272
 imp          |   2228730 | 1075970551
 fsynchtest   | 332548272 |  332548272
 template1    | 332548272 |  332548272
 template0    | 332548272 |  332548272
 test1        | 332548272 |  332548272
(8 rows)


  .psql_history doesn't display anything useful, just some queries
  that I ran today and yesterday. I looked at /var/log/messages and
  /var/log/syslog, and there aren't any zipped backups in the directory,
  which makes me suspicious.
 
 When was the last time you saw those zipped files?  Do you know for
 sure that your system does that?
No, not 100% sure. However, the drive was almost full a few weeks ago, and I 
may have dropped them then, now that I think about it.

 
 Who all has access to the server?  Could somebody have dropped the
 schema without your knowing about it?
Possible, but not very likely; I'd have to say extremely unlikely.

 
  The files contents are:
 [...]
  Feb 22  9:58:13 ** All firewall rules applied **
  Mar 10 06:25:52 imp syslogd 1.4.1#10: restart.
  Mar 10 06:30:13 imp postgres[6330]: [9-1] ERROR:  schema 
  customer does not exist
 
 Is the gap between 22 Feb and 10 Mar expected? 
No, I'm 99% certain it's not. 

 What made syslogd
 restart?  Is that an unusual event for that time?  Any hardware
 problems?  Full disk?  Has anything else out of the ordinary happened
 on that system lately?
I don't know why syslogd restarted. The disk has about 13 GB of free space, and
nothing unusual has occurred lately that we've noticed (otherwise). I've been
moving a lot of data into and out of the database over the last two weeks,
probably in excess of 25 GB out and 10 GB in, but in a different schema.

Ron

 
 --
 Michael Fuhr
 








[GENERAL] Creating a function that accepts any data type

2006-03-10 Thread Alejandro Michelin Salomon \( Adinet \)
Hi:

I am working on a migration. I am migrating systems based on MySQL to
PostgreSQL.

I am trying to create a function named IFNULL, so that I don't have to change
every occurrence of this MySQL function in my code.

The IFNULL function is the same as COALESCE in PostgreSQL.

This code does not work.

CREATE OR REPLACE FUNCTION IFNULL( xValor ANY, xPadrao ANY )
RETURNS ANY AS $$

BEGIN
RETURN COALESCE( xValor, xPadrao );
END;
$$ LANGUAGE plpgsql
CALLED ON NULL INPUT
SECURITY INVOKER;

Can you help me create a function that accepts any data type?

Thanks in advance

Alejandro Michelin Salomon
Porto Alegre
Brasil








Re: [GENERAL] Creating a function that accepts any data type

2006-03-10 Thread Michael Fuhr
On Fri, Mar 10, 2006 at 05:12:53PM -0300, Alejandro Michelin Salomon ( Adinet ) 
wrote:
 I am working on a migration. I am migrating systems based on MySQL to
 PostgreSQL.

 I am trying to create a function named IFNULL, so that I don't have to change
 every occurrence of this MySQL function in my code.

 The IFNULL function is the same as COALESCE in PostgreSQL.

Are you aware of the MySQL Compatibility Functions module?  It has
IFNULL.

http://pgfoundry.org/projects/mysqlcompat/
http://software.newsforge.com/article.pl?sid=05/12/15/1611251from=rss

 This code does not work.
 
 CREATE OR REPLACE FUNCTION IFNULL( xValor ANY, xPadrao ANY )
 RETURNS ANY AS $$

Change ANY to ANYELEMENT and the code should work.  And for something
this simple you could use an SQL function:

CREATE OR REPLACE FUNCTION ifnull(anyelement, anyelement)
RETURNS anyelement AS $$
  SELECT COALESCE($1, $2);
$$ LANGUAGE sql IMMUTABLE;

You'll have to cast one of the arguments if their types can't be
determined.

test=> SELECT ifnull('abc', 'xyz');
ERROR:  could not determine anyarray/anyelement type because input has type
unknown

test=> SELECT ifnull('abc', 'xyz'::text);
 ifnull 
--------
 abc
(1 row)

test=> SELECT ifnull(NULL, 'xyz'::text);
 ifnull 
--------
 xyz
(1 row)

-- 
Michael Fuhr



Re: [GENERAL] Creating a function that accepts any data type

2006-03-10 Thread Tom Lane
Alejandro Michelin Salomon \( Adinet \) [EMAIL PROTECTED] writes:
 Can you help-me to create a function that acept any data type ?

Use ANYELEMENT, not ANY.  Also I'd suggest making it a SQL function not
a plpgsql function, so that it can be inlined.

regards, tom lane



Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Emi Lu




Florian G. Pflug wrote:
 snipped code of stored procedure 



Are you aware of the insert into table (field1, ..., fieldn) 
select val1, .., valn from 

command? It'd be much faster to use that it it's possible...

greetings, Florian Pflug



It did faster. Thank you Florian. Could you hint me why insert into 
.. select  is faster than a cursor transaction please?


Well, you're avoiding a lot of overhead. insert into ... select from ..
is just one sql-statement. Of course, postgres internally does
something similar to your stored procedure, but it's all compiled
C code now (instead of interpreted plpgsql). Additionally, postgres
might be able to optimize this more than you could from plpgsql, because
you're restricted to the api that is exposed to plpgsql, while the 
backend-code

might be able to pull a few more tricks.

In general, if you have the choice between looping over a large result
in a stored procedure (or, even worse, in a client app) and letting the
backend do the looping, then letting the backend handle it is nearly 
always

faster.



The information is very helpful! Thank you again, Florian.

Now, if I have a series of queries to run:

1. insert into t1... (select .. from ...left join ... .. where ) 
2. insert into t2 ... the same sub-query as in 1 
3. update t3 set ... from ( the same sub-query as in 1) AS X where 
t3.pk = X.pk  
4. update t4 set ... from ( the same sub-query as in 1) AS X where 
t4.pk = X.pk


The subquery (select .. from ... left join ... .. where ) is a left join
between two big tables.


Which would be the better way:

a. put all these 4 queries into one function,
   and in Perl or Java just call this function

b. in Perl / Java, write and run the 4 queries independently

A PL/pgSQL function does not allow COMMIT, so in the function, if any step
goes wrong, all 4 steps roll back. In Java, after every query, I can commit.
Would Java speed up all four updates?



- Ying













Re: [GENERAL] Schema is Missing

2006-03-10 Thread Michael Fuhr
On Fri, Mar 10, 2006 at 12:57:17PM -0500, Ron St-Pierre wrote:
 We received the following error on our development server this
 morning (postgresql 7.4.1, debian woody):

That's pretty old.  If you must run 7.4 then at least consider
upgrading to the latest minor release, currently 7.4.12.  Lots of
bugs have been fixed since 7.4.1.

org.postgresql.util.PSQLException: ERROR: schema customer does not exist

When was the last time you know the schema existed?  Have you been
doing database-wide vacuums?  What's the output of the following
command?

SELECT datname, age(datvacuumxid), age(datfrozenxid) FROM pg_database;

 .psql_history doesn't display anything useful, just some queries
 that I ran today and yesterday. I looked at /var/log/messages and
 /var/log/syslog, and there aren't any zipped backups in the directory,
 which makes me suspicious.

When was the last time you saw those zipped files?  Do you know for
sure that your system does that?

Who all has access to the server?  Could somebody have dropped the
schema without your knowing about it?

 The files contents are:
[...]
 Feb 22  9:58:13 ** All firewall rules applied **
 Mar 10 06:25:52 imperial syslogd 1.4.1#10: restart.
 Mar 10 06:30:13 imperial postgres[6330]: [9-1] ERROR:  schema customer does 
 not exist

Is the gap between 22 Feb and 10 Mar expected?  What made syslogd
restart?  Is that an unusual event for that time?  Any hardware
problems?  Full disk?  Has anything else out of the ordinary happened
on that system lately?

-- 
Michael Fuhr



Re: [GENERAL] in Pl/PgSQL, do commit every 5000 records

2006-03-10 Thread Florian G. Pflug

Emi Lu wrote:

Florian G. Pflug wrote:

 snipped code of stored procedure 


Are you aware of the INSERT INTO table (field1, ..., fieldn)
SELECT val1, ..., valn FROM ...

command? It'd be much faster to use that if it's possible...

greetings, Florian Pflug


It did faster. Thank you Florian. Could you hint me why insert into .. 
select  is faster than a cursor transaction please?

Well, you're avoiding a lot of overhead. insert into ... select from ..
is just one sql-statement. Of course, postgres internally does
something similar to your stored procedure, but it's all compiled
C code now (instead of interpreted plpgsql). Additionally, postgres
might be able to optimize this more than you could from plpgsql, because
you're restricted to the api that is exposed to plpgsql, while the backend-code
might be able to pull a few more tricks.

In general, if you have the choice between looping over a large result
in a stored procedure (or, even worse, in a client app) and letting the
backend do the looping, then letting the backend handle it is nearly always
faster.


How about update?

Way 1:
update tableA
set col1 = X.col1, col2 = X.col2, ... coln = X.coln
from (select ... from ... where ...) AS X
where tableA.pk = X.pk;

should be faster than

Way 2:
open cursor:
fetch (select ... from ... where ...) into xCol1, xCol2, ... xColn
   update tableA
   set col1 = xCol1, col2 = xCol2, ..., coln = xColn
   where tableA.pkCols = xPkCols

right?

I'd say so, yes.
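
For instance, a rough sketch of Way 1 (all table and column names invented):

-- Invented tables for illustration.
CREATE TABLE tbl_a (pk int PRIMARY KEY, col1 text, col2 int);
CREATE TABLE tbl_b (pk int PRIMARY KEY, col1 text, col2 int);

-- One set-based UPDATE ... FROM a subquery, instead of a cursor loop:
UPDATE tbl_a
SET    col1 = X.col1,
       col2 = X.col2
FROM   (SELECT pk, col1, col2 FROM tbl_b WHERE col2 > 0) AS X
WHERE  tbl_a.pk = X.pk;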

greetings, Florian Pflug



Re: [GENERAL] NULL TIMESTAM problem

2006-03-10 Thread Guy Rouillier
Enrique Sánchez wrote:
 Hi! I'm new to Postgres.

 I need to fill a database table x from a file with the COPY command
 and the delimiter '*'.
 This table has a nullable timestamp column (I declared it like: 'birthday
 timestamp NULL').

 But when I try to insert NULL values (specified in the file), postgres
 throws an error.


 I don't know how I can specify this NULL value without an '\N'
 character.

I created a table t1 with 3 columns, all nullable:

f1 int
f2 timestamp
f3 int

Using the following input file t1.csv:

5,NULL,7
8,NULL,10

The following COPY command successfully put those rows in the table, with f2 
null:

copy t1 (f1, f2, f3)
from 't1.csv'
null as 'NULL'
csv;

-- 
Guy Rouillier




Re: [GENERAL] Baffled by failure to use index when WHERE uses a function

2006-03-10 Thread Hongxi.Ma
If your function 'healpix' is marked VOLATILE, it means 'the same
parameters may produce a different result each time', so the database has to
evaluate it row by row and cannot use the index (the planner does not know
what you actually mean).
- Original Message - 
From: Clive Page [EMAIL PROTECTED]
To: pgsql-general@postgresql.org
Sent: Friday, March 10, 2006 5:14 PM
Subject: [GENERAL] Baffled by failure to use index when WHERE uses a
function


 I have a table cov3 of about 3 million rows, with a B-tree index on an
 integer column called hpix.  If I do a simple select on this column it
 works in milliseconds, using the index naturally:

 select * from cov3 where hpixint = 482787587;
     hpix    |  expos  |  hpixint
 ------------+---------+-----------
   482787587 | 30529.6 | 482787587
 (1 row)

 The problem is that I want to use a user-defined function called healpix
 which returns a single integer value in my queries; the function details
 are unlikely to be relevant (it selects a pixel from a celestial
 position), but its definition is:

   \df healpix
                                List of functions
  Schema |  Name   | Result data type |        Argument data types
 --------+---------+------------------+-------------------------------------
  public | healpix | integer          | double precision, double precision

 So I would like to use this function to find rows, and I try for example:

 select * from cov3 where hpix = healpix(2.85,-11.48);

 but it takes ages.  An EXPLAIN shows why, it insists upon a sequential
 scan:

 explain select * from cov3 where hpix = healpix(2.85,-11.48);
                                       QUERY PLAN
 ------------------------------------------------------------------------------------------
   Seq Scan on cov3  (cost=0.00..93046.81 rows=1 width=20)
     Filter: (hpix = (healpix(2.85::double precision, -11.48::double precision))::text)

 Does anyone have any idea why, or know how I can restore adequate
 performance?

 I am using Postgres 8.1.0 on Linux.

 -- 
 Clive Page
 Dept of Physics  Astronomy,
 University of Leicester,
 Leicester, LE1 7RH,  U.K.



