Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-13 Thread Sorin N. Ciolofan

 I will simplify things in order to describe when the error occurred:
The input of the application is some data which is read from files on disk,
processed, and then inserted into the database in one transaction. The total
quantity of data represents an integer number of data files, n*q, where q is
a file with a fixed size of 60 KB and n is a positive integer.
For n=23 and shared_buffers=1000 and max_locks_per_transaction=64 the
Postgres throws the following exception:

org.postgresql.util.PSQLException: ERROR: out of shared memory
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1525)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1309)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:188)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:452)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:340)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:286)
    at gr.forth.ics.rdfsuite.rssdb.repr.SSRepresentation.createClassTable(SSRepresentation.java:1936)
    at gr.forth.ics.rdfsuite.rssdb.repr.SSRepresentation.store(SSRepresentation.java:1783)
    at gr.forth.ics.rdfsuite.swkm.model.db.impl.RDFDB_Model.storeSchema(RDFDB_Model.java:814)
    at gr.forth.ics.rdfsuite.swkm.model.db.impl.RDFDB_Model.store(RDFDB_Model.java:525)
    at gr.forth.ics.rdfsuite.services.impl.ImporterImpl.storeImpl(ImporterImpl.java:79)
    ... 50 more

For n=23 I estimated that we create and manipulate about 8,000 tables.
One of the suggestions received here was that maybe there are not sufficient
lock slots per transaction; that is why I increased max_locks_per_transaction
(to 128), in order to be able to manipulate about 12,800 tables.

So, I doubled both shared_buffers and max_locks_per_transaction, and for
n=23 I received the same error. I would expect to see a difference, even a
small one (for example, from n=23 to n=24), but the maximum quantity of data
accepted was the same.

Thank you very much,
With best regards
Sorin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 12, 2007 5:01 PM
To: Sorin N. Ciolofan
Cc: 'Shoaib Mir'; [EMAIL PROTECTED]; [EMAIL PROTECTED];
'Dimitris Kotzinos'
Subject: Re: [ADMIN] Increasing the shared memory 

Sorin N. Ciolofan [EMAIL PROTECTED] writes:
 This also had no effect, because I can't see any difference
 between the maximum input accepted for our application with the old
 configuration and the maximum input accepted now, with the new
 configuration. It looks like nothing happened.

This is the first you've mentioned about *why* you wanted to increase the
settings, and what it sounds like to me is that you are increasing the
wrong thing.  What's the actual problem?

regards, tom lane



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Arrays with Rails?

2007-04-13 Thread Tino Wildenhain

Joshua D. Drake wrote:

Rick Schumeyer wrote:

Has anyone here used a postgres array with Rails?  If so, how?



split()?


Err... there is no type mapping?

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Arrays with Rails?

2007-04-13 Thread Listmail
On Fri, 13 Apr 2007 10:30:29 +0200, Tino Wildenhain [EMAIL PROTECTED]  
wrote:



Joshua D. Drake wrote:

Rick Schumeyer wrote:

Has anyone here used a postgres array with Rails?  If so, how?


 split()?


Err... there is no type mapping?



	You know, some languages spoil us developers, so that we take for granted  
that Doing It The Right Way is indeed the way it is, and then we feel the  
burning pain of having to deal with reality...

For instance, python and psycopg massively spoil you :

cursor.execute( "select '{1,2}'::INTEGER[] AS myarray, 1 AS myint, 'abc' AS mytext, %s as myarg1, %s as myarg2, %s as myarg3", (
    u"this is an unicode string €éàù",
    [1, 2, 3],
    ["this", "is", "an", "array", "of", "text"]
))

for row in cursor:
    for name, item in row.items():
        print name, type( item ), item

Results:

mytext <type 'str'> abc
myarray <type 'list'> [1, 2]
myint <type 'int'> 1
myarg1 <type 'str'> this is an unicode string €éàù
myarg2 <type 'list'> [1, 2, 3]
myarg3 <type 'list'> ['this', 'is', 'an', 'array', 'of', 'text']

	As you can see, arguments and results are auto-converted between Python  
types and Postgres types (you can write converters for your own types).  
This includes unicode, arrays, etc.
	The interface is clean and elegant: you provide SQL with %s in it and a  
list of arguments. You can use named or positional arguments too.

	Some (like me) would argue that this is NORMAL and that anything inferior  
to this is murder. I believe the job of libraries and languages is to help  
the programmer, not torture him.


	Then, other languages will make you feel the pain of having to quote all  
your arguments YOURSELF, and provide all results as strings.

The most famous offender is PHP (this causes countless security holes).
Ruby is no better:

require "postgres"
conn = PGconn.connect("localhost", 5432, "", "", "test")
res = conn.exec("select '{1,2}'::INTEGER[] AS myarray, 1 AS myint, 'abc' AS mytext;")

res.result.each {|row|
    row.each { |item|
        puts item
        puts item.class
        puts "\n"
    }
}

{1,2}
String

1
String

abc
String

So the answer to your question is :

- either suffer
	- or code an additional layer above the ruby postgres lib which does the  
same as the python lib above.
	I partially did this for PHP. It's a lifesaver. No more addslashes() !  
Yay !
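	The "additional layer" can start very small. Here is a sketch in Python (the same idea ports directly to Ruby) of the receiving half: parsing a one-dimensional Postgres array literal such as '{1,2}' into a list. Nested arrays, NULLs, and backslash escapes are deliberately not handled.

```python
def parse_pg_array(literal):
    # Parse a one-dimensional Postgres array literal like '{1,2,3}' or
    # '{a,"b c",d}' into a Python list of strings.
    assert literal.startswith("{") and literal.endswith("}")
    body = literal[1:-1]
    if not body:
        return []
    items, current, in_quotes = [], "", False
    for ch in body:
        if ch == '"':
            in_quotes = not in_quotes  # bare toggle; no escape handling
        elif ch == "," and not in_quotes:
            items.append(current)
            current = ""
        else:
            current += ch
    items.append(current)
    return items

print(parse_pg_array('{1,2}'))  # ['1', '2']
```

Casting the elements to int, date, etc. is then a per-column decision, which is exactly the type mapping the driver fails to do for you.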



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Nitin Verma
Hi All,

xlog.c code from version we use (7.3.2)

/*
 * If we still haven't flushed to the request point then we have a
 * problem; most likely, the requested flush point is past end of
 * XLOG. This has been seen to occur when a disk page has a corrupted
 * LSN.
 *
 * Formerly we treated this as a PANIC condition, but that hurts the
 * system's robustness rather than helping it: we do not want to take
 * down the whole system due to corruption on one data page.  In
 * particular, if the bad page is encountered again during recovery
 * then we would be unable to restart the database at all!  (This
 * scenario has actually happened in the field several times with 7.1
 * releases. Note that we cannot get here while InRedo is true, but if
 * the bad page is brought in and marked dirty during recovery then
 * CreateCheckpoint will try to flush it at the end of recovery.)
 *
 * The current approach is to ERROR under normal conditions, but only
 * WARNING during recovery, so that the system can be brought up even
 * if there's a corrupt LSN.  Note that for calls from xact.c, the
 * ERROR will be promoted to PANIC since xact.c calls this routine
 * inside a critical section.  However, calls from bufmgr.c are not
 * within critical sections and so we will not force a restart for a
 * bad LSN on a data page.
 */
if (XLByteLT(LogwrtResult.Flush, record))
    elog(InRecovery ? WARNING : ERROR,
         "XLogFlush: request %X/%X is not satisfied --- flushed only to %X/%X",
         record.xlogid, record.xrecoff,
         LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);


A java process using postgres 7.3.2, got these errors 

java.sql.SQLException: ERROR:  XLogFlush: request
0/240169BC is not satisfied --- flushed only to 0/23FFC01C

While these errors were filling the logs, we were able to connect via psql
and see all the data.

> This has been seen to occur when a disk page has a corrupted LSN

I suppose LSN refers to the logical sector number of a WAL segment. If that
was corrupted, how come we were able to access it via psql? Is it just an
isolated phenomenon? Does Postgres have an auto-recovery for this? If yes,
did old connections have stale values of the LSN?

Coming to safeguard:

1. Is there any use in restarting the java process when this happens?
2. Is there any use in, or is it safe to, restart the postmaster at this time?

What should be done when this happens? Any suggestions?

-- Nitin


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Arrays with Rails?

2007-04-13 Thread Alexander Presber

Listmail wrote:
Then, other languages will make you feel the pain of having to 
quote all your arguments YOURSELF and provide all results as string.
The most famous offender is PHP (this causes countless security 
holes).
I partially did this for PHP. It's a lifesaver. No more 
addslashes() ! Yay !


What about PEAR MDB2?
http://pear.php.net/manual/en/package.database.mdb2.php

Is it any good?

Cheers, Alex


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Arrays with Rails?

2007-04-13 Thread Listmail
On Fri, 13 Apr 2007 12:15:30 +0200, Alexander Presber  
[EMAIL PROTECTED] wrote:



Listmail wrote:

Then, other languages will make you feel the pain of having to
quote all your arguments YOURSELF and provide all results as string.
The most famous offender is PHP (this causes countless security
holes).
I partially did this for PHP. It's a lifesaver. No more
addslashes() ! Yay !


What about PEAR MDB2?
http://pear.php.net/manual/en/package.database.mdb2.php

Is it any good?

Cheers, Alex


	Well, the problem with a lot of PHP libraries is that they are written by  
people who don't think.


	Python's interface for doing a query in your code is close to the ideal,  
which should be something like this:

query( "SELECT * FROM duhhh WHERE id=%s AND date > %s", id, date )
(Python's API has an extra set of () and it also takes named parameters.)

	If id is a Python integer and date a Python datetime object, format  
conversion is automatic.
	If they are not, first they should be, but whatever error the programmer  
makes DOES NOT make a SQL injection. At most Postgres will complain that you  
try to compare a date with something that is not a date, but you don't get  
hacked, since in order to put an un-quoted argument into the SQL you have  
to do it really on purpose.


	Now, I use many database queries in my web applications (so does  
everyone), therefore I consider a system that needs me to type a lot of  
crap in order to work to be DUMB.


PEAR::MDB2 says:

$query = 'INSERT INTO tablename (id, itemname, saved_time) VALUES ('
       . $mdb2->quote($id,   'integer')   . ', '
       . $mdb2->quote($name, 'text')      . ', '
       . $mdb2->quote($time, 'timestamp') . ')';
$res = $mdb2->query($query);

As you see,
- it's a lot less compact and readable
	- it's a pain to use, so I will copy-paste code, which is the GREAT EVIL  
and a good source of bugs
- you have to repeat the variable types (who cares?)
	- if I am not fully awake I may forget to type that crap because I'm sure  
the variable is an integer, why bother (but am I really sure? => get  
hacked)


Since PHP has no type for dates, a much better way of doing this would 
be:

	query( "INSERT INTO ... VALUES %s,%s,%s", $id, $name,  
DB::datetime( $timestamp ) )
	with all the basic types being quoted as they come (i.e. like a string,  
since Postgres doesn't care between 1 and '1'), and a few adapters for  
other types (like dates).


	Also, the ORM part of PEAR::MDB2 is braindead, since the field specifiers  
are not objects that you can customize and derive from...



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Import data from 8.2.3 into 8.0.8

2007-04-13 Thread Jiří Němec
Hello,

I need to import a PostgreSQL DB from an 8.2.3 server into an 8.0.8
server. Is there some compatibility mode? When I try to import the dump,
PostgreSQL 8.0.8 reports errors - probably due to version
incompatibility.

Thank you for your advice.

-- 
Jiri Nemec 
http://www.menea.cz/


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Import data from 8.2.3 into 8.0.8

2007-04-13 Thread Csaba Nagy
Jiří,

I guess you should be able to do it via slony. Usually one wants to
upgrade using slony, but it should work the other way around too :-)

Cheers,
Csaba.

On Fri, 2007-04-13 at 15:09, Jiří Němec wrote:
 Hello,
 
 I need to import a PostgreSQL DB from an 8.2.3 server into an 8.0.8
 server. Is there some compatibility mode? When I try to import the dump,
 PostgreSQL 8.0.8 reports errors - probably due to version
 incompatibility.
 
 Thank you for your advice.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] deadlock

2007-04-13 Thread Tom Lane
Alexey Nalbat [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Your example doesn't deadlock for me ...

 With default value deadlock_timeout=1000 error raises in first transaction:

Then there's a typo in your example --- please recheck it.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Import data from 8.2.3 into 8.0.8

2007-04-13 Thread Csaba Nagy
Sorry Jiří, that was a hasty answer from my part...

 I haven't used Slony by now. Do you have some advices or articles they
 may help? I have no ssh access at FreeBSD server box when PostgreSQL
 8.0 server runs - so I hope this is no problem...

Slony is quite a heavy beast to install, I'm pretty sure now that I
think about it that you don't want to go that way.

What you probably want is to:
* make a separate schema dump + data dump;
* apply the schema dump to the old box and manually fix errors;
* possibly drop indexes/constraints so the data loading goes faster;
* load the data to the 8.0 box - theoretically here you shouldn't have big 
problems, but I'm not sure about that;
* recreate indexes/constraints if you dropped them;

The index/constraint thing is a concern if you have a large amount of data; 
otherwise I wouldn't touch it.

HTH,
Csaba.
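Those steps sketched as pg_dump/psql invocations (the database and file names here are placeholders, not from the thread; --inserts makes the data dump slower but more portable across versions):

```python
# Hedged sketch of the dump/restore steps above as command lines.
# "mydb", "olddb", and the file names are assumptions.
schema_dump = ["pg_dump", "--schema-only", "-f", "schema.sql", "mydb"]
data_dump   = ["pg_dump", "--data-only", "--inserts", "-f", "data.sql", "mydb"]
load_schema = ["psql", "-d", "olddb", "-f", "schema.sql"]  # fix errors by hand
load_data   = ["psql", "-d", "olddb", "-f", "data.sql"]

for cmd in (schema_dump, data_dump, load_schema, load_data):
    print(" ".join(cmd))
```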



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Regard to PANIC: unexpected hash relation size

2007-04-13 Thread Vlastimil Krejcir

  Hi,

  I have the same error as described in the post I have found in the 
pgsql-general archive, see:


  http://archives.postgresql.org/pgsql-general/2007-03/msg01664.php

  I have had PostgreSQL running for one and a half years without any issues, 
until yesterday. Now I get this:


  PANIC:  unexpected hash relation size: 122, should be 4
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.


  I have PostgreSQL 8.1.5 on SunOS (5.10 Generic_118833-36 sun4u sparc 
SUNW,Sun-Fire-V445). So it is not only a Windows matter.


  To the Postgres developers: what kind of information do you need from me to 
help you correct this bug? Let me know (directly to my email) and 
I'll try to do my best to help you.


  have a nice day

  Vlastik


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] corr() in 8.1

2007-04-13 Thread SunWuKung
Is there an easy way to have the corr() statistical function in 8.1?
(I know very little about compiling things.)
Or does somebody have a custom aggregate function for that?

Thanks for the help.
SWK
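For context, corr(Y, X) arrived as a built-in aggregate in 8.2, and what it computes is the sample Pearson correlation coefficient, so a custom aggregate on 8.1 only has to accumulate the usual five sums. A plain-Python sketch of the math (an illustration, not Postgres code):

```python
import math

def corr(pairs):
    # Pearson correlation of (y, x) pairs, i.e. what corr(Y, X) returns.
    n = len(pairs)
    sx  = sum(x for y, x in pairs)
    sy  = sum(y for y, x in pairs)
    sxx = sum(x * x for y, x in pairs)
    syy = sum(y * y for y, x in pairs)
    sxy = sum(x * y for y, x in pairs)
    num = n * sxy - sx * sy
    den = math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    return num / den

print(corr([(2, 1), (4, 2), (6, 3)]))  # perfectly linear data -> 1.0
```

A custom 8.1 aggregate would carry (n, sx, sy, sxx, syy, sxy) as its state and apply the final formula in its FINALFUNC.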


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] role passwords and md5()

2007-04-13 Thread Ben Trewern
Looks like the password gets cleared when you rename a role.
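That is consistent with Andrew Kroeger's description quoted below: the role name is the salt, so a stored hash cannot stay valid across a rename. A quick Python sketch of the stored format (assuming the md5(password || rolename) scheme he describes):

```python
import hashlib

def pg_role_password_hash(password, rolename):
    # pg_authid.rolpassword format: "md5" + md5(password || rolename)
    return "md5" + hashlib.md5((password + rolename).encode()).hexdigest()

print(pg_role_password_hash("my_password", "my_user"))
```

Renaming my_user changes the salt, so the value stored for the old name can no longer be verified against the new one.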

Regards,

Ben
Ben Trewern [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
I thought I read this before I sent it. :-(

 What I meant to say was:
 Does the password hash change (and how?) Or is the original username kept 
 somewhere in the system tables?

 Regards,

 Ben

 Ben Trewern [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 How does this work when you rename a role?  Is the password hash 
 changed (and how?), or is the original username kept somewhere in the 
 system tables?

 Regards,

 Ben

 Andrew Kroeger [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 Lutz Broedel wrote:
 Dear list,

 I am trying to verify the password given by a user against the system
 catalog. Since I need the password hash later on, I can not just use 
 the
 authentication mechanism for verification, but need to do this in SQL
 statements.
 Unfortunately, even if I set passwords to use MD5 encryption in
 pg_hba.conf, the SQL function MD5() returns a different hash.

 A (shortened) example:
 CREATE ROLE my_user WITH ENCRYPTED PASSWORD 'my_password';

 SELECT * FROM pg_authid
 WHERE rolname='my_user' AND rolpassword=MD5('my_password');

 Any ideas, what to do to make this work?
 Best regards,
 Lutz Broedel

 A quick look at the source shows that the hashed value stored in
 pg_authid uses the role name as a salt for the hashing of the password.
 Moreover, the value in pg_authid has the string md5 prepended to the
 hash value (I imagine to allow different hash algorithms to be used, but
 I haven't personally seen anything but md5).

 Given your example above, the following statement should do what you are
 looking for:

 SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword = 'md5'
 || md5('my_password' || 'my_user');

 Hope this helps.

 Andrew

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings




 



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] COPY FROM file with zero-delimited fields

2007-04-13 Thread eugene . mindrov
On Apr 11, 7:42 pm, [EMAIL PROTECTED] (Merlin Moncure) wrote:
 On 11 Apr 2007 00:25:50 -0700, [EMAIL PROTECTED]

 [EMAIL PROTECTED] wrote:
  Hi all,
  I wonder if there's a way to use COPY FROM command when dealing with a
  data file containing records whose fields are delimited with zero-
  byte? Or I'd better change the delimiter? :)

 you can always run it through tr or similar tools first.

 merlin

 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

http://archives.postgresql.org/

Yes, sure, I know that, but the matter is that the fields are mostly
textual and can hold just about any text, so it's hard to choose
the right delimiter, you know ;) I thought I'd eliminate the
problem by using a zero byte for the field delimiter, but apparently COPY
FROM accepts only printable symbols as delimiters...
Another solution would be to escape the potential delimiter everywhere in
the text fields, but that also requires using external tools... I had
hoped that COPY FROM could be persuaded to use some non-printable
character in its WITH DELIMITER clause, but apparently this is not the
case...
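Merlin's "run it through tr" suggestion in Python form, for reference. It assumes (as the tab choice requires) that no text field contains a tab or a newline:

```python
# Rewrite NUL field delimiters to tabs so COPY FROM can use a
# printable delimiter.  Assumes fields contain no tabs or newlines.
def nul_to_tab(data):
    return data.replace(b"\x00", b"\t")

# e.g. open("out.dat", "wb").write(nul_to_tab(open("in.dat", "rb").read()))
print(nul_to_tab(b"field1\x00field two\x00field3\n"))  # b'field1\tfield two\tfield3\n'
```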


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] DTrace and PostgreSQL

2007-04-13 Thread [EMAIL PROTECTED]
On Apr 12, 8:13 pm, Karen Hill [EMAIL PROTECTED] wrote:
 I was wondering if DTrace could tell me how many inserts are being
 done in a pl/pgsql function while in a loop for example.  As you know
 a pl/pgsql function executes in a single transaction so the DTrace
 probe transaction__commit(int) I believe is not helpful here.  Could
 DTrace measure how many inserts are being done in a transaction that
 has not yet been commited, especially if that transaction block is in
 a pl/pgsql function?  This would be extremely useful as when one has a
 bunch of inserts one could be able to see how far along the pl/pgsql
 function was.

Karen, having fun with communicating with your V125??

FYI:
http://pgfoundry.org/docman/view.php/1000163/230/PostgreSQL-DTrace-Users-Guide.pdf

Rayson




 regards,
 karen



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Providing user based previleges to Postgres DB

2007-04-13 Thread Ben Trewern
See: http://www.postgresql.org/docs/8.2/interactive/user-manag.html

Regards,

Ben
  [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]
  Hi All,

  Currently in one of our projects we want to restrict unauthorized users 
of the Postgres DB. Here we are using Postgres version 8.2.0.

  Can anybody tell me how I can provide user-based privileges to the 
Postgres DB, so that we can restrict unauthorized users as well as provide 
access control to users based on the privileges set by the 
administrator?

  Thanks and Regards,
  Ramac 




[GENERAL] question

2007-04-13 Thread Terry Martin
  

I would like to know if there is a utility that can take a UDP packet which
has specific information in the payload, extract that information from
the packet, and place it in a Postgres database?

 

Terry Martin

Timedata Corporation

VP of Network Operations

Work: (212) 644-1600 X3

Cell: (503) 3188909
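[Editorial sketch] I'm not aware of an off-the-shelf utility, but the glue is small to write. A hypothetical Python sketch follows; the payload layout, the port, and the insert step are all assumptions, and the insert callback would wrap an INSERT through a real driver such as psycopg2:

```python
import socket

def parse_payload(payload):
    # Assumed payload layout: 'key=value' pairs separated by ';'.
    return dict(item.split("=", 1) for item in payload.decode().split(";"))

def listen_and_store(insert, port=9999, max_packets=None):
    # insert() would issue an INSERT via a driver such as psycopg2.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    seen = 0
    while max_packets is None or seen < max_packets:
        data, _addr = sock.recvfrom(65535)
        insert(parse_payload(data))
        seen += 1

print(parse_payload(b"sensor=42;reading=17.5"))  # {'sensor': '42', 'reading': '17.5'}
```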

 



Re: [GENERAL] deadlock

2007-04-13 Thread Alexey Nalbat
Tom Lane wrote:
 
 Your example doesn't deadlock for me ...

Scott Marlowe wrote:
 
 That's not a deadlock, transaction 3 is simply waiting for transaction 1
 to commit or rollback.
 
 If you run a commit or rollback on transaction 1 then transaction 3 will
 then be ready to commit or rollback as needed.

With default value deadlock_timeout=1000 error raises in first transaction:

ERROR:  deadlock detected
DETAIL:  Process 31712 waits for ShareLock on tuple (0,10) of relation 451542 of database 391598; blocked by process 31786.
Process 31786 waits for ShareLock on transaction 918858; blocked by process 31712.
CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."t1" x WHERE "id1" = $1 FOR SHARE OF x"

After setting deadlock_timeout=360 we can see deadlock in pg_locks:

 datid  | datname | procpid | usesysid | usename |              current_query               |          query_start          |         backend_start         | client_addr | client_port
--------+---------+---------+----------+---------+------------------------------------------+-------------------------------+-------------------------------+-------------+-------------
 391598 | nalbat  |   32025 |    16384 | nalbat  | /*1*/ update t2 set val3=3 where id2=50; | 2007-04-13 11:14:06.966372+04 | 2007-04-13 11:13:11.018896+04 |             |          -1
 391598 | nalbat  |   32029 |    16384 | nalbat  | /*3*/ update t1 set val1=1 where id1=10; | 2007-04-13 11:13:58.607838+04 | 2007-04-13 11:13:17.212922+04 |             |          -1

   locktype    | database | relation | page | tuple | transactionid | classid | objid | objsubid | transaction |  pid  |       mode       | granted
---------------+----------+----------+------+-------+---------------+---------+-------+----------+-------------+-------+------------------+---------
 tuple         |   391598 |   451542 |    0 |    10 |               |         |       |          |      918867 | 32025 | ShareLock        | f
 tuple         |   391598 |   451542 |    0 |    10 |               |         |       |          |      918869 | 32029 | ExclusiveLock    | t
 relation      |   391598 |   451542 |      |       |               |         |       |          |      918867 | 32025 | AccessShareLock  | t
 relation      |   391598 |   451542 |      |       |               |         |       |          |      918867 | 32025 | RowShareLock     | t
 relation      |   391598 |   451542 |      |       |               |         |       |          |      918869 | 32029 | AccessShareLock  | t
 relation      |   391598 |   451542 |      |       |               |         |       |          |      918869 | 32029 | RowExclusiveLock | t
 relation      |   391598 |   451544 |      |       |               |         |       |          |      918867 | 32025 | AccessShareLock  | t
 relation      |   391598 |   451544 |      |       |               |         |       |          |      918869 | 32029 | AccessShareLock  | t
 relation      |   391598 |   451544 |      |       |               |         |       |          |      918869 | 32029 | RowExclusiveLock | t
 relation      |   391598 |   451546 |      |       |               |         |       |          |      918867 | 32025 | AccessShareLock  | t
 relation      |   391598 |   451546 |      |       |               |         |       |          |      918867 | 32025 | RowExclusiveLock | t
 relation      |   391598 |   451548 |      |       |               |         |       |          |      918867 | 32025 | AccessShareLock  | t
 relation      |   391598 |   451548 |      |       |               |         |       |          |      918867 | 32025 | RowExclusiveLock | t
 transactionid |          |          |      |       |        918867 |         |       |          |      918867 | 32025 | ExclusiveLock    | t
 transactionid |          |          |      |       |        918869 |         |       |          |      918869 | 32029 | ExclusiveLock    | t
 transactionid |          |          |      |       |        918867 |         |       |          |      918869 | 32029 | ShareLock        | f

 relname        | t1
 relnamespace   | 2200
 reltype        | 451543
 relowner       | 16384
 relam          | 0
 relfilenode    | 451542
 reltablespace  | 0
 relpages       | 0
 reltuples      | 0
 reltoastrelid  | 0
 reltoastidxid  | 0
 relhasindex    | t
 relisshared    | f
 relkind        | r
 relnatts       | 2
 relchecks      | 0
 reltriggers    | 2
 relukeys       | 0
 relfkeys       | 0
 relrefs        | 0
 relhasoids     | f
 (remaining columns not shown)

Re: [GENERAL] Evaluate only one CASE WHEN in a select

2007-04-13 Thread dcrespo
On Apr 12, 4:30 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
 dcrespo [EMAIL PROTECTED] writes:
  They are exactly the same, that's why I want to evaluate it only once
  and, depending on it, put the corresponding value into two different
  fields that must be returned, instead of evaluating once for each
  field. Any insight?

 There's no solution that wouldn't cost you more than double evaluation,
 for such a simple expression.

 The general solution is to use two levels of SELECT:

 select ..., x, x, ...
   from (select ..., big-expr as x, ... from ... offset 0) ss;

 You need the offset 0 (which is otherwise a no-op) to prevent the
 planner from folding the two selects into a single level and ending up
 with two copies of big-expr anyway.  The runtime overhead associated
 with the extra plan level is probably going to eat up whatever you might
 save in this example, though with a seriously expensive expression
 (for instance, a function that does some fairly expensive SELECT itself)
 you might find it worth doing.

 regards, tom lane

 ---(end of broadcast)---
 TIP 4: Have you searched our list archives?

http://archives.postgresql.org/
Thank you, Tom.

In your example, x,x seems to be the same value. I don't want that
exactly. Here is another example:

SELECT
    CASE WHEN is_odd(t.number) THEN 'Update' ELSE 'Insert' END AS action,
    CASE WHEN is_odd(t.number) THEN t.number ELSE NULL END AS number
FROM ... ;

As you can see, is_odd function (only a function example) is being run
twice with exactly the same parameters to put a value into action
field and into number field. Since it's the same function call, I want
it to be run only once and put the above values into their
corresponding fields. My actual function is not that expensive, but
has to be run for every retrieved row, obviously. Does this change
your explanation?

Thank you

Daniel
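[Editorial note] Tom's OFFSET 0 rewrite gives exactly the compute-once behavior asked for here; the two output columns simply both reference the single inner-select result. A plain-Python illustration of the call-count difference (is_odd is a stand-in for the real function, not Postgres code):

```python
# Count how often the expensive function runs under each approach.
calls = 0

def is_odd(n):
    global calls
    calls += 1
    return n % 2 == 1

rows = [1, 2, 3]

# Like the original query: each CASE evaluates is_odd, twice per row.
naive = [("Update" if is_odd(n) else "Insert", n if is_odd(n) else None)
         for n in rows]
naive_calls = calls

# Like the OFFSET 0 subselect: evaluate once per row, reuse the result.
calls = 0
rewritten = []
for n in rows:
    odd = is_odd(n)
    rewritten.append(("Update" if odd else "Insert", n if odd else None))

assert naive == rewritten
print(naive_calls, calls)  # 6 3
```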



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] role passwords and md5()

2007-04-13 Thread Ben Trewern
How does this work when you rename a role?  Is the password hash 
changed (and how?), or is the original username kept somewhere in the system 
tables?

Regards,

Ben

Andrew Kroeger [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 Lutz Broedel wrote:
 Dear list,

 I am trying to verify the password given by a user against the system
 catalog. Since I need the password hash later on, I can not just use the
 authentication mechanism for verification, but need to do this in SQL
 statements.
 Unfortunately, even if I set passwords to use MD5 encryption in
 pg_hba.conf, the SQL function MD5() returns a different hash.

 A (shortened) example:
 CREATE ROLE my_user WITH ENCRYPTED PASSWORD 'my_password';

 SELECT * FROM pg_authid
 WHERE rolname='my_user' AND rolpassword=MD5('my_password');

 Any ideas, what to do to make this work?
 Best regards,
 Lutz Broedel

 A quick look at the source shows that the hashed value stored in
 pg_authid uses the role name as a salt for the hashing of the password.
 Moreover, the value in pg_authid has the string md5 prepended to the
 hash value (I imagine to allow different hash algorithms to be used, but
 I haven't personally seen anything but md5).

 Given your example above, the following statement should do what you are
 looking for:

 SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword = 'md5'
 || md5('my_password' || 'my_user');

 Hope this helps.

 Andrew

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings
 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-04-13 Thread Ron Johnson
On 04/09/07 14:05, Bill Moran wrote:
 In response to Alexander Staubo [EMAIL PROTECTED]:
 
 On Apr 9, 2007, at 18:10 , Gerard Seibert wrote:

 On Mon, 9 Apr 2007 10:34:22 -0500
 Erik Jones [EMAIL PROTECTED] wrote:

 Hmmm...  I didn't have anything HTML set anywhere.  I did however
 have message formatting set to Rich-Text (although I was unaware)
 and have switched that to plaintext.  This look better?
 Yes, much better. I believe that 'rich text' is essentially HTML,
 although I might be mistaken. I know that 'GMail' users have that
 problem all the time. Of course 'GMail' users have lots of other
 problems also.
 Erik has actually been posting messages as multipart/alternative,  
 where the message is included in both 7-bit plaintext *and* HTML, the  
 idea being that the mail viewer itself can pick the format it knows  
 best. You are simply using a mail reader which prioritizes HTML;  
 perhaps it has a setting to let you prefer plaintext?
 
 email religion
 Sylpheed has this option, which is one of the reasons I use it.
 /email religion

As does Tbird.

--
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-04-13 Thread Ron Johnson
On 04/09/07 10:50, Erik Jones wrote:
 
 On Apr 9, 2007, at 9:46 AM, Vivek Khera wrote:
[snip]
 One thing that was really counter-intuitive to me from a guy who runs
 really large databases, was to get rid of some of the FK's and manage
 them in the application layer.  This one scares me since I've had my
 behind saved at least a couple of times by having the extra layer in
 the DB to protect me... the data integrity would be managed by some
 external program that sweeps the DB every so often and purges out data
 that should no longer be there (ie stuff that would have been CASCADE
 DELETEd).
 
 This is often debated, and it does seem strange to hear that stance from
 a DBA.  It's normally the application developers who want to do that.

It depends on how efficient your engine and site are at deleting
cascades.  If it causes an unacceptable amount of extra locking in a
multi-user situation, away goes the FK and in comes the off-hour
sweeper.
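
Such an off-hour sweeper can be as simple as a periodic anti-join delete (the table and column names below are hypothetical, a sketch only):

```sql
-- Hypothetical sweeper replacing ON DELETE CASCADE: purge child rows
-- whose parent row has been deleted.
DELETE FROM child
WHERE NOT EXISTS (SELECT 1 FROM parent p WHERE p.id = child.parent_id);
```

Run from cron during the quiet hours, this trades immediate referential integrity for reduced locking during the busy period.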

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGH41yS9HxQb37XmcRArGnAJ4n12NxeKleCf7n1OFUtOQYnJy1wQCg6OVz
fMjwTsezDnukoV8yyXTouJw=
=XgVW
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] [ADMIN] Increasing the shared memory

2007-04-13 Thread Tom Lane
Sorin N. Ciolofan [EMAIL PROTECTED] writes:
  I will simplify the things in order to describe when the error occurred:
 The input of the application is some data which is read from files on disk,
 processed and then inserted in the database in one transaction. This total
 quantity of data represents an integer number of data files, n*q, where q is
 a file which has always 60kb and n is the positive integer.
 For n=23 and shared_buffers=1000 and max_locks_per_transaction=64 the
 Postgres throws the following exception:

 org.postgresql.util.PSQLException: ERROR: out of shared memory

 For n=23 I estimated that we create and manipulate about 8000 tables. 

Okay, as far as I know the only user-causable way to get that message is
to run out of lock-table space, and a transaction does take a lock for
each table it touches, so I concur that raising
max_locks_per_transaction is an appropriate response.  If you didn't see
any change in the maximum N you could handle then I wonder whether you
actually did raise it --- does SHOW max_locks_per_transaction reflect
the intended new value?

Another possibility is that there's something about your code that makes
the number of locks involved very nonlinear in N.  You could try
checking the number of rows in pg_locks immediately before commit at
some smaller values of N to confirm what the scaling is really like.
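
A check along those lines might look like the following (a sketch; pg_locks is a system view, so this can be run in the loading session just before COMMIT, or from a second session while the load is in flight):

```sql
-- Count the locks currently held, broken down by lock type and mode:
SELECT locktype, mode, count(*)
FROM pg_locks
GROUP BY locktype, mode
ORDER BY count(*) DESC;
```

If the total grows much faster than linearly in N, the application is taking more locks per table than expected.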

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] role passwords and md5()

2007-04-13 Thread Ben Trewern
I thought I read this before I sent it. :-(

What I meant to say was:
Does the password hash change (and how?) Or is the original username kept 
somewhere in the system tables?

Regards,

Ben

Ben Trewern [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 How does this work when you rename a role?  Does the is the password hash 
 changed (and how?) or is the original username kept somewhere in the 
 system tables?

 Regards,

 Ben

 Andrew Kroeger [EMAIL PROTECTED] wrote in message 
 news:[EMAIL PROTECTED]
 Lutz Broedel wrote:
 Dear list,

 I am trying to verify the password given by a user against the system
 catalog. Since I need the password hash later on, I can not just use the
 authentication mechanism for verification, but need to do this in SQL
 statements.
 Unfortunately, even if I set passwords to use MD5 encryption in
 pg_hba.conf, the SQL function MD5() returns a different hash.

 A (shortened) example:
 CREATE ROLE my_user WITH ENCRYPTED PASSWORD 'my_password';

 SELECT * FROM pg_authid
 WHERE rolname='my_user' AND rolpassword=MD5('my_password');

 Any ideas, what to do to make this work?
 Best regards,
 Lutz Broedel

 A quick look at the source shows that the hashed value stored in
 pg_authid uses the role name as a salt for the hashing of the password.
 Moreover, the value in pg_authid has the string 'md5' prepended to the
 hash value (I imagine to allow different hash algorithms to be used, but
 I haven't personally seen anything but md5).

 Given your example above, the following statement should do what you are
 looking for:

 SELECT * FROM pg_authid WHERE rolname='my_user' AND rolpassword = 'md5'
 || md5('my_password' || 'my_user');

 Hope this helps.

 Andrew

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings


 



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Tom Lane
Nitin Verma [EMAIL PROTECTED] writes:
 xlog.c code from version we use (7.3.2)
 ...
 What all should be done when this happened? Any suggestions.

Updating to something newer than 7.3.2 would seem to be a good idea.
7.3.18 is the current release in that branch.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Is there a shortage of postgresql skilled ops people

2007-04-13 Thread Martin Gainty
Oracle's suggested solution for the 'mutating table error' is to create a global 
temporary table for the parent.
To avoid inconsistent behaviour with the parent table, an AFTER ROW trigger 
checks new rows and commits rows only to the temporary table; then
the changes from the temp table are committed to the permanent parent table 
when the AFTER STATEMENT trigger is executed.

http://www.akadia.com/services/ora_mutating_table_problems.html
M--
This email message and any files transmitted with it contain confidential
information intended only for the person(s) to whom this email message is
addressed.  If you have received this email message in error, please notify
the sender immediately by telephone or email and destroy the original
message without making a copy.  Thank you.

- Original Message - 
From: Ron Johnson [EMAIL PROTECTED]

To: [EMAIL PROTECTED]
Sent: Friday, April 13, 2007 10:02 AM
Subject: Re: [GENERAL] Is there a shortage of postgresql skilled ops people



-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 04/09/07 10:50, Erik Jones wrote:


On Apr 9, 2007, at 9:46 AM, Vivek Khera wrote:

[snip]

One thing that was really counter-intuitive to me from a guy who runs
really large databases, was to get rid of some of the FK's and manage
them in the application layer.  This one scares me since I've had my
behind saved at least a couple of times by having the extra layer in
the DB to protect me... the data integrity would be managed by some
external program that sweeps the DB every so often and purges out data
that should no longer be there (ie stuff that would have been CASCADE
DELETEd).


This is often debated and it does seem strange to hear that stance from
a dba.  It's normally the application developers who want to do that.


It depends on how efficient your engine and site are at deleting
cascades.  If it causes an unacceptable amount of extra locking in a
multi-user situation, away goes the FK and in comes the off-hour
sweeper.

- --
Ron Johnson, Jr.
Jefferson LA  USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)

iD8DBQFGH41yS9HxQb37XmcRArGnAJ4n12NxeKleCf7n1OFUtOQYnJy1wQCg6OVz
fMjwTsezDnukoV8yyXTouJw=
=XgVW
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Regard to PANIC: unexpected hash relation size

2007-04-13 Thread Tom Lane
Vlastimil Krejcir [EMAIL PROTECTED] writes:
I have had PostgreSQL running for one and a half years without any issues until 
 yesterday. I get this:

PANIC:  unexpected hash relation size: 122, should be 4
 server closed the connection unexpectedly

I have PostgreSQL 8.1.5 on SunOS (5.10 Generic_118833-36 sun4u sparc 
 SUNW,Sun-Fire-V445).

Mmm ... I don't think so, since AFAICT that message string is not
present in 8.1.5.  Please recheck the server version.

If you can put together a reproducible test case for this, we should
be able to fix it, but right now there's not enough info available to
guess what's going wrong.  FWIW, I imagine that a test case would
involve a specific sequence of data inserted into a hash index.
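
A skeleton for such a test case might look like the following (table and index names are hypothetical; the hard part is finding the particular insertion sequence that provokes the failure):

```sql
-- Hypothetical reproduction skeleton for a hash-index bug report:
CREATE TABLE hash_test (k text);
CREATE INDEX hash_test_idx ON hash_test USING hash (k);
-- Insert the specific data sequence suspected of triggering the PANIC:
INSERT INTO hash_test
SELECT md5(i::text) FROM generate_series(1, 1000000) s(i);
```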

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Nitin Verma
Thanx Tom, anyway we are moving to 8.1.0 soon.

Leaving that aside, moving all our clients to a newer release will take some time. I
hope you know how it works in a product. Till that time we need to release a
patch that recovers from this condition.

Said that, do we have some advice or workarounds?

I saw 8.1.0's code; it even ends up handling the same condition.

/*
 * If we still haven't flushed to the request point then we have a
 * problem; most likely, the requested flush point is past end of XLOG.
 * This has been seen to occur when a disk page has a corrupted LSN.
 *
 * Formerly we treated this as a PANIC condition, but that hurts the
 * system's robustness rather than helping it: we do not want to take down
 * the whole system due to corruption on one data page.  In particular, if
 * the bad page is encountered again during recovery then we would be
 * unable to restart the database at all!  (This scenario has actually
 * happened in the field several times with 7.1 releases.  Note that we
 * cannot get here while InRedo is true, but if the bad page is brought in
 * and marked dirty during recovery then CreateCheckPoint will try to
 * flush it at the end of recovery.)
 *
 * The current approach is to ERROR under normal conditions, but only
 * WARNING during recovery, so that the system can be brought up even if
 * there's a corrupt LSN.  Note that for calls from xact.c, the ERROR will
 * be promoted to PANIC since xact.c calls this routine inside a critical
 * section.  However, calls from bufmgr.c are not within critical sections
 * and so we will not force a restart for a bad LSN on a data page.
 */
if (XLByteLT(LogwrtResult.Flush, record))
    elog(InRecovery ? WARNING : ERROR,
         "xlog flush request %X/%X is not satisfied --- flushed only to %X/%X",
         record.xlogid, record.xrecoff,
         LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);

Thus there is a probability of the same happening again, so we will need a
solution to recover from it.


So I re-quote myself again:

=
A java process using postgres 7.3.2, got these errors 

java.sql.SQLException: ERROR:  XLogFlush: request 0/240169BC is not satisfied
--- flushed only to 0/23FFC01C

While these errors where filling the logs, we were able to connect via psql,
and see all the data.

 This has been seen to occur when a disk page has a corrupted LSN
I suppose LSN refers to Logical sector number of a WAL. If that was corrupted
how-come we were able to access it via psql. Is it just an isolated
phenomenon? Does postgres have an auto-recovery for this? If yes did old
connections have stale values of LSN?

Coming to safeguard:

1. Is there any use of restarting java process when this happens?
2. Is there any use of or Is it safe to restart postmaster at this time?

What all should be done when this happened? Any suggestions.

=



-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 13, 2007 8:18 PM
To: Nitin Verma
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] ERROR: XLogFlush: request 

Nitin Verma [EMAIL PROTECTED] writes:
 xlog.c code from version we use (7.3.2)
 ...
 What all should be done when this happened? Any suggestions.

Updating to something newer than 7.3.2 would seem to be a good idea.
7.3.18 is the current release in that branch.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] corr() in 8.1

2007-04-13 Thread Tom Lane
SunWuKung [EMAIL PROTECTED] writes:
 Is there an easy way to have the corr() statistical function in 8.1?

Nothing that's likely to be easier than updating to 8.2 ... and given
the lack of multiple-input aggregates in 8.1, anything you did would
be a serious kluge anyway.
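
For the record, Pearson's r can be computed on 8.1 with the standard single-input aggregates; this is the sort of kluge alluded to above (table and column names x, y are hypothetical, and the columns are assumed to be float8 -- cast first otherwise):

```sql
-- Pearson correlation coefficient without corr(); divides by zero
-- if either column has no variance.
SELECT (count(*) * sum(x * y) - sum(x) * sum(y))
       / (sqrt(count(*) * sum(x * x) - sum(x) ^ 2)
          * sqrt(count(*) * sum(y * y) - sum(y) ^ 2)) AS r
FROM data;
```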

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Alvaro Herrera
Nitin Verma wrote:
 Thanx Tom, anyway we are moving to 8.1.0 soon.
 
 Leaving that moving all our client to newer release will take sometime. I
 hope you know how it works in a product. Till that time we need to release a
 patch that recovers from this condition.
 
 Said that, do we have some advice or workarounds?

Please read http://www.postgresql.org/support/versioning

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Nitin Verma

 Please read http://www.postgresql.org/support/versioning
Quoting part of the document:

Upgrading to a minor release does not require a dump and restore; merely stop
the database server, install the updated binaries, and restart the server.
For some releases, manual changes may be required to complete the upgrade, so
always read the release notes before upgrading. 
=

Thanx, Alvaro... looks like I can just change the binaries and move clients to
7.3.18, without any database recreation or dump/restore. This I can do in the
patch itself.

So Alvaro/Tom, do you think 7.3.18 will get me on a lower risk of getting in
this situation?


-Original Message-
From: Alvaro Herrera [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 13, 2007 8:59 PM
To: Nitin Verma
Cc: Tom Lane; [EMAIL PROTECTED]
Subject: Re: [GENERAL] ERROR: XLogFlush: request

Nitin Verma wrote:
 Thanx Tom, anyway we are moving to 8.1.0 soon.
 
 Leaving that moving all our client to newer release will take sometime. I
 hope you know how it works in a product. Till that time we need to release
a
 patch that recovers from this condition.
 
 Said that, do we have some advice or workarounds?

Please read http://www.postgresql.org/support/versioning

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Scott Marlowe
On Fri, 2007-04-13 at 10:10, Nitin Verma wrote:
 Thanx Tom, anyway we are moving to 8.1.0 soon.
 
 Leaving that moving all our client to newer release will take sometime. I
 hope you know how it works in a product. Till that time we need to release a
 patch that recovers from this condition.
 
 Said that, do we have some advice or workarounds?
 
 I saw 8.1.0's code; it even ends up handling the same condition.

A couple of points.

1:  This problem may be fixed in the latest 7.3.18 version.  An upgrade
from 7.3.x to 7.3.18 is pretty close to painless.  Shut down pgsql,
update package, startup pgsql.  Backup beforehand is a nice option, but
you should have backups already anyway...

2:  Do not upgrade to 8.1.0.  It's been updated many times since then. 
I think the latest 8.1.x is 8.1.8 or so. Get that.  

3:  Keep your versions updated and you should avoid most situations like
this in the future.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] ERROR: XLogFlush: request

2007-04-13 Thread Nitin Verma
 An upgrade from 7.3.x to 7.3.18 is pretty close to painless

http://www.postgresql.org/docs/7.3/static/release-7-3-13.html
http://www.postgresql.org/docs/7.3/static/release-7-3-10.html

Can't do it blindfolded but still it can be categorized as painless :)

 2:  Do not upgrade to 8.1.0.  It's been updated many times since then. I
think the latest 8.1.x is 8.1.8 or so. Get that.

Thanx and understood

 3:  Keep your versions updated and you should avoid most situations like
this in the future.

I hope I do realize that by now :)


I can see a relevant change at
http://www.postgresql.org/docs/7.3/static/release-7-3-5.html

* Force zero_damaged_pages to be on during recovery from WAL

Is this related to XLogFlush problem?




-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 13, 2007 9:26 PM
To: Nitin Verma
Cc: pgsql general
Subject: Re: [GENERAL] ERROR: XLogFlush: request

On Fri, 2007-04-13 at 10:10, Nitin Verma wrote:
 Thanx Tom, anyway we are moving to 8.1.0 soon.
 
 Leaving that moving all our client to newer release will take sometime. I
 hope you know how it works in a product. Till that time we need to release
a
 patch that recovers from this condition.
 
 Said that, do we have some advice or workarounds?
 
 I saw 8.1.0's code; it even ends up handling the same condition.

A couple of points.

1:  This problem may be fixed in the latest 7.3.18 version.  An upgrade
from 7.3.x to 7.3.18 is pretty close to painless.  Shut down pgsql,
update package, startup pgsql.  Backup beforehand is a nice option, but
you should have backups already anyway...

2:  Do not upgrade to 8.1.0.  It's been updated many times since then. 
I think the latest 8.1.x is 8.1.8 or so. Get that.  

3:  Keep your versions updated and you should avoid most situations like
this in the future.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] error creating/setting sequence, pg_dump / pg_restore 8.1.5

2007-04-13 Thread Mason Hale

Hello -

After running pg_dump to backup my database, and then running pg_restore to
load the db (on a different server),
I run into a problem with creating a sequence. After the pg_restore is
completed, the sequence is created, but
the value is not set correctly. As a result, calls to nextval for the sequence
start at 1 and lead to id collisions on the table until the sequence value is
reset manually.

Here's the relevant output from pg_restore:

pg_restore: creating TABLE entry
... [snipping out other CREATE TABLE and SEQUENCE SET statements]
pg_restore: creating SEQUENCE entry_id_seq
pg_restore: [archiver (db)] Error from TOC entry 1355; 1259 1302158 SEQUENCE
entry_id_seq lss
pg_restore: [archiver (db)] could not execute query: ERROR:  relation
entry_id_seq already exists
   Command was: CREATE SEQUENCE entry_id_seq
   INCREMENT BY 1
   NO MAXVALUE
   NO MINVALUE
   CACHE 1;
pg_restore: executing SEQUENCE SET entry_id_seq

Looking at the pg_restore output, I can see that table 'entry' is created
before the sequence 'entry_id_seq'.
The entry table is defined with a serial column named 'id' -- so my
understanding is this implicitly creates
a sequence named 'entry_id_seq' during the table create. But then it tries
to create the entry_id_seq
again and fails.

Further, it appears the subsequent SEQUENCE SET entry_id_seq doesn't
actually set the
sequence value -- because after the restore is finished, select
nextval('entry_id_seq') returns 1.

Both databases and all versions of pg_dump and pg_restore are 8.1.5.

Here's the exact commands used:

dump the db on server1:
pg_dump -U postgres -Fc -v bduprod_2 > /data/backups/working/bduprod_2.dump

restore on server2:
nohup pg_restore -d blizzard -c -v -O -U postgres bduprod_2.dump > restore.log 2>&1

(where 'blizzard' is a new database created from template0)

I can fix the sequence easy enough by running:

SELECT setval('entry_id_seq', (SELECT COALESCE(MAX(id)+(SELECT increment_by
FROM entry_id_seq), (SELECT min_value FROM entry_id_seq)) FROM entry),
false)

In the blizzard database on server2.
BUT -- I dump and restore these snapshots on a regular basis and would like
to skip this step if possible.
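
For reference, the same reset can be written without hard-coding the sequence name, using pg_get_serial_sequence() (available in 8.1); a sketch:

```sql
-- Reset the serial column's sequence from the table contents; for an
-- empty table this leaves the sequence at 1 (so nextval returns 2).
SELECT setval(pg_get_serial_sequence('entry', 'id'),
              (SELECT COALESCE(max(id), 1) FROM entry));
```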

Thanks in advance,
Mason


[GENERAL] Cursor Contents display in pgAdmin - Help.

2007-04-13 Thread steve shiflett
I wish to display the content of a cursor but haven't discovered the 
trick.  Here's what I'm doing:


-The function


CREATE OR REPLACE FUNCTION tmp.sps(character varying, date)
RETURNS refcursor AS
$BODY$
DECLARE
ref refcursor;
BEGIN
  OPEN ref FOR select * from tmp.sps_measurement  where logdate  $2;
  RETURN  ref;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

---psql 
script expected to display the contents of the cursor


BEGIN;

declare
ref cursor for select tmp.sps('=','2006-06-12') ;

FETCH all IN ref;
commit;

--- Output:

xxx z=#  \i sps_test.sql
BEGIN
DECLARE CURSOR
  sps


<unnamed portal 9>
(1 row)

COMMIT
xxx z=#

(I am expecting the rows in the tmp.sps_measurement table.)  Can anyone 
help me out with this?
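
One standard technique from the plpgsql documentation is to give the cursor a fixed portal name inside the function and then FETCH from that name in the same transaction. The sketch below uses hypothetical names (and assumes a `>` comparison, since the original operator was lost):

```sql
-- Inside the function, assign the refcursor a fixed name so the caller
-- knows which portal to fetch from:
CREATE OR REPLACE FUNCTION tmp.sps2(date) RETURNS refcursor AS $$
DECLARE
    ref refcursor := 'sps_cursor';   -- fixes the portal name
BEGIN
    OPEN ref FOR SELECT * FROM tmp.sps_measurement WHERE logdate > $1;
    RETURN ref;
END;
$$ LANGUAGE plpgsql;

-- Caller: open the cursor, then fetch from the known portal name
-- before the transaction ends.
BEGIN;
SELECT tmp.sps2('2006-06-12');
FETCH ALL IN sps_cursor;
COMMIT;
```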


Steve






[GENERAL] Update a Value From TEdit

2007-04-13 Thread Bob Pawley
I am attempting to update a table using a value that is displayed on a TEdit 
component. I am getting an access violation error.

Can someone tell me what I am doing wrong??

procedure TLoopBuilderForm.NewLoopButtonClick(Sender: TObject);

  var Edit1 : TEdit ;

begin
   

   PSQLQuery1.Close;{close the query}
  //assign new SQL expression Monitor
  PSQLQuery1.SQL.Clear;
  PSQLQuery1.SQL.Add ('Update P_ID.Loops');
  PSQLQuery1.SQL.Add ('Set IDW_Loop_ID = + LoopNumberEdit');
  PSQLQuery1.SQL.Add ('Where P_ID.Loops.Loop_Name = :Loop_Name');
  PSQLQuery1.ParamByName('LoopNumberEdit').Value := Edit1.Text;
  PSQLQuery1.Execsql;

end;

Running Delphi 6 and PostgreSQL 8.2 on Win XP.

Bob

[GENERAL] OT: schema-sketching software

2007-04-13 Thread Kynn Jones

Hi.  I'm looking for Pg- and OSX-friendly software for generating schema
diagrams.  At its most basic level, I'm looking for the graphics counterpart
of pg_dump -s, although it would be nice if the program could take as input
the name of a file containing an arbitrary schema definition (i.e. not
necessarily corresponding to an existing database).
TIA!

kj


Re: [GENERAL] Update a Value From TEdit

2007-04-13 Thread Raymond O'Donnell

On 13/04/2007 19:05, Bob Pawley wrote:


I am attempting to update a table using a value that
is displayed on a TEdit component. I am getting an
access violation error.


Hi Bob,

An access violation error means that some part of your code is trying to
access memory that it shouldn't. It most likely means that you're trying
to use a component that hasn't yet been created, so it's a bug in your
code rather than a problem with PostgreSQL.


   PSQLQuery1.Close;{close the query}
  //assign new SQL expression Monitor
  PSQLQuery1.SQL.Clear;
  PSQLQuery1.SQL.Add ('Update P_ID.Loops');
  PSQLQuery1.SQL.Add ('Set IDW_Loop_ID = + LoopNumberEdit');
  PSQLQuery1.SQL.Add ('Where P_ID.Loops.Loop_Name = :Loop_Name');
  PSQLQuery1.ParamByName('LoopNumberEdit').Value := Edit1.Text;
  PSQLQuery1.Execsql;


How do you create PSQLQuery1? - Did you drop it on the form at
design-time, or have you created it on-the-fly in your code? If the
latter, then that's probably your problem - have you called
PSQLQuery1.Create() somewhere before the code above executes?

However, if PSQLQuery1 was created at design time, then the problem lies
elsewhere. Look for some component which you've declared, but whose
constructor you haven't invoked.

BTW, from the code above it seems you're passing in an integer value to
the query; I prefer to use ParamByName().AsInteger as an extra layer of
safety, to ensure that an integer actually does get passed in.

Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


Re: [GENERAL] Update a Value From TEdit

2007-04-13 Thread Raymond O'Donnell

On 13/04/2007 19:25, Raymond O'Donnell wrote:


PSQLQuery1.SQL.Add ('Where P_ID.Loops.Loop_Name = :Loop_Name');
PSQLQuery1.ParamByName('LoopNumberEdit').Value := Edit1.Text;


I meant to say also that the parameter name you're passing to the 
ParamByName() function isn't the same as the one you've specified in the 
query.


Ray.

---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Update a Value From TEdit

2007-04-13 Thread Arthur Hoogervorst

Hi:

What's the point of declaring the TEdit locally? Use a string
instead. Additionally, in Delphi (and any other OOP language), if you
want to use objects, you instantiate them first, as in  Edit1 :=
TEdit.Create(nil);


Regards,


Arthur



On 4/13/07, Bob Pawley [EMAIL PROTECTED] wrote:



I am attempting to update a table using a value that is displayed on a TEdit
component. I am getting an access violation error.

Can someone tell me what I am doing wrong??

procedure TLoopBuilderForm.NewLoopButtonClick(Sender:
TObject);

  var Edit1 : TEdit ;

begin


   PSQLQuery1.Close;{close the query}
  //assign new SQL expression Monitor
  PSQLQuery1.SQL.Clear;
  PSQLQuery1.SQL.Add ('Update P_ID.Loops');
  PSQLQuery1.SQL.Add ('Set IDW_Loop_ID = + LoopNumberEdit');
  PSQLQuery1.SQL.Add ('Where P_ID.Loops.Loop_Name = :Loop_Name');
  PSQLQuery1.ParamByName('LoopNumberEdit').Value := Edit1.Text;
  PSQLQuery1.Execsql;

end;

Running Delphi 6 and PostgreSQL 8.2 on Win XP.

Bob


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] OT: schema-sketching software

2007-04-13 Thread Jon Sime
Kynn Jones wrote:
 Hi.  I'm looking for Pg- and OSX-friendly software for generating schema
 diagrams.  At its most basic level, I'm looking for the graphics
 counterpart of pg_dump -s, although it would be nice if the program
 could take as input the name of a file containing an arbitrary schema
 definition ( i.e. not necessarily corresponding to an existing database).

If you don't mind doing a little Perl (with the side benefit being that
you could more easily automate the diagramming for large numbers of
schemas, or schemas that go through numerous changes), you might want to
check out SQL Fairy: http://sqlfairy.sourceforge.net/

For a non-free package specifically designed for OS X, SQLEditor might
fit the bill. The trial version seemed to fare okay on a couple of my
less complicated definition files. It also appears to have the ability
to connect directly to a database and build the diagram that way, though
that might be disabled in the trial version since it did nothing for me.
It seems to only pay attention to table definitions (and their FKs);
generic (aka non-PK/FK/unique/null) constraints, functions, triggers,
custom datatypes, etc. all were ignored from what I could tell when I
had it parse my definition files.

-Jon

--
Senior Systems Developer
Media Matters for America
http://mediamatters.org/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] meaning of Total runtime

2007-04-13 Thread jungmin shin

Could somebody explain to me the meaning of Total runtime, which is
generated by the explain analyze command?

Thanks,
Jungmin

--
Jungmin Shin


[GENERAL] Temporary table in pl/pgsql

2007-04-13 Thread Raymond O'Donnell

Hello again all,

I'm using a temporary table in a pl/PgSQL function, and I've read the 
bit in the FAQ about using EXECUTE to force the table-creation query to 
be re-evaluated each time the function is called.


However, it doesn't seem to work for me. The first time I call the 
function, all is well; the second (and every subsequent) time, the 
function bombs with the 'relation with OID X does not exist' error - 
this seems to imply that the EXECUTE statement is getting cached along 
with the rest of the function, which according to the docs ain't 
supposed to happen :-(


I'm using 8.2.3 on WinXP.

Here's the first part of the function - I'll supply a complete test case 
if necessary:


-
create or replace function tutor_availability_remaining
(
  aTerm integer,
  anInstrument varchar,
  aLevel varchar
)
returns setof tutor_availability
as
$$
declare
  OrigBlock tutor_availability%rowtype;
  SlotsForDay teachingslots%rowtype;
begin
  -- Create a temporary table to hold the results.
  -- Use EXECUTE to force this to be executed each
  -- time, as per the FAQ.
  execute 'create temporary table TheResults('
|| 'block_id integer, term_id integer, term_name varchar(40), '
|| 'the_date date, month_name varchar(12), day_name varchar(12), '
|| 'is_weekend boolean, tutor_id integer, surname varchar(40), '
|| 'firstname varchar(40), block_starts time without time zone, '
|| 'block_ends time without time zone)';

  -- stuff snipped here...
  -- etc etc



I wondered if the string passed to EXECUTE, being entirely literal, was 
somehow getting over-optimised :) and I tried changing the second line 
above to use one of the parameters passed in, thus -


  'term_id integer default ' || quote_literal(aTerm) ...

- but it made no difference.

What am I missing?

TIA,

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] meaning of Total runtime

2007-04-13 Thread Joshua D. Drake

jungmin shin wrote:


Could somebody explain to me the meaning of Total runtime, which is
generated by the explain analyze command?


It is the time it took the query to execute.
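
For example (the output below is illustrative only; actual timings vary, and Total runtime is the server-side executor time, excluding planning and network transfer):

```sql
EXPLAIN ANALYZE SELECT count(*) FROM pg_class;
--  Aggregate  (cost=... rows=1 ...) (actual time=... rows=1 loops=1)
--    ->  Seq Scan on pg_class  (...)
--  Total runtime: ... ms
```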

 
Thanks,

Jungmin

--
Jungmin Shin



--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/

Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Temporary table in pl/pgsql

2007-04-13 Thread Merlin Moncure

On 4/13/07, Raymond O'Donnell [EMAIL PROTECTED] wrote:

Hello again all,

I'm using a temporary table in a pl/PgSQL function, and I've read the
bit in the FAQ about using EXECUTE to force the table-creation query to
be re-evaluated each time the function is called.

However, it doesn't seem to work for me. The first time I call the
function, all is well; the second (and every subsequent) time, the
function bombs with the 'relation with OID X does not exist' error -
this seems to imply that the EXECUTE statement is getting cached along
with the rest of the function, which according to the docs ain't
supposed to happen :-(


in addition to the 'create table' stmt, all queries that touch the
table must also be dynamic.  postgresql 8.3 will have improved plan
invalidation which will (aiui) remove this requirement.
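
A minimal sketch of the pre-8.3 workaround (every statement touching the temp table goes through EXECUTE, so no cached plan ever embeds the table's OID; all names here are hypothetical):

```sql
create or replace function demo_temp() returns integer as $body$
declare
  n integer;
begin
  execute 'create temporary table tmp_demo(v integer)';
  execute 'insert into tmp_demo values (1), (2), (3)';
  -- reads must be dynamic too; a static SELECT here would cache the
  -- first call's table OID and fail on the second call
  execute 'select count(*) from tmp_demo' into n;
  execute 'drop table tmp_demo';
  return n;
end;
$body$ language plpgsql;
```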

merlin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Temporary table in pl/pgsql

2007-04-13 Thread Listmail


OK, suppose in his function :
	- if it does not exist, he creates the temp table, with ON COMMIT DELETE ROWS

	- if it does exist, he truncates it just to be sure

	So the next execution of the function will find the temp table, it will  
have the same OID, all is well.


Now :

BEGIN
execute the function (and creates the table)
ROLLBACK
execute the function

	The Rollback will have rolled back the table creation too. Now when he  
executes the function again, will it get the cached plan with the rolled  
back table's OID ?



On Fri, 13 Apr 2007 22:55:49 +0200, Merlin Moncure [EMAIL PROTECTED]  
wrote:



On 4/13/07, Raymond O'Donnell [EMAIL PROTECTED] wrote:

Hello again all,

I'm using a temporary table in a pl/PgSQL function, and I've read the
bit in the FAQ about using EXECUTE to force the table-creation query to
be re-evaluated each time the function is called.

However, it doesn't seem to work for me. The first time I call the
function, all is well; the second (and every subsequent) time, the
function bombs with the 'relation with OID X does not exist' error -
this seems to imply that the EXECUTE statement is getting cached along
with the rest of the function, which according to the docs ain't
supposed to happen :-(


in addition to the 'create table' stmt, all queries that touch the
table must also be dynamic.  postgresql 8.3 will have improved plan
invalidation which will (aiui) remove this requirement.

merlin





---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Temporary table in pl/pgsql

2007-04-13 Thread Raymond O'Donnell

On 13/04/2007 21:55, Merlin Moncure wrote:


in addition to the 'create table' stmt, all queries that touch the
table must also be dynamic.  postgresql 8.3 will have improved plan
invalidation which will (aiui) remove this requirement.


Thanks for that - just tried it and it worked.

Ray.


---
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
[EMAIL PROTECTED]
---



Re: [GENERAL] question

2007-04-13 Thread Andrej Ricnik-Bay

On 4/13/07, Terry Martin [EMAIL PROTECTED] wrote:

I would like to know if there is a utility to take a UDP packet which
has specific information in the payload, extract the information from
the packet, and place it in the Postgres database?

Which OS (on Linux, ulogd and/or tcpdump spring to mind)? Do you need
it captured in real time, or would batch processing on the hour or at
end of day suit you just fine?


Cheers,
Andrej



[GENERAL] indexing array columns

2007-04-13 Thread Rajarshi Guha
Hi, I have a table of about 10M rows. It has two columns A and B, where
A is a text field and B is a real[12] field.

Now when this table is queried it is usually of the form:

select A from theTable where sim(B, C) > 0.8

Now C will be a 12-element array specified by the user and the value 0.8
can be arbitrary. The function, sim(), is essentially a similarity
function which simply computes the inverse of the Manhattan distance
between the query array and the rows of the array column. Right now the
above query uses a seq scan.

Furthermore, the values of the individual array elements for any given
row can vary from 0 to infinity (though in most cases they will be less
than 1000).

My question is: how can I index column B so that such queries are fast?

I realize that my table is essentially a collection of 12-dimensional
points and that I could replace my similarity function with a distance
function. 

Thus my query boils down to asking 'find me rows of the table that are
within X distance of my query'

I know that the GIS community deals with 2D points, but I'm not familiar
with this area, and if I understand correctly they use Euclidean
distances, whereas I need Manhattan distances.

What type of indexing available in Postgres could be used for my
problem? Would it require me to implement my own indexing scheme?
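One approach that may be worth trying (a sketch, assuming the contrib/cube module; index and column names are hypothetical): every point within Manhattan distance r of the query also lies inside the axis-aligned box of half-width r around it, so a GiST index on a cube representation of B can serve as a cheap prefilter, with the exact sim() test applied only to the surviving candidates. Since sim() is the inverse of the Manhattan distance, sim > 0.8 corresponds to distance < 1.25.

```sql
-- requires contrib/cube; mirror B into a 12-D cube column
ALTER TABLE theTable ADD COLUMN b_cube cube;
UPDATE theTable SET b_cube = cube(B);
CREATE INDEX thetable_bcube_idx ON theTable USING gist (b_cube);

-- sim(B, C) > 0.8 means Manhattan distance < 1.25, and every such point
-- lies inside the box of half-width 1.25 around C, so the GiST index
-- prefilters and sim() only runs on the candidates
SELECT A
  FROM theTable
 WHERE b_cube <@ cube_enlarge(cube(C), 1.25, 12)  -- index scan ('~' pre-8.2)
   AND sim(B, C) > 0.8;                           -- exact check
```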

Any pointers would be appreciated

Thanks,

---
Rajarshi Guha [EMAIL PROTECTED]
GPG Fingerprint: 0CCA 8EE2 2EEB 25E2 AB04 06F7 1BB9 E634 9B87 56EE
---
Every little picofarad has a nanohenry all its own.
-- Don Vonada



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Trigger on transaction?

2007-04-13 Thread sixtus

Hello all,


I've been enjoying programming Postgres for a while, but I am stuck
with a problem for which neither the excellent documentation nor a
search of this mailing list provides a solution.

It's been asked before, I too would like to have a trigger on transaction!

Use cases:

- I want to version a table which sees multiple updates during one
transaction, but I only need transaction resolution. My solution would
be a trigger on insert/update to copy into a temporary table (where
multiple updates overwrite the old value) and on commit copy into the
version table.

- I would like to have transient data within a persistent table. While
I am in the transaction, the transient information is available; on
commit it is deleted. The only other solution I can think of is a
temporary table inheriting from my persistent table. Is that doable? How
do I move a row from parent to child table, and will that row be dropped
when the transaction ends?
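For the first use case, the closest thing available today may be a deferred constraint trigger, which fires at COMMIT time rather than at statement time (a sketch; table and function names are hypothetical, and note it still fires once per queued row event, so the trigger function must tolerate seeing the same row more than once):

```sql
CREATE CONSTRAINT TRIGGER version_on_commit
    AFTER INSERT OR UPDATE ON mytable
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE copy_to_version_table();
```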


Any help greatly appreciated!

Regards,

Hagen
