Re: [GENERAL] [IDLE IN TRANSACTION] doing a remote pg_dump

2006-04-06 Thread Jim Nasby

On Apr 5, 2006, at 10:47 AM, Dick Wieland wrote:

I'm running pgsql 8.1.3 and ran into an unexpected situation running
pg_dump remotely. We have two LANs situated some distance apart. I can
run pg_dump against machine A on LAN1 from machine B on LAN1 with no
problem. But if I try to run it from machine C on LAN2 it stalls with
"idle in transaction" on the server.

I've seen some snippets of commentary on timeouts, etc. and have a
feeling I'm running into the same problem, but I can't tell whether
there is a solution or not. Can someone please point out a place where
I can read up on this situation in more detail so I can understand the
issues better?


I suspect the only place to read about it would be in the mailing list
archives.


I suspect what's happening is that something between the two
machines/networks is killing the connection part-way through the
backup. pg_dump starts a serializable transaction early on, so any
interruption in the process will result in seeing the 'idle in
transaction' status.
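A quick way to see such stuck sessions from the server side is to query
pg_stat_activity (a sketch, assuming stats_command_string is enabled in
postgresql.conf):

  SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction';

If one of those rows corresponds to the stalled pg_dump, odds are a
firewall or NAT idle timeout between the two LANs dropped the
connection.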

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Jim Nasby

On Apr 6, 2006, at 6:39 AM, Sean Davis wrote:

On 4/6/06 12:12 AM, "surabhi.ahuja" <[EMAIL PROTECTED]> wrote:

I have heard somewhere that writing a stored procedure is much better
than firing a SQL query (such as SELECT * FROM table_name) onto the
database.

Is it true, and if yes, how?


This isn't going to be true most of the time, I think.  Write SQL where
you can, and where you can't (because you can't express something in
SQL), write a procedure.  There are places where using a stored
procedure can be more efficient, but I think starting with SQL,
benchmarking and testing, and then determining what queries need
special attention is the best way to go at the beginning.


You're forgetting that (at least in plpgsql) "raw" queries get compiled
into prepared statements. Prepared statements are faster to execute
than queries that have to be parsed every time. Of course you can pass
in prepared statements from the client side as well, but if you stick
with using stored procedures as an API to the database you don't have
to worry about forgetting to do that. And as others have mentioned,
there are non-performance-related benefits to using stored procs as
well.
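As a rough illustration of the two approaches (the table and function
names below are made up, not from this thread):

  -- wrapping the query in a plpgsql function: the plan is prepared
  -- once per session and reused on every call
  CREATE OR REPLACE FUNCTION get_order(p_id integer) RETURNS SETOF orders AS $$
  DECLARE
      r orders%ROWTYPE;
  BEGIN
      FOR r IN SELECT * FROM orders WHERE id = p_id LOOP
          RETURN NEXT r;
      END LOOP;
      RETURN;
  END;
  $$ LANGUAGE plpgsql;

  -- the client-side alternative: an explicit prepared statement
  PREPARE get_order_stmt (integer) AS SELECT * FROM orders WHERE id = $1;
  EXECUTE get_order_stmt (42);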

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] autovacuum settings

2006-04-06 Thread Jim Nasby

On Apr 6, 2006, at 8:57 AM, Jebus wrote:


I am totally new to postgres and I am wondering what settings I should
be using for autovacuum?
For now I just uncommented all the defaults; is this good enough? The
database is not large but it is a web database, so there are a lot of
inserts and updates.


There's no reason to uncomment the defaults. They're defaults. :)
Unless you change settings they'll be used.


autovacuum = on                         # enable autovacuum subprocess?
autovacuum_naptime = 60                 # time between autovacuum runs, in secs
autovacuum_vacuum_threshold = 1000      # min # of tuple updates before vacuum
autovacuum_analyze_threshold = 500      # min # of tuple updates before analyze
autovacuum_vacuum_scale_factor = 0.4    # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before analyze


I recommend cutting the above 4 parameters in half.
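In postgresql.conf terms that suggestion would look something like this
(each value is simply half of the default quoted above):

  autovacuum_vacuum_threshold = 500
  autovacuum_analyze_threshold = 250
  autovacuum_vacuum_scale_factor = 0.2
  autovacuum_analyze_scale_factor = 0.1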

autovacuum_vacuum_cost_delay = -1       # default vacuum cost delay for
                                        # autovac, -1 means use
                                        # vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1       # default vacuum cost limit for
                                        # autovac, -1 means use
                                        # vacuum_cost_limit


I also uncommented these too; I am not sure what they are, though?

vacuum_cost_delay = 0   # 0-1000 milliseconds


If you set vacuum_cost_delay > 0 then every time vacuum accumulates
enough 'credits' it will sleep for that long. So with the default
settings below, vacuum would sleep after dirtying 10 pages, or after
reading 20 pages that weren't in PostgreSQL's buffer (that cost is
probably too high, depending on how much memory you have on the
server).



vacuum_cost_page_hit = 1    # 0-10000 credits
vacuum_cost_page_miss = 10  # 0-10000 credits
vacuum_cost_page_dirty = 20 # 0-10000 credits
vacuum_cost_limit = 200     # 0-10000 credits
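To make the arithmetic concrete, a small sketch (the table name is just
a placeholder):

  -- with vacuum_cost_limit = 200, vacuum_cost_page_dirty = 20 and
  -- vacuum_cost_page_miss = 10, vacuum sleeps after roughly
  -- 200 / 20 = 10 dirtied pages, or 200 / 10 = 20 buffer misses
  SET vacuum_cost_delay = 10;     -- sleep 10 ms each time the limit is reached
  VACUUM ANALYZE some_table;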

---(end of broadcast)---

TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly



--
Jim C. Nasby, Database Architect    [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"




--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] How can i import .backup file into the database

2006-04-06 Thread Jim Nasby

On Apr 4, 2006, at 4:59 AM, venu gopal wrote:

Dear List,
I am trying to import tables from a file called choro.backup. I have
created a db called dhis. Is it not possible to import data from a
.backup file? If it is not possible, then how can I import the data
into my database? Can you please give me the steps for how to import
the data from a .backup file.


PostgreSQL supports several different backup formats, and doesn't care
what you call them, so a .backup could be any of the three. You'll need
to take a look at the file to get an idea of what format it's in. If
it's in plain SQL you can just feed it to psql; if not, give us the
first few lines and we can help. If it's just gibberish then try
running it through gunzip first.
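For example (these command lines reuse the file and database names from
the original mail; the approach, not the exact flags, is the point):

  head choro.backup                  # peek at the first few lines
  psql -d dhis -f choro.backup       # if it turned out to be plain SQL
  pg_restore -d dhis choro.backup    # if it's a custom or tar format dump
  gunzip -c choro.backup > choro.sql # if the content is just gzipped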

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Not able to install PostgreSQL in my machine

2006-04-06 Thread Jim Nasby
Your best bet for installer problems is to post to their mailing list;
search for install at http://pgfoundry.org


On Apr 4, 2006, at 1:21 AM, Ravi Kumar wrote:


Hello,

I would like to install PostgreSQL on my system, which is running the
Windows XP operating system.


I downloaded the file postgresql-8.1.3-1.zip from the web-site,
extracted all files and started the installation. At the end it stops
and undoes the installation procedure, giving the error message:


Not able to create postgres user. Password is too short or not complex
enough.


In the installation FAQ screen-shots I have seen only 6 characters of
password. I tried a password of 15 characters with small and capital
alphabets, numbers and special characters. Could you please mention
some sample passwords that satisfy the user creation criteria.


- - - - - - - -

After that I downloaded the file postgresql-8.0.6-binaries-no-installer.zip
from the web-site, extracted it and executed the file psql.exe. It gives
the error message "libpq.dll not found" and asks to re-install the
application.


Kindly help me to solve the installation problems earliest.

Thanks -

Ravi Kumar B V





--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] MediaWiki and Postgresql?

2006-04-06 Thread Jim Nasby


On Apr 4, 2006, at 11:08 AM, Scott Marlowe wrote:


On Mon, 2006-04-03 at 06:36, Markus Wollny wrote:

Hi

[EMAIL PROTECTED] wrote:


Has anyone put MediaWiki up using the current version of Postgresql?


I have, although our Mediawiki installation is currently not openly
accessible. Can't say that it went like a breeze, but the obstacles
were more or less minor and writing a little custom auth-plugin wasn't
really too hard either. If your question was just along the lines of
"does it run", I can confirm that, yes, it does. I cannot say though
how well it would stand up to high loads and if every query and index
has been fully tweaked.



Given that this page:

http://www.mediawiki.org/wiki/Help:%24wgDBtype

says:

Use ""mysql"" for working code and ""PostgreSQL"" for development/ 
broken

code.

I'd guess that PostgreSQL support is not quite at the same level as
MySQL just yet.  :)


If I'm not confusing wikis (and I'm offline now, so I can't check),
Mediawiki is pretty uninterested in supporting PostgreSQL (hard to
imagine why, given that some of their folks are paid by MySQL AB), so a
fork was created on pgFoundry. I believe it's called pgpedia.

If I am confusing wikis, I'm wondering if they'd be willing to accept
patches to improve PostgreSQL support...

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] sort a referenced list

2006-04-06 Thread Jim Nasby

On Apr 4, 2006, at 3:22 PM, Matthew Peter wrote:

Tom Lane <[EMAIL PROTECTED]> wrote: Matthew Peter writes:
> What I'm trying to do is create a function that accepts a list then
sorts and groups the values (like in sql)... I will have a unique list;
I can convert it to an array later or leave it a list


There is no "list" data structure in SQL. There are tables, and there
are arrays, but it's not especially easy to pass an arbitrary table
value to a function. So you almost certainly need to define your
problem as "create a function that accepts an array then ...".

regards, tom lane

Originally I wanted to pass in a text or varchar array, group it, sort
it, then do as I willed with it, but I couldn't figure out how. I saw
some functions in contrib for doing these operations, yet only with
integer arrays.

I could always try converting the array_sort, array_uniq int[] C
functions to accept text[], but I decided to ask and see if there was a
simpler/standard way first since I don't know C all that well.


Sadly there's very little in the way of array operators in PostgreSQL,
so your best bet would be to modify what's in contrib, preferably
allowing it to work with any data type. I'm guessing that the int array
stuff is in contrib and not in the backend because we're waiting for
someone to make it work for all types...
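In the meantime, a pure-SQL workaround is possible without touching C;
a minimal sketch (the function name is mine, and it assumes 8.0+ for
generate_series):

  CREATE OR REPLACE FUNCTION array_sort_uniq(text[]) RETURNS text[] AS $$
      SELECT ARRAY(
          SELECT DISTINCT ($1)[i]
          FROM generate_series(array_lower($1, 1), array_upper($1, 1)) AS s(i)
          ORDER BY 1
      );
  $$ LANGUAGE sql IMMUTABLE STRICT;

  SELECT array_sort_uniq(ARRAY['b', 'a', 'b', 'c']);   -- {a,b,c}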

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Column descriptions - could they be propagated to new tables?

2006-04-06 Thread Jim Nasby

On Apr 5, 2006, at 12:15 PM, Merlin Moncure wrote:

On 4/5/06, Clive Page <[EMAIL PROTECTED]> wrote:
Since I discovered the facilities in Postgres for providing and listing
column descriptions, I have found them very useful, especially for
adding a string showing physical units to my columns.


Have you considered using domains for these types?  You can comment
the domain appropriately.  While the domain description will not show
in \d+, to get the description you can do \dT on the domain.

Based on your background I also think you might appreciate domains
from a design perspective.  There are a couple of disadvantages to
using them so I'd suggest reading about them.


I seem to recall some astronomer having created some custom types for
storing astronomical data in PostgreSQL. Or perhaps he was using
PostGIS. I know that other astronomers are using PostgreSQL/PostGIS, so
if you look around you might be able to save yourself quite a bit of
work.

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] PostgreSQL support on Redhat Advance Server 2.1

2006-04-06 Thread Jim Nasby

On Apr 4, 2006, at 12:45 PM, Tony Caduto wrote:

You wrote "Unfortunately Red Hat's backwards-compatibility policy
forbids dropping
new major PG releases into old RHEL branches". So that means I cannot
upgrade to Postgresql 8.x on a RedHat9 server? Currently I am using
Postgresql 7.3 but was planning on upgrading before I put the  
database

into production.


Hi Margaret,
Tom means that Redhat won't provide an update for Postgresql; that does
not mean you can't install it yourself.

I run RedHat EL 4.3 (Postgresql is at 7.4.x on EL 4.3) at work and I
run 8.1 on it, I just installed it myself.


Just realize that RedHat won't provide any support for 8.x on RHAS2.1.
Pervasive would be happy to, though. :)

Incidentally, someone from Red Hat talked to me at Linuxworld about
setting up some kind of partnership. Turns out he didn't know who Tom
Lane was or where he worked. :)

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread Tom Lane
Chris <[EMAIL PROTECTED]> writes:
>> This is the ref was given:
>> The old / original BSD license is not compatible.
>> http://www.gnu.org/licenses/license-list.html#GPLIncompatibleLicenses 
>> 
>> Anyway to change this?? So debian users can easily use postgresql and 
>> freeradius
>> together...

> Changing the postgres license isn't going to happen - it has been 
> debated many many many times in the past (check the archives).

The PG license is *not* the "old" (advertising-clause) BSD license, but
the new one.  What I gathered from the other link that was posted is
that Debian's license concern has nothing to do with the Postgres
license, but rather that they think freeradius and openssl have
incompatible licenses.  So it's those two projects that you need to talk
to about this.  We are just bystanders.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] About checking all dead lock tables

2006-04-06 Thread Michael Fuhr
On Thu, Apr 06, 2006 at 03:36:33PM -0400, Emi Lu wrote:
> How to check whether a table is locked?

You can monitor locks by querying the pg_locks view.

http://www.postgresql.org/docs/8.0/interactive/monitoring-locks.html
http://www.postgresql.org/docs/8.0/interactive/view-pg-locks.html

Only a few operations lock entire tables; for more information see
the Concurrency Control chapter in the documentation.

http://www.postgresql.org/docs/8.0/interactive/mvcc.html

> Is there a way that I can find all deadlocks under postgresql 8.0.1 ?

Are you sure you mean "deadlocks"?  Deadlock occurs when transaction
A holds a lock that transaction B wants and transaction B holds a
lock that transaction A wants; PostgreSQL detects this condition
and aborts one of the transactions, allowing the other to continue.
By default this happens after one second (see the deadlock_timeout
setting).

Perhaps you want to see pending or blocked lock attempts, i.e.,
locks that have been requested but are not yet acquired because
another transaction holds a conflicting lock.  If that's the case
then look for rows in pg_locks where "granted" is false.
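For instance, a minimal check for lock requests that are still waiting:

  SELECT l.pid, l.relation::regclass AS locked_table, l.mode, l.granted
  FROM pg_locks l
  WHERE NOT l.granted;

Joining pid against pg_stat_activity.procpid shows which query is doing
the waiting.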

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] programatic database dump

2006-04-06 Thread Michael Schmidt



I've been working on this issue lately.  There is quite a bit more to a
database than the tables (e.g., functions), and I'm not sure how you
would go about getting a backup of the entire database through SQL.
I've been working on a Java class to do the dump and restore.  It
doesn't look terribly hard.

However, having said that, I wonder if someone could answer something.
How do pg_dump and pg_restore use stdout and stderr?  I saw a very old
post that indicated -v caused all output to go to stderr.  I couldn't
find anything recent and nothing in the manual.
 
Thanks!
Michael Schmidt


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread Chris

lmyho wrote:

After desperately checking, we were told that debian doesn't distribute
the binary module of freeradius for postgresql because of the
incompatible license of these two apps! However we can build the debian
pkg from the source ourself if we need.  So

Sounds terribly unlikely, PostgreSQLs licence doesn't conflict with any
use anywhere. Can you provide a reference?


I wish things were not like this too, so I wouldn't have to go through
so much trouble!  But that's what happened :-(

This is the ref that was given:
The old / original BSD license is not compatible.
http://www.gnu.org/licenses/license-list.html#GPLIncompatibleLicenses

Anyway to change this?? So debian users can easily use postgresql and
freeradius together...


Changing the postgres license isn't going to happen - it has been 
debated many many many times in the past (check the archives).



Those warnings come from freeradius, not postgres - so best ask on their 
list whether they are serious or not.


--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] posgresql <-> oracle

2006-04-06 Thread Joshua D. Drake

Mark Harrison wrote:

We've got the need to replicate (two-way) some data between
postgresql and oracle.

Does anybody have versions of pg_dump or equivalents that
can go back and forth between the two databases?
It would be great if I could get both --schema-only
and --data-only functionality from this.

You might be able to use dbi-link over at pgFoundry.

Joshua D. Drake



Thanks!
Mark




---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Large text data

2006-04-06 Thread Scott Marlowe
On Thu, 2006-04-06 at 17:18, Antimon wrote:
> Hi,
> I need to store text entries and i use text datatype. I want to ask if
> it will be better to split text and entry information?
> 
> I mean, i can use a table like, (id, authorid, insertdate, editdate,
> threadid, textdata) or i can have an entrytexts table (id, entryid,
> textdata) and a foreign key on entryid -> entries.id.
> Which would be better? I might need to do some range searches and
> orders on entries so i thought splitting text might decrease some
> overhead?
> 
> Or shall i just use one table?

Text over a certain size gets moved out of the main table and stored in
the toast table, so there's not that huge of a hit in terms of
performance.

It's really a question of relativity.  If your non-large text fields
will add up to a couple hundred bytes, there's no great gain in moving
the text to another table, and when you join them, you've got the
overhead of joining two separate tables.

OTOH, if you'll be storing one int, one date, and one 10 character or so
text keyword or something, then it might be worth your while to move the
text out.

If you're always gonna grab the text at the same time, leave it in the
table.  If you'll grab it once every 1,000 or so accesses, separate may
be better.

Nothing beats a benchmark.  But knowing that the database automagically
compresses and stores text (over a certain size) helps you realize why
you won't get huge returns on moving the text to another table.
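For reference, a sketch of the single-table layout being discussed (the
column list comes from the original post; the types are my guesses):

  CREATE TABLE entries (
      id         serial PRIMARY KEY,
      authorid   integer NOT NULL,
      insertdate timestamp NOT NULL DEFAULT now(),
      editdate   timestamp,
      threadid   integer NOT NULL,
      textdata   text    -- large values are compressed/moved to TOAST automatically
  );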

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Cant find temp tables

2006-04-06 Thread Mike Adams

Dave Page wrote:
 


-Original Message-
From: [EMAIL PROTECTED] 
[mailto:[EMAIL PROTECTED] On Behalf Of 
[EMAIL PROTECTED]

Sent: 03 April 2006 01:21




...snip



2) pgAdmin uses a separate connection for query tool windows. Because
temp tables exist only for the life of the session, if you close the
query tool (or whatever other session you created the table in), all the
pg_class/pg_attribute rows will be removed so pgAdmin won't see them no
matter how much you refresh the tree.



Regarding item 2), is there a way to *disconnect* pgAdmin from a 
database (ie drop the connection) thereby allowing you to modify 
template1 for example, disconnect from template1 so another person can 
initdb using template1 on another connection w/o you having to shutdown 
pgAdmin?


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] php to import csv to postgres

2006-04-06 Thread Scott Marlowe
On Thu, 2006-04-06 at 17:27, Chris wrote:
> SunWuKung wrote:
> > I am not sure I should be posting this here, but I guess you are the 
> > people most likely be able to answer this.
> > 
> > I have been looking for a user friendly php script to import csv into 
> > Postgres (set separator, preview data, match columns - or any of these). 
> > I found many for MySQL but  none that would work with Postgres.
> > 
> > Maybe somebody could direct me to such a script.
> 
> Copy might be able to do it for you:
> 
> http://www.postgresql.org/docs/8.1/interactive/sql-copy.html
> 
> use ',' as the delimiter.

And, if you're stuck with an older version of PostgreSQL that can't do
csv, you should look up this command:

http://us3.php.net/manual/en/function.fgetcsv.php

tres cool if you've got a csv file to import, you just read in each line
with that, and it gives you a numerically indexed array of each field,
ready to go.  Quite useful, even for other stuff.  And yes, it does
understand things like commas in the middle of the field and all that.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] php to import csv to postgres

2006-04-06 Thread Chris

SunWuKung wrote:
I am not sure I should be posting this here, but I guess you are the
people most likely to be able to answer this.

I have been looking for a user-friendly php script to import csv into
Postgres (set separator, preview data, match columns - or any of
these). I found many for MySQL but none that would work with Postgres.

Maybe somebody could direct me to such a script.


Copy might be able to do it for you:

http://www.postgresql.org/docs/8.1/interactive/sql-copy.html

use ',' as the delimiter.
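A minimal sketch of that, assuming an existing target table and an 8.0+
server (table name and paths are placeholders):

  COPY mytable FROM '/tmp/data.csv' WITH CSV;   -- server-side; the file must be readable by the server

  -- or from psql, reading the file on the client side:
  \copy mytable from 'data.csv' with csv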

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Backup method

2006-04-06 Thread Guy Fraser
On Thu, 2006-06-04 at 15:21 -0400, Douglas McNaught wrote:
> Guy Fraser <[EMAIL PROTECTED]> writes:
> 
> > How do you suggest one does PITR ?
> >
> > It has been a while since I read the Docs, but do not recall 
> > any tools that allow one to do such a thing.
> 
> PITR went in to 8.0 (IIRC); the docs for that version will cover it.
> 
Excellent.

I checked out the docs, and will definitely be planning on using 
PITR.

Thanks a lot for bringing this to my attention.



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] pgcrypto-crypt

2006-04-06 Thread Chris

AKHILESH GUPTA wrote:

how do we compare the existing data in the table with the entered one?


same way as anything else, for example:

select * from users where passwd=md5('my_password');
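With pgcrypto's crypt() (which the original poster is using) the check
is similar, re-running crypt with the stored value as the salt:

  select * from users where passwd = crypt('my_password', passwd);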

is there any other function which we can use here for both cases,
encryption as well as decryption, at the script as well as the database
level?


why do you need it encrypted?

Please do reply-all - you will get a quicker response (from me and the 
list(s) might have suggestions I don't).


On 4/6/06, *chris smith* <[EMAIL PROTECTED]> wrote:

On 4/6/06, AKHILESH GUPTA <[EMAIL PROTECTED]> wrote:
 > dear all,
 >  i want to encrypt and decrypt one of the fields in my table
 > (i.e. the password field)
 >  i have searched and with the help of the pgcrypto package, using
 > the function "crypt", i am able to encrypt my data,
 >  but there is nothing which i found to decrypt that same data,
 >  plz anybody give me the function to decrypt that encrypted value.

The crypt function can't be decrypted (whether it's in postgresql or
anywhere else).

Crypt is meant to be used for passwords and such that you don't need
to reverse (you only compare against).

--
Postgresql & php tutorials
http://www.designmagick.com/




--
Thanks & Regards,
Akhilesh
DAV Institute of Management
Faridabad(Haryana)
GSM:-(+919891606064)
  (+911744293789)

"FAILURES CAN BE FORGIVEN BUT AIMING LOW IS A CRIME"



--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Large text data

2006-04-06 Thread Antimon
Hi,
I need to store text entries and i use text datatype. I want to ask if
it will be better to split text and entry information?

I mean, i can use a table like, (id, authorid, insertdate, editdate,
threadid, textdata) or i can have an entrytexts table (id, entryid,
textdata) and a foreign key on entryid -> entries.id.
Which would be better? I might need to do some range searches and
orders on entries so i thought splitting text might decrease some
overhead?

Or shall i just use one table?

Thanks.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread Tyler MacDonald
Martijn van Oosterhout  wrote:
> On Thu, Apr 06, 2006 at 10:27:36AM -0700, lmyho wrote:
> > After desperately checking, we were told that debian doesn't distribute the 
> > binary
> > module of freeradius for postgresql because of the incompatible license of 
> > these two
> > apps! However we can build the debian pkg from the source ourself if we 
> > need.  So
> 
> Sounds terribly unlikely, PostgreSQLs licence doesn't conflict with any
> use anywhere. Can you provide a reference?

This looks like part of the debate:

http://lists.debian.org/debian-legal/2002/11/msg00254.html

I don't know if this applies to openssl though...

- Tyler


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread lmyho

> > After desperately checking, we were told that debian doesn't
> > distribute the binary module of freeradius for postgresql because of
> > the incompatible license of these two apps! However we can build the
> > debian pkg from the source ourself if we need.  So
> 
> Sounds terribly unlikely, PostgreSQLs licence doesn't conflict with any
> use anywhere. Can you provide a reference?
> 

I wish things were not like this too, so I wouldn't have to go through
so much trouble!  But that's what happened :-(

This is the ref that was given:
The old / original BSD license is not compatible.
http://www.gnu.org/licenses/license-list.html#GPLIncompatibleLicenses

Anyway to change this?? So debian users can easily use postgresql and
freeradius together...

Thanks!!

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] case insensitive match in unicode

2006-04-06 Thread SunWuKung
In article <[EMAIL PROTECTED]>, kleptog@svana.org says...
> On Mon, Mar 27, 2006 at 12:45:05PM +0200, SunWuKung wrote:
> > This sounds like a very interesting concept.
> > It wouldn't be 'case insensitive' just insensitive.
> > 
> > The way I imagine it now is a special case of the ~ function.
> > I create matchgroups in a table and check each character if it is in the 
> > group. If it is I will replace the character with the group in [éÉE], 
> > [oóOÓ??] and do a regexp with that.
> 
> No need to reinvent the wheel. ICU provides a range of services to deal
> with this. For example the following filter in ICU:
> 
>  NFD; [:Nonspacing Mark:] Remove; NFC.
> 
> Will remove all accents from characters. And it works for all Unicode
> characters. With a bit more thinking you can work with case variations
> also.
> 
> There is also a locale-independant case-mapping module there plus
> various locale specific ones also.
> 
> http://icu.sourceforge.net/userguide/Transform.html
> http://icu.sourceforge.net/userguide/caseMappings.html
> http://icu.sourceforge.net/userguide/normalization.html
> 
> Have a nice day,
> 
Thanks, I looked at this and it looks like something that would indeed
solve the problem. However I have so far been unable to figure out how
I could use this from within Postgres. If you have experience with it,
could you give me an example?

Thanks
Balázs

---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] encode/decode trouble ('invalid input syntax for type bytea'); bug?

2006-04-06 Thread Tobias Herp
Hi,

I need to encrypt some fields in my database. The solution I came up with
is:

- create an additional field w/suffix "_encr" for every field which
must be encrypted;
- create update triggers which write the encrypted value into the new
field and null into the original field (not null constraints must be
removed, of course);
- change the views to use and decrypt the encrypted fields;
- add polymorphic encrypt(...) functions which are distinguished by
their argument types and take the key from a temporary table.

I'm pretty sure I need to use BYTEA fields to store the encrypted values;
and I need to convert TEXT <--> BYTEA, don't I?

So I created the following functions:

CREATE OR REPLACE FUNCTION text2bytea(text)
  RETURNS bytea AS
$BODY$BEGIN
  RETURN decode($1, 'escape');
END;$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE STRICT;

CREATE OR REPLACE FUNCTION bytea2text(bytea)
  RETURNS text AS
$BODY$BEGIN
  RETURN encode($1, 'escape');
END;$BODY$
  LANGUAGE 'plpgsql' IMMUTABLE STRICT;
ALTER FUNCTION text2bytea(text) OWNER TO postgres;

Now, unfortunately there are backslashes in many varchar fields in the
rows which must be encrypted; and for those strings, encode/decode
seems to crash:

select decode('abc', 'escape'); -- yields 'abc'
select decode('ab\c', 'escape');-- yields 'abc'
select encode('ab\c', 'escape');-- yields 'abc'
select decode('ab\\c', 'escape');   -- ERROR: invalid input syntax for type bytea
select encode('ab\\c', 'escape');   -- ERROR: invalid input syntax for type bytea
select decode('abc', 'escape'); -- yields 'ab\\c'
select encode('abc', 'escape'); -- yields 'ab\\c'

This looks to me like a bug: the backslash should be escaped with another
backslash instead of raising an error.

Any ideas how to get around this?
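For what it's worth, a sketch of the escaping involved (assuming
backslash-interpreting string literals, the default at the time): the
literal 'ab\\c' reaches decode() as ab\c, and a lone backslash is not
valid escape-format input, so doubling it once more is what gets a real
backslash into the bytea value:

  select decode('ab\\\\c', 'escape');   -- the four backslashes end up as the bytes a b \ c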

-- 
TIA,

Tobias

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] recovering from a --clean dump

2006-04-06 Thread [EMAIL PROTECTED]
hello,
I use an automated script that if it detects a problem in a database
uses a dumb from another db to recover data.

'pg_dump -c -h host1 d1 | psql -a -h host2 -d db2 >> recover_dump 2>&1'

now my recover_dump tells me a unique index couldn't be created as the
table contains duplicated keys.

My guess is that there was data inserted by the application while the
restore was inserting data and the constraints were not yet set.
This seems possible to me as there is no LOCK statement in the dump.

So actually I'm searching for a way to tell the target db not to take
any connections while restoring, but I also wonder if in future I have
to take more action to protect the databases I restore --clean dumps
into.

debian sarge
psql (PostgreSQL) 7.4.7


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] php to import csv to postgres

2006-04-06 Thread SunWuKung
I am not sure I should be posting this here, but I guess you are the
people most likely to be able to answer this.

I have been looking for a user-friendly php script to import csv into
Postgres (set separator, preview data, match columns - or any of
these). I found many for MySQL but none that would work with Postgres.

Maybe somebody could direct me to such a script.

Thank you for the help.
Balázs

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread Martijn van Oosterhout
On Thu, Apr 06, 2006 at 10:27:36AM -0700, lmyho wrote:
> After desperately checking, we were told that debian doesn't
> distribute the binary module of freeradius for postgresql because of
> the incompatible license of these two apps! However we can build the
> debian pkg from the source ourself if we need.  So

Sounds terribly unlikely, PostgreSQLs licence doesn't conflict with any
use anywhere. Can you provide a reference?

Have a nice day,
-- 
Martijn van Oosterhout  http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.




Re: [GENERAL] Queries with Regular Expressions

2006-04-06 Thread Silas Justiniano
John, it worked completely fine! Thank you! I don't understand exactly
the difference between [] and () for REs, but I'm starting to study
them deeply.

Thank you very much! Pg is great!

On 4/6/06, John D. Burger <[EMAIL PROTECTED]> wrote:
> > But I just can't make it work correctly using brackets:
> > SELECT field FROM table WHERE field ~* 'ch[aã]o';
> >
> > It just returns tuples that have 'chao', but not 'chão'.
> >
> > My queries are utf-8 an the database is SQL_ASCII.
>
> I suspect the bracketed expression is turning into [aXY], where XY is
> the two-byte sequence corresponding to ã in UTF8.  So the regular
> expression is only going to match strings of the form chao, chXo and
> chYo.  To make sure that this is what's happening, try this:
>
>select length('ã');
>
> I bet you get back 2, not 1.  I don't know if a UTF8 database will
> handle this correctly or not.  The safest thing to do may be to use
> queries like this:
>
>SELECT field FROM table WHERE field ~* 'ch(a|ã)o';
>
> - John D. Burger
>MITRE
>
> ---(end of broadcast)---
> TIP 6: explain analyze is your friend
>


--
Silas Justiniano - Brazil

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Queries with Regular Expressions

2006-04-06 Thread Tom Lane
"John D. Burger" <[EMAIL PROTECTED]> writes:
>> My queries are utf-8 an the database is SQL_ASCII.

> I suspect the bracketed expression is turning into [aXY], where XY is 
> the two-byte sequence corresponding to ã in UTF8.

That's what it looks like to me.  You can hardly blame the database for
this, when you haven't clued it in that you're using UTF8.

> I don't know if a UTF8 database will handle this correctly or not.

I believe that it will work in recent PG releases (7.4 and up).

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] posgresql <-> oracle

2006-04-06 Thread Mark Harrison

We've got the need to replicate (two-way) some data between
postgresql and oracle.

Does anybody have versions of pg_dump or equivalents that
can go back and forth between the two databases?
It would be great if I could get both --schema-only
and --data-only functionality from this.

Thanks!
Mark

--
Mark Harrison
Pixar Animation Studios

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] plgpsql and transactions

2006-04-06 Thread Terry Lee Tucker
On Thursday 06 April 2006 03:27 pm, Bill Moseley saith:
> On Thu, Apr 06, 2006 at 01:44:57PM -0500, Terry Lee Tucker wrote:
> > Triggers fire inside a transaction.
>
> Ah, thanks.  Makes sense since each statement is in an implicit
> transaction.
>
> Granted, would help to see the trigger, but
> these are basically the same?
>
> -- fires a trigger that updates more than one table
> insert into something (default);
>
> and:
>
> begin;
> -- fires a trigger that updates more than one table
> insert into something (default);
> commit;
>

In the latter, you have expanded the scope of the transaction; which, 
sometimes you might want to do.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Scott Ribe
> Ah, OK.  More elegant.  But, it still moves responsibility for this
> to the application layer, not the database, itself.  I can't see
> any way of avoiding this :-(

Sure, I had a similar problem where newer data is required to specify
certain fields, but a legacy application can't provide that data. So there's
a flag, that defaults to false, that when set to true means "I know, I know,
just shut up and insert the data anyway."

The "flag" could be an argument to a stored procedure. In my case it's an
actual column in the table, because I wanted to use check constraints and I
wanted a record of the fact that the entry is "incomplete" because of its
source.

-- 
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] plgpsql and transactions

2006-04-06 Thread Bill Moseley
On Thu, Apr 06, 2006 at 01:44:57PM -0500, Terry Lee Tucker wrote:
> Triggers fire inside a transaction.

Ah, thanks.  Makes sense since each statement is in an implicit
transaction.

Granted, would help to see the trigger, but
these are basically the same?

-- fires a trigger that updates more than one table
insert into something (default);

and:

begin;
-- fires a trigger that updates more than one table
insert into something (default);
commit;




-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Queries with Regular Expressions

2006-04-06 Thread John D. Burger

But I just can't make it work correctly using brackets:
SELECT field FROM table WHERE field ~* 'ch[aã]o';

It just returns tuples that have 'chao', but not 'chão'.

My queries are utf-8 and the database is SQL_ASCII.


I suspect the bracketed expression is turning into [aXY], where XY is 
the two-byte sequence corresponding to ã in UTF8.  So the regular 
expression is only going to match strings of the form chao, chXo and 
chYo.  To make sure that this is what's happening, try this:


  select length('ã');

I bet you get back 2, not 1.  I don't know if a UTF8 database will 
handle this correctly or not.  The safest thing to do may be to use 
queries like this:


  SELECT field FROM table WHERE field ~* 'ch(a|ã)o';

- John D. Burger
  MITRE

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] tsearch gendict

2006-04-06 Thread David Gama Rodríguez

Teodor Sigaev wrote:


$ ./config.sh
config.sh : bad sustitution



Simple workaround: take the dictionary generated on Gentoo.

I'll look at the problem, but I suspect that the reason is a difference
between the Sun and GNU environments (echo, sed and so on).



Thank you Teodor

What I did was run config.sh on Gentoo, then copy the whole generated
directory to Solaris and compile there to get my new library for
Spanish.

I see there are some changes in tsearch from postgresql 7.4.3 to
postgresql 8.1.3; you added some new functions, I think, and changed
the tsearch.sql template, right?


Tsearch 2 is now working on solaris

David


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Running Command Line From Trigger?

2006-04-06 Thread Scott Marlowe
On Thu, 2006-04-06 at 14:48, Yudie Pg wrote:
> I want to execute a command line from trigger function.
> The reason is I want to execute my perl script from stored procedure.
>  

Why not just install perl as a procedural language?  It's one of a dozen
or more available to you.  Then you can write the whole trigger in perl
instead of calling an external program.

Note that perl, like many other languages, is available in a "trusted"
and an "untrusted" version.  The trusted versions aren't allowed to do
dangerous things like access the file system and run external programs. 
Untrusted will do anything that perl can do from the command line.
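A sketch of installing both variants in a database (8.1-era syntax, run
as a superuser; createlang from the shell works as well):

  CREATE LANGUAGE plperl;    -- trusted: no filesystem access, no external programs
  CREATE LANGUAGE plperlu;   -- untrusted: can shell out; only superusers may create functions in it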

Note that plpgsql cannot run external programs on purpose.  It's a
security and safety issue.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Queries with Regular Expressions

2006-04-06 Thread Silas Justiniano
Hello!

My queries work fine with Regular Expressions, as:

SELECT field FROM table WHERE field ~ 'something';
SELECT field FROM table WHERE field ~* 'something';
SELECT field FROM table WHERE field ~* 'som[i,e]thing';

And it works fine for special characters:
SELECT field FROM table WHERE field ~* 'chão';

But I just can't make it work correctly using brackets:
SELECT field FROM table WHERE field ~* 'ch[aã]o';

It just returns tuples that have 'chao', but not 'chão'.

My queries are utf-8 an the database is SQL_ASCII.

Any idea? Thank you!

--
Silas Justiniano - Brazil

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Running Command Line From Trigger?

2006-04-06 Thread Terry Lee Tucker

On Thursday 06 April 2006 03:48 pm, Yudie Pg saith:
> I want to execute a command line from trigger function.
> The reason is I want to execute my perl script from stored procedure.
>
> any idea?
>
>
> Yudie

Yudie,

You might want to write your perl script as a trusted perl function and
call it from your trigger as you would any other backend function.

Example of one we use:

CREATE OR REPLACE FUNCTION fmtstr (TEXT, INTEGER) RETURNS TEXT AS '
my ($str, $len) = @_;
return (pack ("A$len", $str));
' LANGUAGE 'plperlu';
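Called from SQL (or from another function) it behaves like any
built-in; for example:

  SELECT fmtstr('abc', 10);   -- returns 'abc' space-padded to 10 characters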

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[GENERAL] Running Command Line From Trigger?

2006-04-06 Thread Yudie Pg
I want to execute a command line from trigger function.
The reason is I want to execute my perl script from stored procedure.
 
any idea?
 
 
Yudie


[GENERAL] About checking all dead lock tables

2006-04-06 Thread Emi Lu

Hello,

How to check whether a table is locked? Is there a way that I can find 
all deadlocks under postgresql 8.0.1 ?


Thanks a lot!
Ying Lu


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Backup method

2006-04-06 Thread Douglas McNaught
Guy Fraser <[EMAIL PROTECTED]> writes:

> How do you suggest one does PITR ?
>
> It has been a while since I read the Docs, but do not recall 
> any tools that allow one to do such a thing.

PITR went in to 8.0 (IIRC); the docs for that version will cover it.
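The heart of it is WAL archiving in postgresql.conf plus a base backup
taken with pg_start_backup()/pg_stop_backup(); a minimal sketch (the
destination path is a placeholder):

  archive_command = 'cp "%p" /mnt/backup/wal/"%f"'   # %p = path of the WAL segment, %f = its file name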

-Doug

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] plgpsql and transactions

2006-04-06 Thread Terry Lee Tucker

On Thursday 06 April 2006 02:36 pm, Bill Moseley saith:
> In a BEFORE INSERT trigger, depending on input values, I need to lock
> a table and do a few selects.  Of course, the "lock table" isn't much
> use if not currently in a transaction.
>
> So my question is this:  can I tell if I'm inside a transaction or
> not and issue a BEGIN if not.  And then also set a flag so that after
> the INSERT I can detect that I issued a BEGIN and do a COMMIT?
>
> Or, maybe better is to just throw an exception if not already inside a
> transaction.
>
> BTW -- it seems odd to me that you can issue a lock table outside of
> an explicit begin/commit and not get a warning.  When would issuing a
> lock table outside an explicit transaction be of any use?
>
>
> --
> Bill Moseley
> [EMAIL PROTECTED]
>
>
> ---(end of broadcast)---
> TIP 1: if posting/reading through Usenet, please send an appropriate
>subscribe-nomail command to [EMAIL PROTECTED] so that your
>message can get through to the mailing list cleanly

Bill,

Triggers fire inside a transaction.

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Unique constraint or index, case insensitive, on multiple

2006-04-06 Thread MargaretGillon

>Tom Lane <[EMAIL PROTECTED]> wrote on 04/06/2006
11:33:57 AM:
> [EMAIL PROTECTED] writes:
> > CREATE UNIQUE INDEX resource_refullname 
> >  ON resource  USING btree  (redtid, (upper(refullname)
text_ops));
> 
> You need something newer than PG 7.3 to do that.  7.3 can't handle
> functional indexes with more than one column.  There are many
other good
> reasons to upgrade anyway.
> 
> BTW the correct syntax would be
> 
> CREATE UNIQUE INDEX resource_refullname 
>  ON resource  USING btree  (redtid, (upper(refullname))
text_ops);
> 
> If you're going to put an opclass name, it goes outside the parens.
> (The only reason the parens are required at all is to separate the
> _expression_ from the opclass name ...)
> 
>          regards, tom lane

I thought it might be version related.
I had planned to upgrade soon and this is another good motivator.
Thank you,
Margaret Gillon

[GENERAL] The dangers of long running open transactions

2006-04-06 Thread Wayne Schroeder
Summary: Long running transaction from an orphaned database connection
caused major slowness on very active and frequently vacuumed tables
because vacuum could not remove rows that were newer than the long
running transaction.

A while ago, I asked for opinions on the effects of long running
transactions on the ability of the system to effectivly vacuum tables.
I got a response from Tom Lane saying that having transactions open for
long periods of time could indeed cause issues with vacuuming.

On one of our database servers, we ended up having this happen -- mind
you that we have monitoring to tell us if we get close to issues in
regards to the actual frozenxids getting too old to ensure vacuuming
happens and do have regular (in fact, multiple times a day) vacuums.
Our problem came from a very active table becoming quite slow to deal
with.  Explain analyze select count(*) on this table showed us that even
though there were less than a thousand rows in this table, the cost in
rows was much higher for the sequential scan.  I don't remember the
exact number, but select count(*) was taking around 20 seconds, and
inserts / deletes were happening to this table on a regular basis.  The
number of rows analyze was exposed to would NOT go down after vacuums of
 this table.

Due to the question I asked the list earlier, I determined that it was
best to kill all the 'idle in transaction' connections (which we thought
were not holding any transactions for more than 60 seconds and cycling
to new xids since we were aware that long running transactions were bad)
and try vacuuming again.  This did the trick, the table was successfully
vacuumed and was 'instant' fast once again.

A review of system monitor logs showed me that one of the 'idle in
transaction' connections did NOT come back (our software will handle
database disconnections and reconnect).  It turns out that a 'worker
thread' in a process on a remote machine had either become stuck / died
during an open transaction and the database connection was leaked since
the process itself did not terminate.  This left a long running
transaction that was never committed / rolled back for a long period of
time, up until it caused massive slowness when vacuums were proving to
be worthless on very active tables.

We have started to monitor for potentially stuck / long running
transactions with the following query:

select pid, transaction, age(transaction) from pg_locks where
transaction is not null order by age(transaction) desc;

This can be changed around obviously for whatever monitoring software
you use / write, but the basic idea is that the maximum age column tells
you 'how long ago in xids' vacuum must preserve data when it vacuums
tables due to a given connection process.  If this number gets big, in
our case, we warn if they get over 20k (we have lots of transactions) so
we can track down and prevent any issues in advance, action should be
taken to prevent database performance problems / at worst, xid roll
around problems if the problem went on for too long.
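As a concrete form of that check, the same query with the 20k threshold
folded in (the cutoff is theirs; tune it to your own transaction rate):

  select pid, transaction, age(transaction)
  from pg_locks
  where transaction is not null
    and age(transaction) > 20000
  order by age(transaction) desc;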

Wayne

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] plgpsql and transactions

2006-04-06 Thread Bill Moseley
In a BEFORE INSERT trigger, depending on input values, I need to lock
a table and do a few selects.  Of course, the "lock table" isn't much
use if not currently in a transaction.

So my question is this:  can I tell if I'm inside a transaction or
not and issue a BEGIN if not.  And then also set a flag so that after
the INSERT I can detect that I issued a BEGIN and do a COMMIT?

Or, maybe better is to just throw an exception if not already inside a
transaction.

BTW -- it seems odd to me that you can issue a lock table outside of
an explicit begin/commit and not get a warning.  When would issuing a
lock table outside an explicit transaction be of any use?


-- 
Bill Moseley
[EMAIL PROTECTED]


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Unique constraint or index, case insensitive, on multiple fields

2006-04-06 Thread Tom Lane
[EMAIL PROTECTED] writes:
> CREATE UNIQUE INDEX resource_refullname 
>  ON resource  USING btree  (redtid, (upper(refullname) text_ops));

You need something newer than PG 7.3 to do that.  7.3 can't handle
functional indexes with more than one column.  There are many other good
reasons to upgrade anyway.

BTW the correct syntax would be

CREATE UNIQUE INDEX resource_refullname 
 ON resource  USING btree  (redtid, (upper(refullname)) text_ops);

If you're going to put an opclass name, it goes outside the parens.
(The only reason the parens are required at all is to separate the
expression from the opclass name ...)

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Eric E

Don Y wrote:

Hi,

I wasn't prepared to ask this question, yet :<  but
all the talk of stored procedures, etc. suggests
this might be a good time to venture forth...

Humor me:  assume I have done the analysis and *know*
this to be correct for my situation  :>

I want to embed a good deal of the invariant aspects
of my application *domain* in the databases that
service it -- instead of in the applications riding
*above* them.  
Keeping in mind that you've done plenty of analysis, I'd raise the 
standard cautionary note that you have to be pretty certain that you're 
right about what's invariant if you're going to couple your application 
logic very tightly to your data model.



Anyway, the problem I have is how to handle cases
where the "database" needs user confirmation of an
action (!).  I.e. it has verified that the data
coming in is legitimate (adheres to all of the rules)
and *can* be stored in the appropriate tables -- BUT,
notices "something" that causes it to wonder if the
user REALLY wants to INSERT this data.  The database
would like to alert the user to what it has noticed
and get confirmation from the user (of course, I
mean my *application* wants to do this -- but, based
on observations made *by* the database, itself).
I've been dealing this myself, as it happens, in the context of 
importing and validating data (an ETL kind of thing).  My answer is to 
use temporary tables as a way of staging the data.  One of the 
advantages of that is that you can wrap up the DDL for the temporary 
table inside your code (whether in or out of the database) and thus 
insulate that part of the process from other changes in the destination 
tables.  Then you can fire off whatever rules you like about whether to 
alert the user of certain conditions with the data without worrying 
about commits.  Of course the transaction approach can probably handle 
that, too.
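A minimal sketch of that staging pattern (the table names are
hypothetical):

  BEGIN;
  CREATE TEMP TABLE staging_orders (LIKE orders) ON COMMIT DROP;
  -- bulk-load into staging_orders, run the validation rules, surface
  -- anything questionable to the user, then:
  INSERT INTO orders SELECT * FROM staging_orders;
  COMMIT;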


Cheers,

Eric

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Database lost after reboot

2006-04-06 Thread Tom Lane
Diego Manilla Suárez <[EMAIL PROTECTED]> writes:
> Hi. I have PostgreSQL 7.4.8 installed on a SuSE Enterprise Server 9. 
> When I create a database, everything works fine until I restart the 
> computer. When I do that, and I try to connect to the database, I get 
> the following message;

> psql: FATAL:  database "mydb" does not exist
> DETAIL:  The database subdirectory "/var/lib/pgsql/data/base/1964658" is 
> missing.

Hmm, are you allowing the postmaster to shut down normally before you
reboot?  (Not that it shouldn't work anyway, I'm just trying to gather
details.)  What is the OID of the missing database?  Try connecting to
template1 and then doing "select oid,datname from pg_database".  If you
don't see mydb, or if it's got a different OID from the number mentioned
in the error message, that would tell something.

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Backup method

2006-04-06 Thread Guy Fraser
On Wed, 2006-05-04 at 22:29 +0100, Simon Riggs wrote:
> On Wed, 2006-04-05 at 15:42 -0400, Bob Powell wrote:
> 
> > I have a systems admin that is backing up our Linux computers
> > (postgres)  by backing up the directory structure.  This of course
> > includes all the files that pertain to my postgres databases.  I
> > maintain that using pgdump and creating a file of SQL commands for
> > restore is a better method by allowing the restore on any linux box that
> > is running postgress as opposed to having to reconstruct the directory
> > on another server. 
> > 
> > Does anyone have any thoughts on this matter.  Is one way better than
> > the other?  Thanks in advance.  
> 
> If you want to do this quickly then you should use PITR. The base backup
> is faster, plus you're covered if you crash between backups.
> 
> Archivelogmode is standard for Oracle/DB2 etc installations; PITR should
> be your standard if you run PostgreSQL too. Here's why:
> 
> pg_dump produces portable backups, but that won't help you if you took
> the backup at 04:00 and your server crashes at 14:15 - you'll still lose
> *all* the transactions your business performed in the last 10+ hours.
> You'll also have to explain that away to your boss and remember she/he's
> the one handing out the raises at the end of the year...
> 
> PITR takes more thought, but then is the purpose of a backup to make
> your life easier or to recover the data for the person paying you?
> 
> Best Regards, Simon Riggs

How do you suggest one does PITR ?

It has been a while since I read the Docs, but do not recall 
any tools that allow one to do such a thing.




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Unique constraint or index, case insensitive, on multiple fields

2006-04-06 Thread MargaretGillon

I am on version 7.3. I have been able
to build a case insensitive index to keep the refullname column unique
with the following:

CREATE UNIQUE INDEX  resource_refullname
 ON resource  USING btree
  (upper(refullname) text_ops);

However I have a table where I want
to allow  a duplicate refullname if the redtid field (int4) is different.
When I try building an index using the command below I get an error. I
still need the refullname to be case insensitive.

CREATE UNIQUE INDEX resource_refullname
 ON resource  USING btree
 (redtid, (upper(refullname) text_ops));

The index will work with  (redtid,
refullname) but then the index is not case insensitive on refullname. 

Thanks,
Margaret Gillon, IS Dept., Chromalloy
Los Angeles, ext. 297
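
One possible workaround on 7.3, where an index cannot mix plain columns with
expressions (that arrives in 7.4), is to index a single function over both
columns.  A sketch, with an invented helper function and index name:

    CREATE FUNCTION resource_unique_key(int4, text) RETURNS text AS '
        SELECT $1::text || ''|'' || upper($2);
    ' LANGUAGE sql IMMUTABLE;

    CREATE UNIQUE INDEX resource_redtid_refullname
        ON resource (resource_unique_key(redtid, refullname));

On 7.4 or later the same constraint can be written directly as
CREATE UNIQUE INDEX ... ON resource (redtid, upper(refullname)).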


[GENERAL] Debian package for freeradius_postgresql module

2006-04-06 Thread lmyho
Hello All,

We have a project which is built on postgresql and freeradius on debian system. 
I
have installed postgresql-8.1 on the Debian system, and lately freeradius-1.1.0
also.  Things seemed OK, but when we started to test, we found that the postgresql
module of freeradius is missing from the debian distribution! 

After desperately checking, we were told that debian doesn't distribute the binary
module of freeradius for postgresql because of the incompatible licenses of these
two apps! However, we can build the debian packages from source ourselves if we
need to.  So we did.  But there is a problem: we got so many warnings during the
process of building the debian packages, tons of warnings!  So although we have
the packages now, we don't know if we can use them with so many warnings?!

I want to post some of the warnings here for your advice.  Please tell me: with
this kind of warning, will the built packages still be usable?  Furthermore, I am
afraid we got those warnings because our system is not purely a dev system...
So, if any one of you could possibly help us get a v1.1.0 postgresql module of
freeradius, I would be very grateful!! Or, if you can help us get the newest
v1.1.1 freeradius package set for debian (including the postgresql module), that
will be great also!  I deeply hope to get help from you...

We specifically need this module because the code in postgresql to work with
freeradius has already been built; we can't imagine all that work being trashed... :(


Please see the warning samples:

radius.c: In function 'make_secret':
radius.c:167: warning: pointer targets in passing argument 2 of 
'librad_MD5Update'
differ in signedness
radius.c: In function 'make_passwd':
radius.c:205: warning: pointer targets in passing argument 2 of 
'librad_MD5Update'
differ in signedness
radius.c: In function 'make_tunnel_passwd':
radius.c:294: warning: pointer targets in passing argument 2 of 
'librad_MD5Update'
differ in signedness

rlm_passwd.c: In function 'build_hash_table':
rlm_passwd.c:218: warning: pointer targets in passing argument 1 of 'hash' 
differ in
signedness
rlm_passwd.c:232: warning: pointer targets in passing argument 1 of 'hash' 
differ in
signedness
rlm_passwd.c: In function 'get_pw_nam':
rlm_passwd.c:299: warning: pointer targets in passing argument 1 of 'hash' 
differ in
signedness
rlm_passwd.c: In function 'passwd_authorize':
rlm_passwd.c:536: warning: pointer targets in assignment differ in signedness
rlm_preprocess.c: In function 'cisco_vsa_hack':
rlm_preprocess.c:126: warning: pointer targets in passing argument 1 of
'__builtin_strchr' differ in signedness
rlm_preprocess.c:144: warning: pointer targets in assignment differ in 
signedness
rlm_preprocess.c: In function 'rad_mangle':
rlm_preprocess.c:203: warning: pointer targets in passing argument 1 of
'__builtin_strchr' differ in signedness
rlm_preprocess.c:206: warning: pointer targets in passing argument 1 of 'strcpy'
differ in signedness
rlm_preprocess.c: In function 'huntgroup_access':
rlm_preprocess.c:375: warning: pointer targets in passing argument 1 of 
'strNcpy'
differ in signedness
rlm_preprocess.c:376: warning: pointer targets in passing argument 1 of 'strlen'
differ in signedness
rlm_preprocess.c: In function 'add_nas_attr':
rlm_preprocess.c:404: warning: pointer targets in passing argument 1 of
'ip_hostname' differ in signedness
rlm_preprocess.c:425: warning: pointer targets in passing argument 1 of
'ip_hostname' differ in signedness
rlm_radutmp.c: In function 'radutmp_checksimul':
rlm_radutmp.c:658: warning: pointer targets in assignment differ in signedness
rlm_realm.c: In function 'check_for_realm':
rlm_realm.c:209: warning: pointer targets in passing argument 1 of 'strcpy' 
differ
in signedness
rlm_sql.c: In function 'sql_groupcmp':
rlm_sql.c:564: warning: pointer targets in passing argument 1 of 'strlen' 
differ in
signedness
rlm_sql.c:564: warning: pointer targets in passing argument 2 of 
'__builtin_strcmp'
differ in signedness
rlm_sql.c:564: warning: pointer targets in passing argument 2 of 
'__builtin_strcmp'
differ in signedness
rlm_sql.c:564: warning: pointer targets in passing argument 1 of 'strlen' 
differ in
signedness
rlm_sql.c:564: warning: pointer targets in passing argument 2 of 
'__builtin_strcmp'
differ in signedness
rlm_sql.c:564: warning: pointer targets in passing argument 2 of 
'__builtin_strcmp'
differ in signedness
rlm_sql.c: In function 'rlm_sql_authorize':
rlm_sql.c:824: warning: pointer targets in assignment differ in signedness
rlm_sql.c: In function 'rlm_sql_checksimul':
rlm_sql.c:1227: warning: pointer targets in assignment differ in signedness
...

Please advise me: are these warnings serious?

Any help would be greatly appreciated!  Thank you!!

Regards,
leo

__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 

---(end of broadcast)---

[GENERAL] Database lost after reboot

2006-04-06 Thread Diego Manilla Suárez
Hi. I have PostgreSQL 7.4.8 installed on a SuSE Enterprise Server 9. 
When I create a database, everything works fine until I restart the 
computer. When I do that, and I try to connect to the database, I get 
the following message:


psql: FATAL:  database "mydb" does not exist
DETAIL:  The database subdirectory "/var/lib/pgsql/data/base/1964658" is 
missing.


No dir is removed during the reboot (I checked that, and the subdirs on 
/var/lib/pgsql/data/base were exactly the same before and after the 
reboot), and 1964658 does not exist (didn't exist before the reboot, 
either), so it seems that postgres loses the reference to the dir where 
data is stored. It doesn't happen all the time. The rest of the software 
on that computer is working fine.


Does anyone know what can be causing this behaviour?

Thanks in advance


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Chris Browne
[EMAIL PROTECTED] ("surabhi.ahuja") writes:
> i have heard somewhere that writing a stored procedure, is much
> better than firing a sql query(such as select * from
> table_name) onto the database.
>
> is it true and if yes how?

It can be way more efficient.

Consider two alternative ways of handling some complex processing:

  1.  Pull all the data via SELECT * FROM TABLE_NAME, and process that
  row by row, on the client.

  2.  Pull the data in as a SELECT inside a stored procedure, where
  processing takes place inside the stored procedure.

In the first case, ALL the data has to be drawn into memory on the
database server, then marshalled, then passed over to the client,
possibly across a network connection.  Processing then takes place on
the client, and updates may have to be passed back, one by one, across
the network connection, to the server.

In the second case, the same data is drawn into memory on the server.
It doesn't have to be transformed to be communicated to the client,
and there will be no substantial processing that takes place on the
client.

There's *substantial* savings in processing to be had by using stored
procedures.
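
A sketch of the second approach in PL/pgSQL (the accounts table and the
interest logic are invented, and 8.0-style dollar quoting is used):

    CREATE OR REPLACE FUNCTION apply_interest(rate numeric) RETURNS integer AS $$
    DECLARE
        acct RECORD;
        n integer := 0;
    BEGIN
        -- every row is read and updated inside the server; nothing crosses the wire
        FOR acct IN SELECT id, balance FROM accounts LOOP
            UPDATE accounts SET balance = acct.balance * (1 + rate)
             WHERE id = acct.id;
            n := n + 1;
        END LOOP;
        RETURN n;  -- only this single value goes back to the client
    END;
    $$ LANGUAGE plpgsql;

    SELECT apply_interest(0.05);  -- one round trip instead of one per row

(In real life a single UPDATE would do the job; the point is only where the
row-by-row work happens.)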

> also i want to know that is the performnance in java slower as
> compared to cpp, given that the same things is being done.

There is no a priori reason to expect Java code accessing a database
to be either slower or faster than C++ code doing something
equivalent.  Commonly, database I/O is the performance bottleneck,
which would point to language choice being irrelevant.

You have given no reason to distinguish between any cases...
-- 
let name="cbbrowne" and tld="ntlug.org" in String.concat "@" [name;tld];;
http://cbbrowne.com/info/sap.html
Rules of the Evil Overlord #81. "If I am fighting with the hero atop a
moving platform, have disarmed him, and am about to finish him off and
he glances behind  me and drops flat, I too will  drop flat instead of
quizzically turning around to find out what he saw."


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Dawid Kuroczko
On 4/6/06, Don Y <[EMAIL PROTECTED]> wrote:
> For example, the title may match an existing entry -- but the author may be
> different (e.g., misspelled, or some "other" author listed on a book having
> multiple authors, etc.).  Ideally, I would like the database to suspend the
> INSERT, ask for confirmation (and "why") and then, either commit the INSERT
> or abort it (based on the user's response).
>
> Nearest I can imagine, there's only one ways I can do this: issue a query
> that looks for these types of problems and based on the result, let the
> *application* prompt the user for confirmation.  Then, *if* confirmed, do
> the real INSERT.
>
> Is there a more elegant way?

Well, your application could:
BEGIN TRANSACTION;
Then it would
INSERT INTO... or call a stored procedure.  The triggers/stored procedure would
do everything needed to perform such an action, but when it notices something
suspicious it would RAISE (see PL/pgSQL) a notice describing the problem(s).

If your user application notices such messages, it shows the user something
like "WARNING: <the message> -- do you want to continue?", and if the user
presses yes, you do COMMIT.  Otherwise you do ROLLBACK.

The thing is to use http://www.postgresql.org/docs/8.1/static/plpgsql-errors-and-messages.html
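
A minimal sketch of such a function, with an invented books table (8.0-style
dollar quoting):

    CREATE OR REPLACE FUNCTION add_book(p_title text, p_author text)
    RETURNS void AS $$
    BEGIN
        -- flag, but do not block, a title already present under another author
        IF EXISTS (SELECT 1 FROM books
                    WHERE title = p_title AND author <> p_author) THEN
            RAISE WARNING 'title "%" already exists with a different author',
                          p_title;
        END IF;
        INSERT INTO books (title, author) VALUES (p_title, p_author);
    END;
    $$ LANGUAGE plpgsql;

The client wraps the call in BEGIN ... SELECT add_book(...); watches for the
WARNING among the returned notices, and then issues COMMIT or ROLLBACK.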


  Regards,
  Dawid



Re: [GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Don Y

Bernhard Weisshuhn wrote:

Don Y wrote:


[snip]


For example, the title may match an existing entry -- but
the author may be different (e.g., misspelled, or some
"other" author listed on a book having multiple authors, etc.).
Ideally, I would like the database to suspend the INSERT,
ask for confirmation (and "why") and then, either commit
the INSERT or abort it (based on the user's response).

Nearest I can imagine, there's only one ways I can do this:
issue a query that looks for these types of problems and
based on the result, let the *application* prompt the
user for confirmation.  Then, *if* confirmed, do the real
INSERT.


You could *insert* the data and then *rollback* the transaction. Then 
you would *know* the data is *valid*.
Only if the user *confirms* the action, then you do it *again* and 
actually *commit* the transaction.


Ah, OK.  More elegant.  But it still moves responsibility for this
to the application layer, not the database itself.  I can't see
any way of avoiding this :-(

OTOH, an API like insert_data(..., bool confirm) would
remind the application developers that the intended interface
is:

switch (insert_data(..., FALSE)) {
case INVALID:
    /* something wonky in the data, abort */
    break;
case QUESTIONABLE:
    /* possible typographical error, require confirmation */
    if (confirmed)
        insert_data(..., TRUE);
    break;
case LOOKS_GOOD:
    insert_data(..., TRUE);
}


P.S. these* *stars* are *unnerving* ;-)


  Sorry, I've been writing specifications for the past
few days and use the "emphasis" SGML tag quite a bit  :-/
(the idea of posting in HTML is just anathema...)

--don


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Bernhard Weisshuhn

Don Y wrote:

Hi,

I wasn't prepared to ask this question, yet :<  but
all the talk of stored procedures, etc. suggests
this might be a good time to venture forth...

Humor me:  assume I have done the analysis and *know*
this to be correct for my situation  :>

I want to embed a good deal of the invariant aspects
of my application *domain* in the databases that
service it -- instead of in the applications riding
*above* them.  So, I let the database, itself, do
sanity checking of data on input -- if the database
rejects the INSERT, the application *knows* there is
something wrong with the data (instead of building
libraries to check each datum in each application
and *hoping* that the checks are implemented
consistently from one application to the next, etc.)

Anyway, the problem I have is how to handle cases
where the "database" needs user confirmation of an
action (!).  I.e. it has verified that the data
coming in is legitimate (adheres to all of the rules)
and *can* be stored in the appropriate tables -- BUT,
notices "something" that causes it to wonder if the
user REALLY wants to INSERT this data.  The database
would like to alert the user to what it has noticed
and get confirmation from the user (of course, I
mean my *application* wants to do this -- but, based
on observations made *by* the database, itself).

By way of example, the *toy* application I am playing with
to explore my implementation options is a "book" database;
it tracks titles, books, authors, publishers, etc.
The sort of thing a library could use to manage its
collection.

Assume the user tries to INSERT an entry for a "book".
Legitimately, this can be:
- a new title that the database has never seen before
- a new title by an author with other titles in the database
- an existing title thus another copy of that title

However, it can also just *appear* to be a legitimate new
title!

For example, the title may match an existing entry -- but
the author may be different (e.g., misspelled, or some
"other" author listed on a book having multiple authors, etc.).
Ideally, I would like the database to suspend the INSERT,
ask for confirmation (and "why") and then, either commit
the INSERT or abort it (based on the user's response).

Nearest I can imagine, there's only one ways I can do this:
issue a query that looks for these types of problems and
based on the result, let the *application* prompt the
user for confirmation.  Then, *if* confirmed, do the real
INSERT.


You could *insert* the data and then *rollback* the transaction. Then 
you would *know* the data is *valid*.
Only if the user *confirms* the action, then you do it *again* and 
actually *commit* the transaction.
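
In client-side terms that is roughly (the table and values are placeholders only):

    BEGIN;
    INSERT INTO books (title, author) VALUES ('Dune', 'Frank Herbert');
    -- constraints and triggers have now run; any NOTICEs are visible
    ROLLBACK;   -- nothing was stored

    -- after the user confirms, repeat the same work and keep it this time
    BEGIN;
    INSERT INTO books (title, author) VALUES ('Dune', 'Frank Herbert');
    COMMIT;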


P.S. these* *stars* are *unnerving* ;-)

bkw

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] "Upcalls" (sort of) from the database

2006-04-06 Thread Don Y

Hi,

I wasn't prepared to ask this question, yet :<  but
all the talk of stored procedures, etc. suggests
this might be a good time to venture forth...

Humor me:  assume I have done the analysis and *know*
this to be correct for my situation  :>

I want to embed a good deal of the invariant aspects
of my application *domain* in the databases that
service it -- instead of in the applications riding
*above* them.  So, I let the database, itself, do
sanity checking of data on input -- if the database
rejects the INSERT, the application *knows* there is
something wrong with the data (instead of building
libraries to check each datum in each application
and *hoping* that the checks are implemented
consistently from one application to the next, etc.)

Anyway, the problem I have is how to handle cases
where the "database" needs user confirmation of an
action (!).  I.e. it has verified that the data
coming in is legitimate (adheres to all of the rules)
and *can* be stored in the appropriate tables -- BUT,
notices "something" that causes it to wonder if the
user REALLY wants to INSERT this data.  The database
would like to alert the user to what it has noticed
and get confirmation from the user (of course, I
mean my *application* wants to do this -- but, based
on observations made *by* the database, itself).

By way of example, the *toy* application I am playing with
to explore my implementation options is a "book" database;
it tracks titles, books, authors, publishers, etc.
The sort of thing a library could use to manage its
collection.

Assume the user tries to INSERT an entry for a "book".
Legitimately, this can be:
- a new title that the database has never seen before
- a new title by an author with other titles in the database
- an existing title thus another copy of that title

However, it can also just *appear* to be a legitimate new
title!

For example, the title may match an existing entry -- but
the author may be different (e.g., misspelled, or some
"other" author listed on a book having multiple authors, etc.).
Ideally, I would like the database to suspend the INSERT,
ask for confirmation (and "why") and then, either commit
the INSERT or abort it (based on the user's response).

Nearest I can imagine, there's only one ways I can do this:
issue a query that looks for these types of problems and
based on the result, let the *application* prompt the
user for confirmation.  Then, *if* confirmed, do the real
INSERT.

Is there a more elegant way?

Thanks,
--don


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] FAQ 1.1

2006-04-06 Thread Jim Nasby

On Apr 3, 2006, at 11:23 PM, Chris Browne wrote:


[EMAIL PROTECTED] (Jim Nasby) writes:

On Mar 31, 2006, at 4:17 AM, Dave Page wrote:

Given the tendency people have to remove the capitalised bits to get
'postgre', we'd probably end up with 'ostres'


Man I hate when people do that.

I think we should just rename the database to 'Fred'. :)


Yeah, someone at the office was asking me on the elevator about
whether some Post-something was somehow up and coming.

In retrospect, I think he was trying to pronounce Postgre, and arrived
at something I had never heard before...


And it doesn't help when "post-something" could also mean a very  
popular MTA...

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Kenneth Downs

Merlin Moncure wrote:


 It has been
more or less proven that functional, declaritive style coding has less
errors and is more reliable than mixed sql/procedural applciation code
given developers with equal skill.

I did not know there were empirical studies on this; I would love to be 
able to read them and cite them.  Can you point me to any in particular?


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Merlin Moncure
> > i have heard somewhere that writing a stored procedure, is much better than
> > firing a sql query(such as select * from table_name) onto the database.
> > is it true and if yes how?

Stored procedures (functions in PostgreSQL) eliminate a lot of
overhead.  They also spare you much of the inconvenience of manually
opening and closing transactions.  There are also security and design
benefits.

> This isn't going to be true most of the time, I think.  Write SQL where you
> can, and where you can't (because you can't express something in SQL), write

Stored procedures allow the application to do complex things in the
database without having complex sql inside.  They also allow multiple
application technologies to access the database without reimplementing
the query in 10 different places.  What happens when that query
changes?

> a procedure.  There are places where using a stored procedure can be more
> efficient, but I think starting with SQL, benchmarking and testing, and then
> determining what queries need special attention is the best way to go at the
> beginning.
>
> > also i want to know that is the performnance in java slower as compared to
> > cpp, given that the same things is being done.
>
> Java and cpp performance are not really related to postgresql performance.
> You will probably need to ask that on another list.  There are many other
> reasons to choose one language over another besides speed (in fact, I think
> speed is probably not the first thing to think about when choosing a
> language).

Language choice is important in that it may expose none, some, or
all of the internal libpq api (or implement its own version of the
client-side protocol).  cpp can often be faster because you can make
very thin wrappers over the libpq calls (STL is ideal for this).

Stored procedures, particularly pl/pgsql functions, have the enormous
advantage that queries are first-class objects inside the
procedural code, so you can

x :=  1 + 3;
select * from t where id = x;

without nasty string concatenation or learning a whole API just to
read and write data to the database.  Another advantage is that they
can also be executed by the query engine, so you can do:

select outstanding_account_balance(account) from account where...
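
A sketch of what such a function might look like (the account_entries table
and its columns are invented):

    CREATE OR REPLACE FUNCTION outstanding_account_balance(integer)
    RETURNS numeric AS $$
        SELECT coalesce(sum(amount), 0)
          FROM account_entries
         WHERE account = $1
           AND paid = false;
    $$ LANGUAGE sql STABLE;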

These two advantages alone mean huge reductions in code along with
corresponding savings in development and real dollars.  It has been
more or less proven that functional, declarative style coding has fewer
errors and is more reliable than mixed sql/procedural application code,
given developers with equal skill.  Thus, I would argue the opposite:
use procedures everywhere and keep application code to an absolute
minimum, because it is expensive to write and changes frequently.

The main reason not to code your logic in the database is to keep your
code portable across multiple databases.  Around the year 2000 I realized
pg is the only database I would ever want to use again in my
professional career.

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[GENERAL] Userid error

2006-04-06 Thread Dick Kniep
Hi list,

We are using Linux SuSE 9.3 with psycopg 1.18 and postgresql 7.4.8.
Sometimes we get the following error message on varying queries:

2006-01-18 13:19:17,807 ERROR SQLDict 185 Fout in Select ERROR:  user with ID
322 does not exist

SELECT "per_id", "per_naam1", "per_naam2", "per_telefoonnr_thuis",
"per_telefoonnr_mobiel", "per_faxnr", "per_mailadres_thuis", 
"per_mailadres_werk", "
per_adres", "per_opmerkingen", "mutmed", "mutdat", "per_aanhef",
"per_geboortedatum", "per_sofinr", "per_geslacht", "per_aanmaakdatum",
"per_debiteur_t
ime", "per_debiteur_toegevoegd", "per_roepnaam", 
"per_adressering_meisjesnaam",
"per_initialen", "per_tussenvoegsels", "per_meisjesnaam", "per_achterna
am", "cli_geboorteplaats", "cli_nationaliteit", "cli_burgerlijke_staat",
"cli_vluchteling", "cli_verblijfsvergunning", "cli_gehandicapt", "cli_autoallo
", "cli_woonsituatie", "cli_roepnaam", "cli_eigen_werknr",
"cli_soort_identificatie", "cli_identificatie", "cli_aantkinderen",
"cli_rijbewijs", "cli_uw
vcode", "cli_cwicode", "cli_opschort_vanaf", "cli_opschort_tot",
"cli_verzuimcoordinator", "cli_opschort_actief" FROM 
"adeuxproductie"."clienten"
 sqd1
  WHERE per_id in ((select cli_id from "adeuxproductie".betrokken_opdrachten
where betrokkene = 1061  and (opd_status < 90 OR (opd_status >= 90 AND ((o
pd_real_einde is NULL or opd_real_einde > '2006-01-04') OR uit_categorie is 
NULL
                        UNION (select cli_id from "adeuxproductie".planning 
et1,
"adeuxproductie".activiteiten_stappen et2, "adeuxproductie".opdrachten
 et3,  "adeuxproductie".groepen_per_medewerker et4
                        where et1.opd_id = et3.opd_id and et1.stp_id =
et2.tab_id and et4.mwg_id = et2.act_med_groep and med_id = 1061  and 
(opd_status
 < 90 OR (opd_status >= 90 AND ((opd_real_einde is NULL or opd_real_einde >
'2006-01-04') OR uit_categorie is NULL)

It looks as if the user that is connected to the database suddenly doesn't 
exist anymore. The user has been able to connect, to work for a while, and 
after a while this error occurs.

Does anyone have any ideas? Could this be a problem in libpq, or 
postgresql?

Thanks in advance
Dick Kniep

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Why postgres install requires physical access to

2006-04-06 Thread Andrus

> There's always VNC: http://www.realvnc.com/  That way Windows won't know
> you're not sitting in front of it, and if you've got to access it from
> across the country on a linux or BSD or MAC box, it still works.

VNC requires an additional port to be opened.

I have no free open ports on the W2K server. Also, on my XP client, all 
incoming ports are closed.

Andrus. 



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] autovacuum settings

2006-04-06 Thread Jebus
I am totally new to postgres and I wondering what settings should I be
using for autovacuum ?
For now I just uncommented all the defaults, is this good enough ? The
database is not large but it is a web database so there is a lot of
inserts and updates.



autovacuum = on # enable autovacuum subprocess?
autovacuum_naptime = 60 # time between autovacuum runs, in secs
autovacuum_vacuum_threshold = 1000  # min # of tuple updates before
# vacuum
autovacuum_analyze_threshold = 500  # min # of tuple updates before
# analyze
autovacuum_vacuum_scale_factor = 0.4# fraction of rel size before
# vacuum
autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before
# analyze
autovacuum_vacuum_cost_delay = -1   # default vacuum cost delay for
# autovac, -1 means use
# vacuum_cost_delay
autovacuum_vacuum_cost_limit = -1   # default vacuum cost limit for
# autovac, -1 means use
# vacuum_cost_limit


I also uncommented these too; I am not sure what they are, though:

vacuum_cost_delay = 0   # 0-1000 milliseconds
vacuum_cost_page_hit = 1# 0-1 credits
vacuum_cost_page_miss = 10  # 0-1 credits
vacuum_cost_page_dirty = 20 # 0-1 credits
vacuum_cost_limit = 200 # 0-1 credits

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] stored proc vs sql query string

2006-04-06 Thread Sean Davis



On 4/6/06 12:12 AM, "surabhi.ahuja" <[EMAIL PROTECTED]> wrote:

> i have heard somewhere that writing a stored procedure, is much better than
> firing a sql query(such as select * from table_name) onto the database.
> is it true and if yes how?

This isn't going to be true most of the time, I think.  Write SQL where you
can, and where you can't (because you can't express something in SQL), write
a procedure.  There are places where using a stored procedure can be more
efficient, but I think starting with SQL, benchmarking and testing, and then
determining what queries need special attention is the best way to go at the
beginning.

> also i want to know that is the performnance in java slower as compared to
> cpp, given that the same things is being done.

Java and cpp performance are not really related to postgresql performance.
You will probably need to ask that on another list.  There are many other
reasons to choose one language over another besides speed (in fact, I think
speed is probably not the first thing to think about when choosing a
language).

Sean


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] pgcrypto-crypt

2006-04-06 Thread chris smith
On 4/6/06, AKHILESH GUPTA <[EMAIL PROTECTED]> wrote:
> dear all,
>  i want to encrypt and decrypt one of the fields in my table (i.e-password
> field)
>  i have searched and with the help of pgcrypto package, using function
> "crypt", i am able to encrypt my data,
>  but there is nothing which i found to decrypt that same data,
>  plz anybody give me the function to decrypt that encrypted value.

The crypt function can't be decrypted (whether it's in postgresql or
anywhere else).

Crypt is meant to be used for passwords and such that you don't need
to reverse (you only compare against).
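
A sketch of that compare-only pattern, assuming a made-up users table and
pgcrypto's gen_salt():

    -- store only the hash, never the cleartext
    INSERT INTO users (login, pwhash)
    VALUES ('akhilesh', crypt('secret', gen_salt('md5')));

    -- to check a login attempt, hash the candidate using the stored value as salt
    SELECT 1 FROM users
     WHERE login = 'akhilesh'
       AND pwhash = crypt('candidate password', pwhash);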

--
Postgresql & php tutorials
http://www.designmagick.com/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] pgcrypto-crypt

2006-04-06 Thread AKHILESH GUPTA
Dear all,
I want to encrypt and decrypt one of the fields in my table (i.e. the password field).
I have searched, and with the help of the pgcrypto package, using the function "crypt", I am able to encrypt my data,
but I have found nothing to decrypt that same data.
Please, can anybody give me the function to decrypt that encrypted value?
Please reply asap.
-- 
Thanks & Regards,
Akhilesh
DAV Institute of Management
Faridabad (Haryana)
GSM:- (+919891606064)  (+911744293789)
"FAILURES CAN BE FORGIVEN BUT AIMING LOW IS A CRIME"


Re: [GENERAL] [Slightly OT] data model books/resources?

2006-04-06 Thread Michael Glaesemann


On Apr 1, 2006, at 0:19 , Robert Treat wrote:


On Thursday 30 March 2006 03:03, Aaron Glenn wrote:

Anyone care to share the great books, articles, manifestos, notes,
leaflets, etc on data modelling they've come across? Ideally I'd like
to find a great college level book on data models, but I haven't come
across one that even slightly holds "definitive resource"-type  
status.




I've heard that "Relational Database Design" (ISBN: 0123264251) is  
good for
college level introductory material, though the book I generally  
recommend

most is "Practical Issues in Database Management" (ISBN: 0201485559)


Might be a bit more OT than your OT (as it leans towards the relational model  
in general rather than data modeling--applying the relational model  
for a particular use), but I am really enjoying "Database in Depth :  
Relational Theory for Practitioners" (ISBN: 0596100124).


Michael Glaesemann
grzm myrealbox com




---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] % tsearch gendict

2006-04-06 Thread Teodor Sigaev

$ ./config.sh
config.sh : bad sustitution


Simple workaround: take dictionary generated on Gentoo.

I'll look at the problem, but I suspect the reason is a difference between the
Sun and GNU environments (echo, sed and so on).


--
Teodor Sigaev   E-mail: [EMAIL PROTECTED]
   WWW: http://www.sigaev.ru/

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] full join question...

2006-04-06 Thread Jonas F. Henriksen
Thanks Tom, that worked great!!

Regards Jonas:))

On Wed, 2006-04-05 at 15:09 -0400, Tom Lane wrote:
> "Jonas F. Henriksen" <[EMAIL PROTECTED]> writes:
> > Well, the problem is I want the result on one row for each depth, so it
> > will fit nicely into a table, like this: 
> 
> > depth measuretype1_value measuretype2_value
> > 10  1.782.55
> > 20  2.12
> > 30  3.12
> > 40  1.3 1.4
> 
> I think what you have to do is filter the rows according to measuretype
> *before* you join them.  Something like
> 
> select * from
>   (select depth, value as measuretype1_value from testtable
>where measuretype = 1040) ss1
>   full join
>   (select depth, value as measuretype2_value from testtable
>where measuretype = 4001) ss2
>   using (depth);
> 
>   regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster