Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Jonel Rienton

You'll probably need to install fink or darwinports as well if you're 
building from source. I recently built PostgreSQL 8.0.1 successfully 
on my Mac OS X box.  Building PostgreSQL on different 
platforms has become one of my hobbies :) so if you need help just let 
me know.

regards,

--
Jonel Rienton
http://blogs.road14.com
Software Developer, *nix Advocate
On Feb 9, 2005, at 12:35 AM, John DeSoi wrote:


 On Feb 9, 2005, at 10:39 AM, renato.barrios wrote:

 Please tell me if PostgreSQL runs on an iMac.

 Sure, assuming your iMac is running OS X. You'll need to install 
 Apple's free developer tools to compile it or perhaps you might want 
 to use one of the freely available installers such as:

 http://www.entropy.ch/software/macosx/postgresql/




 John DeSoi, Ph.D.
 http://pgedit.com/
 Power Tools for PostgreSQL


 ---(end of 
 broadcast)---
 TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq






Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Tino Wildenhain
On Wednesday, 2005-02-09 at 01:42 -0600, Thomas F. O'Connell wrote:
 I don't think it's a standard part of OS X. I think it comes with Apple 
 Remote Desktop, and it might be in the Server edition, but it's not a 
 part of the standard version.

Ok ;) But at least we can state it runs ;)



 -tfo
 
 --
 Thomas F. O'Connell
 Co-Founder, Information Architect
 Sitening, LLC
 http://www.sitening.com/
 110 30th Avenue North, Suite 6
 Nashville, TN 37203-6320
 615-260-0005
 
 On Feb 9, 2005, at 1:34 AM, Tino Wildenhain wrote:
 
  On Wednesday, 2005-02-09 at 16:39 +0100, renato.barrios wrote:
  Please tell me if PostgreSQL runs on an iMac.
 
  IIRC postgres is even part of Mac OS X.
  (perhaps not the latest version)
 
  Regards
  Tino
 
 
 
-- 
Tino Wildenhain [EMAIL PROTECTED]




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Christoph Pingel
Please tell me if PostgreSQL runs on an iMac.
If you're on OS X, sure. I'd recommend installing darwinports 
(http://darwinports.opendarwin.org), which makes downloading and 
compiling as easy as:

   sudo port install postgresql8
darwinports also features several modules for accessing PostgreSQL; I 
use psycopg for Python.
On my machine, a 'not-too-current' Powerbook, the combo runs without 
any problems.

best regards,
Christoph


Re: [GENERAL] More concurent transaction over single connection ?

2005-02-09 Thread Martijn van Oosterhout
On Wed, Feb 09, 2005 at 12:22:44AM +0100, NTPT wrote:
 AFAIK (7.4.x) there is one limitation in persistent connections to 
 PostgreSQL from various frontends 
 ( http://cz.php.net/manual/en/features.persistent-connections.php ): 
 transactions cannot be used when several concurrent tasks share a 
 single connection (excuse my poor English)

One backend handles one session. Nothing strange about that. That won't
change anytime soon either.

 I think it could help avoid connection startup costs, especially 
 where the database and client are separated by a long network distance 
 and/or an expensive procedure is needed to start the connection, and 
 it would allow simple and transparent connection pooling, maybe with 
 some kind of spare servers as in Apache (the MinSpareServers and 
 MaxSpareServers configuration directives)

I think you're looking for pgpool; it does connection pooling.
Multiplexing multiple connections over one doesn't help, because the
process at the other end would have to copy them out to several
different backends anyway, so you don't save anything.

pgpool is a much better system...

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] More concurent transaction over single connection

2005-02-09 Thread Richard Huxton
NTPT wrote:
AFAIK (7.4.x) there is one limitation in persistent connections to 
PostgreSQL from various frontends 
( http://cz.php.net/manual/en/features.persistent-connections.php ): 
transactions cannot be used when several concurrent 
tasks share a single connection (excuse my poor English)


I suggest adding some sort of context identifier to the 
frontend/backend protocol to overcome this limit. I.e. the frontend 
(like PHP, for example) makes ONE persistent connection and different 
scripts are served over this connection. But the frontend adds a unique 
context identifier for each instance of a script, and the PostgreSQL 
server treats different contexts as if they were sent by different 
connections. The results are then sorted by context by the frontend and 
fed to the appropriate instance of the PHP script.
You've just reinvented connections. The problem is really at the 
application end, since PHP doesn't provide a middleware layer to manage 
this sort of stuff. Typically, Java-based application servers manage 
this sort of thing for you.

I think it could help avoid connection startup costs, especially 
where the database and client are separated by a long network distance 
and/or an expensive procedure is needed to start the connection, and 
it would allow simple and transparent connection pooling, maybe with 
some kind of spare servers as in Apache (the MinSpareServers and 
MaxSpareServers configuration directives)
Perhaps take a look at pgpool connection pooling.
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] Out of memory error

2005-02-09 Thread Clodoaldo Pinto
After much work i was able to do it:

The psql script was changed to:

\o '/KakaoStats/bak/groupdup1.txt'
select 
data,
usuario,
sum(pontos),
sum(wus)
from usuarios
where data <= 2056
group by data, usuario
;
\o
\o '/KakaoStats/bak/groupdup2.txt'
select 
data,
usuario,
sum(pontos),
sum(wus)
from usuarios
where data > 2056 and data <= 2400
group by data, usuario
;
\o
...snip...
\o '/KakaoStats/bak/groupdup8.txt'
select 
data,
usuario,
sum(pontos),
sum(wus)
from usuarios
where data > 2600
group by data, usuario
;
\o

Then I had 8 files, which I concatenated, and then I tried this in pgadmin3:

truncate table usuarios2;
vacuum usuarios2;
copy usuarios2 (
data,
usuario,
pontos,
wus
)
from '/KakaoStats/bak/groupdup.txt'
;

Again out of memory error.

Tried to break it up the same way I did with the select:

truncate table usuarios2;
vacuum usuarios2;
copy usuarios2 (
data,
usuario,
pontos,
wus
)
from '/KakaoStats/bak/groupdup1.txt'
;
...snip...
copy usuarios2 (
data,
usuario,
pontos,
wus
)
from '/KakaoStats/bak/groupdup8.txt'
;

And again out of memory error.  ???

Then I added begin and commit to each of the copy commands and it worked !!!

truncate table usuarios2;
vacuum usuarios2;
begin;
copy usuarios2 (
data,
usuario,
pontos,
wus
)
from '/KakaoStats/bak/groupdup1.txt'
;
commit;
...snip...
begin;
copy usuarios2 (
data,
usuario,
pontos,
wus
)
from '/KakaoStats/bak/groupdup8.txt'
;
commit;

This was with pgadmin3. Does pgadmin3 turn the whole script into one
single transaction?

My hardware: Asus a7v8x-mx motherboard, 80GB Maxtor HD, XP2600/333,
2x512MB memory at the most conservative settings.



Re: [GENERAL] Out of memory error

2005-02-09 Thread Tino Wildenhain
On Monday, 2005-02-07 at 20:05 -0200, Clodoaldo Pinto wrote:
 On Mon, 07 Feb 2005 13:51:46 -0800, Joshua D. Drake
 [EMAIL PROTECTED] wrote:
  
  Well your first email didn't explain that you were doing the below :)
 
 In the first email I was not doing the insert. I was executing a psql script:
 
 $ psql -e -f groupdup.psql ks2
 
 This was the groupdup.psql script:
 
 \pset format unaligned
 \pset fieldsep '\t'
 \pset tuples_only
 \o '/KakaoStats/bak/groupdup.txt'
 select 
   data,
   usuario,
   sum(pontos),
   sum(wus)
 from usuarios
 group by data, usuario
 ;
 \o
 \pset tuples_only
 \pset format aligned
 
 I was planning to insert the file with copy from.

Will this go into the same database?
If so, you should probably use:

CREATE TABLE targettable AS 
select  data,
usuario,
sum(pontos) as sum_pontos,
sum(wus) as sum_wus
from usuarios
group by data, usuario;


If you are going to insert this into another database,
consider doing the step above and then using

pg_dump -t targettable

to get a nice dump you can easily restore. You can also use COPY to
dump it yourself.

HTH
Tino




Re: [GENERAL] Out of memory error

2005-02-09 Thread Clodoaldo Pinto
On Wed, 09 Feb 2005 13:18:44 +0100, Tino Wildenhain [EMAIL PROTECTED] wrote:
 
 Will this go into the same database?

Yes, this *went* into the same database.

 If so, you should probably use:
 
 CREATE TABLE targettable AS
 select  data,
 usuario,
 sum(pontos) as sum_pontos,
 sum(wus) as sum_wus
 from usuarios
 group by data, usuario;
 

I didn't try the create route. I have already performed the grouping. As
the table is big, each step takes very long and I don't have more time
to investigate further.

Regards, Clodoaldo Pinto

 
 HTH
 Tino
 




Re: [GENERAL] Out of memory error

2005-02-09 Thread Tino Wildenhain
On Wednesday, 2005-02-09 at 10:39 -0200, Clodoaldo Pinto wrote:
 On Wed, 09 Feb 2005 13:18:44 +0100, Tino Wildenhain [EMAIL PROTECTED] wrote:
  
  Will this go into the same database?
 
 Yes, this *went* into the same database.
 
  If so, you should probably use:
  
  CREATE TABLE targettable AS
  select  data,
  usuario,
  sum(pontos) as sum_pontos,
  sum(wus) as sum_wus
  from usuarios
  group by data, usuario;
  
 
 I didn't try the create route. I have already performed the grouping. As
 the table is big, each step takes very long and I don't have more time
 to investigate further.

Well, the CREATE TABLE ... AS SELECT is the most obvious approach for
this kind of task. Maybe it helps next time. There are also some nice
howtos on building a kind of materialized view, which looks like what
you want to do here.

Regards
Tino




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread John DeSoi
On Feb 9, 2005, at 3:43 AM, Jonel Rienton wrote:
You'll probably need to install fink or darwinports as well if you're
building from source. I recently built PostgreSQL 8.0.1 successfully
on my Mac OS X box.  Building PostgreSQL on different
platforms has become one of my hobbies :) so if you need help just let
me know.
Actually, I think everything you need is there without fink or 
darwinports (unless you consider readline essential). I regularly use

./configure --without-readline
which gives an install that can be copied to other machines without 
changes.

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


Re: [GENERAL] [SQL] Updating selected record

2005-02-09 Thread John DeSoi
Hi Levi,
On Feb 9, 2005, at 4:01 AM, Levente Lajko wrote:
Yes, I have found this statement. However, it doesn't prevent others 
from seeing that specific row in its state before the update. The point 
is to lock the row for reading as well.
Unfortunately, the postgres docs clearly state that row-level locks 
allow others to query the row.
The only solution is something like process handling, which is not a 
headache at all; I'm only a bit surprised that pg doesn't have this 
option.

Any comment would be welcome.

Ok, sorry I did not completely understand your question. Perhaps you 
could write a function or rule that automatically filters out the rows 
that are being updated.

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


Re: [GENERAL] pgpool 2.5b2 released

2005-02-09 Thread Richard Huxton
Roman Neuhauser wrote:
# [EMAIL PROTECTED] / 2005-02-06 09:34:53 -:
Since I posted my original question, I realized that pgpool notes a
failure of either master or slave in its log.  Would we want something
more proactive?

snmp?
Send out a NOTIFY that can be LISTENed for.
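A minimal sketch of that idea (the channel name is made up; in the 7.x/8.0 wire protocol a NOTIFY carries only the event name, no payload):

```sql
-- A monitoring client registers interest in a hypothetical channel:
LISTEN pgpool_node_down;

-- Whatever detects the master/slave failure then raises the event,
-- and every listening session is told that pgpool_node_down fired:
NOTIFY pgpool_node_down;
```

A psql session that has issued LISTEN will report the notification the next time it completes a command; a libpq client can poll for it between queries.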
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] More concurent transaction over single connection ?

2005-02-09 Thread Marco Colombo
On Wed, 9 Feb 2005, NTPT wrote:
AFAIK (7.4.x) there is one limitation in persistent connections to PostgreSQL 
from various frontends 
( http://cz.php.net/manual/en/features.persistent-connections.php ): 
transactions cannot be used when several concurrent tasks share a single 
connection (excuse my poor English)
For the PHP case, it's not a limitation. pg_connect() and pg_pconnect() 
have the
same semantics, per spec. That is, there's no way to write a PHP program that
behaves differently depending on the use of pg_connect or pg_pconnect.
You can always safely substitute a pg_pconnect with pg_connect, as far as PHP
is concerned (it may affect performance, but not the semantics of the program).
Moreover, for a web application, assuming you're using a multiprocess server
such as Apache, you can never tell which sub-process will handle your next
request. And database connections are per-process. So the problem you need to
solve is not just multiple apps using the same connection, but also how to pass
connections along among sub-processes. Think bigger, and consider a
load-balancing solution, where requests are directed to different web frontends:
you'd need to pass database connections among different _hosts_.
It's the stateless nature of HTTP that makes web services really scalable.
Persistent connections would destroy any possible state when the request is
done. You can't have a transaction span multiple requests, by design.
If you really need that, consider an application server. Anyway, beginning
a transaction in one page and waiting for a second request from the client
in order to commit it is bad practice, since the wait can be potentially
very long (and you need a way to recover from that).
.TM.
--
  /  /   /
 /  /   /   Marco Colombo
___/  ___  /   /  Technical Manager
   /  /   /  ESI s.r.l.
 _/ _/  _/ [EMAIL PROTECTED]


Re: [GENERAL] indexing just a part of a string

2005-02-09 Thread Christopher Browne
In the last exciting episode, [EMAIL PROTECTED] (Christoph Pingel) wrote:
 I'm new to PostgreSQL, and it has been a pleasure so far.

 There's just one thing I'm trying to do and I didn't find any hints in
 the manual: I want to index just a part of a string in a column.

 The situation: I have roughly 300,000 rows, and in the column we're
 looking at, most of the entries have fewer than 200 chars. However,
 there are some (very few) that have more than 3000 chars, and
 postmaster replies that this is too many for the index (b-tree).

 So I would like to say 'index only the first 200 chars of the column',
 which would give a full index of 99.9% of my entries. I did this
 in MySQL, but I didn't find it in the pg manual.

 How do I proceed?

You might look at using a functional index:

http://www.postgresql.org/docs/7.3/interactive/indexes-functional.html

You'd therefore create a first_200_chars() function, and then create
an index using that function.
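As a sketch (the table and column names are made up; on 7.x the index expression has to be a function call over the column, hence the helper):

```sql
-- Helper wrapping the prefix expression; IMMUTABLE so it is indexable.
CREATE FUNCTION first_200_chars(text) RETURNS text AS
    'SELECT substr($1, 1, 200)'
LANGUAGE sql IMMUTABLE;

-- Functional index covering only the first 200 characters.
CREATE INDEX idx_entries_body_prefix ON entries (first_200_chars(body));

-- Queries must use the same expression for the planner to consider it:
SELECT * FROM entries
WHERE first_200_chars(body) = first_200_chars('some search text');
```

Note the last point: a plain `WHERE body = '...'` predicate will not use this index, since it indexes the expression, not the column.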
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').
http://cbbrowne.com/info/postgresql.html
Signs  of  a Klingon  Programmer  #10:  You cannot  really appreciate
Dilbert unless you've read it in the original Klingon.



[GENERAL] parameters handling in postgresql functions

2005-02-09 Thread Sibtay Abbas
hi all,
How are function parameters generally passed in PostgreSQL?

More specifically, when we pass a variable to a plpgsql function, is it
passed by value or by reference?

Thank You



Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Joshua D. Drake
renato.barrios wrote:
Please tell me if PostgreSQL runs on an iMac.
 

Yes it does as long as you are running OS X.
Sincerely,
Joshua D. Drake

Thanks,
Renato
 


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Steve Atkins
On Wed, Feb 09, 2005 at 02:43:56AM -0600, Jonel Rienton wrote:

 You'll probably need to install fink or darwinports as well if you're 
 building from source. I recently built PostgreSQL 8.0.1 successfully 
 on my Mac OS X box.  Building PostgreSQL on different 
 platforms has become one of my hobbies :) so if you need help just let 
 me know.

No need to play with fink or darwinports. 8.0.1 builds from source (at
least in a fairly vanilla configuration) on stock OS X 10.3 with the
apple developer tools installed.

Cheers,
  Steve



Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Thomas F. O'Connell
True. Apple even has a page with installation instructions:
http://developer.apple.com/internet/opensource/postgres.html
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Architect
Sitening, LLC
http://www.sitening.com/
110 30th Avenue North, Suite 6
Nashville, TN 37203-6320
615-260-0005
On Feb 9, 2005, at 3:14 AM, Tino Wildenhain wrote:
Ok ;) But at least we can state it runs ;)



Re: [GENERAL] parameters handling in postgresql functions

2005-02-09 Thread Richard Huxton
Sibtay Abbas wrote:
hi all,
How are function parameters generally passed in PostgreSQL?
More specifically, when we pass a variable to a plpgsql function, is it
passed by value or by reference?
By value. Always (as far as I know).
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] parameters handling in postgresql functions

2005-02-09 Thread John DeSoi
On Feb 9, 2005, at 11:10 AM, Sibtay Abbas wrote:
hi all,
How are function parameters generally passed in PostgreSQL?
More specifically, when we pass a variable to a plpgsql function, is it
passed by value or by reference?
By value. I don't believe there is any way to modify parameters if that 
is what you are looking for.
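A small illustration of that by-value behavior (the function name is made up; this assumes 8.0+, which allows named parameters and dollar quoting — inside the function the parameter acts as a local copy, so reassigning it cannot affect the caller):

```sql
CREATE OR REPLACE FUNCTION double_it(n integer) RETURNS integer AS $$
BEGIN
    -- n is a local copy of the argument; this assignment only
    -- changes the copy, never anything in the calling context.
    n := n * 2;
    RETURN n;
END;
$$ LANGUAGE plpgsql;

SELECT double_it(21);  -- returns 42; the caller's value is untouched
```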

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL


Re: [GENERAL] ADO adCmdStoredProc PlPgSql-SP Parameters

2005-02-09 Thread Shachar Shemesh
[EMAIL PROTECTED] wrote:
Hi,
How do I call a stored procedure, written in PL/pgSQL, 
which returns a set of records (or a table, whatever) 
and takes parameters, 
from ADO while using adCmdStoredProc?

 

Which driver are you using? If it's the OLE DB provider, then there is a 
solution, provided that you modify your stored procedure a little. You 
can probably write a wrapper procedure, if that's not acceptable as it is.

If your function returns refcursor, then OLE DB will interpret that to 
mean that the function has rowset returns, and will emulate the standard 
calls surrounding returning multiple results.

Please feel free to let me know if this solves your problem. Please CC 
me on replies.

 Shachar
--
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html


[GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
I'm trying to optimize a large query by looking at 
EXPLAIN ANALYZE output.  Here's what I think may be 
the most relevant snippet:

21 ->  Nested Loop  (cost=0.00..108.85 rows=1 width=1196) (actual 
time=4769.59..4769.59 rows=0 loops=1)
22   ->  Nested Loop  (cost=0.00..64.78 rows=4 width=131) (actual 
time=0.41..72.80 rows=1014 loops=1)
23     ->  Nested Loop  (cost=0.00..40.68 rows=4 width=103) (actual 
time=0.34..46.51 rows=1014 loops=1)
24       ->  Nested Loop  (cost=0.00..32.40 rows=3 width=52) 
(actual time=0.24..11.07 rows=689 loops=1)
25         ->  Index Scan using uc_review_reviewnum on 
review  (cost=0.00..3.85 rows=1 width=28) (actual time=0.12..0.13 rows=1 
loops=1)
26               Index Cond: (reviewnum = '1890382'::bpchar)
27         ->  Index Scan using 
idx_customerorder_review_key on customerorder  (cost=0.00..27.90 rows=52 
width=24) (actual time=0.11..8.07 rows=689 loops=1)
28               Index Cond: ("outer".key = 
customerorder.review_key)
29               Filter: (ordertype = 'TEST'::bpchar)
30       ->  Index Scan using 
idx_occurrencehistory_customerorder_key on occurrencehistory  (cost=0.00..3.05 
rows=2 width=51) (actual time=0.02..0.04 rows=1 loops=689)
31             Index Cond: ("outer".key = 
occurrencehistory.customerorder_key)
32     ->  Index Scan using pk_user on user  (cost=0.00..5.80 
rows=1 width=28) (actual time=0.01..0.01 rows=1 loops=1014)
33           Index Cond: ("outer".user_key = user.key)
34   ->  Index Scan using idx_queryoutcome_occurrencehistory_key on 
queryoutcome  (cost=0.00..10.28 rows=28 width=1065) (actual time=4.63..4.63 
rows=0 loops=1014)
35         Index Cond: ("outer".key = 
queryoutcome.occurrencehistory_key)
36 ->  Index Scan using pk_querydefinition on querydefinition  
(cost=0.00..5.99 rows=1 width=58) (never executed)
37   Index Cond: ("outer".querydefinition_key = querydefinition.key)

If I understand these correctly, line 22's nested loop finished
returning the last row 72.80ms into the query, and then line
21's nested loop returns its 0 rows 4769.59ms into the query.  
That would seem to mean 4696.79ms were spent executing the 
nested loop on line 21 prior to returning the first row.

If I got that right, how do I figure out what constitutes the
nested loop of line 21?

TIA.

Ed




[GENERAL] EXTPROC External Procedure

2005-02-09 Thread Roy Souther
I cannot find any information about whether PostgreSQL has EXTPROC or not. Can 
someone tell me if it does, or if there is a better way to do this.

I need to be able to run a program when some data is requested from the 
database. Some query will do a SELECT on a known table and I need to have 
some event trigger that makes a call to generate a small XML file and then 
post it to a web server, get a reply back and then update a local table with 
the information it received.

I have looked at how to do this in Oracle, but I want to use PostgreSQL.
For Oracle I found this: http://www.orafaq.com/faqplsql.htm#OSCOMMAND. In 
Oracle I would make an EXTPROC call a bash script that runs wget, and 
the bash script would then make a new connection to the database to send it 
the result data. The EXTPROC does not require any returned information; it 
just needs to start the action.

Is there any way this can be done in PostgreSQL? Is there some other way to do 
this?




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Joe Lester
Watch out for the memory leak bug on OS 10.3.2 through 10.3.5 (this is 
apparently an OS X bug, not a Postgres bug):

http://archives.postgresql.org/pgsql-general/2004-08/msg00972.php
I don't know if it exists on 10.3.6 or 10.3.7 (the current version). I 
submitted a bug report to Apple many months ago, but it has not seen 
any activity yet.

Joe
On Feb 9, 2005, at 1:35 AM, John DeSoi wrote:
On Feb 9, 2005, at 10:39 AM, renato.barrios wrote:
Please tell me if PostgreSQL runs on an iMac.
Sure, assuming your iMac is running OS X. You'll need to install 
Apple's free developer tools to compile it or perhaps you might want 
to use one of the freely available installers such as:

http://www.entropy.ch/software/macosx/postgresql/

John DeSoi, Ph.D.
http://pgedit.com/
Power Tools for PostgreSQL




Re: [GENERAL] EXTPROC External Procedure

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 11:29, Roy Souther wrote:
 I cannot find any information about whether PostgreSQL has
 EXTPROC or not. Can someone tell me if it does, or if there is
 a better way to do this.

Not sure what version you're using, but sounds like a job for 
perl (plperl) + a select rule (see CREATE RULE).

Ed




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Alvaro Herrera
On Wed, Feb 09, 2005 at 11:00:00AM -0700, Ed L. wrote:

 34   ->  Index Scan using idx_queryoutcome_occurrencehistory_key on 
 queryoutcome  (cost=0.00..10.28 rows=28 width=1065) (actual time=4.63..4.63 
 rows=0 loops=1014)
                ^^^^^^^^^^
 35         Index Cond: ("outer".key = 
 queryoutcome.occurrencehistory_key)


 If I understand these correctly, line 22's nested loop finished
 returning the last row 72.80ms into the query, and then line
 21's nested loop returns its 0 rows 4769.59ms into the query.  
 That would seem to mean 4696.79ms were spent executing the 
 nested loop on line 21 prior to returning the first row.

4796.0 ms is the time it took for 34's index scan to complete the 1014
loops, I'd think.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Nature, so fragile, so exposed to death... and yet so alive



Re: [GENERAL] EXTPROC External Procedure

2005-02-09 Thread Alvaro Herrera
On Wed, Feb 09, 2005 at 11:29:21AM -0700, Roy Souther wrote:

 I need to be able to run a program when some data is requested from the 
 database. Some query will do a SELECT on a known table and I need to have 
 some event trigger that makes a call to generate a small XML file and then 
 post it to a web server, get a reply back and then update a local table with 
 the information it received.

You may be able to use a pl/perlu (untrusted perl) function to do this.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
Never renounce what you once believed



Re: [GENERAL] EXTPROC External Procedure

2005-02-09 Thread Pavel Stehule
Hello, 

code for generating xml:

CREATE OR REPLACE FUNCTION ext.xml(varchar) RETURNS text AS $$
    use DBI; use XML::Generator::DBI; use XML::Handler::YAWriter;
    my $rv = spi_exec_query("SELECT current_setting('client_encoding'), " .
        "current_database()", 1);
    my $encoding = lc($rv->{rows}[0]->{current_setting});
    my $cdb = $rv->{rows}[0]->{current_database};
    my $dbh = DBI->connect("DBI:Pg:dbname=$cdb", "", "",
        { RaiseError => 1, PrintError => 0 });
    my $ya = XML::Handler::YAWriter->new(Encoding => $encoding,
        Pretty => { PrettyWhiteNewline => 1, PrettyWhiteIndent => 1 });
    my $gen = XML::Generator::DBI->new(Handler => $ya, dbh => $dbh,
        ByColumnName => 1, ShowColumns => 1);
    $gen->execute($_[0]);
    $dbh->disconnect();
    return join('', @{$ya->{Strings}});
$$ LANGUAGE plperlu;
SELECT ext.xml('SELECT * FROM ext.jmena LIMIT 10');

Sending mail or uploading a file via HTTP isn't a problem. PL/Perlu works 
fine.

Regards
Pavel Stehule




Re: [GENERAL] PHP/PDO Database Abstraction Layer

2005-02-09 Thread Gavin M. Roy
PDO just went into beta, and can be downloaded from
http://pecl.php.net/package/pdo to be compiled into previous versions of
PHP.  We really should get some PHP and PgSQL people onto making sure
the PgSQL driver is top notch (if it isn't already). 

Gavin


On Mon, 2005-02-07 at 20:21 -0800, Joshua D. Drake wrote:
 Robby Russell wrote:
 
 Sincerely,
 
 Joshua D. Drake
 Command Prompt, Inc.
 503-667-4564
 
 It hasn't been updated since May 2004 though. :-/
 
 
 Hmm... Well, there must be another home for it then, because
 it is set to be the default database API for 5.1. Ah, now I see
 it has already been pushed into the PHP source...
 
 http://php.oregonstate.edu/manual/en/ref.pdo.php
 
 I apologize for the confusion.
 
 Sincerely,
 
 Joshua D. Drake
 
 
 
 
 
 
 




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Alvaro Herrera
On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
 On Wednesday February 9 2005 12:04, you wrote:

  4796.0 ms are the time it took for 34's index scan to complete
  the 1014 loops, I'd think.
 
 Interesting.  Reindexing the index
 
   idx_queryoutcome_occurrencehistory_key 
 
 sped up the query to sub-second.  This is a 7.3.4 cluster.  I 
 wonder if this is that 7.3 index bloat bug?

Hard to say, because you didn't provide more info (such as if the index
has the right deletion pattern), but it could be.

-- 
Alvaro Herrera ([EMAIL PROTECTED])
I call it GNU/Linux. Except the GNU/ is silent. (Ben Reiter)



Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 12:56, Alvaro Herrera wrote:
 On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
   sped up the query to sub-second.  This is a 7.3.4 cluster. 
   I wonder if this is that 7.3 index bloat bug?

 Hard to say, because you didn't provide more info (such as if
 the index has the right deletion pattern), but it could be.

What delete pattern would cause the problem?

Ed




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Alvaro Herrera
On Wed, Feb 09, 2005 at 01:11:36PM -0700, Ed L. wrote:
 On Wednesday February 9 2005 12:56, Alvaro Herrera wrote:
  On Wed, Feb 09, 2005 at 12:54:27PM -0700, Ed L. wrote:
   sped up the query to sub-second.  This is a 7.3.4 cluster. 
   I wonder if this is that 7.3 index bloat bug?
 
  Hard to say, because you didn't provide more info (such as if
  the index has the right deletion pattern), but it could be.
 
 What delete pattern would cause the problem?

A key on which you insert on one side and delete on the other.  A table
indexed by date, and you regularly delete the oldest entries, for
example.
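The failure mode can be made concrete with a toy model. This is not PostgreSQL's actual btree code, and every size and row count below is invented; it only sketches why, in releases before 7.4 (which could not recycle emptied index pages), the insert-on-one-side/delete-on-the-other pattern leaves the index full of dead pages that REINDEX reclaims:

```python
PAGE_SIZE = 100  # live keys per leaf page in this toy model

def simulate(days, rows_per_day, keep_days):
    """Insert increasing keys at the right edge, purge the oldest each day."""
    pages = []       # each allocated leaf page is a list of live keys
    next_key = 0
    for day in range(days):
        for _ in range(rows_per_day):
            if not pages or len(pages[-1]) >= PAGE_SIZE:
                pages.append([])             # allocate a fresh right-edge page
            pages[-1].append(next_key)
            next_key += 1
        cutoff = (day - keep_days + 1) * rows_per_day
        for page in pages:                   # delete old keys; pages stay allocated
            page[:] = [k for k in page if k >= cutoff]
    live_keys = sum(len(p) for p in pages)
    return live_keys, len(pages)

# Live keys stay flat at one week's worth, but the page count keeps
# growing: the index is mostly empty pages that are never reused.
live, allocated = simulate(days=30, rows_per_day=500, keep_days=7)
```

With these made-up numbers the index holds 3500 live keys (needing 35 full pages) yet has 150 pages allocated, which is the kind of bloat the reindex in this thread cleaned up.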

-- 
Alvaro Herrera ([EMAIL PROTECTED])
El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte
(Ijon Tichy en Viajes, Stanislaw Lem)



Re: [GENERAL] create temp table and on commit in 7.3.3

2005-02-09 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 The 7.3 and 7.2 documentation for CREATE TABLE both mention ON COMMIT:

 http://www.postgresql.org/docs/7.3/static/sql-createtable.html#SQL-CREATETABLE-COMPATIBILITY
 http://www.postgresql.org/docs/7.2/static/sql-createtable.html#SQL-CREATETABLE-COMPATIBILITY

 Should those versions describe ON COMMIT if they don't support it?

That section is describing features that are in the spec that we don't
support (or didn't then support).

regards, tom lane



Re: [GENERAL] create temp table and on commit in 7.3.3

2005-02-09 Thread Michael Fuhr
On Wed, Feb 09, 2005 at 03:40:49PM -0500, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
  The 7.3 and 7.2 documentation for CREATE TABLE both mention ON COMMIT:
 
  http://www.postgresql.org/docs/7.3/static/sql-createtable.html#SQL-CREATETABLE-COMPATIBILITY
  http://www.postgresql.org/docs/7.2/static/sql-createtable.html#SQL-CREATETABLE-COMPATIBILITY
 
  Should those versions describe ON COMMIT if they don't support it?
 
 That section is describing features that are in the spec that we don't
 support (or didn't then support).

Ah, right...just above there I see this:

The CREATE TABLE conforms to SQL92 Intermediate and to a subset of
SQL99, with exceptions listed below and in the descriptions above.

Sorry about that.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Tom Lane
Ed L. [EMAIL PROTECTED] writes:
 I'm trying to optimize a large query by looking at 
 EXPLAIN ANALYZE output.  Here's what I think may be 
 the most relevant snippet:

 21 -  Nested Loop  (cost=0.00..108.85 rows=1 width=1196) (actual 
 time=4769.59..4769.59 rows=0 loops=1)
 22   -  Nested Loop  (cost=0.00..64.78 rows=4 width=131) (actual 
 time=0.41..72.80 rows=1014 loops=1)
 ...
 34   -  Index Scan using idx_queryoutcome_occurrencehistory_key on 
 queryoutcome  (cost=0.00..10.28 rows=28 width=1065) (actual time=4.63..4.63 
 rows=0 loops=1014)
 35 Index Cond: (outer.key = 
 queryoutcome.occurrencehistory_key)

 If I understand these correctly, line 22's nested loop finished
 returning the last row 72.80ms into the query, and then line
 21's nested loop returns its 0 rows 4769.59ms into the query.  

No, you don't understand correctly.  The numbers are not time into the
query, they are elapsed time spent within this node and its
children.  The above says that we spent a total of 72.80 msec executing
line 22 and its children, and a total of 4.63*1014 msec executing line
34 (which has no children, so that's all in the indexscan).  That adds
up to 4767.62 msec, so the actual joining at line 21 took only 1.97 msec.
None of this tells you anything about how far into the query all this
stuff happened.  It is however clear that line 34 is the bulk of the
time.
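The arithmetic can be checked mechanically; this quick sketch just replays the numbers from the plan quoted above (all times in msec):

```python
# All figures copied from the EXPLAIN ANALYZE output discussed here.
inner_per_loop = 4.63    # node 34's per-execution "actual time" upper bound
loops = 1014             # node 34 ran once per row from the outer side
outer_total = 72.80      # node 22 and its children (loops=1)
node21_total = 4769.59   # node 21 including everything beneath it

inner_total = inner_per_loop * loops               # all 1014 index scans together
join_only = node21_total - (inner_total + outer_total)

assert round(inner_total + outer_total, 2) == 4767.62
assert round(join_only, 2) == 1.97                 # the join itself is cheap
```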

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 2:21, Tom Lane wrote:
 Ed L. [EMAIL PROTECTED] writes:
 
  21 -  Nested Loop  (cost=0.00..108.85 rows=1
  width=1196) (actual time=4769.59..4769.59 rows=0 loops=1) 22
-  Nested Loop  (cost=0.00..64.78 rows=4 width=131)
  (actual time=0.41..72.80 rows=1014 loops=1) ...
  34   -  Index Scan using
  idx_queryoutcome_occurrencehistory_key on queryoutcome 
  (cost=0.00..10.28 rows=28 width=1065) (actual
  time=4.63..4.63 rows=0 loops=1014) 35 Index
  Cond: (outer.key = queryoutcome.occurrencehistory_key)
 
  If I understand these correctly, line 22's nested loop
  finished returning the last row 72.80ms into the query, and
  then line 21's nested loop returns its 0 rows 4769.59ms into
  the query.

 No, you don't understand correctly.  The numbers are not time
 into the query, they are elapsed time spent within this node
 and its children.  The above says that we spent a total of
 72.80 msec executing line 22 and its children, and a total of
 4.63*1014 msec executing line 34 (which has no children, so
 that's all in the indexscan).  That adds up to 4767.62 msec,
 so the actual joining at line 21 took only 1.97 msec. None of
 this tells you anything about how far into the query all
 this stuff happened.  It is however clear that line 34 is the
 bulk of the time.

Very helpful, thanks.  So time spent in that node & its children 
= first number of actual time * loops?  That seems consistent 
with the fact that reindexing the index led to the huge speedup.  
If the second number of the actual time part means time 
elapsed in this node and its children until the last row was 
returned, why does it say actual time=4.63..4.63 instead of 
actual time=4.63..4767.62?  Would it say that if there had 
been 1 row returned instead of none?

Ed




Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Tom Lane
Joe Lester [EMAIL PROTECTED] writes:
 Watch out for the memory leak bug on OS 10.3.2 though 10.3.5 (this is 
 apparently an OS X bug, not a Postgres bug):
 http://archives.postgresql.org/pgsql-general/2004-08/msg00972.php
 I don't know if it exists on 10.3.6 or 10.3.7 (the current version).

[ tries it... ]  Yup, still there in 10.3.7.

regards, tom lane

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Tom Lane
Ed L. [EMAIL PROTECTED] writes:
 If the second number of the actual time part means time 
 elapsed in this node and its children until the last row was 
 returned, why does it say actual time=4.63..4.63 instead of 
 actual time=4.63..4767.62?

The reason we do that is to make the actual numbers comparable to the
way that the estimated costs are presented.  The estimated costs are for
a single execution of the node, but the inner side of a nestloop join is
re-executed for each row coming from the outer side.  In this case, the
outer side produced 1014 rows so we ran 1014 separate executions of the
inner indexscan.  If we just reported 4767.62 as the total time, it
would look like the estimated indexscan cost of 10.28 was completely out
of whack, which it isn't.  So instead, we report a per-execution cost
and a loops number that you have to multiply by if you are thinking
about total time spent.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


[GENERAL] FOR vs Cursors

2005-02-09 Thread Van Ingen, Lane
After reading the manual (8.0), I am not certain what real advantage there is 
to using a cursor over using a FOR loop, since FOR automatically uses a cursor 
anyway. They both avoid memory problems. Have I missed something, or are there 
other reasons?



Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Martijn van Oosterhout
On Wed, Feb 09, 2005 at 02:37:39PM -0700, Ed L. wrote:
 Very helpful, thanks.  So time spent in that node & its children 
 = first number of actual time * loops?  That seems consistent 
 with the fact that reindexing the index led to the huge speedup.  
 If the second number of the actual time part means time 
 elapsed in this node and its children until the last row was 
 returned, why does it say actual time=4.63..4.63 instead of 
 actual time=4.63..4767.62?  Would it say that if there had 
 been 1 row returned instead of none?

The meaning of the times is (IIRC) derived from the numbers used in the
planning. The first number is time to first result, the second number
is time to last result. So if you're doing a LIMIT 1 query, only
the first number is relevant.

The purpose of the explain output is to determine where the bottlenecks
are and as such each node only counts time spent in it and all its
subnodes. So for example a Sort node has a really high start time
(because all subnodes must complete before sorting can complete) but
the end time is shortly after because it only needs to pump out tuples.

The complication comes because nodes can be executed in their entirety
repeatedly, especially with nested joins, the inner loop is repeatedly
executed with different parameters. I was one of the major contributors
of this feature originally and there was discussion about how to
represent this. I think the rationale is that the figures should not
appear radically different just because a Node is reexecuted a lot, so
you get an extra result "loops" and the numbers look like a single
index scan. If you look at the numbers the planner uses, it follows the
same pattern.

In general, the EXPLAIN ANALYZE output follows the planner output as
close as possible. If you look at the original query posted, it showed
an Index Scan costing 4.63..4.63 which means the index scan is taking
(on average) 4.63ms to return 1 row! If it displayed as 4000 or
something it would be a lot less clear what was going on.

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 3:13, Martijn van Oosterhout wrote:

 In general, the EXPLAIN ANALYZE output follows the planner
 output as close as possible. If you look at the original query
 posted, it showed an Index Scan costing 4.63..4.63 which means
 the index scan is taking (on average) 4.63ms to return 1 row!
 If it displayed as 4000 or something it would be a lot less
 clear what was going on.

 Hope this helps,

Both replies help very much, thanks.  Regarding 4.63ms to return 
a row, I know I/O speeds vary widely depending on conditions and 
hardware, but what would you say is a red-flag value for the 
actual time to retrieve a row?  I see that for most of the other 
times in this example, 0.01ms to 0.10ms is not unusual.

Ed


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] Can't build libpq test example

2005-02-09 Thread Bruce Momjian

Strange, it looks OK to me with the -lpq.  My guess is that it is some
SCO strangeness but I am not sure what.  Do an 'nm' on libpq.so and see
if those symbols are in there.

---

Jim Morcombe wrote:
 Can anyone help me with this problem?
 
 I have installed postgresql-7.2.1 and am trying to compile one of the 
 test programs from the directory src/test/examples.
 
 I am running an old version of SCO Openserver 5
 
 I am using the following command:
 
   cc testlibpq.c -I/usr/local/pgsql/include -L/usr/local/pgsql/lib -lpq 
 -otestlibpq
 
 I receive the following error messages:
 
 undefined   first referenced
   symbol in file
 PQfinishtestlibpq.o
 PQsetdbLogintestlibpq.o
 PQstatustestlibpq.o
 PQerrorMessage  testlibpq.o
 PQexec  testlibpq.o
 PQresultStatus  testlibpq.o
 PQclear testlibpq.o
 PQnfields   testlibpq.o
 PQfname testlibpq.o
 PQgetvalue  testlibpq.o
 PQntuples   testlibpq.o
 ld fatal: Symbol referencing errors. No output written to testlibpq
 
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Martijn van Oosterhout
On Wed, Feb 09, 2005 at 03:38:32PM -0700, Ed L. wrote:
 On Wednesday February 9 2005 3:13, Martijn van Oosterhout wrote:
 
  In general, the EXPLAIN ANALYZE output follows the planner
  output as close as possible. If you look at the original query
  posted, it showed an Index Scan costing 4.63..4.63 which means
  the index scan is taking (on average) 4.63ms to return 1 row!
  If it displayed as 4000 or something it would be a lot less
  clear what was going on.
 
  Hope this helps,
 
 Both replies help very much, thanks.  Regarding 4.63ms to return 
 a row, I know I/O speeds vary widely depending on conditions and 
 hardware, but what would you say is a red flag value for the 
 actual time to retrieve a row.  I see that for most of the other 
 times in this example, 0.01ms to 0.10ms is not unusual.

That number 4.63ms is way out, especially considering this is an
*average* over 1000+ iterations, which tells you something is very wrong
with that step. Note, an Index Scan can take a long time; it completely
depends on how many rows it returns. Also, if there is an additional
filter on the index (someotherfield = 'x') it may take a while to
return a single value.

But for unique indexes on primary keys (which is what this appears to
be) the times should hopefully be under 1ms *average* (caching and load are
important factors).

Hope this helps,
-- 
Martijn van Oosterhout   kleptog@svana.org   http://svana.org/kleptog/
 Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
 tool for doing 5% of the work and then sitting around waiting for someone
 else to do the other 95% so you can sue them.




Re: [GENERAL] Can't build libpq test example

2005-02-09 Thread Larry Rosenman
Bruce Momjian wrote:
 Strange, it looks OK to me with the -lpq.  My guess is that it is
 some SCO strangeness but I am not sure what.  Do an 'nm' on libpq.so
 and see if those symbols are in there.  
 

---
 
 Jim Morcombe wrote:
 Can anyone help me with this problem?
 
 I have installed postgresql-7.2.1 and am trying to compile one of the
 test programs from the directory src/test/examples.
 
 I am running an old version of SCO Openserver 5
 
 I am using the following command:
 
   cc testlibpq.c -I/usr/local/pgsql/include -L/usr/local/pgsql/lib
 -lpq -otestlibpq 
 
 I receive the following error messages:
 
 undefined   first referenced
   symbol in file
 PQfinishtestlibpq.o
 PQsetdbLogintestlibpq.o
 PQstatustestlibpq.o
 PQerrorMessage  testlibpq.o
 PQexec  testlibpq.o
 PQresultStatus  testlibpq.o
 PQclear testlibpq.o
 PQnfields   testlibpq.o
 PQfname testlibpq.o
 PQgetvalue  testlibpq.o
 PQntuples   testlibpq.o
 ld fatal: Symbol referencing errors. No output written to testlibpq
 

And, are you sure you have the right libpq in /usr/local/pgsql/lib ?



-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 972-414-9812 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
Thinking about how to make this analysis faster and less labor-intensive ...

I know of no other way to get the detailed performance data provided
via EXPLAIN ANALYZE without just painfully disassembling a query.   
It seems it would be pretty useful w/r/t performance monitoring to 
be able to retrieve such performance numbers as those in EXPLAIN 
ANALYZE in a rowset via query.  That would seem to enable automated
identification of things like single rows taking 4.63ms to retrieve.
I can think of a number of application queries for which I would 
like to do this sort of analysis routinely across a bunch of 
database clusters.  I guess one could just parse the explain output
in the meantime but, dreaming a bit here, for example,

   SELECT node_id, op, parent_node_id, index, relation,
  cost_first, cost_last, cost_rows, cost_width,
  actual_first, actual_last, actual_rows, actual_loops,
  index_condition
   FROM pg_explain_analyze('SELECT * FROM foo');

with output similar to 

 node_id | op  | parent_node_id ...  actual_last | actual_rows | 
actual_loops ...
-+-+...--+-+--...
  21 | Nested Loop | 20 ...72.80 |1014 | 1
  22 | Nested Loop | 21 ...46.51 |1014 | 1
...
  34 | Index Scan  | 21 ... 4.63 |   0 | 1014
...

Then, as a routine measure, catch those cases like this one,

SELECT sql, op, index, relation, actual_first
FROM pg_explain_analyze('SELECT * FROM foo')
WHERE op = 'Index Scan'
  AND actual_first > 1.0;

Thankfully, I'm sure there are a lot of skilled postgresql'ers 
just sitting around right now wishing they had something to do.  

Ed
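In the meantime, the "parse the explain output" fallback is not hard to sketch. The column names below follow Ed's hypothetical pg_explain_analyze() proposal (nothing here is a real PostgreSQL API), and the pattern only handles the usual "(cost=...) (actual time=...)" node lines:

```python
import re

# Matches the cost/actual-time tail that EXPLAIN ANALYZE prints per plan node.
NODE_RE = re.compile(
    r"\(cost=([\d.]+)\.\.([\d.]+) rows=(\d+) width=(\d+)\)\s*"
    r"\(actual time=([\d.]+)\.\.([\d.]+) rows=(\d+) loops=(\d+)\)"
)
FIELDS = ("cost_first", "cost_last", "cost_rows", "cost_width",
          "actual_first", "actual_last", "actual_rows", "actual_loops")

def parse_plan(text):
    """Return one dict per plan node found in EXPLAIN ANALYZE output."""
    nodes = []
    for line in text.splitlines():
        m = NODE_RE.search(line)
        if m is None:
            continue                     # skip "Index Cond:" and other annotations
        # Everything before "(cost=" is the node label; drop the "->" arrow
        # and the "using <index> on <relation>" tail to get the operator name.
        head = line[:m.start()].strip().lstrip("-> ").strip()
        op = head.split(" using ")[0].split(" on ")[0]
        row = dict(zip(FIELDS,
                       (float(v) if "." in v else int(v) for v in m.groups())))
        row["node_id"] = len(nodes)
        row["op"] = op
        nodes.append(row)
    return nodes
```

Filtering the result for index scans with actual_first above 1.0 then reproduces the "catch the slow scans" query in plain Python, without any server-side support.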




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread David Fetter
On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
 Thinking about how to make this analysis faster and less labor-intensive ...
 
 I know of no other way to get the detailed performance data provided
 via EXPLAIN ANALYZE without just painfully disassembling a query.   
 It seems it would be pretty useful w/r/t performance monitoring to 
 be able to retrieve such performance numbers as those in EXPLAIN 
 ANALYZE in a rowset via query.  That would seem to enable automated
 identification of things like single rows taking 4.63ms to retrieve.
 I can think of a number of application queries for which I would 
 like to do this sort of analysis routinely across a bunch of 
 database clusters.  I guess one could just parse the explain output
 in the meantime but, dreaming a bit here, for example,
 
SELECT node_id, op, parent_node_id, index, relation,
   cost_first, cost_last, cost_rows, cost_width,
   actual_first, actual_last, actual_rows, actual_loops,
   index_condition
FROM pg_explain_analyze('SELECT * FROM foo');
 
 with output similar to 
 
  node_id | op  | parent_node_id ...  actual_last | actual_rows | 
 actual_loops ...
 -+-+...--+-+--...
   21 | Nested Loop | 20 ...72.80 |1014 | 1
   22 | Nested Loop | 21 ...46.51 |1014 | 1
 ...
   34 | Index Scan  | 21 ... 4.63 |   0 | 1014
 ...
 
 Then, as a routine measure, catch those cases like this one,
 
 SELECT sql, op, index, relation, actual_first
 FROM pg_explain_analyze('SELECT * FROM foo')
 WHERE op = 'Index Scan'
   AND actual_first > 1.0;
 
 Thankfully, I'm sure there are a lot of skilled postgresql'ers 
 just sitting around right now wishing they had something to do.  

Well, I'm a little bored; I've got tomorrow off, and this seems like
it might be doable in the kind of high-level PL/Foo's with which I'm
familiar.  What would the returning rowtype for
pg_explain_analyze(TEXT) be?

Cheers,
D
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100   mobile: +1 415 235 3778

Remember to vote!



Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Joshua D. Drake
David Fetter wrote:
On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
 

Thinking about how to make this analysis faster and less labor-intensive ...
I know of no other way to get the detailed performance data provided
via EXPLAIN ANALYZE without just painfully disassembling a query.   
It seems it would be pretty useful w/r/t performance monitoring to 
be able to retrieve such performance numbers as those in EXPLAIN 
ANALYZE in a rowset via query.  That would seem to enable automated
identification of things like single rows taking 4.63ms to retrieve.
I can think of a number of application queries for which I would 
like to do this sort of analysis routinely across a bunch of 
database clusters.  I guess one could just parse the explain output
in the meantime but, dreaming a bit here, for example,

  SELECT node_id, op, parent_node_id, index, relation,
 cost_first, cost_last, cost_rows, cost_width,
 actual_first, actual_last, actual_rows, actual_loops,
 index_condition
  FROM pg_explain_analyze('SELECT * FROM foo');
with output similar to 

node_id | op  | parent_node_id ...  actual_last | actual_rows | 
actual_loops ...
-+-+...--+-+--...
 21 | Nested Loop | 20 ...72.80 |1014 | 1
 22 | Nested Loop | 21 ...46.51 |1014 | 1
...
 34 | Index Scan  | 21 ... 4.63 |   0 | 1014
...
Then, as a routine measure, catch those cases like this one,
   SELECT sql, op, index, relation, actual_first
   FROM pg_explain_analyze('SELECT * FROM foo')
   WHERE op = 'Index Scan'
 AND actual_first > 1.0;
Thankfully, I'm sure there are a lot of skilled postgresql'ers 
just sitting around right now wishing they had something to do.  
   

Well, I'm a little bored; I've got tomorrow off, and this seems like
it might be doable in the kind of high-level PL/Foo's with which I'm
familiar.  What would the returning rowtype for
pg_explain_analyze(TEXT) be?
 

You could return it as formatted text, if you want to make it simple.
J

Cheers,
D
 


--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




[GENERAL] Functions with more than 32 parameters

2005-02-09 Thread Oisin Glynn
Hi,
I am trying to keep database access encapsulated in functions, from an insert 
and update point of view, so that all of the applications writing to the 
database are using the same functions/statements.

My problem is that I was creating functions to do inserts/updates, and now I 
have a table with more than 32 columns but the functions seem to accept at 
most 32 parameters.

Is there any way around this?
I have used stored procedures in this way with Oracle/SQL Server before. Or 
is there another strategy I could use?

I am using Windows 2000 Postgres 8.0.1 if it makes a difference
Thanks in advance,
Oisin


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 7:31, David Fetter wrote:
 On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
  Thinking about how to make this analysis faster and less labor-intensive ...
 
 SELECT node_id, op, parent_node_id, index, relation,
cost_first, cost_last, cost_rows, cost_width,
actual_first, actual_last, actual_rows,
  actual_loops, index_condition
 FROM pg_explain_analyze('SELECT * FROM foo');
 
  Thankfully, I'm sure there are a lot of skilled
  postgresql'ers just sitting around right now wishing they
  had something to do.

 Well, I'm a little bored; I've got tomorrow off, and this
 seems like it might be doable in the kind of high-level
 PL/Foo's with which I'm familiar.  What would the returning
 rowtype for
 pg_explain_analyze(TEXT) be?

LOL.  If you're serious, I was thinking of something like this:

node_id integer not null unique
op varchar:  Index Scan|Hash Join|Seq Scan|...
parent_node_id integer:  node_id of parent node
index varchar:  name of index for scan (oid better?)
relation varchar:  name of relation for scan (oid better?)
cost_first float:  first cost number
cost_last float:  second cost number
cost_rows bigint
cost_width integer
actual_first float:  time to first row
actual_last float:  time to last row
actual_rows bigint
actual_loops bigint
condition varchar:  what comes after Index Cond:|Hash Cond:|...
total_time float:  total query time

How's that?

That might be a start.  This is half-baked and you may be
the new head chef, so feel free.

Ed
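For reference, Ed's field list maps naturally onto a record type. A sketch in Python, where the rowtype, pg_explain_analyze(), and every field name are hypothetical, taken straight from the proposal above:

```python
from dataclasses import dataclass
from typing import Optional

# Ed's proposed rowtype transcribed as a record type -- purely illustrative;
# nothing here exists in PostgreSQL.
@dataclass
class PlanNode:
    node_id: int
    op: str                        # Index Scan | Hash Join | Seq Scan | ...
    parent_node_id: Optional[int]  # node_id of parent node
    index: Optional[str]           # name of index for scan
    relation: Optional[str]        # name of relation for scan
    cost_first: float              # first cost number
    cost_last: float               # second cost number
    cost_rows: int
    cost_width: int
    actual_first: float            # time to first row
    actual_last: float             # time to last row
    actual_rows: int
    actual_loops: int
    condition: Optional[str]       # what comes after Index Cond:/Hash Cond:
    total_time: float              # total query time

# Node 34 from the plan discussed earlier in this thread:
node = PlanNode(34, "Index Scan", 21,
                "idx_queryoutcome_occurrencehistory_key", "queryoutcome",
                0.00, 10.28, 28, 1065, 4.63, 4.63, 0, 1014,
                "outer.key = queryoutcome.occurrencehistory_key", 4769.59)
```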




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Ed L.
On Wednesday February 9 2005 8:07, Ed L. wrote:
 
  Well, I'm a little bored; I've got tomorrow off, and this
  seems like it might be doable in the kind of high-level
  PL/Foo's with which I'm familiar.  What would the returning
  rowtype for
  pg_explain_analyze(TEXT) be?

 LOL.  If you're serious, I was thinking of something like
 this:

   node_id integer not null unique
   op varchar:  Index Scan|Hash Join|Seq Scan|...
   parent_node_id integer:  node_id of parent node
   index varchar:  name of index for scan (oid better?)
   relation varchar:  name of relation for scan (oid better?)
   cost_first float:  first cost number
   cost_last float:  second cost number
   cost_rows bigint
   cost_width integer
   actual_first float:  time to first row
   actual_last float:  time to last row
   actual_rows bigint
   actual_loops bigint
   condition varchar:  what comes after Index Cond:|Hash Cond:|...
   total_time float:  total query time

You might drive the rowtype by what you (alias I) hope to get out 
of it:  1) easy discovery of the slowest parts of the query, 2) 
really slow avg row retrieval times (like the 4.63ms/row 
discussed earlier in this thread), and I don't know what else.  
Part 2 seems straightforward with this scheme, less sure about 
part 1.

Ed




Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Michael Fuhr
On Wed, Feb 09, 2005 at 06:31:11PM -0800, David Fetter wrote:
 On Wed, Feb 09, 2005 at 04:34:30PM -0700, Ed L. wrote:
 
  SELECT sql, op, index, relation, actual_first
  FROM pg_explain_analyze('SELECT * FROM foo')
  WHERE op = 'Index Scan'
    AND actual_first > 1.0;
  
  Thankfully, I'm sure there are a lot of skilled postgresql'ers 
  just sitting around right now wishing they had something to do.  
 
 Well, I'm a little bored; I've got tomorrow off, and this seems like
 it might be doable in the kind of high-level PL/Foo's with which I'm
 familiar.  What would the returning rowtype for
 pg_explain_analyze(TEXT) be?

I started looking at this earlier.  Is it even possible to get
EXPLAIN output via SPI (and thus the PLs' interfaces to SPI)?  Or
would the function have to make a new connection to the database
and issue the EXPLAIN as an ordinary client?  I suppose a proper
implementation would be written in C and would call the internal
functions that generate the output, but I was thinking about how
to do something simple in a higher-level PL.

As for the return type, I was going to look at explain_outNode()
in src/backend/commands/explain.c and get some ideas from there.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/



Re: [GENERAL] What talk would get you to go to OSCon?

2005-02-09 Thread Bruce Momjian
Aaron Glenn wrote:
 On Tue, 8 Feb 2005 14:29:08 -0500, Robert Treat
 [EMAIL PROTECTED] wrote:
  Slony
 
 Yes.
 
  High Availability
 
 Yes.
 
  If you have other ideas please feel free to chime in, we'd really like to 
  see
  an uptick in postgresql attendees.
 
 Will Bruce and Tom be attending this year like they did in 2002?

I will be there.  Josh has asked me about a state of PostgreSQL talk,
and perhaps a talk about how the Win32 port was done.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073



Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Greg Stark
Ed L. [EMAIL PROTECTED] writes:

 and I don't know what else.  

How about being able to search for queries where expected rows and actual rows
are far apart.

-- 
greg




Re: [GENERAL] Functions with more than 32 parameters

2005-02-09 Thread Oisin Glynn
Any advice for an open-source newbie running on Windows 2000? How hard is it 
to rebuild for Windows? Any instructions? What compiler?

Oisin
- Original Message - 
From: Bradley D. Snobar [EMAIL PROTECTED]
To: Oisin Glynn [EMAIL PROTECTED]
Sent: Wednesday, February 09, 2005 10:40 PM
Subject: RE: [GENERAL] Functions with more than 32 parameters


I just had the same problem the other day.
Change this file:
/postgresql-8.0.0/src/include/pg_config_manual.h
#define INDEX_MAX_KEYS  64   /* was 32 */
#define FUNC_MAX_ARGS   INDEX_MAX_KEYS
Recompile the server.
make distclean
./configure
make
make install
Oisin Glynn [EMAIL PROTECTED] wrote:
Hi,
I am trying to keep database access encapsulated in functions from an 
insert
and update point of view so as all of the applications writing to the
database are using the same functions/statements.

My problem is I was creating functions to do inserts/updates and now I 
have
a table with more than 32 columns and the functions seem to only want to
have 32 parameters?

Is there any way around this?
I have used stored procedures in this way with Oracle/SQL server before.Or
is there another strategy I could use?
I am using Windows 2000 Postgres 8.0.1 if it makes a difference
Thanks in advance,
Oisin



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Tom Lane
Michael Fuhr [EMAIL PROTECTED] writes:
 I started looking at this earlier.  Is it even possible to get
 EXPLAIN output via SPI (and thus the PLs' interfaces to SPI)?

Hmm ... my first reaction was sure, but:

regression=# create or replace function foo() returns setof text as $$
regression$# declare x record;
regression$# begin
regression$# for x in explain select * from tenk1 where unique1 = 42
regression$# loop
regression$#   return next x."QUERY PLAN";
regression$# end loop;
regression$# return;
regression$# end$$ language plpgsql;
CREATE FUNCTION
regression=# select * from foo();
ERROR:  cannot open non-SELECT query as cursor
CONTEXT:  PL/pgSQL function foo line 3 at for over select rows

(And no, EXECUTE doesn't help.)  This seems like an oversight.  We
already have some understanding in the backend that certain utility
commands return query results; the SPI code should be letting those
results be scanned as if they were ordinary SELECT results.

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] a SELECT FOR UPDATE question

2005-02-09 Thread Michael Fuhr
On Tue, Feb 08, 2005 at 01:45:44AM -0500, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
  On Tue, Feb 08, 2005 at 12:58:34AM -0500, Tom Lane wrote:
  I'm too tired to work out an example, but I think this probably doesn't
  work in general: the xmax on the version of the row you can see might
  not correspond to a live transaction, but that doesn't mean someone
  else doesn't hold a lock on the latest committed version of the row.
 
  If you could point me in the right direction I'll try to work out
  an example where my suggestion fails.
 
 I'm thinking about a multiple-update situation: your snapshot includes
 row version A, which was superseded by version B, which was superseded
 by version C.  By the time you are looking, the transaction that
 committed version B is gone so the xmax you see (B's xact) isn't locked
 anymore.  But the frontmost version of the row is still locked (by C
 or some later heir) so if you tried to update you'd block.

I've been playing with this and I'm thinking the problem you describe
could happen due to a race condition between finding a particular
transaction ID in xmax and then checking if that ID is locked.  Example:

xactA: updates row
xactB: attempts to update same row, blocks until xactA completes
xactC: query finds xactA in row's xmax
xactA: commits
xactB: unblocks and acquires a lock on the row
xactC: query to pg_locks doesn't find xactA, so assumes row not locked

Does that sound like what you're talking about?  A new query by
xactC at this point would show xactB in xmax, but that doesn't do
us any good if we've already made a decision based on the previous
queries.

In any case, whatever a transaction learns from such a check could
be out of date by the time it acts on the information, so I'm not
sure how useful it would be.
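For concreteness, the kind of probe being discussed looks roughly like this (table and column names are made up; the pg_locks column is "transaction" as of 8.0, and the whole sequence is subject to exactly the race described above):

```sql
-- Hypothetical lock probe, illustrative only.
-- (1) Read the xmax of the row version visible to us:
SELECT xmax FROM mytable WHERE id = 42;
-- (2) Check whether that transaction still appears in pg_locks,
--     plugging in the xmax value from step 1:
SELECT 1 FROM pg_locks WHERE transaction = 1234;
-- Between (1) and (2) the row can be superseded and re-locked by a
-- later transaction, so "no match" does not prove the row is unlocked.
```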

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Understanding EXPLAIN ANALYZE output

2005-02-09 Thread Michael Fuhr
On Wed, Feb 09, 2005 at 11:36:22PM -0500, Tom Lane wrote:
 Michael Fuhr [EMAIL PROTECTED] writes:
  I started looking at this earlier.  Is it even possible to get
  EXPLAIN output via SPI (and thus the PLs' interfaces to SPI)?
 
 Hmm ... my first reaction was sure, but:
[snip]
 ERROR:  cannot open non-SELECT query as cursor

That's what I saw when I started playing with this.  I wasn't sure
if I was overlooking something or if SPI really didn't allow it.

 (And no, EXECUTE doesn't help.)  This seems like an oversight.  We
 already have some understanding in the backend that certain utility
 commands return query results; the SPI code should be letting those
 results be scanned as if they were ordinary SELECT results.

Any suggestions for the meantime?

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[GENERAL] pg_affected Change Request

2005-02-09 Thread Jan
I've been using PostgreSQL for about three months.  So far my only
objection is that "pg_affected" provides the number of attempts rather
than the number of rows that are actually changed.  I could easily do
that with a little counter.

I think PostgreSQL should be changed to return the number of rows
actually changed (whether updated, inserted or deleted).


Re: [GENERAL] Postgresql and Macintosh

2005-02-09 Thread Jonel Rienton
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

seems like it still exists in 10.3.8 which was just released today

- -
Jonel Rienton
http://blogs.road14.com
Software Developer, *nix Advocate
On Feb 9, 2005, at 3:40 PM, Tom Lane wrote:

 Joe Lester [EMAIL PROTECTED] writes:
 Watch out for the memory leak bug on OS 10.3.2 though 10.3.5 (this is
 apparently an OS X bug, not a Postgres bug):
 http://archives.postgresql.org/pgsql-general/2004-08/msg00972.php
 I don't know if it exists on 10.3.6 or 10.3.7 (the current version).

 [ tries it... ]  Yup, still there in 10.3.7.

   regards, tom lane

 ---(end of 
 broadcast)---
 TIP 9: the planner will ignore your desire to choose an index scan if 
 your
   joining column's datatypes do not match


-BEGIN PGP SIGNATURE-
Version: PGP 8.1

iQA/AwUBQgrFh+AwOVAnbsGCEQJh+ACdG9eLNwpCXVw6G7u6oSbMoPL3MSoAoK25
VHagUygyifgWMHvnCe/slcek
=GXHq
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [GENERAL] pg_affected Change Request

2005-02-09 Thread Michael Fuhr
On Wed, Feb 09, 2005 at 09:50:41PM -0500, Jan wrote:

 I've been using PostgreSQL for about three months.  So far my
 only objection is that pg_affected provides the number of attempts
 rather than the number of rows that are actually changed.  I could
 easily do that with a little counter.

Please provide some context:  What pg_affected are you talking
about?  How are you determining that it provides attempts instead
of actual?

 I think PostgreSQL should be changed to return the number of rows
 actually changed (whether updated, inserted or deleted).

Please provide an example of what you're doing, what results you
expect or would like to see, and what results you actually get.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[GENERAL] Any functions to convert bit(5) to text?

2005-02-09 Thread Együd Csaba
Hi,
I use pg8 and ZeosLib 6.5.1. Zeos can't read fields of type array and bit,
so I need to convert these fields to text on the server side. 

In case of arrays I can use array_to_string(), but I couldn't find any
functions to convert bit(*) to string (in a format like this: e.g. '10011').


Could anybody suggest a built-in function, or should I write one myself?

Thank you,

-- Csaba



---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Any functions to convert bit(5) to text?

2005-02-09 Thread Michael Fuhr
On Thu, Feb 10, 2005 at 07:32:35AM +0100, Együd Csaba wrote:

 In case of arrays I can use array_to_string(), but I couldn't find any
 functions to convert bit(*) to string (in a format like this: e.g. '10011').

I don't know if this will always work, but with current implementations
of PL/pgSQL you can do this:

CREATE FUNCTION bit2text(bit) RETURNS text AS '
BEGIN
RETURN $1;
END;
' LANGUAGE plpgsql IMMUTABLE STRICT;
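A quick usage check, assuming the function above has been created (untested here; relies on PL/pgSQL's coercion on RETURN behaving as described):

```sql
SELECT bit2text(B'10011');  -- should yield the text '10011'
```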

See also CREATE CAST.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Any functions to convert bit(5) to text?

2005-02-09 Thread Michael Fuhr
On Thu, Feb 10, 2005 at 12:00:31AM -0700, Michael Fuhr wrote:
 
 CREATE FUNCTION bit2text(bit) RETURNS text AS '
 BEGIN
 RETURN $1;
 END;
 ' LANGUAGE plpgsql IMMUTABLE STRICT;

This also appears to work:

CREATE TABLE foo (b varbit);
INSERT INTO foo VALUES ('10011');
SELECT textin(bit_out(b)) FROM foo;

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [GENERAL] Any functions to convert bit(5) to text?

2005-02-09 Thread Michael Fuhr
On Thu, Feb 10, 2005 at 12:16:53AM -0700, Michael Fuhr wrote:

 SELECT textin(bit_out(b)) FROM foo;

Is it generally true that you can convert between types using the
above method, assuming compatible syntax?  That is, using the source
type's output function to get a cstring that you pass to the target
type's input function?  Are there any potential problems with doing
this?  Is that how the PL/pgSQL RETURN trick works?  It seems a
handy way to cast one type to another when no built-in cast or
function exists.

-- 
Michael Fuhr
http://www.fuhr.org/~mfuhr/

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings