Re: [SQL] [GENERAL] table column vs. out param [1:0]

2007-08-27 Thread Kristo Kaiv


On 23.08.2007, at 11:55, Albe Laurenz wrote:

CREATE FUNCTION b(status OUT integer, status_text OUT text)
  LANGUAGE plpgsql STABLE STRICT AS
$$DECLARE
   RENAME status TO out_status;
   RENAME status_text TO out_status_text;
BEGIN
   SELECT status, status_text
  FROM a()
  INTO out_status, out_status_text;
   RETURN;
END;$$;

See
http://www.postgresql.org/docs/current/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RENAMING-VARS

Yours,
Laurenz Albe
From http://www.postgresql.org/docs/current/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RENAMING-VARS
Note: RENAME appears to be broken as of PostgreSQL 7.3. Fixing this
is of low priority, since ALIAS covers most of the practical uses of
RENAME.
Seems to work though. Could somebody please confirm/reject that this  
has been fixed?


Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)
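For comparison, the ALIAS form that the documentation recommends in place of RENAME might look like this (a sketch only, based on Laurenz's example; $1 and $2 are the positional aliases for the two OUT parameters):

```sql
CREATE FUNCTION b(status OUT integer, status_text OUT text)
  LANGUAGE plpgsql STABLE STRICT AS
$$DECLARE
   out_status ALIAS FOR $1;        -- alias the OUT parameter "status"
   out_status_text ALIAS FOR $2;   -- alias the OUT parameter "status_text"
BEGIN
   SELECT status, status_text
     FROM a()
     INTO out_status, out_status_text;
   RETURN;
END;$$;
```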




Re: [SQL] List of FKeys ?

2007-08-27 Thread Ashish Karalkar


- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>

To: "Andreas" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, August 27, 2007 9:18 AM
Subject: Re: [SQL] List of FKeys ?



Andreas <[EMAIL PROTECTED]> writes:

could I get a list of foreign keys that refer to a column?


The information_schema views constraint_column_usage and
referential_constraints might help you, or you could dive into the
underlying system catalogs.

regards, tom lane

---(end of broadcast)---
TIP 6: explain analyze is your friend


Hey Andreas, if your problem is not solved, use the following stored
procedure; I use it for the same reason.


Just pass the primary key column name and the primary key value, and it will
return a list of child tables.



CREATE OR REPLACE FUNCTION sp_gen_foreign_keys_tables(OUT par_result text,
OUT par_childtables text, IN par_colname character varying,
IN par_colvalue integer) AS

$BODY$

DECLARE
   curforeign refcursor;
   tablename  text;
   columnname text;
   var_str1   text;
   var_str2   text;
   countno    integer;
BEGIN
   par_result := 'Successful';
   par_childtables := '';

   -- All child tables/columns whose foreign keys point at the given column.
   var_str1 := 'select distinct fk_relation, fk_column '
            || 'from core_foreign_keys_view '
            || 'where pk_relation in (select pk_relation '
            || 'from core_foreign_keys_view '
            || 'where pk_column = ' || quote_literal(par_colname) || ')';

   OPEN curforeign FOR EXECUTE var_str1;
   LOOP
      FETCH curforeign INTO tablename, columnname;
      EXIT WHEN NOT FOUND;

      -- Count the child rows that actually reference the given key value.
      var_str2 := 'select count(*) from ' || tablename
               || ' where ' || columnname || ' = ' || par_colvalue;
      EXECUTE var_str2 INTO countno;

      IF countno > 0 THEN
         par_childtables := par_childtables || tablename || '.' || columnname || ',';
      END IF;
   END LOOP;
   CLOSE curforeign;
END;

$BODY$
LANGUAGE plpgsql VOLATILE;



Hope this will help


With Regards
Ashish 



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match
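For reference, the information_schema route Tom Lane suggested can be sketched as a join across the standard views (table and column names here are placeholders, and a constraint_schema qualification may also be needed in practice):

```sql
-- Foreign keys that reference column "id" of table "parent".
SELECT rc.constraint_name,
       kcu.table_name  AS referencing_table,
       kcu.column_name AS referencing_column
  FROM information_schema.referential_constraints rc
  JOIN information_schema.constraint_column_usage ccu
       ON ccu.constraint_name = rc.constraint_name
  JOIN information_schema.key_column_usage kcu
       ON kcu.constraint_name = rc.constraint_name
 WHERE ccu.table_name  = 'parent'  -- the referenced table (placeholder)
   AND ccu.column_name = 'id';     -- the referenced column (placeholder)
```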


[SQL] SQL function

2007-08-27 Thread ashok raj
Hi,
   Is there any function in PostgreSQL to find out the size of the data in a
row of a table?
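One way to answer this, assuming PostgreSQL 8.1 or later where pg_column_size() is available (the table and column names below are placeholders):

```sql
SELECT pg_column_size(t.*)        AS row_bytes,  -- stored size of the whole row
       pg_column_size(t.some_col) AS col_bytes   -- stored size of one column's value
  FROM some_table t;
```

For character data, octet_length() gives the size in bytes of the value itself.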


Re: [SQL] Block size with pg_dump?

2007-08-27 Thread Jean-David Beyer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Bruce Momjian wrote:
> Erik Jones wrote:
> On the way back, likewise I could pipe the tape through dd before  
> giving it
> to pg_restore.
>
> Does pg_dump care what blocksize it gets? If so, what is it?
 I assume you could pipe pg_dump into dd and specify the block size in
 dd.

>>> Of course on the way out I can do that.
>>>
>>> The main question is, If I present pg_restore with a 65536-byte  
>>> blocksize
>>> and it is expecting, e.g., 1024-bytes, will the rest of each block get
>>> skipped? I.e., do I have to use dd on the way back too? And if so,  
>>> what
>>> should the blocksize be?
>> Postgres (by default) uses 8K blocks.
> 
> That is true of the internal storage, but not of pg_dump's output
> because it is using libpq to pull rows and output them in a stream,
> meaning there is no blocking in pg_dumps output itself.
> 
Is that true for both input and output (i.e., pg_restore and pg_dump)?
I.e., can I use dd to write 65536-byte blocks to tape, and then do nothing
on running pg_restore? I.e., that pg_restore will accept any block size I
choose to offer it?

- --
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A Registered Machine   241939.
 /( )\ Shrewsbury, New Jerseyhttp://counter.li.org
 ^^-^^ 08:25:01 up 18 days, 11:47, 2 users, load average: 4.34, 4.31, 4.27
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with CentOS - http://enigmail.mozdev.org

iD8DBQFG0sNpPtu2XpovyZoRAvVpAKCD0YPHpZVXwIweDwDfozA/79XJSACg0Jao
qmFsnsJpy8209W8CGwhJ31Y=
=u7p6
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Block size with pg_dump?

2007-08-27 Thread Bruce Momjian
Jean-David Beyer wrote:
> >>> The main question is, If I present pg_restore with a 65536-byte  
> >>> blocksize
> >>> and it is expecting, e.g., 1024-bytes, will the rest of each block get
> >>> skipped? I.e., do I have to use dd on the way back too? And if so,  
> >>> what
> >>> should the blocksize be?
> >> Postgres (by default) uses 8K blocks.
> > 
> > That is true of the internal storage, but not of pg_dump's output
> > because it is using libpq to pull rows and output them in a stream,
> > meaning there is no blocking in pg_dumps output itself.
> > 
> Is that true for both input and output (i.e., pg_restore and pg_dump)?
> I.e., can I use dd to write 65536-byte blocks to tape, and then do nothing
> on running pg_restore? I.e., that pg_restore will accept any block size I
> choose to offer it?

Yes.

-- 
  Bruce Momjian  <[EMAIL PROTECTED]>  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] [GENERAL] table column vs. out param [1:0]

2007-08-27 Thread Tom Lane
Kristo Kaiv <[EMAIL PROTECTED]> writes:
> From http://www.postgresql.org/docs/current/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RENAMING-VARS
> Note: RENAME appears to be broken as of PostgreSQL 7.3. Fixing this  
> is of low priority, since ALIAS covers most of the practical uses of  
> RENAME.
> Seems to work though. Could somebody please confirm/reject that this  
> has been fixed?

It "works" only for very small values of "works".  See the links in
the TODO item for it.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Block size with pg_dump?

2007-08-27 Thread Jean-David Beyer
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Bruce Momjian wrote:
> Jean-David Beyer wrote:
> The main question is, If I present pg_restore with a 65536-byte  
> blocksize
> and it is expecting, e.g., 1024-bytes, will the rest of each block get
> skipped? I.e., do I have to use dd on the way back too? And if so,  
> what
> should the blocksize be?
 Postgres (by default) uses 8K blocks.
>>> That is true of the internal storage, but not of pg_dump's output
>>> because it is using libpq to pull rows and output them in a stream,
>>> meaning there is no blocking in pg_dumps output itself.
>>>
>> Is that true for both input and output (i.e., pg_restore and pg_dump)?
>> I.e., can I use dd to write 65536-byte blocks to tape, and then do nothing
>> on running pg_restore? I.e., that pg_restore will accept any block size I
>> choose to offer it?
> 
> Yes.
> 
Did not work at first:

...
pg_dump: dumping contents of table vl_ranks
51448+2 records in
401+1 records out
26341760 bytes (26 MB) copied, 122.931 seconds, 214 kB/s

So I suppose that worked. (This database just has some small initial tables
loaded. The biggest one is still empty.) But then

trillian:postgres[~]$ ./restore.db
pg_restore: [archiver] did not find magic string in file header
trillian:postgres[~]$

I fixed it by changing my backup script as follows:

$ cat backup.db
#!/bin/bash
#
#   This is to backup the postgreSQL database, stock.
#
DD=/bin/dd
DD_OPTIONS="obs=65536 of=/dev/st0"
MT=/bin/mt
MT_OPTIONS="-f /dev/st0 setblk 0"
PG_OPTIONS="--format=c --username=postgres --verbose"
PG_DUMP=/usr/bin/pg_dump

$MT $MT_OPTIONS
$PG_DUMP $PG_OPTIONS stock | $DD $DD_OPTIONS

and it still would not restore until I changed the restore script to this:

$ cat restore.db
#!/bin/bash

#   This is to restore database stock.
FILENAME=/dev/st0

DD=/bin/dd
DD_OPTIONS="ibs=65536 if=$FILENAME"
MT=/bin/mt
MT_OPTIONS="-f $FILENAME setblk 0"
PG_OPTIONS="--clean --dbname=stock --format=c --username=postgres --verbose"
PG_RESTORE=/usr/bin/pg_restore

$MT $MT_OPTIONS
$DD $DD_OPTIONS | $PG_RESTORE $PG_OPTIONS

It appears that I must read in the same blocksize as I wrote. My normal
backup program (BRU) can infer the blocksize from the first record, but
apparently pg_restore does not. But dd will read it if I tell it the size.
Hence the above.

The MT stuff is to tell the tape driver to accept variable block size so the
program that opens it can set it. DD can do that, but I infer that
pg_restore does not.


- --
  .~.  Jean-David Beyer  Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A Registered Machine   241939.
 /( )\ Shrewsbury, New Jerseyhttp://counter.li.org
 ^^-^^ 11:00:01 up 18 days, 14:22, 3 users, load average: 5.54, 4.84, 4.45
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.5 (GNU/Linux)
Comment: Using GnuPG with CentOS - http://enigmail.mozdev.org

iD8DBQFG0vQuPtu2XpovyZoRAlwcAKC5ApaGOoZrnHDUa5vgg9tx4jrqjwCeLfLV
oPLB1xCbJ0/WLYrg5/qVs2g=
=BkQ6
-END PGP SIGNATURE-

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


[SQL] fetch first rows of grouped data

2007-08-27 Thread Claudia Kosny

Hi there

I have a list of events that take place in a certain city on a certain
date. Now I would like to have the first two events (ordered by date)
for each city.


Is there a way to do this with one query?
I am using PostgreSQL 7.4.

Thanks for any tips.

Claudia



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] fetch first rows of grouped data

2007-08-27 Thread D'Arcy J.M. Cain
On Mon, 27 Aug 2007 18:36:47 +0200
Claudia Kosny <[EMAIL PROTECTED]> wrote:
> I have a list of events that take place in a certain city at a certain 
> date. Now I would like to have the first two (ordered by date) events 
> for each city.
> 
> Is there a way to do this with one query?
> I am using PostgreSQL 7.4.

I believe you are looking for the LIMIT keyword.  Check the docs on the
web site.

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] fetch first rows of grouped data

2007-08-27 Thread Joel Richard
That's what I thought at first, but this is not the case. She's  
looking for the first two dates in -each- city in the table. I  
initially thought that this could be accomplished with GROUP BY and  
LIMIT, but GROUP BY can only give you the first date for each city,  
not the first two dates for each city.


So far, I haven't come up with any clever ideas. I'm not sure this  
can be done in SQL.


--Joel


On Aug 27, 2007, at 12:51 PM, D'Arcy J.M. Cain wrote:


On Mon, 27 Aug 2007 18:36:47 +0200
Claudia Kosny <[EMAIL PROTECTED]> wrote:

I have a list of events that take place in a certain city at a certain
date. Now I would like to have the first two (ordered by date) events
for each city.

Is there a way to do this with one query?
I am using PostgreSQL 7.4.


I believe you are looking for the LIMIT keyword.  Check the docs on the
web site.

--
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---

TIP 2: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] fetch first rows of grouped data

2007-08-27 Thread Michael Glaesemann
[Please don't top post as it makes the discussion more difficult to  
follow.]


On Aug 27, 2007, at 12:34 , Joel Richard wrote:



On Aug 27, 2007, at 12:51 PM, D'Arcy J.M. Cain wrote:


On Mon, 27 Aug 2007 18:36:47 +0200
Claudia Kosny <[EMAIL PROTECTED]> wrote:

I have a list of events that take place in a certain city at a certain
date. Now I would like to have the first two (ordered by date) events
for each city.

Is there a way to do this with one query?
I am using PostgreSQL 7.4.


I believe you are looking for the LIMIT keyword.  Check the docs on the
web site.


That's what I thought at first, but this is not the case. She's  
looking for the first two dates in -each- city in the table. I  
initially thought that this could be accomplished with GROUP BY and  
LIMIT, but GROUP BY can only give you the first date for each city,  
not the first two dates for each city.


So far, I haven't come up with any clever ideas. I'm not sure this  
can be done in SQL.


--Joel



I can't think of an easy way to do it, but you can certainly do it in SQL
using correlated subqueries, something like


select city, event, event_date, (
select count(event)
from events i
where i.city = o.city
and i.event_date < o.event_date
and event_date > current_date -- make sure they're future events
) as nearness_rank
from events o
where event_date > current_date -- make sure they're future events
having nearness_rank <= 1;

Note that this can potentially show more than 2 events if the most  
recent upcoming events "tie" (have the same event_date).
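As written, PostgreSQL will not accept the HAVING clause above (an output-column alias can't be referenced there, and there is no GROUP BY), so an equivalent formulation moves the correlated count into WHERE. A sketch, assuming an events(city, event, event_date) table:

```sql
SELECT city, event, event_date
  FROM events o
 WHERE event_date > current_date          -- future events only
   AND (SELECT count(*)                   -- earlier future events in the same city
          FROM events i
         WHERE i.city = o.city
           AND i.event_date > current_date
           AND i.event_date < o.event_date) <= 1
 ORDER BY city, event_date;
```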


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] fetch first rows of grouped data

2007-08-27 Thread D'Arcy J.M. Cain
On Mon, 27 Aug 2007 13:34:09 -0400
Joel Richard <[EMAIL PROTECTED]> wrote:
> That's what I thought at first, but this is not the case. She's  
> looking for the first two dates in -each- city in the table. I  
> initially thought that this could be accomplished with GROUP BY and  
> LIMIT, but GROUP BY can only give you the first date for each city,  
> not the first two dates for each city.

Yes, you are correct.  My mistake.

> So far, I haven't come up with any clever ideas. I'm not sure this  
> can be done in SQL.

Well, I can think of an SQL-only method involving a temporary table, but
it would not be a single query:

 - Select the earliest date per city into a temp table.
 - Select from the temp table, union with the main table rows that have the
   earliest date among those not in the temp table.
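Sketched in SQL (assuming a hypothetical events(city, event, event_date) table, and taking the earliest dates since the thread wants the first two events):

```sql
-- Step 1: the earliest event date per city goes into a temp table.
SELECT city, min(event_date) AS event_date
  INTO TEMP first_events
  FROM events
 GROUP BY city;

-- Step 2: union it with the earliest remaining date per city.
SELECT city, event_date
  FROM first_events
UNION ALL
SELECT e.city, min(e.event_date)
  FROM events e
  JOIN first_events f ON f.city = e.city
 WHERE e.event_date > f.event_date
 GROUP BY e.city;
```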

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [SQL] fetch first rows of grouped data

2007-08-27 Thread Michael Glaesemann


On Aug 27, 2007, at 13:12 , Michael Glaesemann wrote:


select city, event, event_date, (
select count(event)
from events i
where i.city = o.city
and i.event_date < o.event_date
and event_date > current_date -- make sure they're future events


This should be i.event_date > current_date to be explicit.


) as nearness_rank
from events o
where event_date > current_date -- make sure they're future events
having nearness_rank <= 1;


Michael Glaesemann
grzm seespotcode net



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] [GENERAL] table column vs. out param [1:0]

2007-08-27 Thread Kristo Kaiv


On 27.08.2007, at 18:07, Tom Lane wrote:


Kristo Kaiv <[EMAIL PROTECTED]> writes:

From http://www.postgresql.org/docs/current/static/plpgsql-declarations.html#PLPGSQL-DECLARATION-RENAMING-VARS
Note: RENAME appears to be broken as of PostgreSQL 7.3. Fixing this
is of low priority, since ALIAS covers most of the practical uses of
RENAME.
Seems to work though. Could somebody please confirm/reject that this
has been fixed?


It "works" only for very small values of "works".  See the links in
the TODO item for it.

regards, tom lane

Tom, the TODO item says:
Server-Side Languages
PL/pgSQL
Fix RENAME to work on variables other than OLD/NEW

but it seems to already work. I have tested it on 8.2 and have not  
noticed any problems.


orderdb_test=# \df+ test
List of functions
-[ RECORD 1 ]---+--
Schema  | public
Name| test
Result data type| text
Argument data types | i_a text, OUT asi text
Owner   | kristok
Language| plpgsql
Source code |
: DECLARE
: --o_asi ALIAS FOR $2;
: RENAME asi TO o_asi;
: BEGIN
: select 32 into o_asi;
: return;
: END;
:
Description |

orderdb_test=# select * from test(123);
asi
-
32
(1 row)

Kristo Kaiv
http://kaiv.wordpress.com (PostgreSQL blog)




[SQL] pg & Delphi

2007-08-27 Thread Kitter Georgiy
Good day.

  I want to introduce Postgres into a project which has the restriction
that the client must be written in Delphi (C++Builder).
  Please advise me on how to set up Delphi to use Postgres.


Dmitry Turin
SQL4  (4.2.0)  http://sql40.chat.ru
HTML6 (6.4.0)  http://html60.chat.ru
Unicode2  (2.0.1)  http://unicode2.chat.ru
Computer2 (2.0.3)  http://computer20.chat.ru

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[SQL] Request into several DBMS simultaneously on DDL and DML

2007-08-27 Thread Kitter Georgiy
(1) Let's allow to __connect to several DBMSs__ at once;
the last of them becomes the DBMS 'by default'.
The user assigns a nickname to each database at connection time,
except the nicknames 'all' and 'local', e.g.
  connect ra=database.remote.com username=Smith   password=pwds nickname=db1;
  connect ra=db.distination.com  username=Tomson  password=pwdt nickname=db2;
  connect ra=database.remote.com username=Johnson password=pwdj nickname=db3;
And the user can change the default DBMS, e.g.
  local db2;
Each time a __name of a
datatype, table, view, sequence, index,
trigger, function, procedure, user, or role__
is used, the object is implied to be in the default database, e.g.
  create  type  NAME ...
  alter   table NAME ...
  dropsequence  NAME ...
  create  index NAME ...
  replace trigger   NAME ...
  dropfunction  NAME ...
  insert  into  NAME ...
  select  ...  from NAME ...
  create  user  NAME ...
  create  role  NAME ...
  grant NAME to NAME
 
(2) Let's allow to specify the __location of an object by a prefix__,
placed before the name and separated by a colon. The prefix can be
(2.1) the nickname of a concrete database, e.g.
  select  ...  from db1:NAME ...
  insert  into  db3:NAME ...
(2.2) the predicate "all:" (which designates all databases) or
"local:" (which designates the default database), e.g.
  insert  intoall:NAME ...
  select  ...  from local:NAME ...
  select  ...  where fld > any (select ... from all:NAME ...
(2.3) a marker: a word meaning each database in turn
(__no two markers ever simultaneously mean the same database__).
The sign "%" is put before a marker, e.g.
  insert  into %db1:NAME select * from %db2:NAME;

(3) As a result, a __new type of system information__ appears:
a field referring to another field by foreign key
carries an additional bit besides its own value;
the bit specifies whether the record it refers to
is in the same database or in another one.
This is needed to avoid signaling a broken foreign key
if the remote database is not connected next time.

(4) Given all of the above, __replication__ of databases
by branded programs loses all sense:
the same effect can always be achieved
with the SQL extension described above.


Document is stated on
  http://sql40.chat.ru/site/sql40/en/author/mc_eng.htm
Related ideas are on:
  http://sql40.chat.ru/site/sql40/en/author/mc2_eng.htm


Dmitry Turin
SQL4  (4.2.0)  http://sql40.chat.ru
HTML6 (6.4.0)  http://html60.chat.ru
Unicode2  (2.0.1)  http://unicode2.chat.ru
Computer2 (2.0.3)  http://computer20.chat.ru

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly