Re: [GENERAL] [Postgresql 8.2.3] autovacuum starting up even after disabling ?

2008-08-12 Thread Dushyanth
Hi Tom  Alvaro,

 Hey,

   They are all under 200 million
 
  Weird
 
  Could you fetch from pg_stat_activity the table it's processing, and its
  pg_class row and that of its toast table (if any)?

 Sorry for the delay. Required details are at
 http://pastebin.com/pastebin.php?dl=fd699fbb

Did you guys have a chance to look at this? I scheduled a VACUUM ANALYZE
VERBOSE on all databases to run daily, and it has been running for a few weeks
now.

Also note that the vacuum on my primary database is run as shown below to avoid
statement timeouts (statement_timeout is set to 12 in postgresql.conf).

 vacuum.sql
SET STATEMENT_TIMEOUT TO 0;
VACUUM ANALYSE verbose;

psql -U postgres -d dbname -f /path/to/vacuum.sql

For other databases, i do 'vacuumdb -zv dbname' .

The vacuum logs and the pgfouine vacuum reports don't show anything funny.

Autovacuum still starts up, though. This process started on 09-08-2008 11:40 GMT
and ran till 10-08-2008 08:00 GMT. It also seems to be touching a few other
tables apart from the ones I posted about last time.

postgres 30430  0.0  0.1 2270284   60500 ?  S   Jul24    2:15 /usr/local/postgres/pgsql-8.2.3/bin/postgres -D /usr/local/postgres/current/foundationdata -i
postgres 30437  5.0  0.0   57380     760 ?  Ds  Jul24 1196:04  \_ postgres: logger process
postgres 31907  0.0  6.4 2271528 2129736 ?  Ss  Jul24    8:48  \_ postgres: writer process
postgres 31908  0.0  0.0   58448     844 ?  Ss  Jul24    0:06  \_ postgres: archiver process
postgres 31909  0.0  0.0   58448     812 ?  Ss  Jul24    0:00  \_ postgres: stats collector process
postgres  7112  1.8  1.7 2291200  570796 ?  Ss  11:40    0:05  \_ postgres: autovacuum process   foundation

I don't know what autovacuum is panicking about to warrant a forced run.

Any pointers ?
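(One thing I can check in the meantime -- a sketch that assumes 8.2's per-table
relfrozenxid column -- is whether any table is close to the anti-wraparound
threshold, since autovacuum forces a run at autovacuum_freeze_max_age even when
it is disabled:)

SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY age(relfrozenxid) DESC
 LIMIT 10;

SHOW autovacuum_freeze_max_age;   -- the default is 200 million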

TIA
Dushyanth





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] different behaviour of NULL in an aggregate and with an operator

2008-08-12 Thread Willy-Bas Loos
Hi,

Why is it that
  SELECT 1+null
evaluates to NULL, but
  SELECT sum(foo) FROM (VALUES(1), (NULL)) AS v(foo)
evaluates to 1 ?

WBL


Re: [GENERAL] different behaviour of NULL in an aggregate and with an operator

2008-08-12 Thread Craig Ringer
Willy-Bas Loos wrote:
 Hi,
 
 Why is it that
   SELECT 1+null
 evaluates to NULL, but
   SELECT sum(foo) FROM (VALUES(1), (NULL)) AS v(foo)
 evaluates to 1 ?

SUM(x) ignores null input, like COUNT(x) etc. It's the sum of all
non-null instances of x.

There's some useful explanation of the various NULL handling of
aggregates here:

http://www.postgresql.org/docs/8.3/static/sql-createaggregate.html

though I'm not sure how well it applies to the built-in aggregates.
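
A quick way to see both behaviours side by side (the values here are just an
example):

SELECT 1 + NULL              AS operator_result,   -- NULL: operators propagate NULL
       sum(foo)              AS aggregate_result,  -- 1: the NULL row is skipped
       sum(coalesce(foo, 0)) AS sum_with_default   -- 1: treat NULL as 0 explicitly
FROM (VALUES (1), (NULL)) AS v(foo);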

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Checkpoints writes

2008-08-12 Thread Cyril SCETBON



Greg Smith wrote:

On Thu, 7 Aug 2008, Cyril SCETBON wrote:

What's the way to count the read/write bytes of the checkpoint
process before 8.3 (no pg_stat_bgwriter view :-[ )? I want to
distinguish bytes written by checkpoints from those written by the
background writer process.


The reason that view was added was because it's really hard to figure 
that out in earlier versions.  Theoretically you could have some 
operating system level program that tracked I/O on a per-process 
basis, noting which one was the background writer process and counting 
those separately.  I found it easier to work on adding the counters 
instead.


It's not really complete, but I did have a functional prototype of a 
pg_stat_bgwriter implementation that worked against 8.2 if that helps 
you any: http://www.westnet.com/~gsmith/content/postgresql/perfmon82.htm

A really great job, Greg. I'll test it.

Thanks


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD



--
Cyril SCETBON

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] different results based solely on existence of index (no, seriously)

2008-08-12 Thread Willy-Bas Loos
reproduced it on:
PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3
(Ubuntu 4.2.3-2ubuntu7)
3 rows with index, 2 rows without.

can not reproduce it on:
- PostgreSQL 8.1.10 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.3
20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)
- PostgreSQL 8.2.6 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2
(mingw-special)
- PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.3
(Ubuntu 4.2.3-2ubuntu4)
they always return 2 rows.

hth
WBL


[GENERAL] test message -- Is this post getting to the list?

2008-08-12 Thread Ow Mun Heng


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I search for text in a function?

2008-08-12 Thread Sim Zacks
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Try:

select * from pg_proc where lower(prosrc) like '%previous_charge%';
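
A variant that also shows which schema each matching function lives in (a
sketch joining pg_namespace):

select n.nspname, p.proname
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where p.prosrc ilike '%previous_charge%';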

Sim

Rob Richardson wrote:
 Greetings!
  
 Sometimes I need to track down how something happens in the database our
 application relies on, but whatever's happening may be buried in some
 old function that everybody here has forgotten about long ago.  IIRC,
 functions are stored internally merely as fields in a table owned by the
 system.  Is there a query I can use to find what function contains the
 string previous_charge?
  
 Thank you very much.
 
 Robert D. Richardson
 Product Engineer Software
 RAD-CON, Inc.
 TECHNOLOGY: Innovative & Proven
 Phone: +1.216.706.8905
 Fax: +1.216.221.1135
 Website: www.RAD-CON.com
 E-mail: [EMAIL PROTECTED]
 
  

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkihT50ACgkQjDX6szCBa+pQ8ACbBTLLep4RoyBNTC+PGij7TO2F
Z4AAnA/UKoxyzzJYyK+6nePYp7S3AUN9
=2ntc
-END PGP SIGNATURE-

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Joao Ferreira gmail

On Mon, 2008-08-11 at 12:43 -0700, Vlad Kosilov wrote:
 ./data/ you may want to exclude those. I find this query useful for 
 something like this as well:
 
 select datname,pg_size_pretty(pg_database_size(oid)) from pg_database ;
 

Hello Vlad,

I ran your query and I got the 9Gigas!

I guess it should be related to index bloating, then. Do you agree ?

thx
Joao



postgres=# select datname,pg_size_pretty(pg_database_size(oid)) from
pg_database;
    datname    | pg_size_pretty
---------------+----------------
 postgres      | 3617 kB
 egbert        | 9585 MB
 asterisk      | 3993 kB
 turba         | 3673 kB
 edgereporting | 3617 kB
 template1     | 3617 kB
 template0     | 3537 kB
(7 rows)

postgres=# 


 V.
 
 Joao Ferreira gmail wrote:
  Hello all,
  
  I'm finding it very strange that my pg takes 9Giga on disk but
  pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed
  yesterday.
  
  Is this normal ? Should I be worried ?
  
  
  details below:
  --
  # pg_dumpall --oids --clean > pg_dumpall.sql
  # ls -lh
  total 232M
  -rw-r--r--  1 postgres postgres 231M Aug 11 15:46 pg_dumpall.sql
  # du -sh /var/pgsql/data/
  9.4G    /var/pgsql/data
  --
  
  
  thx
  joao
  
  
 
 

 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Joao Ferreira gmail
Hello Greg, Vlad, Scott and all,

thanks for the feedback.

I forgot to mention that I execute REINDEX on all tables and indexes
every week (right after executing VACUUM FULL).

Is this enough to eliminate the possibility of index bloat?



And, yes, my database has some crazy indexes. I use these indexes, and I
keep them REINDEXed to keep query execution time down. See below.

Could these indexes be the real reason for taking up all that space?

thanks
joao



egbert=# \d timeslots;
       Table "public.timeslots"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 str1      | text    |
 str2      | text    |
 ...       |         |
 str20     | text    |
 val1      | real    |
 ...       |         |
 val6      | real    |
 var       | text    |
 count     | integer |
 total     | real    |
 timeslot  | integer | not null
 timestamp | integer | not null
 tsws      | integer | not null
 tses      | integer | not null
Indexes:
    "timeslots_strs_var_ts_key" UNIQUE, btree (str1, str2, str3, str4,
str5, str6, str7, str8, str9, str10, str11, str12, str13, str14, str15,
str16, str17, str18, str19, str20, var, timeslot) CLUSTER
    "timeslots_timeslot_index" btree (timeslot)
    "timeslots_timestamp_index" btree (timestamp)
    "timeslots_var_index" btree (var)

egbert=# 



On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
 On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
 
  I'm finding it very strange that my pg takes 9Giga on disk but 
  pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed 
  yesterday.
 
 If you've been running VACUUM FULL, it's probably so-called index bloat. 
 Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to 
 figure out where all your space has gone inside the database.
 
 --
 * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi,

I am new to PostgreSQL, and I need to create a dynamic SQL query that returns a
record set based on my input parameters. I looked up some of the documents and
worked out some more ...

My PostgreSQL version locally: 7.4
My PostgreSQL version in development: 8.2


-- DROP TYPE ORDERREPORT;
CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE TIMESTAMP,COMPANYNAME
TEXT,EVENTNAME CHARACTER VARYING);


-- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
INTEGER,IN pmEventId INTEGER);
CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
'
DECLARE
vResult ORDERREPORT%ROWTYPE;
vSql TEXT = ''
SELECT
   ORDR.ORDERSID    AS OrderID,
   ORDR.INITIATED   AS Order_Date,
   COMP.COMPANYNAME AS Company_Name,
   EVNT.EVENTNAME   AS Event_Name
FROM
   ORDERS ORDR
   INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
   INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
WHERE
   ORDR.EVENT = EVNT.EVENTID '';
BEGIN
   IF $1 IS NOT NULL THEN
  vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
   END IF;

   IF $2 IS NOT NULL THEN
  vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
   END IF;

   IF $3 IS NOT NULL THEN
  vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
   END IF;
   EXECUTE vSql INTO vResult;
   RETURN NEXT vResult;
END '
LANGUAGE 'PLPGSQL';

Result:

events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
 ordersid |         orderdate          | companyname | eventname
----------+----------------------------+-------------+-----------
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)

events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
 ordersid |         orderdate          | companyname | eventname
----------+----------------------------+-------------+-----------
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)


I am getting just the first record from the record set.
Can someone tell me how I can return all of the query results from a dynamic
SQL query?


-- 
Thanks,
MuraliDharan V


Re: [GENERAL] PostgreSQL 8.3 XML parser seems not to recognize the DOCTYPE element in XML files

2008-08-12 Thread Peter Eisentraut
Am Thursday, 7. February 2008 schrieb Lawrence Oluyede:
 PostgreSQL 8.3 instead doesn't allow the insertion of XML with doctype
 in its new native data type returning this error message:

 
 ERROR:  invalid XML content
 DETAIL:  Entity: line 2: parser error : StartTag: invalid element name
<!DOCTYPE foo
  ^

It turns out that this behavior is entirely correct.  It depends on the XML 
option.  If you set the XML option to DOCUMENT, you can parse documents 
including DOCTYPE declarations.  If you set the XML option to CONTENT, then 
what you can parse is defined by the production

XMLDecl? content

which does not allow for a DOCTYPE.

The default XML option is CONTENT, which explains the behavior.

Now, the supercorrect way to parse XML values would be using the XMLPARSE() 
function, which requires you to specify the XML option inline.  That way, 
everything works.
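
For example, either of these accepts a DOCTYPE (a minimal sketch; the element
name is made up):

SELECT XMLPARSE(DOCUMENT '<!DOCTYPE foo><foo>bar</foo>');

SET XML OPTION DOCUMENT;
SELECT '<!DOCTYPE foo><foo>bar</foo>'::xml;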

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Joao Ferreira gmail

On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
 On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
 
  I'm finding it very strange that my pg takes 9Giga on disk but 
  pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed 
  yesterday.
 
 If you've been running VACUUM FULL, it's probably so-called index bloat. 
 Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to 
 figure out where all your space has gone inside the database.
 


egbert=# SELECT nspname || '.' || relname AS relation,
egbert-# pg_size_pretty(pg_relation_size(nspname || '.' || relname))
AS size
egbert-#   FROM pg_class C
egbert-#   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
egbert-#   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
egbert-# AND nspname !~ '^pg_toast'
egbert-# AND pg_relation_size(nspname || '.' || relname) > 0
egbert-#   ORDER BY pg_relation_size(nspname || '.' || relname) DESC
egbert-#   LIMIT 20;

             relation             |  size
----------------------------------+---------
 public.timeslots_strs_var_ts_key | 5643 MB   # this is a UNIQUE clause
 public.timeslots                 | 2660 MB   # this is the only table
 public.timeslots_timestamp_index | 583 MB    # this is an index
 public.timeslots_var_index       | 314 MB    # this is an index
 public.timeslots_timeslot_index  | 275 MB    # this is an index
(5 rows)


So it seems that the UNIQUE index is taking up more space than the data
itself...

Still, I have 2660 MB of data but the dump is about 10x smaller!!!

any hints ?




 --
 * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread Sathish Duraiswamy
Dear Murali,

We use PostgreSQL for our ERP software. We found CREATE TYPE useful for
creating a new data type, similar to creating a domain.
For example:
   CREATE TYPE date_condition AS (
       condition_id int,
       from_date    date,
       to_date      date);

Instead, you can CREATE TABLE (ORDERSID INTEGER, ORDERDATE
TIMESTAMP, COMPANYNAME TEXT, EVENTNAME CHARACTER VARYING);

You can then have the result returned as a set of records.

Feel free to comment on it.

Regards
sathish

On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Hi,

 Iam new to Postgresql, now i need to create a Dynamic SQL Query for
 returning the record set
 based on my Input Parameters. I looked up some of the documents and worked
 out some more ...

 MY Postgresql Version In Local: 7.4
 MY Postgresql Version In Development: 8.2


 -- DROP TYPE ORDERREPORT;
 CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE
 TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING);


 -- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
 INTEGER,IN pmEventId INTEGER);
 CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
 pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
 '
 DECLARE
 vResult ORDERREPORT%ROWTYPE;
 vSql TEXT = ''
 SELECT
ORDR.ORDERSID   AS OrderID,
ORDR.INITIATED   AS Order_Date,
    COMP.COMPANYNAME AS Company_Name,
EVNT.EVENTNAME  AS Event_Name
 FROM
ORDERS ORDR
INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
 WHERE
ORDR.EVENT = EVNT.EVENTID '';
 BEGIN
IF $1 IS NOT NULL THEN
   vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
END IF;

IF $2 IS NOT NULL THEN
   vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
END IF;

IF $3 IS NOT NULL THEN
   vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
END IF;
EXECUTE vSql INTO vResult;
RETURN NEXT vResult;
 END '
 LANGUAGE 'PLPGSQL';

 Result:

 events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
 ordersid | orderdate  | companyname |eventname
 --++-+-
102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
 (1 row)

 events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
 ordersid | orderdate  | companyname |eventname
 --++-+-
102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
 (1 row)


 Iam getting just the first record from the recordset.
 Can someone help me how can i return all query results from a Dynamic SQL
 Query?


 --
 Thanks,
 MuraliDharan V



Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi Sathish,

Thanks for your reply.

But I have created the type in order to return the record set from my join
query using a stored function.

I am not able to create a table with those details, since those details are
already available in different tables.

One more thing: I am clear about your result set using a FOR .. LOOP, but mine
is not a normal query; it is built dynamically based on my input parameters.

Please have a look and tell me if anything about my query is unclear.

Thanks,

MuraliDharan V

 

From: Sathish Duraiswamy [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 4:10 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Dear murali,

We use psql for our ERP software .We found CREATE TYPE is useful in creating
new data type similar to creating domain.
For eg
   CREATE TYPE date_condition (
   condition_id int,
   from_date  date,
   to_datedate);

Instead , you can CREATE TABLE (ORDERSID INTEGER,ORDERDATE
TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING); 

You can have result returning a set of records

Feel free to comment on it

Regrds
sathish

On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

Iam new to Postgresql, now i need to create a Dynamic SQL Query for
returning the record set 
based on my Input Parameters. I looked up some of the documents and worked
out some more ...

MY Postgresql Version In Local: 7.4
MY Postgresql Version In Development: 8.2


-- DROP TYPE ORDERREPORT;
CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE TIMESTAMP,COMPANYNAME
TEXT,EVENTNAME CHARACTER VARYING);


-- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
INTEGER,IN pmEventId INTEGER);
CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
'
DECLARE
vResult ORDERREPORT%ROWTYPE;
vSql TEXT = ''
SELECT
   ORDR.ORDERSID   AS OrderID,
   ORDR.INITIATED   AS Order_Date,
   COMP.COMPANYNAME AS Company_Name,
   EVNT.EVENTNAME  AS Event_Name
FROM
   ORDERS ORDR
   INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
   INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
WHERE
   ORDR.EVENT = EVNT.EVENTID '';
BEGIN
   IF $1 IS NOT NULL THEN
  vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
   END IF;
 
   IF $2 IS NOT NULL THEN
  vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
   END IF;

   IF $3 IS NOT NULL THEN
  vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
   END IF;
   EXECUTE vSql INTO vResult;
   RETURN NEXT vResult;
END '
LANGUAGE 'PLPGSQL';

Result:

events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
ordersid | orderdate  | companyname |eventname
--++-+-
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)

events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
ordersid | orderdate  | companyname |eventname
--++-+-
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)


Iam getting just the first record from the recordset. 
Can someone help me how can i return all query results from a Dynamic SQL
Query?


-- 
Thanks,
MuraliDharan V

 



Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Bill Moran
In response to Joao Ferreira gmail [EMAIL PROTECTED]:
 
 On Mon, 2008-08-11 at 12:43 -0700, Vlad Kosilov wrote:
  ./data/ you may want to exclude those. I find this query useful for 
  something like this as well:
  
  select datname,pg_size_pretty(pg_database_size(oid)) from pg_database ;
  
 
 Hello Vlad,
 
 I ran your query and I got the 9Gigas!
 
 I guess it should be related to index bloating, then. Do you agree ?

No, the index size is included in pg_database_size().

Perhaps do a du -hd1 /var/pgsql/data to see which directories have all
the space.  (or du -h --max-depth=1 /var/pgsql/data on Linux)

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Joao Ferreira gmail

On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:

 It's likely you've got index bloat.  If you reload a pg_dump of the
 database in question into another server how much space does that take
 up? 

Right. I just loaded the dump into a clean database and everything came
down by about a factor of 10...
--
NOW: (injected dump into fresh Pg):
             relation             |  size
----------------------------------+--------
 public.timeslots                 | 549 MB
 public.timeslots_strs_var_ts_key | 482 MB
 public.timeslots_var_index       | 59 MB
 public.timeslots_timeslot_index  | 37 MB
 public.timeslots_timestamp_index | 37 MB
(5 rows)



BEFORE:
             relation             |  size
----------------------------------+---------
 public.timeslots_strs_var_ts_key | 5643 MB
 public.timeslots                 | 2660 MB
 public.timeslots_timestamp_index | 583 MB
 public.timeslots_var_index       | 314 MB
 public.timeslots_timeslot_index  | 275 MB


I'm confused here.

On the fresh database the whole set only takes 1.3 GB;
on the original DB, even after VACUUM FULL and REINDEX, it takes 9 GB.

Can I really do anything about it?

If I try CLUSTER, I'm guessing I'll choose the big index and forget
about the smaller ones... is this right?

thanks
joao




  Look into using CLUSTER or REINDEX to fix the space usage.
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Tommy Gildseth

Joao Ferreira gmail wrote:

On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:

On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:

I'm finding it very strange that my pg takes 9Giga on disk but 
pg_dumpall produces a 250Mega dump. 'VACUUM FULL' was executed 
yesterday.
If you've been running VACUUM FULL, it's probably so-called index bloat. 
Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to 
figure out where all your space has gone inside the database.





egbert=# SELECT nspname || '.' || relname AS relation,
egbert-# pg_size_pretty(pg_relation_size(nspname || '.' || relname))
AS size
egbert-#   FROM pg_class C
egbert-#   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
egbert-#   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
egbert-# AND nspname !~ '^pg_toast'
egbert-# AND pg_relation_size(nspname || '.' || relname) > 0
egbert-#   ORDER BY pg_relation_size(nspname || '.' || relname) DESC
egbert-#   LIMIT 20;

 relation |  size   
--+-

 public.timeslots_strs_var_ts_key | 5643 MB #this is a UNIQUE clause
 public.timeslots | 2660 MB #this is the only table
 public.timeslots_timestamp_index | 583 MB  #this is an index
 public.timeslots_var_index   | 314 MB  #this is an index
 public.timeslots_timeslot_index  | 275 MB  this is an index
(5 rows)


so it seems that the UNIQUE clause is taking up more space than the data
itself... 


stil I have 2660 MB of data but the dump is about 10x smaller !!!

any hints ?



I would try running a cluster on the table. This will usually clean up 
things and free diskspace both in the table and the indexes.
It does require quite extensive locking though, so might not be an 
option if you can't afford having the database unavailable for a few 
(10-15) minutes.
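
With the 8.2/8.3 syntax, and using the index/table names from the output
earlier in this thread, that would look roughly like:

CLUSTER timeslots_strs_var_ts_key ON timeslots;
ANALYZE timeslots;   -- refresh planner statistics afterwards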



--
Tommy Gildseth


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !?

2008-08-12 Thread Tommy Gildseth

Joao Ferreira gmail wrote:

On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:

If I try cluster, I'm guessing I'll choose the big index and forget
about the smaller ones... is this right ?



CLUSTER will sort out all the indexes, even though you're just
clustering on one.



--
Tommy Gildseth
DBA, Gruppe for databasedrift
Universitetet i Oslo, USIT
m: +47 45 86 38 50
t: +47 22 85 29 39

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread Willy-Bas Loos
Iam getting just the first record from the recordset
That's because you use SELECT INTO; you should use FOR rec IN query LOOP instead.

Here's sample code from
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html

CREATE TABLE test (textcol varchar(10), intcol int);
INSERT INTO test VALUES ('a', 1);
INSERT INTO test VALUES ('a', 2);
INSERT INTO test VALUES ('b', 5);
INSERT INTO test VALUES ('b', 6);

CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
$$
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT * FROM test WHERE textcol = pText LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;
$$
LANGUAGE plpgsql;

SELECT * FROM ReturnNexting('a');


On Tue, Aug 12, 2008 at 12:58 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]wrote:

  Hi Sathish,



 Thanks for your reply.



 But I have created the type to return the record set from my join query
 using a stored function.



 I cannot able to create a table with that details …. Since those details
 will be already available from different tables.



 One more thing …. I am clear with your result set using FOR ..LOOP but mine
 is not a normal query… it was built Dynamic based on my Input Parameters.



 Please look and tell me if you are not clear with my query.





 Thanks,

 MuraliDharan V



 *From:* Sathish Duraiswamy [mailto:[EMAIL PROTECTED]
 *Sent:* Tuesday, August 12, 2008 4:10 PM
 *To:* [EMAIL PROTECTED]
 *Cc:* pgsql-general@postgresql.org
 *Subject:* Re: [GENERAL] Need help returning record set from a dynamic sql
 query



 Dear murali,

 We use psql for our ERP software .We found CREATE TYPE is useful in
 creating new data type similar to creating domain.
 For eg
CREATE TYPE date_condition (
condition_id int,
from_date  date,
to_datedate);

 Instead , you can CREATE TABLE (ORDERSID INTEGER,ORDERDATE
 TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING);

 You can have result returning a set of records

 Feel free to comment on it

 Regrds
 sathish

 On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]
 wrote:

 Hi,

 Iam new to Postgresql, now i need to create a Dynamic SQL Query for
 returning the record set
 based on my Input Parameters. I looked up some of the documents and worked
 out some more ...

 MY Postgresql Version In Local: 7.4
 MY Postgresql Version In Development: 8.2


 -- DROP TYPE ORDERREPORT;
 CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE
 TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING);


 -- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
 INTEGER,IN pmEventId INTEGER);
 CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
 pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
 '
 DECLARE
 vResult ORDERREPORT%ROWTYPE;
 vSql TEXT = ''
 SELECT
ORDR.ORDERSID   AS OrderID,
ORDR.INITIATED   AS Order_Date,
    COMP.COMPANYNAME AS Company_Name,
EVNT.EVENTNAME  AS Event_Name
 FROM
ORDERS ORDR
INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
 WHERE
ORDR.EVENT = EVNT.EVENTID '';
 BEGIN
IF $1 IS NOT NULL THEN
   vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
END IF;

IF $2 IS NOT NULL THEN
   vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
END IF;

IF $3 IS NOT NULL THEN
   vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
END IF;
EXECUTE vSql INTO vResult;
RETURN NEXT vResult;
 END '
 LANGUAGE 'PLPGSQL';

 Result:

 events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
 ordersid | orderdate  | companyname |eventname
 --++-+-
102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
 (1 row)

 events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
 ordersid | orderdate  | companyname |eventname
 --++-+-
102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
 (1 row)


 Iam getting just the first record from the recordset.
 Can someone help me how can i return all query results from a Dynamic SQL
 Query?


 --
 Thanks,
 MuraliDharan V





Re: [GENERAL] different results based solely on existence of index (no, seriously)

2008-08-12 Thread ries van Twisk


On Aug 12, 2008, at 3:53 AM, Willy-Bas Loos wrote:


reproduced it on:
PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
4.2.3 (Ubuntu 4.2.3-2ubuntu7)

3 rows with index, 2 rows without.

can not reproduce it on:
- PostgreSQL 8.1.10 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
4.1.3 20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)
- PostgreSQL 8.2.6 on i686-pc-mingw32, compiled by GCC gcc.exe  
(GCC) 3.4.2 (mingw-special)
- PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
4.2.3 (Ubuntu 4.2.3-2ubuntu4)

they allways return 2 rows.

hth
WBL


reproduced on:
PostgreSQL 8.3.1 on i386-apple-darwin9.4.0, compiled by GCC i686-apple- 
darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)

3 rows with index, 2 rows without

Ries

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg crashing

2008-08-12 Thread Magnus Hagander
Tom Lane wrote:
 Magnus Hagander [EMAIL PROTECTED] writes:
 I'll see if I can repro a case like it to see if the syslogger prevents
 the shared mem from going away when I get back to a dev box. Should be
 enough to just stick a sleep preventing it from stopping, right?
 
 The syslogger isn't restarted at all during a crash --- this isn't
 a race-condition scenario.
 
 If there is a race condition here, it must be associated with cleanup
 for a process continuing to happen after win32_waitpid has already
 reported it dead.  Hmm ... how much do we trust that bit of spaghetti
 around pgwin32_deadchild_callback?  What condition is it really waiting
 for?

I looked that code over a bit again, and it still looks good to me :-)
The wait on the handle will fire when a process exits (according to the
API). When it does, we post that information to the queue and send
SIGCHLD. And the waitpid function pick off the top of the queue.

(It's not particularly spaghettified if you know your way around those
APIs :-P That's not to say it's impossible there's a bug there, of course)

//Magnus

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Please understand.

I know I have to use a FOR ... LOOP for my query, but it is not a normal
one; I have to build it dynamically.

 

 

From: Willy-Bas Loos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 5:46 PM
To: [EMAIL PROTECTED]
Cc: Sathish Duraiswamy; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Iam getting just the first record from the recordset
That's because you use SELECT INTO, you should use   FOR rec IN query LOOP

Here's sample code from
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.ht
ml

CREATE TABLE test (textcol varchar(10), intcol int);
INSERT INTO test VALUES ('a', 1);
INSERT INTO test VALUES ('a', 2);
INSERT INTO test VALUES ('b', 5);
INSERT INTO test VALUES ('b', 6);

CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
$$
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT * FROM test WHERE textcol = pText LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;
$$
LANGUAGE plpgsql;

SELECT * FROM ReturnNexting('a');



On Tue, Aug 12, 2008 at 12:58 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:

Hi Sathish,

 

Thanks for your reply.

 

But I have created the type to return the record set from my join query
using a stored function.

 

I cannot able to create a table with that details .. Since those details
will be already available from different tables.

 

One more thing .. I am clear with your result set using FOR ..LOOP but mine
is not a normal query. it was built Dynamic based on my Input Parameters.

 

Please look and tell me if you are not clear with my query.

 

 

Thanks,

MuraliDharan V

 

From: Sathish Duraiswamy [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 4:10 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Dear murali,

We use psql for our ERP software .We found CREATE TYPE is useful in creating
new data type similar to creating domain.
For eg
   CREATE TYPE date_condition (
   condition_id int,
   from_date  date,
   to_datedate);

Instead , you can CREATE TABLE (ORDERSID INTEGER,ORDERDATE
TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING); 

You can have result returning a set of records

Feel free to comment on it

Regrds
sathish

On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

Iam new to Postgresql, now i need to create a Dynamic SQL Query for
returning the record set 
based on my Input Parameters. I looked up some of the documents and worked
out some more ...

MY Postgresql Version In Local: 7.4
MY Postgresql Version In Development: 8.2


-- DROP TYPE ORDERREPORT;
CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE TIMESTAMP,COMPANYNAME
TEXT,EVENTNAME CHARACTER VARYING);


-- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
INTEGER,IN pmEventId INTEGER);
CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
'
DECLARE
vResult ORDERREPORT%ROWTYPE;
vSql TEXT = ''
SELECT
   ORDR.ORDERSID   AS OrderID,
   ORDR.INITIATED   AS Order_Date,
   COMP.COMPANYNAME AS Company_Name,
   EVNT.EVENTNAME  AS Event_Name
FROM
   ORDERS ORDR
   INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
   INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
WHERE
   ORDR.EVENT = EVNT.EVENTID '';
BEGIN
   IF $1 IS NOT NULL THEN
  vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
   END IF;
 
   IF $2 IS NOT NULL THEN
  vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
   END IF;

   IF $3 IS NOT NULL THEN
  vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
   END IF;
   EXECUTE vSql INTO vResult;
   RETURN NEXT vResult;
END '
LANGUAGE 'PLPGSQL';

Result:

events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
ordersid | orderdate  | companyname |eventname
--++-+-
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)

events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
ordersid | orderdate  | companyname |eventname
--++-+-
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)


Iam getting just the first record from the recordset. 
Can someone help me how can i return all query results from a Dynamic SQL
Query?


-- 
Thanks,
MuraliDharan V

 

 



Re: [GENERAL] different results based solely on existence of index (no, seriously)

2008-08-12 Thread David Fetter
On Mon, Aug 11, 2008 at 10:35:26PM -0500, Matthew Dennis wrote:
 In reference to the script below (I know it can be rewritten, that's
 not the point), I get 3 rows if the referenced index exists but only
 two rows if it does not.  This is observable and repeatable just by
 dropping/creating the index.  Drop the index and two rows are
 returned.  Create the index, three rows are returned.  Drop the
 index, two rows again.  In addition, in no case does the selected
 column t2.c2 actually contain a value (it's always null).  Since in
 the 3 row case, it returns a row with t1.c1=2, I would have expected
 a value from t2 (if you add t2.c1 to select clause you can see that
 is null as well).
 
 It's probably worth mentioning (since it actually took me a while to
 notice) that the plans are subtlety different.  Neither plan (with
 or without index existing) actually uses the index, but in one case
 there is an extra filter node.
 
 version string is PostgreSQL 8.3.1 on i686-redhat-linux-gnu,
 compiled by GCC gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-33)

I have reproduced it on 8.3.3.

Just FYI, a bug isn't a bug unless you can reproduce it on the latest
minor version, in this case 8.3.3, of the major version, in this case
8.3, that the bug appears in.

Cheers,
David.
-- 
David Fetter [EMAIL PROTECTED] http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: [EMAIL PROTECTED]

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Re: different results based solely on existence of index (no, seriously)

2008-08-12 Thread [EMAIL PROTECTED]
On Aug 12, 8:17 am, [EMAIL PROTECTED] (ries van Twisk) wrote:
 On Aug 12, 2008, at 3:53 AM, Willy-Bas Loos wrote:



  reproduced it on:
  PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
  4.2.3 (Ubuntu 4.2.3-2ubuntu7)
  3 rows with index, 2 rows without.

  can not reproduce it on:
  - PostgreSQL 8.1.10 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
  4.1.3 20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)
  - PostgreSQL 8.2.6 on i686-pc-mingw32, compiled by GCC gcc.exe  
  (GCC) 3.4.2 (mingw-special)
  - PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
  4.2.3 (Ubuntu 4.2.3-2ubuntu4)
  they allways return 2 rows.

  hth
  WBL

 reproduced on:
 PostgreSQL 8.3.1 on i386-apple-darwin9.4.0, compiled by GCC i686-apple-
 darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)
 3rows with index, 2 rows without

 Ries

FWIW, reproduced (3 rows w/index, 2 w/o) on:
  PostgreSQL 8.3.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 (Ubuntu 4.1.2-0ubuntu4)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi,

 

I have changed my procedure like below,

 

CREATE OR REPLACE FUNCTION fun_orderreport(pmorderid integer, pmcompanyid integer, pmeventid integer)
  RETURNS SETOF orderreport AS
$BODY$
DECLARE
vResult ORDERREPORT%ROWTYPE;
vSql TEXT = '
SELECT
    ORDR.ORDERSID    AS OrderID,
    ORDR.INITIATED   AS Order_Date,
    COMP.COMPANYNAME AS Company_Name,
    EVNT.EVENTNAME   AS Event_Name
FROM
    ORDERS ORDR
    INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
    INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
WHERE
    ORDR.EVENT = EVNT.EVENTID ';
BEGIN
    IF $1 IS NOT NULL THEN
        vSql = vSql ||' AND ORDR.ORDERSID = '|| $1;
    END IF;

    IF $2 IS NOT NULL THEN
        vSql = vSql ||' AND COMP.COMPANYID = '|| $2;
    END IF;

    IF $3 IS NOT NULL THEN
        vSql = vSql ||' AND EVNT.EVENTID = '|| $3;
    END IF;

    vSql = vSql || ';';
    vSql = '';
    -- DEALLOCATE PREPARE vSql;

    FOR vResult IN EXECUTE vSql
    LOOP
        RETURN NEXT vResult;
    END LOOP;
    RETURN;
END $BODY$
  LANGUAGE 'plpgsql' VOLATILE;

 

 

SELECT fun_orderreport(NULL,NULL,NULL);

But the error I get when I execute it is:

ERROR:  cannot open multi-query plan as cursor
CONTEXT:  PL/pgSQL function "fun_orderreport" line 30 at for over execute statement

** Error **

ERROR: cannot open multi-query plan as cursor
SQL state: 42P11
Context: PL/pgSQL function "fun_orderreport" line 30 at for over execute statement

 

 

 

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 6:53 PM
To: 'Willy-Bas Loos'
Cc: 'Sathish Duraiswamy'; 'pgsql-general@postgresql.org'
Subject: RE: [GENERAL] Need help returning record set from a dynamic sql
query

 

Please understand.

 

I know I have to use FOR . LOOP for my query. But it is not a normal one .I
use to build that one dynamically.

 

 

From: Willy-Bas Loos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 5:46 PM
To: [EMAIL PROTECTED]
Cc: Sathish Duraiswamy; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Iam getting just the first record from the recordset
That's because you use SELECT INTO, you should use   FOR rec IN query LOOP

Here's sample code from
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.ht
ml

CREATE TABLE test (textcol varchar(10), intcol int);
INSERT INTO test VALUES ('a', 1);
INSERT INTO test VALUES ('a', 2);
INSERT INTO test VALUES ('b', 5);
INSERT INTO test VALUES ('b', 6);

CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
$$
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT * FROM test WHERE textcol = pText LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;
$$
LANGUAGE plpgsql;

SELECT * FROM ReturnNexting('a');

On Tue, Aug 12, 2008 at 12:58 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]
wrote:

Hi Sathish,

 

Thanks for your reply.

 

But I have created the type to return the record set from my join query
using a stored function.

 

I cannot able to create a table with that details .. Since those details
will be already available from different tables.

 

One more thing .. I am clear with your result set using FOR ..LOOP but mine
is not a normal query. it was built Dynamic based on my Input Parameters.

 

Please look and tell me if you are not clear with my query.

 

 

Thanks,

MuraliDharan V

 

From: Sathish Duraiswamy [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 4:10 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Dear murali,

We use psql for our ERP software .We found CREATE TYPE is useful in creating
new data type similar to creating domain.
For eg
   CREATE TYPE date_condition (
   condition_id int,
   from_date  date,
   to_datedate);

Instead , you can CREATE TABLE (ORDERSID INTEGER,ORDERDATE
TIMESTAMP,COMPANYNAME TEXT,EVENTNAME CHARACTER VARYING); 

You can have result returning a set of records

Feel free to comment on it

Regrds
sathish

On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

Iam new to Postgresql, now i need to create a Dynamic SQL Query for
returning the record set 
based on my Input Parameters. I looked up some of the documents and worked
out some more ...

MY Postgresql Version In Local: 7.4
MY Postgresql Version In Development: 8.2


-- DROP TYPE ORDERREPORT;
CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE TIMESTAMP,COMPANYNAME
TEXT,EVENTNAME CHARACTER VARYING);



[GENERAL] automatic REINDEX-ing

2008-08-12 Thread Joao Ferreira gmail
Hello all

[[[ while dealing with a disk size problem I realised my REINDEX cron
script was not really being called every week :(   so... ]]]

I executed REINDEX by hand and the disk occupation immediately dropped 6
Giga...!!!

Is there a way to configure postgres to automatically execute the needed
REINDEXing (on indexes and tables) for a given database?

Something similar to auto-vacuum... I guess.

thx
joao



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] different results based solely on existence of index (no, seriously)

2008-08-12 Thread Tom Lane
Matthew Dennis [EMAIL PROTECTED] writes:
 In reference to the script below (I know it can be rewritten, that's not the
 point), I get 3 rows if the referenced index exists but only two rows if it
 does not.

I don't see any failure in 8.3 branch tip.  I think the bug was fixed
here:
http://archives.postgresql.org/pgsql-committers/2008-06/msg00336.php

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Lennin Caro
you can use a cron job


--- On Tue, 8/12/08, Joao Ferreira gmail [EMAIL PROTECTED] wrote:

 From: Joao Ferreira gmail [EMAIL PROTECTED]
 Subject: [GENERAL] automatic REINDEX-ing
 To: pgsql-general pgsql-general@postgresql.org
 Date: Tuesday, August 12, 2008, 3:13 PM
 Hello all
 
 [[[ while dealing with a disk size problem I realised my
 REINDEX cron
 script was not really being called every week :(   so...
 ]]]
 
 I executed REINDEX by hand and the disk ocupation
 imediatelly dropped 6
 Giga...!!!
 
 is there a way to configure postgres to automatically
 execute the needed
 REINDEXING (on indexes and tables) for a given database
 
 something similar to auto-vacuum... I guess
 
 thx
 joao
 
 
 
 -- 
 Sent via pgsql-general mailing list
 (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Tom Lane
Joao Ferreira gmail [EMAIL PROTECTED] writes:
 I executed REINDEX by hand and the disk ocupation imediatelly dropped 6
 Giga...!!!

 is there a way to configure postgres to automatically execute the needed
 REINDEXING (on indexes and tables) for a given database

Generally speaking, there shouldn't be a need for automatic
reindexing.  What the above suggests is that you need more aggressive
routine vacuuming, so that you don't get into this situation in the
first place.

BTW, more aggressive routine vacuuming does NOT mean use vacuum full.
Vacuum full tends to make index bloat worse, not better.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread Willy-Bas Loos
so use EXECUTE:


CREATE TABLE test (textcol varchar(10), intcol int);
INSERT INTO test VALUES ('a', 1);
INSERT INTO test VALUES ('a', 2);
INSERT INTO test VALUES ('b', 5);
INSERT INTO test VALUES ('b', 6);

CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
$$
DECLARE
rec RECORD;
vQuery text := 'SELECT * FROM test WHERE textcol = ''' || pText || '''';
BEGIN
FOR rec IN EXECUTE vQuery LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;
$$
LANGUAGE plpgsql;

SELECT * FROM ReturnNexting('a');
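
If the parameter may itself contain quotes, quote_literal() is a safer way to
build the string (same sketch, different DECLARE line):

vQuery text := 'SELECT * FROM test WHERE textcol = ' || quote_literal(pText);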

hth
WBL


[GENERAL] ftell error during pg_dump

2008-08-12 Thread William Garrison

Our IT administrator ran a pg_dump and received the following error:
.
.
.
pg_dump: dumping contents of table history
pg_dump: [custom archiver] WARNING: ftell mismatch with expected 
position -- ftell used

pg_dump: dumping contents of table history_archive
pg_dump: [custom archiver] WARNING: ftell mismatch with expected 
position -- ftell used

pg_dump: dumping contents of table historymetadata
pg_dump: [custom archiver] WARNING: ftell mismatch with expected 
position -- ftell used

.
.
.

This did not happen on every table, but once it did happen it was on all 
subsequent tables.  Any ideas?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Joao Ferreira gmail

On Tue, 2008-08-12 at 11:53 -0400, Tom Lane wrote:
 BTW, more aggressive routine vacuuming does NOT mean use vacuum
 full.
 Vacuum full tends to make index bloat worse, not better.
 
 regards, tom lane
 
OK, so what does it mean?



I'm a bit lost here.  I'm currently executing VACUUM FULL _and_ REINDEX
(tbls & idxs) every week.

Should I keep the REINDEX and drop VACUUM FULL ?

How do I iterate to a better approach ?

thanks.

joao



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread Tom Lane
William Garrison [EMAIL PROTECTED] writes:
 Our IT administrator ran a pg_dump and received the following error:
 pg_dump: [custom archiver] WARNING: ftell mismatch with expected 
 position -- ftell used

What platform, and exactly what version of pg_dump?  Is it possible
you ran out of disk space partway through?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Scott Marlowe
On Tue, Aug 12, 2008 at 10:04 AM, Joao Ferreira gmail
[EMAIL PROTECTED] wrote:

 On Tue, 2008-08-12 at 11:53 -0400, Tom Lane wrote:
  BTW, more aggressive routine vacuuming does NOT mean use vacuum
 full.
 Vacuum full tends to make index bloat worse, not better.

 regards, tom lane

 Ok. so what does it mean ?



 I'm a bit lost here.  I'm currently executing VACUUM FULL _and_ REINDEX
 (tbls  idxs) every week.

 Should I keep the REINDEX and drop VACUUM FULL ?

 How do I iterate to a better approach ?

It's better to run REGULAR vacuums more often than to vacuum full OR
reindex OR both.

If your machine doesn't have the I/O bandwidth to withstand being
vacuumed during the day then you either have to have a fairly large
free space map and vacuum off hours or buy a machine with more I/O
bandwidth.

With the sleep settings in vacuum and autovacuum you can usually get
away with autovacuum running during the day.
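
As a rough illustration only (8.2-era parameter names; the values are
placeholders to tune, not recommendations):

autovacuum = on
autovacuum_naptime = 60                  # seconds between autovacuum checks
autovacuum_vacuum_scale_factor = 0.2     # vacuum after ~20% of a table changes
autovacuum_vacuum_cost_delay = 20        # ms sleep per cost limit, throttles I/O
vacuum_cost_delay = 10                   # throttles manual VACUUM as well
max_fsm_pages = 1000000                  # free space map must cover dead rows (pre-8.4)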

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread William Garrison

Tom Lane wrote:

Joao Ferreira gmail [EMAIL PROTECTED] writes:
  

I executed REINDEX by hand and the disk ocupation imediatelly dropped 6
Giga...!!!



  

is there a way to configure postgres to automatically execute the needed
REINDEXING (on indexes and tables) for a given database



Generally speaking, there shouldn't be a need for automatic
reindexing.  What the above suggests is that you need more aggressive
routine vacuuming, so that you don't get into this situation in the
first place.

BTW, more aggressive routine vacuuming does NOT mean use vacuum full.
Vacuum full tends to make index bloat worse, not better.

regards, tom lane

  


So now that we know what that term does not mean, what does it mean?  
Just doing it more often by adjusting the autovacuum parameters?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Scott Marlowe
On Tue, Aug 12, 2008 at 10:09 AM, William Garrison
[EMAIL PROTECTED] wrote:
 Tom Lane wrote:

 Joao Ferreira gmail [EMAIL PROTECTED] writes:


 I executed REINDEX by hand and the disk ocupation imediatelly dropped 6
 Giga...!!!




 is there a way to configure postgres to automatically execute the needed
 REINDEXING (on indexes and tables) for a given database


 Generally speaking, there shouldn't be a need for automatic
 reindexing.  What the above suggests is that you need more aggressive
 routine vacuuming, so that you don't get into this situation in the
 first place.

 BTW, more aggressive routine vacuuming does NOT mean use vacuum full.
 Vacuum full tends to make index bloat worse, not better.


 So now that we know what that term does not mean, what does it mean?  Just
 doing it more often by adjusting the autovacuum parameters?

exactly.  Or running cronned vacuums on particular tables if they need
it more often.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread William Garrison
I'm embarrassed  to say it is 8.2.3 :(  I'm not sure why they haven't 
upgraded our production servers to the latest 8.2 yet.  It's running on 
Windows Server 2003, and it looks like there is plenty of disk space.


I googled this and found someone reported defect 2461 for this, some 
time ago, but I'm not clear how to check the status of that bug.  I 
subscribed to pgsql-bugs so I can ask there as well.

http://archives.postgresql.org/pgsql-bugs/2006-06/msg00012.php

I also checked the logs from yesterday and I don't see anything that was 
logged while the pg_dump was running.


Tom Lane wrote:

William Garrison [EMAIL PROTECTED] writes:
  

Our IT administrator ran a pg_dump and received the following error:
pg_dump: [custom archiver] WARNING: ftell mismatch with expected 
position -- ftell used



What platform, and exactly what version of pg_dump?  Is it possible
you ran out of disk space partway through?

regards, tom lane

  
  


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] big database with very small dump !? SOLVED

2008-08-12 Thread Joao Ferreira gmail
Hi guys,

I found the reason for this whole problem.


Explanation: the vacuum & reindex cron scripts were not being executed.

I executed the operations by hand and the values became normal.

thank you all for the fine discussion.

joao




On Tue, 2008-08-12 at 13:49 +0200, Tommy Gildseth wrote:
 Joao Ferreira gmail wrote:
  On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:
 
  If I try cluster, I'm guessing I'll choose the big index and forget
  about the smaller ones... is this right ?
  
 
 CLUSTER will sort out all the indexes, even though you're just 
 clustering on on.
 
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Bill Moran
In response to Joao Ferreira gmail [EMAIL PROTECTED]:

 
 On Tue, 2008-08-12 at 11:53 -0400, Tom Lane wrote:
  BTW, more aggressive routine vacuuming does NOT mean use vacuum
  full.
  Vacuum full tends to make index bloat worse, not better.
  
  regards, tom lane
  
 Ok. so what does it mean ?
 
 I'm a bit lost here.  I'm currently executing VACUUM FULL _and_ REINDEX
 (tbls  idxs) every week.
 
 Should I keep the REINDEX and drop VACUUM FULL ?

Don't vacuum full as part of regular maintenance.  Do plain vacuum.
If that's unable to keep up with the database bloat, then do it more
often.  Whether you use autovacuum or cron isn't as important as
whether you're vacuuming often enough.
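
Something as simple as this in cron is often enough (illustrative only --
pick a schedule and options that match your own write activity):

# in the postgres user's crontab: plain database-wide vacuum + analyze, nightly
30 2 * * * vacuumdb --all --analyze --quiet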

Personally, I like to put explicit VACUUM commands in my applications
after operations that are known to bloat tables.  This isn't always
possible as it sometimes introduces a performance issue, but I use it
where it doesn't cause problems, as it solves the bloat problem at the
point of creation.

REINDEX is normally not needed, although there _are_ some corner cases
that seem to require it.  One particular corner case is VACUUM FULL,
which tends to bloat indexes.

If you're using vacuum on a schedule appropriate to your database
activity, you'll probably not need reindex.  If you do find that your
use is one of those corner cases where reindex is necessary, then go
ahead and do it.

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] size of a table on postgresql

2008-08-12 Thread aravind chandu
Hello,





 
The following is the procedure to calculate the disk space occupied by
postgresql from a flat file.




In this I didn't understand some of the terms:


  24 bytes: each row header (approximate)
  24 bytes: one int field and one text field
 + 4 bytes: pointer on page to tuple

Here the row header is taken as 24 bytes, but some sites give the
row header as 40 bytes and some as 32 bytes:
http://www.sbras.ru/rus/docs/db/postgres/doc/howto/faq-english.shtml#3.6
http://www.softlab.ntua.gr/facilities/documentation/unix/postgres/faq-english.html

Is the row header value constant or variable? If variable, what could be the
maximum value?
What is that additional +4 bytes as a pointer on the page to the tuple?

Are all of these fixed, or is there any command to find the values for
these fields?

 Please post your comments on it.

Thank You,
Avin.




  

Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread Magnus Hagander
This is almost certainly the bug fixed in 8.2.4 and listed in the
release notes as:
Allow pg_dump to do binary backups larger than two gigabytes on Windows
(Magnus) 

If it happens to be that your dump could approach the 2Gb limit, I
suggest you upgrade to 8.2.9 and see if it goes away.


As this is entirely a client bug, there would be nothing in the logs.


//Magnus

William Garrison wrote:
 I'm embarrassed  to say it is 8.2.3 :(  I'm not sure why they haven't
 upgraded our production servers to the latest 8.2 yet.  It's running on
 Windows Server 2003, and it looks like there is plenty of disk space.
 
 I googled this and found someone reported defect 2461 for this, some
 time ago, but I'm not clear how to check the status of that bug.  I
 subscribed to pgsql-bugs so I can ask there as well.
 http://archives.postgresql.org/pgsql-bugs/2006-06/msg00012.php
 
 I also checked the logs from yesterday and I don't see anything that was
 logged while the pg_dump was running.
 
 Tom Lane wrote:
 William Garrison [EMAIL PROTECTED] writes:
  
 Our IT administrator ran a pg_dump and received the following error:
 pg_dump: [custom archiver] WARNING: ftell mismatch with expected
 position -- ftell used
 

 What platform, and exactly what version of pg_dump?  Is it possible
 you ran out of disk space partway through?

 regards, tom lane

   
  


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread William Garrison

yay!
Thank you.

Magnus Hagander wrote:

This is almost certainly the bug fixed in 8.2.4 and listed in the
release notes as:
Allow pg_dump to do binary backups larger than two gigabytes on Windows
(Magnus) 

If it happens to be that your dump could approach the 2Gb limit, I
suggest you upgrade to 8.2.9 and see if it goes away.


As this is entirely a client bug, there would be nothing in the logs.


//Magnus

William Garrison wrote:
  

I'm embarrassed  to say it is 8.2.3 :(  I'm not sure why they haven't
upgraded our production servers to the latest 8.2 yet.  It's running on
Windows Server 2003, and it looks like there is plenty of disk space.

I googled this and found someone reported defect 2461 for this, some
time ago, but I'm not clear how to check the status of that bug.  I
subscribed to pgsql-bugs so I can ask there as well.
http://archives.postgresql.org/pgsql-bugs/2006-06/msg00012.php

I also checked the logs from yesterday and I don't see anything that was
logged while the pg_dump was running.

Tom Lane wrote:


William Garrison [EMAIL PROTECTED] writes:
 
  

Our IT administrator ran a pg_dump and received the following error:
pg_dump: [custom archiver] WARNING: ftell mismatch with expected
position -- ftell used



What platform, and exactly what version of pg_dump?  Is it possible
you ran out of disk space partway through?

regards, tom lane



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 8.3.1 Vs 8.3.3

2008-08-12 Thread David Siebert

I am setting up a new server and I am using OpenSuse. OpenSuse  only has
8.3.1 in the repositories so I am wondering just how critical is the
need to update? I checked out the changes and there seem to be a lot of
them in 8.3.2, so I am wondering if I should just install from source or
live with what is in the repositories for now?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread Tom Lane
William Garrison [EMAIL PROTECTED] writes:
 I'm embarrassed  to say it is 8.2.3 :(  I'm not sure why they haven't 
 upgraded our production servers to the latest 8.2 yet.  It's running on 
 Windows Server 2003, and it looks like there is plenty of disk space.

Hmm.  There was an 8.2.4 bug fix for pg_dump on Windows, but the log
message for it claimed it'd only affect dump files larger than 2GB,
and I'm not sure whether the symptoms matched this anyway.  How big
is your dump exactly?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ftell error during pg_dump

2008-08-12 Thread William Garrison
The dump is over 3GB.  So there's no question this is it.  I had a 
feeling this would all come down to not being on the latest version.


Thanks to both Tom and Magnus for your help.

Tom Lane wrote:

William Garrison [EMAIL PROTECTED] writes:
  
I'm embarrassed  to say it is 8.2.3 :(  I'm not sure why they haven't 
upgraded our production servers to the latest 8.2 yet.  It's running on 
Windows Server 2003, and it looks like there is plenty of disk space.



Hmm.  There was an 8.2.4 bug fix for pg_dump on Windows, but the log
message for it claimed it'd only affect dump files larger than 2GB,
and I'm not sure whether the symptoms matched this anyway.  How big
is your dump exactly?

regards, tom lane

  



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 Vs 8.3.3

2008-08-12 Thread Scott Marlowe
On Tue, Aug 12, 2008 at 12:51 PM, David Siebert [EMAIL PROTECTED] wrote:
 I am setting up a new server and I am using OpenSuse. OpenSuse  only has
 8.3.1 in the repositories so I am wondering just how critical is the
 need to update? I checked out the changed and there looks like a lot of
 them in 8.3.2. so I am wondering if I should just install from source or
 live with the what is in the repositories for now?

If OpenSUSE only has 8.3.1 then I'd strongly advise either switching
to a distro that updates more often (Centos or Debian or Ubuntu) or
compiling from source.  Because you're going to have this problem over
and over again if they can't get 8.3.3 packaged up and ready to go in
a reasonable amount of time.

I like Centos because you can use the RHEL rpms from the pgsql site
and they get updated pretty fast when a new version comes out.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Jeff Gentry
Hi there ...

I recently discovered that there is a hard cap on the # of columns, being
at 1600.  I also understand that it is generally unfathomable that anyone
would ever feel limited by that number ... however I've managed to bump
into it myself and was looking to see if anyone had advice on how to
manage the situation.

As a bit of background, we have a Postgres database to manage information
revolving around genomic datasets, including the dataset itself.  The
actual data is treated in other applications as a matrix, and while it has
caused the DB design to be sub-optimal the model worked to just stash the
entire matrix in the DB (the rest of the DB design is proper, but the
storage of these matrices straight up is unorthodox ... for the
convenience of having everything in the same storage unit with all of the
other information, it has been worth the extra headache and potential
performance dings).

In these matrices, columns represent biological samples, rows represent
fragments of the genome and the cells are populated with values.  There
are a variety of row configurations (depending on what chip the samples
were handled on) which range in number from a few thousand to a few
hundred thousand (currently, it is constantly expanding upwards).  The
real problem lies with the columns (biological samples) in that it is
rarely the case that we'll have multiple matrices with overlap in columns
- and even in the cases where that happens, it is almost never a good idea
to treat them as the same thing.

Mind you, this is a world where having a set with a few hundred samples is
still considered pretty grandiose - I just happened to have one of the
very few out there which would come anywhere close to breaking the 1600
barrier and it is unlikely to really be an issue for at least a few (if
not more) years ... but looking down the road it'd be better to nip this
in the bud now than punt it until it becomes a real issue.

So I've seen the header file where the 1600 column limit is defined, and
I know the arguments that no one should ever want to come anywhere close
to that limit.  I'm willing to accept that these matrices could be stored
in some alternate configuration, although I don't really know what that
would be.  It's possible that the right answer might be pgsql just isn't
the right tool for this job or even punting it for down the road might be
the correct choice.  I was just hoping that some folks here might be able
to give their thoughts here.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Steve Atkins


On Aug 12, 2008, at 1:15 PM, Jeff Gentry wrote:


Hi there ...

I recently discovered that there is a hard cap on the # of columns,  
being
at 1600.  I also understand that it is generally unfathomable that  
anyone
would ever feel limited by that number ... however I've managed to  
bump

into it myself and was looking to see if anyone had advice on how to
manage the situation.

As a bit of background, we have a Postgres database to manage  
information

revolving around genomic datasets, including the dataset itself.  The
actual data is treated in other applications as a matrix, and while  
it has
caused the DB design to be sub-optimal the model worked to just  
stash the

entire matrix in the DB (the rest of the DB design is proper, but the
storage of these matrices straight up is unorthodox ... for the
convenience of having everything in the same storage unit with all  
of the

other information, it has been worth the extra headache and potential
performance dings).


What operations do you perform on the data? If it's just store and
retrieve, can you serialize them into a bytea (or xml) field?
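
Untested sketch of what I mean (names made up): one row per matrix, with the
cell values packed client-side into a single bytea:

CREATE TABLE matrix_blob (
    matrix_id integer PRIMARY KEY,
    n_rows    integer NOT NULL,
    n_cols    integer NOT NULL,
    cells     bytea   NOT NULL   -- e.g. row-major packed doubles
);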

Cheers,
  Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Scott Marlowe
On Tue, Aug 12, 2008 at 2:15 PM, Jeff Gentry [EMAIL PROTECTED] wrote:
 Hi there ...

 I recently discovered that there is a hard cap on the # of columns, being
 at 1600.  I also understand that it is generally unfathomable that anyone
 would ever feel limited by that number ... however I've managed to bump
 into it myself and was looking to see if anyone had advice on how to
 manage the situation.

The generic solution without making too much work is to store similar
data types in an arrayed type in the db.
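
Roughly something like this (table and column names are just an illustration):

CREATE TABLE expression_matrix (
    dataset_id  integer NOT NULL,
    fragment_id integer NOT NULL,            -- one row per genome fragment
    sample_vals double precision[] NOT NULL, -- one element per biological sample
    PRIMARY KEY (dataset_id, fragment_id)
);

-- subsetting a couple of samples server-side, by array position:
SELECT fragment_id, sample_vals[3], sample_vals[57]
FROM expression_matrix
WHERE dataset_id = 1;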

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 Vs 8.3.3

2008-08-12 Thread David Siebert

I do agree and really like Centos but I don't want to have to
admin this box myself. Our network admin likes OpenSuse and doesn't want
to have to deal with anything else.
I tried Ubuntu server a while ago and was really not impressed. It was
lacking a lot of packages that I wanted but that was a while ago.
I guess it is compile from source. yeckkk.

Scott Marlowe wrote:

On Tue, Aug 12, 2008 at 12:51 PM, David Siebert [EMAIL PROTECTED] wrote:

I am setting up a new server and I am using OpenSuse. OpenSuse  only has
8.3.1 in the repositories so I am wondering just how critical is the
need to update? I checked out the changed and there looks like a lot of
them in 8.3.2. so I am wondering if I should just install from source or
live with the what is in the repositories for now?


If OpenSUSE only has 8.3.1 then I'd strongly advise either switching
to a distro that updates more often (Centos or Debian or Ubuntu) or
compiling from source.  Because you're going to have this problem over
and over again if they can't get 8.3.3 packaged up and ready to go in
a reasonable amount of time.

I like Centos because you can use the RHEL rpms from the pgsql site
and they get updated pretty fast when a new version comes out.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Jeff Gentry
On Tue, 12 Aug 2008, Steve Atkins wrote:
 What operations do you perform on the data? If it's just store and
 retrieve, can you serialize them into a bytea (or xml) field?

Store & retrieve, although we take advantage of the fact that it's in a DB
to allow for subsetting (done at the postgres level), which cuts down on
client side overhead as well as network traffic.

The DB is accessed by a variety of clients (including a webapp) which
could all perform that sort of work if necessary, although it's been nice
to subset at the DB level.  I'm not very familiar w/ the serialization
methods you're talking about - would that have me needing to do full
retrieval and subsetting on the client side? (definitely not a deal
breaker, I'm just trying to get as many ideas w/ related info as possible
before bringing this whole issue up with the powers that be).


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Jeff Gentry
On Tue, 12 Aug 2008, Scott Marlowe wrote:
 The generic solution without making too much work is to store similar
 data types in an arrayed type in the db.

That's a good idea.  I'll have to play w/ this one.  Thanks.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 Vs 8.3.3

2008-08-12 Thread Scott Marlowe
On Tue, Aug 12, 2008 at 3:03 PM, David Siebert [EMAIL PROTECTED] wrote:
 I do agree and really like Centos but I don't want to have to have to
 admin this box myself. Our network admin likes OpenSuse and doesn't want
 to have to deal with anything else.

I've found that adminning a dedicated pgsql box is usually pretty easy
once it's set up and running, but I get what you're saying there.  If
your network admin likes OpenSuse, then it's up to him to keep pgsql
up to date I'd guess.

 I tried Ubuntu server a while ago and was really not impressed. It was
 lacking a lot of packages that I wanted but that was a while ago.
 I guess it is compile from source. yeckkk.

You likely weren't pointing at the right repositories.  I've found
that once you get your repos set up in debian / ubuntu, it gets MUCH
easier to work with.  I especially like the ability to run multiple
pgsql versions and upgrade from one to the next easily.  However,
Ubuntu's tendency to release with questionable kernels (see 8.04 LTS
initial release) make me leary of anything they put out less than 6
months or so old.  I've never had to build much of anything from
source on ubuntu, except slony on 7.10 for pgsql 8.3.3.

I'm more a fan of centos, but have to give grudging respect to the
latest couple of versions of ubuntu server.  Once you learn the new
way of doing things, it's quite easy to keep happy.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Confronting the maximum column limitation

2008-08-12 Thread Steve Atkins


On Aug 12, 2008, at 2:11 PM, Jeff Gentry wrote:


On Tue, 12 Aug 2008, Steve Atkins wrote:

What operations do you perform on the data? If it's just store and
retrieve, can you serialize them into a bytea (or xml) field?


Store  retrieve although we take advantage of the fact that it's in  
a DB
to allow for subsetting (done at the postgres level), which cuts  
down on

client side overhead as well as network traffic.

The DB is accessed by a variety of clients (including a webapp) which
could all perform that sort of work if necessary, although it's been  
nice

to subset at the DB level.  I'm not very familiar w/ the serialization
methods you're talking about


I wasn't thinking of anything specific, more just some convenient way
of mapping the data structure into one or more larger chunks of data,
rather than one column per cell.

It may well be possible to do some of the serialization in stored  
functions

in the database, to move that away from having to implement it in
each client.


- would that have me needing to do full
retrieval and subsetting on the client side? (definitely not a deal
breaker, I'm just trying to get as many ideas w/ related info as  
possible

before bringing this whole issue up with the powers that be).




Maybe, maybe not. It would depend on how you serialized it, what
your typical subsets were and so on.

Serialization isn't the only solution to storing this sort of data (EAV
of some sort would be another), but it's something worth looking at.
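
A rough EAV-ish sketch (names are made up), one row per matrix cell, which
keeps server-side subsetting trivial at the cost of many more rows:

CREATE TABLE matrix_cell (
    matrix_id   integer NOT NULL,
    fragment_id integer NOT NULL,   -- matrix row
    sample_id   integer NOT NULL,   -- matrix column
    val         double precision,
    PRIMARY KEY (matrix_id, fragment_id, sample_id)
);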

I think that what's sensible to do is going to depend on the details
of the data and (more so) the ways you access it.

Cheers,
  Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to modify ENUM datatypes? (The solution)

2008-08-12 Thread Dmitry Koterov
Here is the solution about on the fly ALTER ENUM:
http://en.dklab.ru/lib/dklab_postgresql_enum/

Usage:

-- Add a new element to the ENUM on the fly.
SELECT enum.enum_add('my_enum', 'third');

-- Remove an element from the ENUM on the fly.
SELECT enum.enum_del('my_enum', 'first');

Possibly future versions of PostgreSQL will include a built-in ALTER TYPE for
ENUM, especially since, as you can see above, its implementation is not impossible.
Hope this will be helpful.



On Wed, Apr 23, 2008 at 4:25 AM, Merlin Moncure [EMAIL PROTECTED] wrote:

 On Tue, Apr 22, 2008 at 6:11 PM, Jeff Davis [EMAIL PROTECTED] wrote:
   If you store an integer reference instead, joins are not necessarily
   expensive. If the number of distinct values is small (which is the
   normal use case for ENUM), I would expect the joins to be quite cheap.
   Beware of running into bad plans however, or making the optimizer work
   too hard (if you have a lot of other joins, too).

 Necessarily being the operative word here.  Think about an enum as
 part of a composite key for example.  It's a lot nicer to rely on enum
 for natural ordering than doing something like a functional index.

 Anyways, it's pretty easy to extend an enum...you can manually insert
 an entry into pg_enum (see the relevant docs).  Just watch out for oid
 overlap.  One thing currently that is very difficult currently to do
 is to alter the order of the enum elements.  The current state of
 things is pretty workable though.

 Scott's color/mystuff example is generally preferred for a lot of
 cases.  I _really_ prefer this to surrogate style enums where you have
 color_id...this approach makes your database unreadable IMO.  A decent
 hybrid approach which I have been using lately is "char" (not char)
 where the choices set is reasonably small, well represented by a
 single character, and the intrinsic ordering property is not too
 important (where an enum might be better).  In many cases though, the
 pure natural approach is simply the best.  The enum though, with its
 intrinsic ordering and more efficient indexing, has an important niche
 however.
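
 For instance (just an illustration, not from any real schema):

 CREATE TABLE order_status_demo (
 id     integer PRIMARY KEY,
 status "char" NOT NULL DEFAULT 'n'   -- e.g. 'n'ew, 'p'aid, 's'hipped
 );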

 merlin

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] How to modify ENUM datatypes? (The solution)

2008-08-12 Thread Merlin Moncure
On Tue, Aug 12, 2008 at 5:40 PM, Dmitry Koterov [EMAIL PROTECTED] wrote:
 Here is the solution about on the fly ALTER ENUM:
 http://en.dklab.ru/lib/dklab_postgresql_enum/

 Usage:

 -- Add a new element to the ENUM on the fly.

 SELECT enum.enum_add('my_enum', 'third');

 -- Remove an element from the ENUM on the fly.
 SELECT enum.enum_del('my_enum', 'first');

 Possibly future versions of PostgreSQL will include built-in ALTER TYPE for
 ENUM, all the more its implementation is not impossible, as you see above.
 Hope this will be helpful.

Decent user space solution...it's easy enough.  IMO 'real' solution is
through alter type as you suggest.  It's worth noting that there is
no handling for the unlikely but still possible event of oid
wraparound.  Also, there is no 'enum_insert', which is not so pleasant
with how enums are implemented.
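
(For reference, a manual pg_enum insert looks roughly like this -- illustration
only; it needs superuser rights, and since label order follows oid order the
usual oid caveats apply:)

INSERT INTO pg_enum (enumtypid, enumlabel)
VALUES ('my_enum'::regtype::oid, 'fourth');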

Also, is LGPL compatible with the BSD license? Not that it matters, but I'm curious.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is the primary key constraint also an index?

2008-08-12 Thread Tim Uckun
If I have a primary key constraint defined in the database do I also
need to create an index on that field for fast lookup?

The documentation on the web seems to imply that the constraint is not
an index. Is that right?

What's the difference between creating a unique, not null index and
setting a primary key?

Thanks.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is the primary key constraint also an index?

2008-08-12 Thread Dann Corbit
 -Original Message-
 From: [EMAIL PROTECTED] [mailto:pgsql-general-
 [EMAIL PROTECTED] On Behalf Of Tim Uckun
 Sent: Tuesday, August 12, 2008 7:18 PM
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Is the primary key constraint also an index?
 
 If I have a primary key constraint defined in the database do I also
 need to create an index on that field for fast lookup?
 
 The documentation on the web seems to imply that the constraint is not
 an index. Is that right?
 
 What's the difference between creating a unique, not null index and
 setting a primary key?

From Bruce Momjian's book:
PRIMARY KEY 
The PRIMARY KEY constraint, which marks the column that uniquely identifies 
each row, is a combination of UNIQUE and NOT NULL constraints. With this type 
of constraint, UNIQUE prevents duplicates, and NOT NULL prevents NULL values in 
the column. The next figure shows the creation of a PRIMARY KEY column.   

test=> CREATE TABLE primarytest (col INTEGER PRIMARY KEY);
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index
'primarytest_pkey' for table 'primarytest'
CREATE
test=> \d primarytest
      Table "primarytest"
 Attribute |  Type   | Modifier
-----------+---------+----------
 col       | integer | not null
Index: primarytest_pkey



Notice that an index is created automatically, and the column is defined as NOT 
NULL. 

Just as with UNIQUE, a multicolumn PRIMARY KEY constraint must be specified on 
a separate line. In the next figure, col1 and col2 are combined to form the 
primary key.   

test=> CREATE TABLE primarytest2 (
test(>     col1 INTEGER,
test(>     col2 INTEGER,
test(>     PRIMARY KEY(col1, col2)
test(> );
NOTICE:  CREATE TABLE/PRIMARY KEY will create implicit index
'primarytest2_pkey' for table 'primarytest2'
CREATE




A table cannot have more than one PRIMARY KEY specification. Primary keys have 
special meaning when using foreign keys, which are covered in the next section.

While this bit of the documentation about primary key does not make the index 
relationship clear:

PRIMARY KEY (column constraint)
PRIMARY KEY ( column_name [, ... ] ) (table constraint)
The primary key constraint specifies that a column or columns of a table can 
contain only unique (non-duplicate), nonnull values. Technically, PRIMARY KEY 
is merely a combination of UNIQUE and NOT NULL, but identifying a set of 
columns as primary key also provides metadata about the design of the schema, 
as a primary key implies that other tables can rely on this set of columns as a 
unique identifier for rows. 

Only one primary key can be specified for a table, whether as a column 
constraint or a table constraint. 

The primary key constraint should name a set of columns that is different from 
other sets of columns named by any unique constraint defined for the same 
table.

This bit makes it totally obvious:

USING INDEX TABLESPACE tablespace
This clause allows selection of the tablespace in which the index associated 
with a UNIQUE or PRIMARY KEY constraint will be created. If not specified, 
default_tablespace is consulted, or temp_tablespaces if the table is temporary.
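
For instance (with a made-up tablespace name):

test=> CREATE TABLE primarytest3 (
test(>     col INTEGER PRIMARY KEY USING INDEX TABLESPACE fastspace
test(> );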

See:
http://www.postgresql.org/docs/8.3/static/sql-createtable.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is the primary key constraint also an index?

2008-08-12 Thread Craig Ringer
Tim Uckun wrote:
 If I have a primary key constraint defined in the database do I also
 need to create an index on that field for fast lookup?

No. Declaring field(s) as the primary key automatically adds a UNIQUE
constraint on those fields. PostgreSQL implements unique constraints
using a unique-constrained index.

PostgreSQL tells you about this when you create a table.

craig= CREATE TABLE j ( y INTEGER PRIMARY KEY );
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index j_pkey
for table j

 The documentation on the web seems to imply that the constraint is not
 an index. Is that right?

There are many types of constraint. Unique constraints. Foreign key
constraints. CHECK constraints. Probably more. Of those, unique
constraints are the only ones that will automatically create an index.

Foreign key constraints benefit from an index on the referring field, by
the way, so you should generally create an index on the referring field.
PostgreSQL doesn't do this for you since it's not strictly necessary and
the index does have a space cost and a time cost for updates, inserts
and deletes.
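
For example (illustrative table names):

CREATE TABLE customer ( id INTEGER PRIMARY KEY );        -- index created for you
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id)
);
CREATE INDEX orders_customer_id_idx ON orders (customer_id);  -- NOT automatic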

As for CHECK constraints - I strongly recommend reading up on them, as
they're really important for producing schema that properly ensure that
the data stored is valid at all times.
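
A trivial example:

CREATE TABLE account (
    balance NUMERIC NOT NULL CHECK (balance >= 0)
);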

 What's the difference between creating a unique, not null index and
 setting a primary key?

As far as I know, not a huge amount in purely technical terms. There may
only be one primary key, where there may be several NOT NULL UNIQUE
constrained columns or column sets. Also, some clients rely on the
primary key as table metadata. DBMS front-ends (think MS Access),
reporting tools, etc tend to use this information, as do some ORM tools.

--
Craig Ringer
 Thanks.
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] automatic REINDEX-ing

2008-08-12 Thread Ow Mun Heng
On Tue, 2008-08-12 at 08:38 -0700, Lennin Caro wrote:
 you can use a cron job
 

I have my cron set up to do database-wide vacuums each night and it
usually takes between 4 and 6 hours on a ~200G DB.

On days where there is huge activity, it can drag on for like 15+ hours.

I've recently dropped all my indexes and started to only rebuild _some_
needed ones.


What's the method for looking at index bloat, anyway?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread Sathish Duraiswamy
Murali,

I tried the same method, using a FOR ... LOOP with the EXECUTE command in a
function similar to the one you described, and got the same error message.

When I used RAISE INFO to check the function, I get the set of records as a
result. But in the end it still throws the same error.

Can someone help with this issue?

Regrds
sathish



On Tue, Aug 12, 2008 at 7:26 PM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

  Hi,



 I have changed my procedure like below,



 CREATE OR REPLACE FUNCTION fun_orderreport(pmorderid integer, pmcompanyid
 integer, pmeventid integer)
   RETURNS SETOF orderreport AS
 $BODY$
 DECLARE
 vResult ORDERREPORT%ROWTYPE;
 vSql TEXT = '
 SELECT
 ORDR.ORDERSID    AS OrderID,
 ORDR.INITIATED   AS Order_Date,
 COMP.COMPANYNAME AS Company_Name,
 EVNT.EVENTNAME   AS Event_Name
 FROM
 ORDERS ORDR
 INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
 INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
 WHERE
 ORDR.EVENT = EVNT.EVENTID ';
 BEGIN
 IF $1 IS NOT NULL THEN
 vSql = vSql ||' AND ORDR.ORDERSID = '|| $1;
 END IF;

 IF $2 IS NOT NULL THEN
 vSql = vSql ||' AND COMP.COMPANYID = '|| $2;
 END IF;

 IF $3 IS NOT NULL THEN
 vSql = vSql ||' AND EVNT.EVENTID = '|| $3;
 END IF;
 vSql = vSql || ';';
 vSql = '';
 -- DEALLOCATE PREPARE vSql;
 FOR vResult IN EXECUTE vSql
 LOOP
   RETURN NEXT vResult;
 END LOOP;
 RETURN;
 END $BODY$
   LANGUAGE 'plpgsql' VOLATILE;





 SELECT fun_orderreport(NULL,NULL,NULL);

 But the error I get when I execute,

 ERROR:  cannot open multi-query plan as cursor
 CONTEXT:  PL/pgSQL function fun_orderreport line 30 at for over execute statement

 ** Error **

 ERROR: cannot open multi-query plan as cursor
 SQL state: 42P11
 Context: PL/pgSQL function fun_orderreport line 30 at for over execute statement







 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 12, 2008 6:53 PM
 To: 'Willy-Bas Loos'
 Cc: 'Sathish Duraiswamy'; 'pgsql-general@postgresql.org'
 Subject: RE: [GENERAL] Need help returning record set from a dynamic sql
 query



 Please understand…



 I know I have to use FOR … LOOP for my query. But it is not a normal one … I
 build that one dynamically.





 From: Willy-Bas Loos [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 12, 2008 5:46 PM
 To: [EMAIL PROTECTED]
 Cc: Sathish Duraiswamy; pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
 query



 I am getting just the first record from the recordset
 That's because you use SELECT INTO; you should use FOR rec IN query
 LOOP

 Here's sample code from
 http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html

 CREATE TABLE test (textcol varchar(10), intcol int);
 INSERT INTO test VALUES ('a', 1);
 INSERT INTO test VALUES ('a', 2);
 INSERT INTO test VALUES ('b', 5);
 INSERT INTO test VALUES ('b', 6);

 CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
 $$
 DECLARE
 rec RECORD;
 BEGIN
 FOR rec IN SELECT * FROM test WHERE textcol = pText LOOP
 RETURN NEXT rec;
 END LOOP;
 RETURN;
 END;
 $$
 LANGUAGE plpgsql;

 SELECT * FROM ReturnNexting('a');
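
 For a query string that is built up dynamically, the same pattern works with
 EXECUTE -- a rough, untested sketch against the same test table:

 CREATE OR REPLACE FUNCTION ReturnNextingDyn(pText text) RETURNS SETOF test AS
 $$
 DECLARE
 rec RECORD;
 vSql text := 'SELECT * FROM test';
 BEGIN
 IF pText IS NOT NULL THEN
 vSql := vSql || ' WHERE textcol = ' || quote_literal(pText);
 END IF;
 FOR rec IN EXECUTE vSql LOOP
 RETURN NEXT rec;
 END LOOP;
 RETURN;
 END;
 $$
 LANGUAGE plpgsql;

 SELECT * FROM ReturnNextingDyn('a');
 SELECT * FROM ReturnNextingDyn(NULL);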

 On Tue, Aug 12, 2008 at 12:58 PM, [EMAIL PROTECTED] [EMAIL PROTECTED]
 wrote:

 Hi Sathish,



 Thanks for your reply.



 But I have created the type to return the record set from my join query
 using a stored function.



 I am not able to create a table with those details, since those details
 are already available from different tables.



 One more thing: I am clear with your result set using FOR ... LOOP, but mine
 is not a normal query; it is built dynamically based on my input parameters.



 Please look and tell me if you are not clear with my query.





 Thanks,

 MuraliDharan V



 From: Sathish Duraiswamy [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, August 12, 2008 4:10 PM
 To: [EMAIL PROTECTED]
 Cc: pgsql-general@postgresql.org
 Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
 query



 Dear murali,

 We use psql for our ERP software. We found CREATE TYPE is useful for
 creating a new data type, similar to creating a domain.
 For example:
    CREATE TYPE date_condition AS (
        condition_id int,
        from_date    date,
        to_date      date);

 Instead, you can CREATE TABLE (ORDERSID INTEGER, ORDERDATE
 TIMESTAMP, COMPANYNAME TEXT, EVENTNAME CHARACTER VARYING);

 You can have the result returning a set of records.

 Feel free to comment on it

 Regrds
 sathish

 On Tue, Aug 12, 2008 at 3:08 PM, [EMAIL PROTECTED]