[GENERAL] plruby on windows

2008-10-31 Thread [EMAIL PROTECTED]
Hi,

I know it has been posted before, but it's been some time since that
and there has been no definitive (good) answer, so: has anyone been
able to build and use PL/Ruby with postgres 8.3 on windows? I have had
no problems on my linux machine, but now I need to get it working on
windows...

Thanks in advance for any replies or tips on how it could be done... Tom.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] storing repeating dates / events

2008-09-06 Thread [EMAIL PROTECTED]
On Sat, Sep 6, 2008 at 10:32 AM, Jorge Godoy <[EMAIL PROTECTED]> wrote:

> On Saturday, 06 September 2008 14:12:49, [EMAIL PROTECTED] wrote:
> > Hi
> > I want to store dates / events, for example birthdays (or May 5th), that
> > repeat every year.
> > What is the best way to do this in Postgres?
> > If I use a timestamp it is going to use the current year. How do I do
> > this?
> Along with the timestamp store a boolean that indicates if the event should
> consider the year or not.
>
> But you might surely design it better, especially for things that repeat on
> intervals other than yearly.
>
Yes, I am trying to build an alerting system where events will repeat
weekly/monthly/annually.

What is the best way to store this kind of information in Postgres?
Thanks.
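One possible layout, sketched here purely as an assumption (the table and column names are made up, not from the thread): store the anchor date once plus a recurrence interval, and compute occurrences at query time.

```sql
-- Hypothetical schema: each event stores its first occurrence and how it repeats.
CREATE TABLE events (
    event_id    serial PRIMARY KEY,
    title       text NOT NULL,
    first_date  date NOT NULL,
    recurrence  interval          -- e.g. '1 year', '1 month', '7 days'; NULL = one-off
);

-- Yearly events that recur today: compare month and day only.
SELECT title
FROM events
WHERE recurrence = interval '1 year'
  AND extract(month FROM first_date) = extract(month FROM current_date)
  AND extract(day   FROM first_date) = extract(day   FROM current_date);
```

Weekly and monthly recurrences can be matched similarly (day of week, or day of month), at the cost of handling edge cases like the 31st and Feb 29 explicitly.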


Re: [GENERAL] storing repeating dates / events

2008-09-06 Thread [EMAIL PROTECTED]
On Sat, Sep 6, 2008 at 10:21 AM, brian <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] wrote:
>
>> Hi
>> I want to store dates / events, for example birthdays (or May 5th), that
>> repeat every year.
>> What is the best way to do this in Postgres?
>> If I use a timestamp it is going to use the current year. How do I do
>> this?
>>
>  A timestamp includes the year, so there shouldn't be any problem. Although,
> I can't say I'm sure what you think the problem might be. Perhaps you should
> clarify what it is you want to store, as well as your concerns.


If it includes the year, how do I query out and find birthdays for this
year? Say I had a table with columns

user text,
dob timestamp without time zone,

can you tell me what the query will be to find today's birthdays?
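With the columns above, one way to match today's birthdays is to compare only the month and day parts; this is a sketch (the table name is assumed, and the Feb-29 edge case is ignored):

```sql
SELECT "user"              -- "user" is a reserved word, so it has to be quoted
FROM users                 -- assumed table name; not given in the original post
WHERE extract(month FROM dob) = extract(month FROM current_date)
  AND extract(day   FROM dob) = extract(day   FROM current_date);
```

Note that this expression cannot use a plain index on dob; for large tables an expression index on (extract(month FROM dob), extract(day FROM dob)) would help.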


[GENERAL] storing repeating dates / events

2008-09-06 Thread [EMAIL PROTECTED]
Hi
I want to store dates / events, for example birthdays (or May 5th), that
repeat every year.
What is the best way to do this in Postgres?
If I use a timestamp it is going to use the current year. How do I do
this?


Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-13 Thread [EMAIL PROTECTED]
Hi Sathish,

I tried the same thing too. I changed my code and checked, but could not
get it to work that way.

Here is the code which I finally got working:

CREATE OR REPLACE FUNCTION fun_orderreport(pmorderid integer, pmcompanyid 
integer, pmeventid integer) 
RETURNS SETOF orderreport AS 
$BODY$ 
DECLARE 
vResult ORDERREPORT%ROWTYPE; 
vSql TEXT = ' 

SELECT 
ORDR.ORDERSID AS OrderID, 
ORDR.INITIATED AS Order_Date, 
COMP.COMPANYNAME AS Company_Name, 
EVNT.EVENTNAME AS Event_Name 
FROM 
ORDERS ORDR 
INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY 
INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID 
WHERE 
ORDR.EVENT = EVNT.EVENTID '; 

BEGIN 

-- Append a filter for each parameter that was actually supplied.
IF $1 IS NOT NULL THEN 
vSql = vSql ||' AND ORDR.ORDERSID = '|| $1; 

END IF; 

IF $2 IS NOT NULL THEN 
vSql = vSql ||' AND COMP.COMPANYID = '|| $2; 
END IF; 

IF $3 IS NOT NULL THEN 
vSql = vSql ||' AND EVNT.EVENTID = '|| $3; 
END IF; 

vSql = vSql || ';'; 

-- Iterate over the dynamically built query and stream the rows out.
FOR vResult IN EXECUTE vSql 
LOOP 
RETURN NEXT vResult; 
END LOOP; 
RETURN; 
END $BODY$ 
LANGUAGE 'plpgsql' VOLATILE; 

SELECT * FROM fun_orderreport(NULL,NULL,NULL);
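For comparison, the same optional filters can be expressed without dynamic SQL at all, by letting a NULL parameter disable its own condition. This is a sketch under the same assumed tables and type, not code from the thread:

```sql
CREATE OR REPLACE FUNCTION fun_orderreport_static(pmorderid integer,
                                                  pmcompanyid integer,
                                                  pmeventid integer)
RETURNS SETOF orderreport AS
$BODY$
    SELECT ORDR.ORDERSID, ORDR.INITIATED, COMP.COMPANYNAME, EVNT.EVENTNAME
    FROM ORDERS ORDR
    JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
    JOIN EVENT   EVNT ON EVNT.COMPANY   = COMP.COMPANYID
    WHERE ORDR.EVENT = EVNT.EVENTID
      AND ($1 IS NULL OR ORDR.ORDERSID  = $1)  -- each NULL parameter disables its filter
      AND ($2 IS NULL OR COMP.COMPANYID = $2)
      AND ($3 IS NULL OR EVNT.EVENTID   = $3);
$BODY$
LANGUAGE sql STABLE;
```

The trade-off is that the planner must plan for all parameters at once, which can be less index-friendly than the dynamically built query; the upside is no string concatenation at all.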


Thanks,
MuraliDharan V

-Original Message-
From: Gnanavel Shanmugam [mailto:[EMAIL PROTECTED] 
Sent: Thursday, August 14, 2008 12:11 PM
To: Sathish Duraiswamy
Cc: Willy-Bas Loos; pgsql-general@postgresql.org; [EMAIL PROTECTED]
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql query

Just a thought:

Why not create a temporary table from your dynamic query and use that
temp table in the FOR loop?

Thnx,
Gnanavel
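A sketch of that idea inside the function body (variable and table names assumed from the thread, and vSql is assumed to hold the query without a trailing semicolon). All references to the temp table go through EXECUTE so that no cached plan outlives the table between calls, a known gotcha in plpgsql of this era:

```sql
-- Materialize the dynamic query, then loop over the materialized rows.
EXECUTE 'CREATE TEMP TABLE tmp_orderreport AS ' || vSql;
FOR vResult IN EXECUTE 'SELECT * FROM tmp_orderreport' LOOP
    RETURN NEXT vResult;
END LOOP;
EXECUTE 'DROP TABLE tmp_orderreport';
RETURN;
```

For large result sets this doubles the I/O compared to iterating the query directly, but it can be handy when the rows need to be scanned more than once.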

- Original Message -
From: "Sathish Duraiswamy" <[EMAIL PROTECTED]>
To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Cc: "Willy-Bas Loos" <[EMAIL PROTECTED]>, pgsql-general@postgresql.org
Sent: Wednesday, August 13, 2008 9:40:51 AM GMT +05:30 Chennai, Kolkata, 
Mumbai, New Delhi
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql query


Murali, 

I tried the same method, using a FOR ... LOOP with the EXECUTE command in a
function similar to the one you described, and got the same error message.

When I used RAISE INFO to check the function, I got the set of records as a
result. But in the end it throws the same error.

Can someone help with this issue?

Regrds 
sathish 





Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi,

 

I have changed my procedure like below,

 

CREATE OR REPLACE FUNCTION fun_orderreport(pmorderid integer, pmcompanyid
integer, pmeventid integer)

  RETURNS SETOF orderreport AS

$BODY$

DECLARE 

vResult ORDERREPORT%ROWTYPE;

vSql TEXT = '

SELECT

ORDR.ORDERSID  AS OrderID,

ORDR.INITIATED   AS Order_Date,

COMP.COMPANYNAME   AS Company_Name,

EVNT.EVENTNAME  AS Event_Name

FROM

ORDERS ORDR 

INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY

INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID

WHERE 

ORDR.EVENT = EVNT.EVENTID ';

BEGIN

IF $1 IS NOT NULL THEN

vSql = vSql ||' AND ORDR.ORDERSID = '|| $1;

END IF;



IF $2 IS NOT NULL THEN

vSql = vSql ||' AND COMP.COMPANYID = '|| $2;

END IF;

 

IF $3 IS NOT NULL THEN

vSql = vSql ||' AND EVNT.EVENTID = '|| $3;

END IF;

vSql = vSql || ';';

vSql = '';  -- note: this overwrites the query just built; the working version omits this line

-- DEALLOCATE PREPARE vSql;

FOR vResult IN EXECUTE vSql

LOOP

  RETURN NEXT vResult;

END LOOP;

RETURN;  

END $BODY$

  LANGUAGE 'plpgsql' VOLATILE;

 

 

SELECT fun_orderreport(NULL,NULL,NULL);

 

But the error I get when I execute,

 

ERROR:  cannot open multi-query plan as cursor

CONTEXT:  PL/pgSQL function "fun_orderreport" line 30 at for over execute
statement

 

** Error **

 

ERROR: cannot open multi-query plan as cursor

SQL state: 42P11

Context: PL/pgSQL function "fun_orderreport" line 30 at for over execute
statement

 

 

 


[GENERAL] Re: different results based solely on existence of index (no, seriously)

2008-08-12 Thread [EMAIL PROTECTED]
On Aug 12, 8:17 am, [EMAIL PROTECTED] (ries van Twisk) wrote:
> On Aug 12, 2008, at 3:53 AM, Willy-Bas Loos wrote:
>
>
>
> > reproduced it on:
> > "PostgreSQL 8.3.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
> > 4.2.3 (Ubuntu 4.2.3-2ubuntu7)"
> > 3 rows with index, 2 rows without.
>
> > can not reproduce it on:
> > - "PostgreSQL 8.1.10 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
> > 4.1.3 20070831 (prerelease) (Ubuntu 4.1.2-16ubuntu1)"
> > - "PostgreSQL 8.2.6 on i686-pc-mingw32, compiled by GCC gcc.exe  
> > (GCC) 3.4.2 (mingw-special)"
> > - "PostgreSQL 8.2.7 on i486-pc-linux-gnu, compiled by GCC cc (GCC)  
> > 4.2.3 (Ubuntu 4.2.3-2ubuntu4)"
> > they always return 2 rows.
>
> > hth
> > WBL
>
> reproduced on:
> PostgreSQL 8.3.1 on i386-apple-darwin9.4.0, compiled by GCC i686-apple-
> darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5465)
> 3 rows with index, 2 rows without
>
> Ries

FWIW, reproduced (3 rows w/index, 2 w/o) on:
  PostgreSQL 8.3.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
4.1.2 (Ubuntu 4.1.2-0ubuntu4)



Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Please understand.

 

I know I have to use a FOR ... LOOP for my query. But it is not a normal
one; I build it dynamically.

 

 

From: Willy-Bas Loos [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 5:46 PM
To: [EMAIL PROTECTED]
Cc: Sathish Duraiswamy; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

>Iam getting just the first record from the recordset
That's because you use SELECT INTO; you should use FOR rec IN ... LOOP.

Here's sample code from
http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html

CREATE TABLE test (textcol varchar(10), intcol int);
INSERT INTO test VALUES ('a', 1);
INSERT INTO test VALUES ('a', 2);
INSERT INTO test VALUES ('b', 5);
INSERT INTO test VALUES ('b', 6);

CREATE OR REPLACE FUNCTION ReturnNexting(pText Text) RETURNS SETOF test AS
$$
DECLARE
rec RECORD;
BEGIN
FOR rec IN SELECT * FROM test WHERE textcol = pText LOOP
RETURN NEXT rec;
END LOOP;
RETURN;
END;
$$
LANGUAGE plpgsql;

SELECT * FROM ReturnNexting('a');




 

 



Re: [GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi Sathish,

 

Thanks for your reply.

 

But I have created the type to return the record set from my join query
using a stored function.

 

I am not able to create a table with those details, since those details
are already available in different tables.

One more thing: I am clear about your result set using FOR ... LOOP, but
mine is not a normal query; it is built dynamically based on my input
parameters.

Please take a look and tell me if anything about my query is unclear.

 

 

Thanks,

MuraliDharan V

 

From: Sathish Duraiswamy [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 12, 2008 4:10 PM
To: [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Need help returning record set from a dynamic sql
query

 

Dear murali,

We use PostgreSQL for our ERP software. We found CREATE TYPE useful for
creating a new data type, similar to creating a domain.
For example:
   CREATE TYPE date_condition AS (
   condition_id int,
   from_date    date,
   to_date      date);

Instead, you can CREATE TABLE (ORDERSID INTEGER, ORDERDATE
TIMESTAMP, COMPANYNAME TEXT, EVENTNAME CHARACTER VARYING);

You can then have the result returned as a set of records.

Feel free to comment on it

Regrds
sathish


 



[GENERAL] Need help returning record set from a dynamic sql query

2008-08-12 Thread [EMAIL PROTECTED]
Hi,

I am new to PostgreSQL, and I need to create a dynamic SQL query that
returns a record set based on my input parameters. I looked through some
of the documentation and worked out the following ...

My PostgreSQL version locally: 7.4
My PostgreSQL version in development: 8.2


-- DROP TYPE ORDERREPORT;
CREATE TYPE ORDERREPORT AS (ORDERSID INTEGER,ORDERDATE TIMESTAMP,COMPANYNAME
TEXT,EVENTNAME CHARACTER VARYING);


-- DROP FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN pmCompanyId
INTEGER,IN pmEventId INTEGER);
CREATE OR REPLACE FUNCTION FUN_ORDERREPORT(IN pmOrderId INTEGER,IN
pmCompanyId INTEGER,IN pmEventId INTEGER) RETURNS SETOF ORDERREPORT AS
'
DECLARE
vResult ORDERREPORT%ROWTYPE;
vSql TEXT = ''
SELECT
   ORDR.ORDERSID   AS OrderID,
   ORDR.INITIATED   AS Order_Date,
   COMP.COMPANYNAME AS Company_Name,
   EVNT.EVENTNAME  AS Event_Name
FROM
   ORDERS ORDR
   INNER JOIN COMPANY COMP ON COMP.COMPANYID = ORDR.COMPANY
   INNER JOIN EVENT EVNT ON EVNT.COMPANY = COMP.COMPANYID
WHERE
   ORDR.EVENT = EVNT.EVENTID '';
BEGIN
   IF $1 IS NOT NULL THEN
  vSql = vSql ||'' AND ORDR.ORDERSID = ''|| $1;
   END IF;

   IF $2 IS NOT NULL THEN
  vSql = vSql ||'' AND COMP.COMPANYID = ''|| $2;
   END IF;

   IF $3 IS NOT NULL THEN
  vSql = vSql ||'' AND EVNT.EVENTID = ''|| $3;
   END IF;
   EXECUTE vSql INTO vResult;
   RETURN NEXT vResult;
END '
LANGUAGE 'PLPGSQL';

Result:

events=# SELECT * FROM FUN_ORDERREPORT(102881,NULL,NULL);
 ordersid |         orderdate          | companyname | eventname
----------+----------------------------+-------------+-----------
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)

events=# SELECT * FROM FUN_ORDERREPORT(NULL,NULL,NULL);
 ordersid |         orderdate          | companyname | eventname
----------+----------------------------+-------------+-----------
   102881 | 2006-02-10 14:49:53.002653 | PhotoMania  | Photos
(1 row)


I am getting just the first record from the record set.
Can someone tell me how I can return all query results from a dynamic SQL
query?


-- 
Thanks,
MuraliDharan V


[GENERAL] PostgreSQL: Database schema for messaging service (similar to facebook)

2008-08-10 Thread [EMAIL PROTECTED]
Hello,

I'm trying to create a messaging service, like the one in Facebook. Basically,
a member can write messages to another member. It will have three main
functions: one, basic messaging to another member; two, notifications
from the system to a group of members (a list of members); three,
update reports to a group of members (a list of members).

I was looking over the net for examples and found this schema:
(http://lh4.ggpht.com/arrival123/SJ-XLk2257I/Bhs/eRY9Nd4VLkw/
facebook_emails_model.gif)

But it's a little odd to me, because I don't need to extend from the
My_Email_Logins table; actually, I do not fully understand why the
schema's creator decided to extend the four tables from
My_Email_Logins.

Also, the schema will have duplicate data for Subject and Message
within the four tables (Mail_Boxes, Sent_Messages, Notifications, and
Updates)

My current schema, which is a work in progress looks like this:




CREATE SEQUENCE tm_Messages_MessageID_seq;
CREATE TABLE tm_Messages (
   MessageID  integer  NOT NULL PRIMARY
KEY DEFAULT nextval('tm_Messages_MessageID_seq'),
   SentDate   timestamp   NOT NULL
);


CREATE SEQUENCE tm_ReceivedMessages_ReceivedMessageID_seq;
CREATE TABLE tm_ReceivedMessages (
   ReceivedMessageID  integer  NOT NULL PRIMARY
KEY DEFAULT nextval('tm_ReceivedMessages_ReceivedMessageID_seq'),
   Username   varchar(256)   NOT NULL
default '',
   Subjectvarchar(128)   NOT NULL,
   Body   text   NOT NULL,
   MessageReadboolean  NOT NULL default
'0'
);



CREATE SEQUENCE tm_SentMessages_SentMessageID_seq;
CREATE TABLE tm_SentMessages (
   SentMessageID  integer  NOT NULL PRIMARY
KEY DEFAULT nextval('tm_SentMessages_SentMessageID_seq'),
-- MessageID  integer  NOT NULL REFERENCES
tm_Messages (MessageID),
   ToUsername varchar(256)   NOT NULL
default '',
   Subjectvarchar(128)   NOT NULL,
   Body   text   NOT NULL
);



CREATE SEQUENCE tm_Notifications_NotificationID_seq;
CREATE TABLE tm_Notifications (
   NotificationID integer  NOT NULL PRIMARY
KEY DEFAULT nextval('tm_Notifications_NotificationID_seq'),
--  MessageID  integer  NOT NULL
REFERENCES tm_Messages (MessageID),
   ToUsername varchar(256)   NOT NULL
default '',
   NotificationType   integer   NOT NULL,
   FromUsername   varchar(256)   NOT NULL
default '',
   Subjectvarchar(128)   NOT NULL,
   Body   text   NOT NULL,
   NotificationCheckedboolean  NOT NULL default
'0'
);



CREATE SEQUENCE tm_Updates_UpdateID_seq;
CREATE TABLE tm_Updates (
   UpdateID   integer  NOT NULL PRIMARY
KEY DEFAULT nextval('tm_Updates_UpdateID_seq'),
--  MessageID  integer  NOT NULL
REFERENCES tm_Messages (MessageID),
   ToUsername varchar(256)   NOT NULL
default '',
   FromUsername   varchar(256)   NOT NULL
default '',
   Fullname   varchar(128)   NOT NULL
default '',
   Subjectvarchar(256)   NOT NULL,
   Body   text   NOT NULL,
   MessageReadboolean  NOT NULL default
'0'
);



It's very similar to the schema in the picture, but with the
My_Email_Logins table renamed to tm_Messages, holding just the
date.

Can someone give me pointers or share their knowledge of a messaging
system they have implemented? I also found that this kind of schema, although
common, is hard to google for, because the query strings are ambiguous.

PS. I hope my code will help someone out there as well.
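One common normalization, offered purely as a sketch (these names and types are assumptions, not from the linked diagram): store the subject and body once per message, and track per-recipient state in a separate table, so group notifications and updates do not duplicate the body.

```sql
CREATE TABLE tm_messages (
    message_id    serial PRIMARY KEY,
    from_username varchar(256) NOT NULL,
    msg_type      integer NOT NULL,        -- e.g. 1 = direct, 2 = notification, 3 = update
    subject       varchar(256) NOT NULL,
    body          text NOT NULL,
    sent_date     timestamp NOT NULL DEFAULT now()
);

-- One row per recipient; a group send inserts many rows here but one row above.
CREATE TABLE tm_message_recipients (
    message_id   integer NOT NULL REFERENCES tm_messages (message_id),
    to_username  varchar(256) NOT NULL,
    message_read boolean NOT NULL DEFAULT false,
    PRIMARY KEY (message_id, to_username)
);
```

An inbox is then a join filtered on to_username, and a sent folder a scan of tm_messages filtered on from_username, with no Subject/Body duplication across four tables.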



Re: [GENERAL] mac install question

2008-07-23 Thread [EMAIL PROTECTED]

> Date: Wed, 23 Jul 2008 14:12:45 -0400
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] mac install question
> CC: pgsql-general@postgresql.org
> 
> On Wed, Jul 23, 2008 at 1:58 PM, [EMAIL PROTECTED]
> <[EMAIL PROTECTED]> wrote:
> >
> > Please excuse my lack of mac knowledge.  I installed postgresql 8.3 using 
> > the mac os x 1 click installer onto my brand new powerbook.  The install 
> > appeared to go very smooth.  If I go to Postgresql under Applications it 
> > appears as if I can start and stop postgres and open pgadmin.  I even 
> > created a test database under pgadmin with no errors.   So everything 
> > appears cool but...
> >
> > I can't seem to run any of the postgresql commands from a shell.  If I open 
> > a terminal and try to run psql or createdb or any of the commands I get 
> > this error:
> >
> > /Library/PostgreSQL/8.3/bin/createdb test
> > dyld: Library not loaded: 
> > /Users/buildfarm/pginstaller/server/staging/osx/lib/libpq.5.dylib
> 
> That would seem to indicate that the installer doesn't set things up
> properly for command-line access.  You should talk to whoever created
> it, since it isn't (to my knowledge) part of the official Postgres
> distribution.
> 
> -Doug


Well I got it from a link on postgresql.org.  Of course it does say that it is 
a beta installer.

http://www.postgresql.org/download/macosx
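Until the installer is fixed, one workaround for this class of dyld error is to point the loader at the installed copy of libpq. This is a sketch; the exact paths are assumptions about the one-click installer's layout:

```shell
# The binaries were linked against a build-machine path that doesn't exist here,
# so tell dyld where to fall back to for libpq (adjust the path if your layout differs).
export DYLD_FALLBACK_LIBRARY_PATH=/Library/PostgreSQL/8.3/lib
/Library/PostgreSQL/8.3/bin/psql -U postgres -l
```

Putting the export line in ~/.profile makes the command-line tools work in every new terminal session.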



[GENERAL] mac install question

2008-07-23 Thread [EMAIL PROTECTED]

Please excuse my lack of Mac knowledge.  I installed PostgreSQL 8.3 using the 
Mac OS X one-click installer onto my brand new PowerBook.  The install appeared 
to go very smoothly.  If I go to PostgreSQL under Applications, it appears as if I 
can start and stop Postgres and open pgAdmin.  I even created a test database 
under pgAdmin with no errors.   So everything appears cool but...

I can't seem to run any of the postgresql commands from a shell.  If I open a 
terminal and try to run psql or createdb or any of the commands I get this 
error:

/Library/PostgreSQL/8.3/bin/createdb test
dyld: Library not loaded: 
/Users/buildfarm/pginstaller/server/staging/osx/lib/libpq.5.dylib
  Referenced from: /Library/PostgreSQL/8.3/bin/createdb
  Reason: image not found
Trace/BPT trap

OR here is me running the psql command

./psql
dyld: Library not loaded: 
/Users/buildfarm/pginstaller/server/staging/osx/lib/libpq.5.dylib
  Referenced from: /Library/PostgreSQL/8.3/bin/./psql
  Reason: image not found
Trace/BPT trap

The directory /Users/buildfarm doesn't even appear to exist.  

I don't have much experience using a Mac, much less the Mac terminal, so I
assume that since everything works from the Applications/PostgreSQL 8.3 GUI,
I must just be doing something wrong.

Any help appreciated, thanks.


Re: [GENERAL] Out of memory with large result set

2008-07-15 Thread [EMAIL PROTECTED]

> Try copy (query) to stdout.
>
> For me, psql sits at 4.9mb ram on a 3x10^16 row query.
>
> klint.
>

Thanks Klint.
Can I use a 'copy to' for a query? I thought I could only do 'copy
table to stdout'.
I will do some tests tomorrow and keep you informed.

Olivier
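Since 8.2, COPY does accept an arbitrary SELECT, so the whole result can be streamed from psql without a cursor and without buffering it client-side. A sketch (the database and query are made up):

```shell
# COPY (query) TO STDOUT streams rows as they are produced,
# so psql's memory use stays flat regardless of result size.
psql -d hugedb -c "COPY (SELECT id, payload FROM big_table WHERE flag) TO STDOUT" \
  | gzip > result.tsv.gz
```

The output is tab-separated COPY text format rather than psql's aligned table, which is usually what a downstream Unix pipeline wants anyway.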



[GENERAL] Out of memory with large result set

2008-07-14 Thread [EMAIL PROTECTED]
Hello,

I am doing a query via psql on a huge database, and I want to have its
output piped to other Unix processes.
As the result set is very big, I get: "out of memory for query
result".
How can I manage that without resorting to cursors, as I do not want
to change the SQL query?

With MySQL I have the same issue, but by using mysql --quick I get
what I want.
Is there something equivalent in PostgreSQL? Or should I use
another client?

Thanks,

Olivier
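One psql-side option, available from 8.2 on: the FETCH_COUNT variable, which makes psql fetch the result in batches through a cursor behind the scenes, so the client never holds the whole result set. A sketch with a made-up database and query:

```shell
# Feed the query on stdin so FETCH_COUNT applies; rows are retrieved
# 10,000 at a time instead of being buffered all at once.
echo "SELECT * FROM big_table;" \
  | psql -d hugedb --set=FETCH_COUNT=10000 \
  | sort | uniq -c
```

This keeps the query itself unchanged, which was the constraint above; the only visible difference is that psql cannot compute column widths for the whole result, so the output is formatted batch by batch.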



[GENERAL] manual Installation for thread safety

2008-07-09 Thread [EMAIL PROTECTED]
In order to use the --enable-thread-safety build switch I need to do a
manual build... UNLESS there is a way of doing this from the RPM command
line, which I have not been able to find.  Where do I get 8.2.5-1?  This
version came with my installation of Fedora 8 so I presume it is the version
I should use to ensure complete compatibility with the other software on the
system.

Thanks for your help!
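For reference, a minimal source build with the switch enabled looks like this; the version and prefix are assumptions, and should be matched to whatever the distribution shipped:

```shell
# Build 8.2.5 from source with thread-safe client libraries (libpq, ecpg).
tar xjf postgresql-8.2.5.tar.bz2
cd postgresql-8.2.5
./configure --prefix=/usr/local/pgsql --enable-thread-safety
make
sudo make install
```

Installing under a separate prefix like /usr/local/pgsql keeps the hand-built client libraries from clashing with the RPM-managed files in /usr.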

-- 
Best Regards,
Lynn P. Tilby
Ph: 480 632-8635
[EMAIL PROTECTED]


[GENERAL] postgres generates too much processes per minute

2008-06-19 Thread [EMAIL PROTECTED]
Hello,

I have the problem that Postgres is starting and stopping several (up
to 4) processes per minute, so that the error log on Windows is
filling up, with more than 14 entries every minute.

Does someone know how to reduce the starting and stopping of so many
processes? Is there a variable or setting for this?

regards



Re: [GENERAL] Setting up phppgadmin under https/ssl (Apache)

2008-05-19 Thread [EMAIL PROTECTED]
Wow...so does no one use phppgadmin on servers that they are connected
to via the internet?  Or if you do, how do you go about securing it so
that no one snoops your password?

Thanks


On May 16, 12:34 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> Hi...
>
> This is as much an apache question as anything else, but I think it's
> appropriate here.
>
> I've been using phppgadmin on my local machine.  Now I've installed it
> on a remote server running Ubuntu lts 8.04.
>
> I figured I'd try and put it under ssl/https under Apache (mod_ssl).
> I've created a test certificate, but I'm having trouble figuring out
> exactly how to get phppgadmin working under SSL.  Can someone step me
> through the process?
>
> Thanks




[GENERAL] Setting up phppgadmin under https/ssl (Apache)

2008-05-16 Thread [EMAIL PROTECTED]
Hi...

This is as much an apache question as anything else, but I think it's
appropriate here.

I've been using phppgadmin on my local machine.  Now I've installed it
on a remote server running Ubuntu lts 8.04.

I figured I'd try and put it under ssl/https under Apache (mod_ssl).
I've created a test certificate, but I'm having trouble figuring out
exactly how to get phppgadmin working under SSL.  Can someone step me
through the process?

Thanks
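For anyone searching later: one common way to do this with mod_ssl is a dedicated HTTPS virtual host that serves phppgadmin. A rough sketch (certificate paths and the phppgadmin install location are assumptions for a typical Ubuntu setup):

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key

    # Serve phppgadmin only through this SSL vhost
    Alias /phppgadmin /usr/share/phppgadmin
</VirtualHost>
```

With this in place, make sure no plain-HTTP Alias for phppgadmin remains, so the login form is only reachable over https.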



[GENERAL] Unacceptable postgres performance vs. Microsoft sqlserver

2008-04-15 Thread [EMAIL PROTECTED]
Let me just start off by saying that I *want* to use postgresql.
That's my goal.  I do not want to use SQLServer.  I'm posting this
message not to slam postgres, but to ask for someone to help me figure
out what I'm doing wrong.

I've used postgres for several years as the backend to web
applications. I'm not entirely new to it, but I've never needed to
bother with performance tuning.  Netflix is running a contest in which
they will give you a set of movie ratings and you try to develop an
algorithm to predict future ratings.  I've imported this data into a
postgresql database, and the performance is abysmal.  Here's my setup:

Core 2 Quad with 4GB RAM and two SATAII hard drives in RAID 0

I install debian linux and postgresql 8.1 (this is the latest version
that is available through the debian package manager).  I import the
Netflix data into a table with the following characteristics:

Create table rating (movieid int, userid int, rating int4, ratingdate
date)

There are 180 million rows.  I've done the import 3 times, it takes on
average 815 seconds.  I'm not too worried about that.  Now for a test,
I run the query, select count(*) from rating;

This takes an average of 172 seconds.  Call it three minutes.

Now, on the same machine, I install windows XP and Microsoft SQL
server 2000.  The import took 742 seconds.  The count(*) query took 22
seconds.

22 seconds.  What's going on?

Another test.  In postgres I added an index to the userid column and
then counted distinct userids.  The average run time over three
queries was 4666 seconds, or 78 minutes.  Unbelievable.

On SQL Server, with *no* index, the same query takes on average 414
seconds, or about 7 minutes.  Ten times faster!

I'm sure someone will flame me and say that I'm a worthless noob and
if only I was smart I would know what I'm doing wrong.  So let me just
repeat: I want to use postgres.  That's my goal.  Please tell me what
I can do to make the performance of this acceptable.  It's not a
complicated setup.  One table.  A couple of count(*) queries.
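One note on count(*) specifically: because of PostgreSQL's MVCC design, an unqualified count(*) must scan the whole table, so it is a poor benchmark. If an estimate is acceptable, the planner's statistics can be read instead; a quick sketch (run ANALYZE first so the estimate is current):

```sql
ANALYZE rating;

-- reltuples is the planner's row-count estimate, maintained by ANALYZE/VACUUM
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'rating';
```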




Re: [GENERAL] Unacceptable postgres performance vs. Microsoft sqlserver

2008-04-15 Thread [EMAIL PROTECTED]
On Apr 14, 12:34 pm, Chris Browne <[EMAIL PROTECTED]> wrote:
> If the entire purpose of your application is to run COUNT(*)

haha.  no.  I see your point.

I'll be doing statistical functions on group-by's.  So I'll go back
and give it another try with queries like those.  And I'll use some of
the functions that other people here have suggested.  I hope this
works!

thanks.



Re: [GENERAL] Unacceptable postgres performance vs. Microsoft sqlserver

2008-04-15 Thread [EMAIL PROTECTED]
On Apr 14, 2:17 pm, [EMAIL PROTECTED] ("David Wilson") wrote:
> For instance, your count of distinct userids is probably not using the
> index you just created. If it still isn't using it after you ANALYZE
> the table, try rewriting the query using group by (select count(*)
> from (select userid from mytable group by userid) tmp). I recently had
> a similar performance issue on a 75m row table, and the above helped.
>
thanks.  There's a lot of good info in your post.  I'll give it a try.
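For the archives, the rewrite David suggests, spelled out against the table from this thread:

```sql
-- Often faster than SELECT count(DISTINCT userid) on older planners,
-- because the inner GROUP BY can use a hash or the index on userid:
SELECT count(*)
FROM (SELECT userid FROM rating GROUP BY userid) AS tmp;
```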



Re: [GENERAL] High Availability / Replication with Sequoia

2008-04-09 Thread [EMAIL PROTECTED]
Thanks for the help guys,

I should clear up a little what I am trying to achieve I think.

The primary users of this db and application will be located in an
office, each user with a desktop machine, all networked. They need to
work with this DB in a fairly heavy kind of way, in so far as to say
that 80% of their day will be working with the application and the db.

The primary source of data will / must be located on a database server
that is actually in a different facility. It is possible to reach this
server from the office, and is done so daily, however the speed of
connection is very slow and is frequently disconnected - in short
unreliable. To implement an extension of this 'primary' db with the
associated hardware and licensing costs at the local site is beyond
what the business is willing to pay. It also goes directly against the
'structure' that has been laid out by the IT group in that they want
all the db servers in a single location - regardless of business
impact they want to make their budget savings.

So, what I want to do is to satisfy the IT group by keeping a 'master'
copy of the db on their off-site facility, which in fact will be
populated from a source system sitting on my desk. The ETL tools will
be used for creating a completely (or as near as possible) automated
system for populating the 'master' that is offsite.

What I wanted to do next was to have Postgres installed on each of the
local users machines, along with the application they require, and run
them as a cluster - if one db goes down or one machine dies the client
software / app can still connect to the cluster and keep functioning
from another machine. I could then have the defective machine attended
to and if necessary re-built... In short the ability to work would not
be interrupted. Or at least that's the hope.

These desktops shut down each night too, as the staff leave to go
home. There is no possibility to install a server locally
(unfortunately). So with this in mind I was hoping that the
'automatic' nature of Sequoia would allow for recovery / updating from
the master or others in the cluster and keep all the local db's up to
date without the users having to do anything.

There is also a desire to have a mobile copy of this db / app for some
of the mobile users that come in to the office. They wont be able to
update while external due to the way the network is designed, but once
back in the office they could do this. I was hoping once again to keep
this as effortless as possible for the users. I am still hoping that
this may be achievable.

In summary, what we are looking at is an install of Postgres on each
machine, a copy of Tomcat running the application, and maybe Sequoia
or Slony or some combination of both. ETL is handled separately (by
me) and the users are supposed to just be able to get on with their
work.

Do you think this is achievable, or am I up the creek and reaching too
far here?

Cheers

The Frog (you caught me out - its not my real name!)



[GENERAL] High Availability / Replication with Sequoia

2008-04-08 Thread [EMAIL PROTECTED]
Hi Guys,

I have been testing / working with Postgres for a work project, and so
far I am really impressed with this DB system. Takes a little getting
used to, but I am really beginning to love it.

I am looking now at a scenario that does not seem to be a native
ability of Postgres, but might possibly be overcome with Sequoia. I am
hoping that there exists the possibility of using Sequoia to replicate
a DB between / among a number of machines in the office, some of which
are not always connected to the lan.

The scenario is like this. On each of the machines I would want to
have Postgres installed and only to accepting connections from the
local machine. Also on each of these machines would be running Tomcat
or similar hosting the required application (app to connect to local
Postgres installation). Sequoia would then be used as a form of
replication from machine to machine to ensure that the database is
kept up to date.

The application does not allow writeback to the db, so for all intents
and purposes you can consider it read only.

To keep the applications database up to date with new information I
would be using ETL applications like Spoon / PDI. This will be done to
an as yet undecided 'point of origin', but it is probably safe to say
that it will be a commercial db server somewhere on our network. The
latency from our network to the 'Data Warehouse' (read as badly
managed dog's breakfast) is huge. Suffice it to say the desire for local
db's is high, as is the desire to make the application portable for
our sometimes connected laptop users.

Does anyone have any experience or comments that they would like to
share about this sort of scenario? Its a fairly big jump from just
having Postgres running on my laptop for dev purposes to pushing this
to multiple machines and I would really appreciate any feedback you
guys might have.

Thanks in advance

The Frog



Re: [GENERAL] Using PL/R for predictive analysis of data.

2008-03-17 Thread [EMAIL PROTECTED]
Hi Sam,

Thank you for the suggestions. They make perfect sense to me. I
appreciate your time and input. The lack of optimiser usage was
something that I had not considered, and I thank you for making me
aware of it.

Cheers

The Frog



Re: [GENERAL] Using PL/R for predictive analysis of data.

2008-03-14 Thread [EMAIL PROTECTED]





[GENERAL] Using PL/R for predictive analysis of data.

2008-03-12 Thread [EMAIL PROTECTED]
Hi Everyone,

I am wanting to ask some opinions on implementing PL/R into V8.3 on
Win32. I have a need to be able to perform some relatively demanding
statistical functions as the basis of producing data for reports.

In short R appears to have more than enough capability to do the job
(from a statistical perspective); however, there doesn't seem to be that
much discussion on using the PL/R implementation, or for that matter
tutorials on using PL/R.

What I would like to know is:
1/ Is it possible to create a view that has its columns based on the
output of a PL/R function?

2/ Are there special considerations for the source data?

3/ Has anyone any experience with NonLinear Regression Analysis using
PL/R to predict future outcomes as a resultset? (ie/ the equivalent of
output from an SQL SELECT statement)

Any advice or counsel would be greatly appreciated.

I am not super familiar with Postgres, but I am familiar with other
large scale databases as well as some desktop ones, so don't be afraid
to throw technical answers if you need to :-)

Greatly appreciated

The Frog
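Regarding question 1: a set-returning function can generally be wrapped in a view, and a PL/R function that returns a result set should work the same way. A hypothetical sketch (the function name and column list below are made up for illustration, not part of PL/R itself):

```sql
-- Assumes r_predict_sales() is a PL/R set-returning function you have defined
CREATE VIEW predicted_sales AS
SELECT *
FROM r_predict_sales()
     AS t(period date, predicted numeric);
```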



[GENERAL] PostgreSQL vs. MySQL benchmarks on SMP FreeBSD 7.0

2008-03-05 Thread [EMAIL PROTECTED]

Apologies if this has already been announced -

http://www.scribd.com/doc/551889/Introducing-Freebsd-70

a presentation of the SMP in FreeBSD 7.0 using PostgreSQL and MySQL to
produce benchmarks.

Notable quotes -

a) MySQL degrades after utilizing all CPUs, while PostgreSQL does not
(the explanation is that MySQL has a scalability problem).

b) PostgreSQL is in general 35%-45% faster.

I can't tell if it is all true; I just wanted to make it public, if it
hasn't been already.

Iv


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


[GENERAL] win32: how to backup (dump does not work)

2008-02-26 Thread [EMAIL PROTECTED]
Hi,

I urgently need a way to simply backup the database's data on win32
(e.g. the 'data' directory).

As discussed earlier I can't use dump/dumpall since my data needs
persistent tableoids which, however, are not the same after a restore.

AFAIK stopping the server, zipping the data dir, and restarting the server
creates a zip file which is not easily portable to other computers due
to some NTFS file system permission problems.

Does anyone have another idea how to backup (besides disk images)?

Thank You
Felix





Re: [GENERAL] Function problem

2008-02-22 Thread [EMAIL PROTECTED]
I solved my problem :)

the problem is in the query that calls my function:

select * from calcolo_inventario('26','2008-02-22','05')
where giacenza > 0

because the result set has a negative row that doesn't appear on screen

Thank you very much to all of you
Enrico



[GENERAL] uninstalling tsearch2 error: "gin_tsvector_ops" does not exist for access method "gin"

2008-02-19 Thread [EMAIL PROTECTED]
mmands ignored until end of
transaction block
psql:/usr/local/pgsql/share/contrib/uninstall_tsearch2.sql:49: ERROR:
current transaction is aborted, commands ignored until end of
transaction block
[the identical error repeats for script lines 50 through 73]
ROLLBACK



Re: [GENERAL] questions about very large table and partitioning

2008-02-19 Thread [EMAIL PROTECTED]

Hi,

Thank you guys.


Enrico Sirola wrote:


Il giorno 18/feb/08, alle ore 17:37, [EMAIL PROTECTED] ha scritto:

1) PostgreSQL only supports partitioning by inheritance, and rules have to
be created for each child table; this will result in *a lot of* rules if
the number of child tables is large.

Are there some smart ways to avoid this kind of mess?


you can obtain the same result using a trigger, but you must replace the 
trigger function every time you add/remove a partition. The trigger also 
has an additional feature: you can use COPY IN on the parent table, 
whereas COPY bypasses the rules subsystem
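A minimal sketch of this trigger approach, adapted from the partitioning example in the PostgreSQL documentation (table and column names here are placeholders):

```sql
CREATE OR REPLACE FUNCTION parent_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Route each row to the child table matching its date range
    IF NEW.logdate >= DATE '2008-01-01' AND NEW.logdate < DATE '2008-02-01' THEN
        INSERT INTO child_y2008m01 VALUES (NEW.*);
    ELSIF NEW.logdate >= DATE '2008-02-01' AND NEW.logdate < DATE '2008-03-01' THEN
        INSERT INTO child_y2008m02 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'date out of range: fix parent_insert_trigger()';
    END IF;
    RETURN NULL;  -- row is stored in the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_redirect
    BEFORE INSERT ON parent
    FOR EACH ROW EXECUTE PROCEDURE parent_insert_trigger();
```

As noted above, the trigger function must be regenerated whenever a partition is added or removed.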



2) I have added check constraints for child tables. According to the
documents, "query performance can be improved dramatically for certain
kinds of queries". Does this mean that the query can be improved only if
the query contains the constrained column? What will happen if the
constrained column doesn't appear in the WHERE clause?


if the constrained column doesn't appear in the where clause, then the 
query is executed against all partitions



3) Is partition by inheritance the only appropriate way to organize very
large table in PostgreSQL ?


don't know. I think partitioning is useful when you perform partitions 
"rotation" e.g. when you periodically delete old rows and insert new 
ones (think about log files). In this case you should periodically 
perform vacuums to ensure that the dead rows gets recycled otherwise the 
DB will continue to grow. Partitions help a lot in this case (also 
autovacuum does)
I'd try to tune autovacuum for your workload, and only at a second time 
I'd try to partition the tables.
There has been some discussion on partitioning in this list in the past. 
Try also to take a look at the archives for last june or july

Bye,
e.




I have tried to do partition with inheritance and rules. First, I created 
master table and many child table, and also the rules for insert, delete and 
update. Then I do some select, insert, delete and update operations on the 
master to test if it works. However, the insert and delete work very well, but 
the update operation never seems to return. I tried several times, could not 
wait for it to return, and killed the process.


I tried the commands manually, and it seemed very weird.
The delete command:
DELETE FROM master_table WHERE id='' AND data_type='aaa'
and select command with the same condition expression:
SELECT * FROM master_table WHERE id='' AND data_type='aaa'
both return without delay.
But the update command with the same condition expression:
UPDATE master_table set data_value='somevalue' WHERE id='' AND 
data_type='aaa'
didn't return in 5 minutes.

Every table has an index and a constraint on column "id". I have already set 
constraint_exclusion=true.


Why the update command runs so slow ?

Thanks a lot.



Re: [GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-18 Thread [EMAIL PROTECTED]

Thanks Bill.

I discovered that 8.3 supports a UUID datatype. Thus a CREATE DOMAIN 
uniqueidentifier AS uuid works fine for the aliasing. There are no SQL 
Server style functions for UUID creation but I can handle this in the client 
code. Works a treat.


Jerry 






[GENERAL] questions about very large table and partitioning

2008-02-18 Thread [EMAIL PROTECTED]

Hi all,

I'm trying to create a very large table with more than 0.6 billion rows,
which is really a big number, so after some googling I think I have to
create partitioned tables.

However, I have a few questions about partitioning in PostgreSQL.

1) PostgreSQL only supports partitioning by inheritance, and rules have to
be created for each child table; this will result in *a lot of* rules if
the number of child tables is large.

Are there some smart ways to avoid this kind of mess?

2) I have added check constraints for child tables. According to the
documents, "query performance can be improved dramatically for certain
kinds of queries". Does this mean that the query can be improved only if
the query contains the constrained column? What will happen if the
constrained column doesn't appear in the WHERE clause?

3) Is partition by inheritance the only appropriate way to organize very
large table in PostgreSQL ?

Thanks in advance.





[GENERAL] Analogue to SQL Server UniqueIdentifier?

2008-02-16 Thread [EMAIL PROTECTED]
Hi

My porting experiment has encountered the SQL Server UniqueIdentifier problem. 
I can see one or two suggestions about this have been made over the years but 
I'd like to try and stay close to the original. So:

I'm wondering if I can use a combination of a domain 'hack' for syntactic 
compatibility and an externally implemented function to handle generation.

More specifically, given a table defined thus:

CREATE TABLE jazz(
UUID UniqueIdentifier DEFAULT newIdentifier(),
rootname VARCHAR(255),
data_source VARCHAR(1024),
date_created DATETIME DEFAULT GETDATE())

1. Can I handle the UniqueIdentifier datatype via a domain that aliases 
UniqueIdentifier to char(X) (for example) ? This seems to work fine for the 
DATETIME datatype.
2. Implement newIdentifier() in some extension DLL that simply calls 
CoCreateGUID() ?

or does uuid-ossp do this for me?

Thx.

Jerry.
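As the follow-up in this thread notes, 8.3 has a native uuid type, and the contrib module uuid-ossp provides server-side generator functions, so the table can stay close to the original without an extension DLL. A sketch (assuming uuid-ossp is installed, and substituting a plain timestamp default for the SQL Server GETDATE()):

```sql
CREATE DOMAIN UniqueIdentifier AS uuid;

CREATE TABLE jazz(
    UUID UniqueIdentifier DEFAULT uuid_generate_v4(),  -- from contrib/uuid-ossp
    rootname VARCHAR(255),
    data_source VARCHAR(1024),
    date_created TIMESTAMP DEFAULT now()
);
```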

[GENERAL] Metadata/ODBC query

2008-02-16 Thread [EMAIL PROTECTED]
Hi

I'm experimenting with PostgreSQL 8.3.0 on Windows connecting via ODBC. One 
curiosity so far is this:

If I use pgAdmin and run "SELECT catalog_name FROM Information_Schema.Schemata" 
I get data back as expected.

If I connect via ODBC and issue the same query I don't see any data. SQLFetch() 
simply returns SQL_NO_DATA_FOUND. Both ODBC test apps work fine when accessing 
the same server/database running queries such as "SELECT table_name FROM 
INFORMATION_SCHEMA.TABLES WHERE table_type = 'BASE TABLE'"

I'm using the latest psqlODBC Windows installer from the web site. The driver 
is set to show system tables ...

Any clues?

Thanks.

Jerry.

[GENERAL] a newbie question on table design

2008-02-15 Thread [EMAIL PROTECTED]

Hi all,

I have a large sdf file with many records of molecules and associated
data items and I want to save them in a PostgreSQL database. There are
fewer than 40 data items for every molecule (the names of the data
items for each molecule are the same, but values differ).  The number
of molecules may exceed 20 million.

Now I have come up with two ways to construct the table:

1) a table with about 40 columns, every data item has its corresponding
column, and one molecule corresponds to one row in the table.

This is direct and simple. The drawbacks is if I want to add more data
types to the database, I have to modify the structure of the table.

2) a table with just 3 columns:

   CREATE TABLE mytable(
   id  serial,
   data_name   text,
   data_value  text
   );

Then a single molecule will corresonds to about 40 rows in the database.

If I need to add more data types to the table, I just need to add new
rows with new "data_name" column values. The drawback of this table is
it has too many rows (40 times as many as the former) and wastes a lot of space.

Which one is better, or there are some other smart ways ?

I have another question. Since the data I need to save is huge, is it
appropriate that I save the data value in compressed format ?

Regards,
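One detail on design 2 as written: with only a serial id there is nothing tying the ~40 rows of one molecule together, so the key/value table needs an explicit molecule key. A sketch of that correction:

```sql
CREATE TABLE molecule (
    molecule_id serial PRIMARY KEY,
    name        text
);

CREATE TABLE molecule_data (
    molecule_id integer NOT NULL REFERENCES molecule,
    data_name   text    NOT NULL,
    data_value  text,
    -- one value per (molecule, data item); also serves as the lookup index
    PRIMARY KEY (molecule_id, data_name)
);
```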



[GENERAL] password option in pg_dumpall

2008-02-07 Thread [EMAIL PROTECTED]
Is there any way of specifying the password on the command line for
pg_dumpall?

This is my script, and it asks for a password for every host...
thanks

'''
#!/bin/sh
for line in `cat /home/mark/work/infrastructure/farm_all`
do
pg_dumpall -h $line -U postgres | bzip2 > "$line.bz2"
done
'''
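For the archives: pg_dumpall has no password switch, but libpq honors the PGPASSWORD environment variable and the ~/.pgpass file; the latter avoids exposing the password in the process list. A sketch of the file, which must be readable only by you (chmod 600):

```
# ~/.pgpass -- one line per server, * acts as a wildcard:
# hostname:port:database:username:password
*:*:*:postgres:mysecretpassword
```

With this in place, the loop over hosts runs without prompting.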



[GENERAL] Alternative to tableoids?

2008-02-04 Thread [EMAIL PROTECTED]
Hi,

what's an alternative to tableoids?

As I've learned today, they are not consistent across pg_dump/restore.

I need to point to (lots of dynamically added) tables and used tableoids
before.

Are there other solutions to point to tables besides using tableoids 
or wasting disk space using the strings of 'schema' and 'tablename' 
everywhere as a reference?

Any hints welcome :-)

Felix
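One possibility, offered as a sketch rather than tested advice: a column of type regclass stores a table's OID internally but is read in and dumped out as the table name, so it should survive a dump/restore cycle where raw OIDs do not:

```sql
CREATE TABLE table_pointers (
    target regclass NOT NULL
);

-- The referenced table must exist when the literal is parsed
INSERT INTO table_pointers VALUES ('public.mytable');

-- Stored as the table's OID, but displayed (and dumped) as its name
SELECT target FROM table_pointers;
```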





[GENERAL] commit fails, rollback needed?

2008-02-04 Thread [EMAIL PROTECTED]
Hi,

when committing a transaction returns with an error: Do I have to
rollback the transaction in this case?

Felix





[GENERAL] backup tableoids (pg_class.oid) ?

2008-02-04 Thread [EMAIL PROTECTED]
Hi,

my db structure relies on the OIDs of tables (stored as OIDs of
pg_class).

However, 'pg_dumpall -o ...' seems to save the oids of all data but not
the tableoids, so, when I restore data I get different tableoids than I
had before!

Is there anything I could do to backup and restore even the tableoids
(besides stopping server and copying the files) ?

Thank you

Felix





Re: [GENERAL] PGSQL ERROR: FATAL: terminating connection due to administrator command

2008-02-04 Thread [EMAIL PROTECTED]

Alvaro Herrera wrote:

It's not unlikely that the server is crashing.  This should not happen.
Please examine the logs and let us know what you find.

If it's indeed crashing, we would like to see a more complete report
(possibly including a backtrace from a core dump).


I enabled the highest level of logging possible (debug5) and there is 
nothing logged when this happens.


Can this mean that it is some php <-> PostgreSQL problem?

--




Re: [GENERAL] PGSQL ERROR: FATAL: terminating connection due to administrator command

2008-02-04 Thread [EMAIL PROTECTED]

Jorge Godoy wrote:

Have you checked your code and seen what is really happening on these clicks?


I have been trying to catch the error from within php, however 
PostgreSQL (in the described case) dies in a way which does not return 
any error.


The error returned is that php can't connect to the db server.

I had some cases where looking for unexisting information or trying to 
operate on unexisting data crashed the specific process of the backend that 
was serving my application.


The behavior happens as well simply on refresh, so I can't imagine it 
being due to missing data...



Enable query logging and see what is happening.


I did not know about query logging, thank you for the tip, I'll try that 
now.


Iv



[GENERAL] PGSQL ERROR: FATAL: terminating connection due to administrator command

2008-02-04 Thread [EMAIL PROTECTED]
I have a PostgreSQL 8.2.6 on FreeBSD 6.3 (but the described behavior 
appears during several earlier minor versions as well) - which powers a 
php based web application. What I experience is the message (below) 
which appears during the first 5-7 clicks after the database has been 
cleanly imported (dropped and imported) -


PGSQL ERROR: FATAL: terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

The message does not appear on every click. It can appear on the 1st, 
then not on the 2nd, then on the 3rd and so on. After few more clicks 
everything goes well until the next import.


I could not find anything meaningful in the logs...

Has somebody else experienced anything similar?

Iv



[GENERAL] Building 8.1.11 on FC4

2008-01-16 Thread [EMAIL PROTECTED]
Good morning everyone!

As you may have guessed from previous posts, I just recently migrated my
main database server from 7.4.x to 8.1.11.  This part went pretty smoothly.
:)

One of the client machines in my architecture is a Fedora Core 4 box.  I
unfortunately cannot change this so I had to build 8.1.11 packages. When I
was building the packages it would fail unless I passed 'pgfts 0'.

Of course, now I am running into deadlock issues that never existed before.
There have been no code changes, and the behavior of the code suggests
to me that threads may be the issue.

Has anyone actually built 8.1.x on FC4?  My guess is that I just need a
little trick to force rpmbuild to finish successfully with thread safety
enabled, but it has been suggested that perhaps FC4 is not capable of being
thread-safe.  So if anyone could offer any advice that might help me sort
this out, I would truly appreciate it.

TIA

-bill


Re: [GENERAL] data volume resizing and large objects

2008-01-04 Thread [EMAIL PROTECTED]
Thank you both!!  That got me back up and running (for now).  Hopefully
this will give me enough time to finish the migration next week.

Thanks again!!

On Jan 4, 2008 4:21 PM, Alvaro Herrera <[EMAIL PROTECTED]> wrote:

> [EMAIL PROTECTED] wrote:
>
> > Is there anyway to force 8.0 to start at a specific value?  That would
> at
> > least get us back up and running.
>
> Stop the system (cleanly) and use pg_resetxlog -o.
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
>


Re: [GENERAL] data volume resizing and large objects

2008-01-04 Thread [EMAIL PROTECTED]
Tom

You hit the nail on the head!  Further delving into the issue
revealed exactly that, but I hadn't had a chance to post.

We recently used COPY to put some lobs back into the database, about 70k of
them.  All of these lobs were assigned sequential OIDs incremented by 1.  We
then wrapped around and ended up in that block of OIDs again.

The irony of all of this is that we have been trying to migrate to 8.1 for
months now, but have not been able to do so due to infrastructure issues
beyond our control.

Is there any way to force 8.0 to start at a specific value?  That would at
least get us back up and running.

Thanks

-bill

On Jan 4, 2008 3:42 PM, Tom Lane <[EMAIL PROTECTED]> wrote:

> "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> > lo_creat fails with:
> >  ERROR: large object  already exists
>
> Well, does it?
>
> What PG version is this?  I'm suspicious that it's pre-8.1 and the
> problem is that your OID counter has wrapped around to a region that
> is already pretty densely populated with large objects.  8.1 and up
> are able to pick unused OIDs but earlier versions just fail on
> collisions.
>
>regards, tom lane
>


[GENERAL] data volume resizing and large objects

2008-01-04 Thread [EMAIL PROTECTED]
Hello All

I recently ran out of disk space on the volume hosting my default
tablespace.  As it turned out, the sysadmin didn't allocate all of the disk
to the volume when he installed it, so he resized the volume and restarted
the database.  The server came up fine, so I assumed all was well (the
server would not start before the resize); however, now I get failures
whenever I try to insert lobs via my application.

My application is written in Perl and uses DBD::Pg and DBI to access the
database.  The failure occurs when I attempt to call lo_creat:

 $dbh->func($mode, 'lo_creat');

lo_creat fails with:

 ERROR: large object  already exists

This leads me to believe that my database lost its list of existing lob OIDs
and is experiencing collisions, but I am not sure where to go from here.  I
could really use some additional insight into what is happening here and
perhaps some direction on how to remedy the problem.

TIA

-bill
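A quick way to check the collision hypothesis is to look at how the large-object OID space is populated; a hedged diagnostic sketch (pg_largeobject stores one row per page of each large object, keyed by loid, hence the DISTINCT):

```sql
-- A dense block of sequential loids near the current OID counter
-- would support the wraparound/collision theory.
SELECT min(loid)            AS lowest,
       max(loid)            AS highest,
       count(DISTINCT loid) AS lob_count
FROM pg_largeobject;
```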


[TLM] [GENERAL] How to insert on duplicate key?

2007-12-27 Thread [EMAIL PROTECTED]

Hi all,

I have a table like this,
CREATE TABLE mytable(
    a varchar(40),
    b text,
    c text,
    PRIMARY KEY (a, b)
);

What I want to do is: insert a record into the table, and when the record
already exists (according to the primary key), update it.


I know that MySQL has an ON DUPLICATE KEY UPDATE clause, so I'm wondering:
is there a quick and clean way to do this in PostgreSQL?


I have googled, and currently the only way I can find is to query first and
then update or insert.


Thanks a lot.

Regards,



Re: [GENERAL] How to insert on duplicate key?

2007-12-24 Thread [EMAIL PROTECTED]

Michael Glaesemann wrote:


On Dec 24, 2007, at 22:03 , [EMAIL PROTECTED] wrote:

I have googled and currently the only way I can find is do query first 
and then update or insert.


Or alternatively, UPDATE and see if you've affected any rows. If not, 
insert.


Michael Glaesemann
grzm seespotcode net





Thanks for your quick reply!

Is the update-and-insert method quick?

I have tried the query-then-update/insert way, and it was very slow once more
than 1 million records had been inserted. (I have more than 20 million
records to insert.)


Thanks again!

Xiao Jianfeng
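For the record, the update-then-insert approach suggested above can be wrapped in a PL/pgSQL function; this is a sketch modeled on the merge example in the PostgreSQL documentation, using mytable from the question (function and parameter names are illustrative):

```sql
CREATE FUNCTION merge_mytable(k_a varchar, k_b text, v_c text) RETURNS void AS
$$
BEGIN
    LOOP
        -- Try the UPDATE first; FOUND is true if a row was affected.
        UPDATE mytable SET c = v_c WHERE a = k_a AND b = k_b;
        IF found THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO mytable (a, b, c) VALUES (k_a, k_b, v_c);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Another session inserted the same key concurrently;
            -- loop and retry the UPDATE.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

Each call costs one or two statements per row, so for a 20-million-row load it is usually faster to COPY into a staging table and do one set-based UPDATE plus one INSERT ... SELECT against it.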



[GENERAL] How to insert on duplicate key?

2007-12-24 Thread [EMAIL PROTECTED]

Hi all,

I have a table like this,
CREATE TABLE mytable(
    a varchar(40),
    b text,
    c text,
    PRIMARY KEY (a, b)
);

What I want to do is: insert a record into the table, and when the record
already exists (according to the primary key), update it.


I know that MySQL has an ON DUPLICATE KEY UPDATE clause, so I'm wondering:
is there a quick and clean way to do this in PostgreSQL?


I have googled, and currently the only way I can find is to query first and
then update or insert.


Thanks a lot.

Regards,



[GENERAL] Restoring 8.0 db to 8.1

2007-12-21 Thread [EMAIL PROTECTED]
Hi

I am trying to upgrade my database from 8.0 to 8.1 and am looking for a
little info/advice.

This is a production database that we are migrating, and it is in CONSTANT
use, so the maintenance window must be small and hopefully mostly off-hours.

We use a PITR/LVM-snapshot solution for our backups and were hoping to
simply restore the filesystem and start up under 8.1.  Obviously this didn't
work, and I know the docs say a conversion is necessary; however, I haven't
found anything with enough detail to point me toward a better solution than
pg_restore.

The problem with pg_restore is that our database takes 3+ weeks to restore
from a dump file.  This is not an acceptable window for us. (Approximately
3 days of this is data + lobs, and the rest is indexes and constraints.  If
we are doing something wrong here, I am all ears as well.)

Could anyone point me to the information I am missing or offer some middle
ground solutions?

TIA

Bill


[GENERAL] accessing multiple databases using dblink

2007-12-13 Thread [EMAIL PROTECTED]

Good morning everybody,
I have to solve the following situation: I have a collection of many different databases, all identical, and the names of those databases are stored in a table in another "central 
management" database.
Ideally, I'd like a single query that recovers the name of each database from that table and uses it to run a query against a table in each database, joining the results.

At the moment I'm doing this in external code, but performance is awful...
At first I thought something like
select * from tab_databases, dblink('dbname=' || tab_databases.name, 'select 
count(id) from tab_data')
could work, but it seems it's not allowed to reference rows from another table 
at that level of the FROM clause.

Putting dblink outside that level allows me to use the dynamic dbname, but in 
that case I would need a row source, while dblink gives back a table 
source...

Is it possible to solve this inside the database?
Thanks a lot...
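One way to keep this inside the database is a set-returning PL/pgSQL function that loops over tab_databases and calls dblink once per row; a sketch, assuming contrib/dblink is installed and that 'dbname=' alone is enough to connect (the function name and OUT columns are illustrative):

```sql
CREATE OR REPLACE FUNCTION count_all_data(OUT dbname text, OUT cnt bigint)
RETURNS SETOF record AS
$$
DECLARE
    db record;
BEGIN
    -- One dblink call per database named in the central table.
    FOR db IN SELECT name FROM tab_databases LOOP
        dbname := db.name;
        SELECT t.cnt INTO cnt
        FROM dblink('dbname=' || db.name,
                    'select count(id) from tab_data') AS t(cnt bigint);
        RETURN NEXT;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Usage: one row per database, joined into a single result set.
SELECT * FROM count_all_data();
```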



[GENERAL] VTD-XML Tutorial by Code Examples Posted Options

2007-12-12 Thread [EMAIL PROTECTED]
The C version: 
http://downloads.sourceforge.net/vtd-xml/c_tutorial_by_code_examples

The C# version: 
http://downloads.sourceforge.net/vtd-xml/CSharp_tutorial_by_code_exam...

The Java version: 
http://downloads.sourceforge.net/vtd-xml/Java_tutorial_by_code_exampl...

Also some latest articles:

Schemaless Java-XML databinding with VTD-XML
http://www.onjava.com/pub/a/onjava/2007/09/07/schema-less-java-xml-da...

Index XML documents with VTD-XML
http://webservices.sys-con.com/read/453082.htm

Improve XPath Efficiency with VTD-XML
http://www.devx.com/xml/Article/34045







[GENERAL] postgres freezes up on a query from slony

2007-12-04 Thread [EMAIL PROTECTED]
Hi All,

Not sure if this is a slony issue or a postgres issue... I'm posting to
both lists.

I'm running slony on a one master/two subscriber system.  One of the
subscribers seems to get stuck on a group of queries, and I can't seem
to figure out why.  If I do a select on pg_stat_activity, I get the
following:

 datid         | 16384
 datname       | tii
 procpid       | 12204
 usesysid      | 16392
 usename       | slony
 waiting       | f
 query_start   | 2007-12-04 11:20:23.839088-08
 backend_start | 2007-12-04 11:20:23.005228-08
 client_addr   |
 client_port   | -1
 current_query | update only "public"."m_report_stats" set date_start='2007-12-03 13:27:05.661155' where objectid='56917411';
               : update only "public"."m_object_paper" set overwriteflag='t' where id='56069688';
               : insert into "public"."m_search_list" (nodeid,id) values ('0','45844662');
               : insert into "public"."m_search_list" (nodeid,id) values ('1','45844662');
               : insert into "public"."m_search_list" (nodeid,id) values ('4','45844662');
               : update only "public"."m_dg_read" set delete_flag='t' where id='1474821';
               : insert into "public"."m_search_list" (nodeid,id) values ('5','45844662');
               : insert into "public"."m_search_list" (nodeid,id) values ('14','45844662');
               : update only "public"."m_user" set duration='02:52:24.744252' where id='10369924';
               : insert into "public"."m_search_list" (nodeid,id) values ('32','45844662');

What I've done so far is run strace on the process, and I get the
following:
tii-db1 postgres # strace -p 12204
Process 12204 attached - interrupt to quit
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
Process 12204 detached
tii-db1 postgres #


I've also set log_min_error_statement=debug5 on postgresql.conf, did a
restart, and the logs show the following:
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-1] 2007-12-04
11:20:23.846 PST [user=slony,db=tii [local] PID:12204 XID:
129605365]DEBUG:  ProcessQuery
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-2] 2007-12-04
11:20:23.846 PST [user=slony,db=tii [local] PID:12204 XID:
129605365]STATEMENT:  update only "public"."m_report_stats"
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-3]  set
date_start='2007-12-03 13:27:05.661155' where objectid='56917411';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-4]   update only
"public"."m_object_paper" set overwriteflag='t' where id='56069688';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-5]   insert into
"public"."m_search_list" (nodeid,id) values ('0','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-6]   insert into
"public"."m_search_list" (nodeid,id) values ('1','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-7]   insert into
"public"."m_search_list" (nodeid,id) values ('4','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-8]   update only
"public"."m_dg_read" set delete_flag='t' where id='1474821';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-9]   insert into
"public"."m_search_list" (nodeid,id) values ('5','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-10]  insert into
"public"."m_search_list" (nodeid,id) values ('14','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-11]  update only
"public"."m_user" set duration='02:52:24.744252' where id='10369924';
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-12]  insert into
"public"."m_search_list" (nodeid,id) values ('32','45844662');
Dec  4 11:20:23 tii-db1 postgres[12204]: [1134-13]

For this process, the log is just stuck here and doesn't do anything.

[GENERAL] PostgreSQL DB split

2007-11-28 Thread [EMAIL PROTECTED]
Hi,

I'm currently using a PostgreSQL DB stored on disk, but now I need to split 
this DB in the following way:

a. the first DB (or first DB partition) will be stored in flash 
memory and will contain the system configuration values;
b. the second DB (or second DB partition) will be stored in RAM 
and will contain the state values.
   These values will be modified many times, and they will be deleted 
when the system goes down.

I have read that I can create a PostgreSQL DB on a RAM-disk partition, but I'm 
wondering whether it is possible to create
one DB with two schemas in two different memory locations (RAM and flash)?

Otherwise, I can create two DBs in two different locations (RAM and flash), but 
is it then possible to execute an SQL query
where the join relationship concerns two tables located in the different DBs? 
Can I use dblink in order to create
only one DB connection?


Can someone help me? Thanks in advance.

--Tomas
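For the single-database variant, tablespaces may fit: each table can be placed on a different filesystem while joins work normally, since everything lives in one database. A hedged sketch (mount points, schema and table definitions are illustrative; note that PostgreSQL does not expect a tablespace directory to vanish on reboot, so a RAM-disk tablespace needs to be recreated or restored before the server starts):

```sql
-- Directories must exist and be owned by the postgres user.
CREATE TABLESPACE flash_ts LOCATION '/mnt/flash';
CREATE TABLESPACE ram_ts   LOCATION '/mnt/ramdisk';

CREATE SCHEMA config;
CREATE SCHEMA state;

-- Place each table on the medium it belongs to.
CREATE TABLE config.settings (key text PRIMARY KEY, value text)
    TABLESPACE flash_ts;
CREATE TABLE state.current_values (key text PRIMARY KEY, value text)
    TABLESPACE ram_ts;

-- Joins across the two schemas need only one connection:
SELECT s.key, s.value, c.value AS configured_value
FROM state.current_values s
JOIN config.settings c USING (key);
```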




[GENERAL] unable to createuser in postgres 8.2.5 opensolaris

2007-11-18 Thread [EMAIL PROTECTED]
I initialized a database directory and the server started fine, but when I
tried to create a user I got the error below.
Can you tell me what is wrong?
thanks


createuser --superuser --createdb --createrole -P postgres
Enter password for new role:
Enter it again:
createuser: could not connect to database postgres: server closed the
connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

the logfile is here:

LOG:  could not bind IPv6 socket: Cannot assign requested address
HINT:  Is another postmaster already running on port 5432? If not, wait a
few seconds and retry.
LOG:  database system was shut down at 2007-11-18 19:50:38 PST
LOG:  checkpoint record is at 0/42CC84
LOG:  redo record is at 0/42CC84; undo record is at 0/0; shutdown TRUE
LOG:  next transaction ID: 0/593; next OID: 10820
LOG:  next MultiXactId: 1; next MultiXactOffset: 0
LOG:  database system is ready
LOG:  setsockopt(TCP_NODELAY) failed: Option not supported by protocol


[GENERAL] unexplainable error

2007-11-16 Thread [EMAIL PROTECTED]

I have Apache/php/PostgreSQL on FreeBSD, all latest stable versions.

Every time after a mass import into the database (it's a development 
server, so the import updates the database with a full dump from the 
production server), the first several clicks on the development web 
site return -


PGSQL ERROR: FATAL:  terminating connection due to administrator command
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.

After hitting the browser's refresh button a few times, all is back 
to normal until the next import.


If FreeBSD is restarted before that, the problem does not occur.

Could anybody advise on the possible reason?

Kind regards,
Iv



[GENERAL] 8.2.5 -> 8.3 beta tsearch2 help

2007-10-15 Thread [EMAIL PROTECTED]
I tried to restore a dump from 8.2.5 in which I had used tsearch2 and which
contains a lot of tsearch2 indexes. When I tried to restore it into 8.3beta,
it complained about 'unknown command \r\n'. I know this comes from tsearch2.
Doesn't 8.3 have tsearch2 enabled by default?
With 8.2.x I used to create a database and run tsearch2.sql [from
share/contrib/tsearch2]; how do I do that in 8.3?
thanks


[GENERAL] importing large files

2007-10-01 Thread [EMAIL PROTECTED]
Hello,

I need to import between 100 million and one billion records into a
table. Each record is composed of two char(16) fields. The input format
is a huge CSV file. I am running on a Linux box with 4 GB of RAM.
First I create the table. Second, I 'COPY FROM' the CSV file. Third, I
create the index on the first field.
The overall process takes several hours. The CPU seems to be the
limitation, not the memory or the I/O.
Are there any tips to improve the speed?

Thanks very much,

Olivier
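The steps described above match the usual bulk-load recipe; a hedged sketch of the same flow with one knob worth trying (table name, file path, and memory value are illustrative):

```sql
CREATE TABLE pairs (k char(16), v char(16));

-- Give CREATE INDEX plenty of sort memory (session-local setting);
-- index builds are dominated by a big sort, which is where the CPU goes.
SET maintenance_work_mem = '512MB';

-- Load first, index afterwards: building the index once over the full
-- table is much cheaper than maintaining it row-by-row during the load.
COPY pairs FROM '/path/to/data.csv' WITH CSV;

CREATE INDEX pairs_k_idx ON pairs (k);
```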




[GENERAL] how to ignore invalid byte sequence for encoding without using sql_ascii?

2007-10-01 Thread [EMAIL PROTECTED]
I am importing the dump file of Wikipedia into my PostgreSQL using
maintenance/importDump.php. It fails with 'ERROR: invalid byte sequence
for encoding UTF-8'. Is there any way to make pgsql simply ignore the
invalid characters (i.e., drop the invalid ones), so that the
script keeps going without dying on this error?

I know that I could use sql_ascii or even modify importDump.php,
but those are not as easy to do as I thought.

thanks for help




Re: [GENERAL] Event-driven programming?

2007-09-14 Thread [EMAIL PROTECTED]
On Sep 12, 3:05 pm, [EMAIL PROTECTED] ("D. Dante Lorenso") wrote:
> Pavel Stehule wrote:
> > 2007/9/12, Jay Dickon Glanville <[EMAIL PROTECTED]>:
> >> - I write a function (it doesn't matter what language it's in:
> >> PL/pgSQL, PL/Java, etc)
> >> - I register that function as a "post-commit" callback function
> >> - when a client commits a transaction, the function gets called, and
> >> the database passes the function some general information as to the
> >> content of the transaction
>
> >> Note how similar this process is to writing triggers.  The only
> >> problem I have with triggers is that events get generated per-table.
> >> I'd like to get notifications based on transactions, not table
> >> changes.
>
> >> What I'd like to be able to do with this event is to notify any
> >> applications of this change, so they can update their cached view of
> >> the database.
>
> Although I'm happy to use triggers as-is (not per transaction, etc) I've
> also wondered about firing events from the database.  I'm curious to
> know if anyone has attempted to write a trigger that will open a socket
> and send an event packet to an application server on the network.
>
> I've considered using a message queue like JMS to manage events on my
> network and have PostgreSQL fire off UDP messages to a socket server
> that would insert jobs into the message queue as triggers get fired in
> the database.  Doing this would be an alternative to storing the queue
> as a database table and having to use polling to constantly check the
> database for events in the queue.
>
> I am interested what anybody might contribute to this thread.  Let us
> know what you tried whether it worked or not, it might be useful.
>
> -- Dante

Depending on your reliability requirements UDP may not be a great
choice.

But, since you asked about what's been tried, my (successful so far)
production setup is along the lines of:

1. process A accepts multiple data flows, inserts "work to be done"
items into a table in batches and calls NOTIFY.
2. process B LISTENs for notifications (with a blocking read on the
socket connection to Postgres) and takes them as a signal to look for
"work items to be done". It also checks every N minutes of idle time
for "work items to be done" in case the NOTIFY/LISTEN mechanism is
broken (haven't seen that situation yet).

As for recovery, process B looks for work items on startup, then drops
into the LISTEN / blocking_read mode.
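The SQL side of the setup above is just two commands; the blocking read on the socket is driver-specific (e.g. PQnotifies() in libpq). Channel and table names here are illustrative:

```sql
-- Process B, once per connection:
LISTEN work_available;

-- Process A, after inserting a batch of work items:
INSERT INTO work_queue (payload) VALUES ('...');
NOTIFY work_available;
```

Because notifications carry no payload (pre-9.0) and can be coalesced, the notify is only a wake-up signal; the actual work items still live in the table, which is what makes the periodic fallback scan above a safe backstop.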




Re: [GENERAL] Statistics collection question

2007-09-10 Thread [EMAIL PROTECTED]
On Sep 4, 10:54 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
> "Phoenix Kiula" <[EMAIL PROTECTED]> writes:
> > Would appreciate any help. Why do indexed queries take so much time?
> > It's a simple DB with "10 relations" including tables and indexes.
> > Simple inserts and updates, about 5000 a day, but non-trivial
> > concurrent selects (about 45 million a day). Works fine when I
> > restart, but a day later all goes cattywumpus.
>
> BTW, just to be perfectly clear: all you do is stop and restart the
> postmaster (using what commands exactly?), and everything is fast again?
> That's sufficiently unheard-of that I want to be entirely sure we
> understood you correctly.




Yes, I noticed that restarting the postgres server had an effect on
speed. But this no longer seems to work, so I suppose
something else needs fixing.

When I do a "select * from pg_locks", some of the locks show up as
"ExclusiveLock". Does this mean that the whole table is locked?
And how can I find, from the transaction id, which precise SQL
statement is holding the lock? I do not run anything that should!
Just simple SELECT, INSERT and UPDATE statements in our fairly
straightforward application, and I would hope that autovacuum and
auto-analyze do not take exclusive locks?

Ref: output of the select from pg_locks --



=# select * from pg_locks;
-[ RECORD 1 ]-+
locktype  | transactionid
database  |
relation  |
page  |
tuple |
transactionid | 4700
classid   |
objid |
objsubid  |
transaction   | 4700
pid   | 21989
mode  | ExclusiveLock
granted   | t

-[ RECORD 2 ]-+
locktype  | relation
database  | 41249
relation  | 10328
page  |
tuple |
transactionid |
classid   |
objid |
objsubid  |
transaction   | 4700
pid   | 21989
mode  | AccessShareLock
granted   | t
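As a side note, a transactionid ExclusiveLock like RECORD 1 above is normal: every transaction holds an exclusive lock on its own transaction ID for its whole lifetime, and it does not mean any table is locked. To map a lock back to the backend and statement holding it, the lock's pid can be joined against pg_stat_activity (column names as in the 8.x catalogs):

```sql
SELECT l.locktype, l.mode, l.granted, a.procpid, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid;
```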






Re: [GENERAL] posgres tunning

2007-08-19 Thread [EMAIL PROTECTED]
On Jul 23, 5:18 am, [EMAIL PROTECTED] ("Gavin M. Roy") wrote:
> You might want to look at pgBouncer to pool your drupal pgsql needs.  I've
> found with 2000 needed connections, I can pool out to only 30 backends and
> still push 8k transactions per second.
>



How do you use pgBouncer -- through an application developed in PHP or
Perl? It would be lovely if you could share some info about this
seemingly useful app, which comes with so little documentation on how
to actually get started. The Skype site mentions the install bits, but
not how to actually query through pgBouncer instead of the database directly.
Actual code is very hard to come by. Could you please share some?
Many thanks!
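For what it's worth, applications keep issuing ordinary SQL; they just point their connection at pgBouncer's listen port instead of PostgreSQL's. Administration and statistics go through pgBouncer's virtual "pgbouncer" database (port and user below are illustrative defaults):

```sql
-- Connect with any client, e.g.:  psql -p 6432 -U admin_user pgbouncer
-- then use pgBouncer's console commands:
SHOW POOLS;    -- per-database/user pool state
SHOW STATS;    -- request and byte counters
SHOW CLIENTS;  -- currently connected clients
```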




[GENERAL] How can I avoid PGPool as a single point of failure?

2007-08-17 Thread [EMAIL PROTECTED]
My application currently has a single PG 8.2 database server, and I'm
bringing more boxes online to mitigate the risk of a single point of
failure.

I'm interested in using PGPool to do the load balancing, and it was
suggested that I put one server running PGPool in front of two
database servers. This only seems to move my risk to the PGPool
machine.

Is it possible to point two servers running PGPool at the same two
database servers? If so, I seem to recall reading about being able to
use some kind of virtual IP address to split the traffic to both.

Any ideas are greatly appreciated!




[GENERAL] Using PITR Backup and Recovery

2007-08-14 Thread [EMAIL PROTECTED]
We recently moved to the PITR backup and recovery solution as described in the
documentation.  Our basic setup executes the backup-start command, then
takes a snapshot of the filesystem and backs up the WAL files.  However, we have
database files (not WAL files) that change while the system is in backup
mode.  This happens during every backup.  Is this normal?

Any insight appreciated.

-bill


Re: [GENERAL] Need help with bash script and postgresql

2007-07-24 Thread [EMAIL PROTECTED]
On Jul 23, 6:11 am, [EMAIL PROTECTED] ("Raymond O'Donnell") wrote:
> On 23/07/2007 11:04, Andy Dale wrote:
>
> > The posgres command in the loop should look like so (not sure about the
> > password):
>
> As I understand it, you supply the password via a pgpass file - you
> can't include it on the command line.
>
> Ray.
>
> ---
> Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
> [EMAIL PROTECTED]
> ---

FWIW you can supply the password via a PGPASSWORD environment
variable, e.g.:

prompt$ PGPASSWORD=secret psql -h pghost -U pguser -d thedb -c "SELECT
'x'"




[GENERAL] Ordering in SELECT statement

2007-06-26 Thread [EMAIL PROTECTED]


Hello,

I use the following command: "SELECT * FROM employees ORDER BY name"
(a very simple SQL statement). The issue is that I get a
list ordered without taking the spaces into account. For example, I
get the following listing:
ABAB
AB  CD
ABD  E
AB  EF

and what I need is the following ("old fashioned", that is, the
SPACE is treated as a character whose ASCII value sorts before any
LATIN letter!):
AB  CD
AB  EF
ABAB
ABD  E

Does anyone know how to get an old-fashioned SORT???
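This is locale-driven: the database's lc_collate setting controls ORDER BY, and most non-C locales ignore spaces. Two hedged ways to get byte-order ("C") sorting per query:

```sql
-- PostgreSQL 9.1+: a per-expression collation override.
SELECT * FROM employees ORDER BY name COLLATE "C";

-- Older versions: the ~<~ operator compares text byte-by-byte,
-- ignoring the locale (the ordering a C-locale cluster would give).
SELECT * FROM employees ORDER BY name USING ~<~;
```

Alternatively, initializing the cluster or database with locale C makes this the default ordering everywhere.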




Re: [GENERAL] parametered views

2007-06-12 Thread [EMAIL PROTECTED]
I have 4 tables:

date_table (date_id,.)
A_table(A_table_id, something1, something2.)
A1_table(A1_table_id references A_Table(A_Table_id), A11, A12)
A2_table(A2_table_id references A_Table(A_table_id), A21, A22, ...)

So I want to create a view with date_id, A_table_id, something1, and:
if something2 = x then my view contains
 (A1_table.A11 and A1_table.A12),
 else it contains (A2_table.A21 and A2_table.A22).

So my view depends on the value of something2.

I hope I'm being specific.

Thx
Lhaj





Re: [GENERAL] parametered views

2007-06-12 Thread [EMAIL PROTECTED]
On Jun 9, 8:12 am, Rodrigo De León <[EMAIL PROTECTED]> wrote:
> On Jun 8, 7:59 pm, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> wrote:
>
>
>
> > i have 4 tables :
>
> > date_table (date_id,.)
> > A_table(A_table_id, something1,something2.)
> > A1_table(A1_table_id references A_Table(A_Table_id),A11,A12)
> > A2_table(A2_table_id references A_Table(A_table_id),A21,A22,...)
>
> > so i want to create a view with date_id,A_table_id,something1,
> > if something2=x then my view containts
> >  (A1_table.A11 and A1_table.A12)
> >  else it's containts (A2_table.A21 and A2_table.A22))
>
> > so my view depends of something2 value.
>
> > I hope i'm specific
>
> > Thx
> > Lhaj
>
> create view foobar as
> select date_id,A_table_id,something1
> ,case when something2=x then A1_table.A11 else A2_table.A21 end as foo
> ,case when something2=x then A1_table.A12 else A2_table.A22 end as bar
> from (... etc., etc. ...)

OK, thank you (the idea is there; I will try to do that).




[GENERAL] Create a table B with data coming from table A

2007-06-11 Thread [EMAIL PROTECTED]
My original table is like this:

ID    A1  A2  A3  cnt
1234  1   0   0   4
1234  1   0   1   8
1234  1   1   1   5
1235  1   0   0   6
1235  1   0   1   7
1235  1   1   1   12

I have to create a new table B:

ID    B1  B2  B3  S
1234  4   8   5   17
1235  6   7   12  25

The combination (A1=1, A2=0, A3=0) gives B1
The combination (A1=1, A2=0, A3=1) gives B2
The combination (A1=1, A2=1, A3=1) gives B3

S = B1+B2+B3

I think it's a classical problem, but I can't seem to find the key to it.

Thx,
Lhaj
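One classical answer is conditional aggregation (a pivot); a sketch against the sample data, assuming B2 corresponds to the combination (A1=1, A2=0, A3=1), which is what the example numbers imply (the source table name a_original is illustrative):

```sql
CREATE TABLE b AS
SELECT ID, B1, B2, B3, B1 + B2 + B3 AS S
FROM (
    SELECT ID,
           sum(CASE WHEN A1=1 AND A2=0 AND A3=0 THEN cnt ELSE 0 END) AS B1,
           sum(CASE WHEN A1=1 AND A2=0 AND A3=1 THEN cnt ELSE 0 END) AS B2,
           sum(CASE WHEN A1=1 AND A2=1 AND A3=1 THEN cnt ELSE 0 END) AS B3
    FROM a_original
    GROUP BY ID
) t;
```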




Re: [GENERAL] autovacuum vacuums entire database...is this right?

2007-06-03 Thread [EMAIL PROTECTED]
It's been about a month and a half, and I'm getting this VACUUM
again.  This time, I'm wondering if there's any way to tell if
autovacuum is doing a database-wide vacuum for the sake of xid
wraparound or for some other reason.  Is there some sort of entry that
gets put into the log, and if so, what log level would it be at?

If this doesn't get logged, could I make this a feature request?

Thanks!
--Richard
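Until such logging exists, one way to anticipate a wraparound-driven vacuum is to watch each database's transaction age; in recent versions autovacuum forces a database-wide pass when age(datfrozenxid) approaches autovacuum_freeze_max_age (200 million by default in 8.2):

```sql
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```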



On Apr 15, 6:35 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm just wondering if autovacuum is ever supposed to vacuum the entire
> database during one of its runs.  As far as I remember, it's supposed
> to vacuum one table at a time, based on the
> autovacuum_vacuum_threshold, autovacuum_analyze_threshold, etc.
> settings.
>
> For some reason, autovacuum decided to run a vacuum on my entire
> database (29GB large), and it's taking forever:
>
> select now(), query_start, current_query, backend_start, procpid,
> usename from pg_stat_activity where current_query <> '';
>   now  |  query_start  |
> current_query | backend_start | procpid | usename
> ---+---
> +---+---+-+--
>  2007-04-15 06:34:27.925042-07 | 2007-04-14 22:23:31.283894-07 |
> VACUUM| 2007-04-14 22:23:31.274121-07 |9406 | postgres
>
> Is this expected behavior?
>
> --Richard





Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-26 Thread [EMAIL PROTECTED]

Richard Huxton wrote:

Try some code like this:


OK I'll try it now and write back.

Thanks!
Iv



Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-26 Thread [EMAIL PROTECTED]

Richard Huxton wrote:

Did you try pg_last_error()?


pg_last_error() does not seem to work here. It requires a connection as 
parameter, so if pg_connect() fails, it has nothing to operate on.


Or am I missing something?


Are you logging connection attempts/failures? Details in the manuals.


I checked the manual, but found only a parameter to log successful 
connections.


Is there one to log failed connection attempts (including the reason)?

Thank you,
Iv



Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Richard Huxton wrote:
> Did you try pg_last_error()?

No. Will try now.

> Are you logging connection attempts/failures? Details in the manuals.

Understood.

Thank you very much!
Iv



Re: [GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Richard Huxton wrote:

[EMAIL PROTECTED] wrote:
What other variable reasons (present sometimes, absent other times) can 
there be for pg_connect to fail?


What result-code/error do you get? What do your logs show?


I have -

$connection = pg_connect("$host $db $user $pass");

When I get the error, it is because $connection is 'false'.

I thought of using pg_result_error, but it seems applicable only to 
queries (i.e. with pg_query or pg_execute). How can I get an error code?


PostgreSQL is instructed to log into the syslog, which is 
/var/log/messages. There are only two type of things there from today -


[1-1] WARNING: nonstandard use of \\ in a string literal at character XXX

[1-2] HINT: Use the escape string syntax for backslashes, e.g., E'\\'.

But it does not seem like any of these are related to pg_connect, or am 
I wrong (I guess they are related to bad code somewhere).


Thank you,
Iv




[GENERAL] pg_connect sometimes works sometimes not

2007-04-25 Thread [EMAIL PROTECTED]

Hello,

we migrated PHP code from FreeBSD 5.x, PostgreSQL 8.x, and PHP 4.x to the 
latest versions of each, keeping the configuration options. Now 
pg_connect has started to fail at irregular intervals for no obvious reason. 
We previously had a problem with the number of connections, but we monitor 
them and they stay below 10 (out of 100 available).


What other variable factors (present sometimes, absent at other times) 
could cause pg_connect to fail?


Thank you,
Iv



[GENERAL] FIN_WAIT_2

2007-04-24 Thread [EMAIL PROTECTED]
hi all, 
I installed PostgreSQL 8.2.3 on a FreeBSD server. My client application is
written in C++ Builder + ZeosLib, and I had no problems until now; with the
8.2.3 version, many connections remain in FIN_WAIT_2. Any suggestions?

regards,
Enrico




Re: [GENERAL] binding 64-bit integer

2007-04-20 Thread [EMAIL PROTECTED]
After changing the SQL clause to "SELECT * from mytable WHERE mykey=$1::int8", 
the bind succeeded!

Thanks for everyone's reply.

- Original Message 
From: Tom Lane <[EMAIL PROTECTED]>
To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
Cc: pgsql-general@postgresql.org
Sent: Thursday, April 19, 2007 8:41:22 PM
Subject: Re: [GENERAL] binding 64-bit integer


"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> I wanted to bind a 64-bit integer, but it failed:
> "ERROR:  incorrect binary data format in bind
> parameter 1".

It sorta looks like you are trying to send that value to a parameter
that the server doesn't think is int8.

regards, tom lane


__
Do You Yahoo!?
Tired of spam?  Yahoo! Mail has the best spam protection around 
http://mail.yahoo.com 



[GENERAL] binding 64-bit integer

2007-04-19 Thread [EMAIL PROTECTED]
Hi all,

I'm using Solaris 10 with 64-bit libpq library.

I wanted to bind a 64-bit integer, but it failed:
"ERROR:  incorrect binary data format in bind
parameter 1".

The code would succeed if the type of "val" is
uint32_t.

Does anyone know how to fix this?  Thanks a lot!

-
uint64_t  val;
const char  *paramValues[1];
int  paramLengths[1];
int  paramFormats[1];
const char *sql_clause = "SELECT * FROM mytable WHERE mykey = $1";

paramValues[0] = (char *) &val;
paramLengths[0] = sizeof(val);
paramFormats[0] = 1;

res = PQexecParams(conn, sql_clause, 1, NULL,
  paramValues, paramLengths, paramFormats, 1);
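As the error message and Tom's reply suggest, two things have to line up for a binary int8 bind: the value must be sent as 8 bytes in network (big-endian) byte order, and the server must know the parameter's type is int8 (via a cast like $1::int8, or a non-NULL paramTypes array). A sketch of the expected wire encoding in Python, for illustration only (the real fix stays in the C code):

```python
import struct

def encode_int8_param(val: int) -> bytes:
    """Encode a signed 64-bit integer the way PostgreSQL expects a
    binary-format int8 parameter: 8 bytes, network byte order."""
    return struct.pack("!q", val)

# Passing &val straight from a little-endian machine (as the C snippet
# does) sends these bytes reversed, so the server misreads the value.
print(encode_int8_param(1).hex())  # 0000000000000001
```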






[GENERAL] autovacuum vacuums entire database...is this right?

2007-04-16 Thread [EMAIL PROTECTED]
Hi,

I'm just wondering if autovacuum is ever supposed to vacuum the entire
database during one of its runs.  As far as I remember, it's supposed
to vacuum one table at a time, based on the
autovacuum_vacuum_threshold, autovacuum_analyze_threshold, etc.
settings.

For some reason, autovacuum decided to run a vacuum on my entire
database (29GB large), and it's taking forever:

select now(), query_start, current_query, backend_start, procpid,
usename from pg_stat_activity where current_query <> '';
              now              |          query_start          | current_query |         backend_start         | procpid | usename
-------------------------------+-------------------------------+---------------+-------------------------------+---------+----------
 2007-04-15 06:34:27.925042-07 | 2007-04-14 22:23:31.283894-07 | VACUUM        | 2007-04-14 22:23:31.274121-07 |    9406 | postgres

Is this expected behavior?

--Richard




[GENERAL] median query causes disk to fill up

2007-04-14 Thread [EMAIL PROTECTED]
Hi,

I'm trying to run a query to find the median value, organized by
date.  However, when I run the query, it runs for about 4 hours, and
then quits with the following message:

> ERROR:  could not write block 10447102 of temporary file: No space left on 
> device
> HINT:  Perhaps out of disk space?

My table has 512327 rows and is the following:

> Table "public.m_uop_times"
>  Column |Type | Modifiers
> +-+---
>  objectid   | integer |
>  date_part  | double precision|
>  date_start | timestamp without time zone |
>  date_processed | timestamp without time zone |
>  gen_time   | integer |
> Indexes:
> "m_uop_date_idx" btree (date_processed)
> "m_uop_epoch_idx" btree (date_part)
> "m_uop_gen_idx" btree (gen_time)
> "m_uop_objectid_idx" btree (objectid)
> "m_uop_start_idx" btree (date_start)

The date_part column is actually simply "EXTRACT (EPOCH FROM
date_start::date)" so that I could put an index on that date, and the
gen_time column is actually "date_processed-date_start" so that there
could be an index on that difference as well.

My median query is copied from
http://book.itzero.com/read/others/0602/OReilly.SQL.Cookbook.Dec.2005_html/0596009763/sqlckbk-CHP-7-SECT-10.html

Here it is:
> select date_start, avg(gen_time)
>   from (
> select a.date_start::date, a.gen_time
>   from m_uop_times a, m_uop_times b
>  where a.date_part = b.date_part
>  group by a.date_start::date, a.gen_time
> having sum(case when a.gen_time=b.gen_time then 1 else 0 end)
>   >= abs(sum(sign(a.gen_time - b.gen_time)))
>) as foo
> group by date_start;

Basically, I want to find the median gen_time for each day.

Would anyone know a better way to do this, or have suggestions on how
I can make this work without dying?

Any help appreciated!
--Richard
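The self-join on a.date_part = b.date_part pairs every row with every other row from the same day, so the intermediate result grows roughly quadratically with the rows per day; with 512327 rows, that is what fills the temporary space. If pulling the two columns out of the database is acceptable, a single pass per group suffices; a sketch in Python (column names borrowed from the table above, data hypothetical):

```python
from collections import defaultdict
from statistics import median

def median_gen_time_per_day(rows):
    """rows: iterable of (date_start, gen_time) pairs.
    Groups by day and takes the median of each group in one pass,
    instead of materializing an O(n^2) self-join."""
    by_day = defaultdict(list)
    for date_start, gen_time in rows:
        by_day[date_start].append(gen_time)
    return {day: median(times) for day, times in sorted(by_day.items())}

rows = [("2007-04-01", 10), ("2007-04-01", 30), ("2007-04-01", 20),
        ("2007-04-02", 5), ("2007-04-02", 7)]
print(median_gen_time_per_day(rows))
# {'2007-04-01': 20, '2007-04-02': 6.0}
```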




Re: [GENERAL] median query causes disk to fill up

2007-04-14 Thread [EMAIL PROTECTED]
Sorry, I forgot to also mention that I am running this on a machine
with 80GB free disk space, and 1GB RAM (but I wouldn't think that this
would be the problem, would it?)

--Richard








Re: [GENERAL] DTrace and PostgreSQL

2007-04-13 Thread [EMAIL PROTECTED]
On Apr 12, 8:13 pm, "Karen Hill" <[EMAIL PROTECTED]> wrote:
> I was wondering if DTrace could tell me how many inserts are being
> done in a pl/pgsql function while in a loop for example.  As you know
> a pl/pgsql function executes in a single transaction so the DTrace
> probe "transaction__commit(int)" I believe is not helpful here.  Could
> DTrace measure how many inserts are being done in a transaction that
> has not yet been commited, especially if that transaction block is in
> a pl/pgsql function?  This would be extremely useful as when one has a
> bunch of inserts one could be able to see how far along the pl/pgsql
> function was.

Karen, having fun communicating with your V125?

FYI:
http://pgfoundry.org/docman/view.php/1000163/230/PostgreSQL-DTrace-Users-Guide.pdf

Rayson



>
> regards,
> karen





Re: [GENERAL] use superuser connection from php

2007-04-04 Thread [EMAIL PROTECTED]

Albe Laurenz wrote:
> You can connect as superuser on a different connection and issue that
> SELECT statement.

OK

> But I wouldn't do that. What if there is a problem and all availaible
> superuser connections are exhausted? You would not be able to connect
> to the database any more, even as superuser.

True.

> I would rather examine the SQLSTATE you get from the failed connection
> attempt. PostgreSQL returns SQLSTATE 53300 (TOO MANY CONNECTIONS) when
> the maximum is reached. This way you can distinguish that case from
> others.


Oh, thanks a lot for the hint!

Iv




Re: [GENERAL] use superuser connection from php

2007-04-04 Thread [EMAIL PROTECTED]

Martijn van Oosterhout wrote:
> To use the superuser connections you need to login as superuser...

Eh... OK. I feel a bit stupid :) Thanks ;)

Iv




Re: [GENERAL] inserting multiple values in version 8.1.5

2007-04-03 Thread [EMAIL PROTECTED]

I need to do about 1000 inserts periodically from a web app. Is it better to
do 1000 inserts or one insert with all 1000 rows? Is using the COPY command
faster than inserts?
thanks

On 4/2/07, Chris <[EMAIL PROTECTED]> wrote:


[EMAIL PROTECTED] wrote:
> Hi
> I am trying to insert multiple values into a table like this.
> INSERT INTO tab_name (col1, col2) VALUES (val1, val2), (val3, val4)
>
> This works in postgres version 8.2.1
>
> My production server runs in 8.1.5. It gives me
> ERROR:  syntax error at or near "," at character 35

That came in at v8.2.

You can't use it in 8.1.5.

--
Postgresql & php tutorials
http://www.designmagick.com/



[GENERAL] inserting multiple values in version 8.1.5

2007-04-02 Thread [EMAIL PROTECTED]

Hi
I am trying to insert multiple values into a table like this.
INSERT INTO tab_name (col1, col2) VALUES (val1, val2), (val3, val4)

This works in postgres version 8.2.1

My production server runs in 8.1.5. It gives me
ERROR:  syntax error at or near "," at character 35

What to do?
thanks


[GENERAL] use superuser connection from php

2007-04-02 Thread [EMAIL PROTECTED]

hello,

we have a PHP application which from time to time gets database errors 
that look like there are not enough connections (PostgreSQL is configured 
to allow 100) -


I read that there are two DB connections reserved for the superuser. Is 
there a way to use them from PHP to check whether the database is really 
out of connections (with 'SELECT count(*) FROM pg_stat_activity'), or 
whether the problem is something else?


thanks,
iv




[GENERAL] Using PostgreSQL to archive personal email

2007-03-21 Thread [EMAIL PROTECTED]
Hi,

Does anyone know of any apps using PostgreSQL to archive their
personal email and make it searchable?  And that runs on Mac OS X?

thanks,
matt




Re: [GENERAL] PgSql on Vista?

2007-03-18 Thread [EMAIL PROTECTED]

OK, thanks, I've fixed it.
The problem is with the 8.2.3 installer; I tried 8.1.8 and had no problem.

Thanks

On 3/15/07, Dave Page <[EMAIL PROTECTED]> wrote:



> --- Original Message ---
> From: Alvaro Herrera <[EMAIL PROTECTED]>
> To: Dave Page <[EMAIL PROTECTED]>
> Sent: 15/03/07, 22:32:50
> Subject: Re: [GENERAL] PgSql on Vista?
>
> Dave Page escribió:
> > Paul Lambert wrote:
> >
> > >After install completes you can turn it back on... if you want -
> > >personally I leave it off, it's an incredibly annoying "feature".
> >
> > Doesn't the security center keep popping up to point out that it's
> > turned off?
>
> You mean, like this?
>
> 
http://images.apple.com/movies/us/apple/getamac/apple-getamac-security_480x376.mov
>

I haven't watched yet as I'm on my pda, but would it seem a little ironic if I 
pointed out that the Windows distribution of PostgreSQL is built on a Macbook 
Pro these days?

/D




--
(¯`·._)
XF86Config knows MUCH more than the devil does. Or maybe it serves to
summon the devil himself, I never quite understood.
(¯`·._)



Re: [GENERAL] Trigger for Audit Table

2007-03-09 Thread [EMAIL PROTECTED]
You can/should create it as an AFTER UPDATE trigger.  The OLD row will
contain the previous values.

eg:
  INSERT INTO template_history
  ( template_id, path, content, last_updated_time, person )
  values
  (OLD.id, OLD.path, OLD.content, OLD.last_updated_time, OLD.person);

On Mar 9, 2:45 pm, [EMAIL PROTECTED] (Bill Moseley) wrote:

> My trigger is very simple:
>
> CREATE OR REPLACE FUNCTION audit_template() RETURNS TRIGGER AS '
> BEGIN
> INSERT INTO template_history
> ( template_id, path, content, last_updated_time, 
> person )
> select
> id, path, content, last_updated_time, person
> from
> template where id = 1;
>
> RETURN NEW;
> END'
> language 'plpgsql';
>
> CREATE TRIGGER template_history_add BEFORE UPDATE ON template
> for each row execute procedure audit_template();
>
> I realize this is a *BEFORE* UPDATE trigger, but I have this vague
> memory of seeing a post stating that you can't be sure the existing
> row has not been updated yet. Perhaps that was just a concern if
> another trigger was to modify the row.  But, I can't seem to find that
> post now which is why I'm asking for the sanity check.
>
> Are there potential problems with this setup?
>
> --
> Bill Moseley
> [EMAIL PROTECTED]





Re: [GENERAL] how to pass an array to the plpgsql function from Java Code

2007-03-06 Thread [EMAIL PROTECTED]
On Mar 4, 11:34 am, [EMAIL PROTECTED] ("Sandeep Kumar
Jakkaraju") wrote:
> how to pass an array to the plpgsql function from Java Code ??

If nothing else, you could use the ARRAY[] constructor:

int[] ar = {1, 2, 3};
PreparedStatement pre = connection.prepareStatement(
        "select test(ARRAY[?,?,?])");

// Each placeholder is a plain int element, so bind with setInt;
// setArray() would expect a java.sql.Array object instead.
pre.setInt(1, ar[0]);
pre.setInt(2, ar[1]);
pre.setInt(3, ar[2]);






Re: [GENERAL] select all matches for a regular expression ?

2007-02-25 Thread [EMAIL PROTECTED]
I'm going to disagree and say it can be done (maybe).

Use regexp_replace() to convert non-numeric characters.  Depending on
your final needs, you could leave it as a comma-separated list or
split it to an array.


select string_to_array(regexp_replace(regexp_replace('hello4 is 4 very
n1ce num8er', '[^0-9]+', ',', 'g'), '^,|,$', '', 'g'),',');

{4,4,1,8}
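For comparison, the same "all matches" extraction is a one-liner anywhere a findall-style regex API is available; a sketch in Python:

```python
import re

# Pull every maximal run of digits out of the string, in order.
matches = re.findall(r"[0-9]+", "hello4 is 4 very n1ce num8er")
print(matches)  # ['4', '4', '1', '8']
```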



On Feb 23, 10:18 am, [EMAIL PROTECTED] ("Anton Melser") wrote:
> On 23/02/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> > "Anton Melser" <[EMAIL PROTECTED]> writes:
> > > I need to be able to get all the matches for a particular regexp from
> > > a text field that I need to use in another query in a function. Is
> > > this possible with plpgsql? Do I have to install the perl language?
>
> > You need plperl (or pltcl; likely plpython would work too) --- the
> > built-in regex functions don't have any way to return more than the
> > first match.  There's a patch pending to provide more functionality
> > here for 8.3, but it won't help you today.
>
> Thanks for the info
> Cheers
> Anton
>





Re: [GENERAL] SQL Question - Using Group By

2007-02-25 Thread [EMAIL PROTECTED]
You could use COUNT() in conjunction with NULLIF:

select "Type",
count(nullif("Active", false)) as "Active Count",
count(nullif("Active", true)) as "Inactive Count",
100 * count(nullif("Active", false)) / count(*) as "Active Percent"
from table_name group by "Type"
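The trick here is that NULLIF turns the unwanted boolean into NULL, and count() skips NULLs, so each count() becomes a conditional count. The same per-group tallying, sketched in Python for clarity (note the integer division for the percentage, matching the SQL above):

```python
from collections import defaultdict

def activity_report(rows):
    """rows: iterable of (type, active) pairs.
    Returns {type: (active_count, inactive_count, active_percent)}."""
    counts = defaultdict(lambda: [0, 0])
    for typ, active in rows:
        counts[typ][0 if active else 1] += 1  # tally per group
    return {typ: (a, i, 100 * a // (a + i)) for typ, (a, i) in counts.items()}

rows = [("Type One", False), ("Type Two", True), ("Type One", True),
        ("Type Fifty", False), ("Type Two", True)]
print(activity_report(rows))
# {'Type One': (1, 1, 50), 'Type Two': (2, 0, 100), 'Type Fifty': (0, 1, 0)}
```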


On Feb 23, 2:50 pm, "Mike" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have a question about using Group By.
>
> On a table like this:
>
> Type (varchar) | Active (boolean)
> 
> Type One   |  False
> Type Two   |  True
> Type One   |  True
> Type Fifty   |  False
> Type Two   |  True
>
> Having this table I want a report grouping Types and giving me more
> statistics such as:
>
> Type|Active Count| Inactive Count|Active
> Percent
>
> How do i do that?
>
> I can think of :
>
> select Type from table_name group by Type
>
> But that doesn't give me how many active and inactive each had!
>
> Please help me here understand how to approach this.
>
> Thank you,
> Mike





Re: [GENERAL] Stored Procedure examples

2007-02-15 Thread [EMAIL PROTECTED]
> I searched on postgreSql site and found
> a topic "Stored Procedure Example". But actually, they
> showed how to write a function on postgreSql database.

A procedure is a function with a return type of void.




[GENERAL] flaky hardware?

2007-02-12 Thread [EMAIL PROTECTED]

Saw this in the postgres 8.2.3 server log today:

2007-02-10 00:27:12 PST PANIC:  XX001: corrupted item pointer: offset  
= 0, size = 0

2007-02-10 00:27:12 PST LOCATION:  PageIndexMultiDelete, bufpage.c:624
2007-02-10 00:27:56 PST LOG:  0: autovacuum process (PID 25471)  
was terminated by signal 6

2007-02-10 00:27:56 PST LOCATION:  LogChildExit, postmaster.c:2430
2007-02-10 00:27:56 PST LOG:  0: terminating any other active  
server processes

2007-02-10 00:27:56 PST LOCATION:  HandleChildCrash, postmaster.c:2315

Smells like flaky hardware. Anyone have any tips?
Incidentally, does anyone know of a good Linux hard drive
test/verification tool?


-jay




[GENERAL] DBI-Link 2.0

2007-02-08 Thread [EMAIL PROTECTED]
Hi,

Is there any form of manual for DBI-link out there?

Any link is greatly appreciated!

Regards,
Jo.




Re: [GENERAL] DBI-Link 2.0

2007-02-08 Thread [EMAIL PROTECTED]
On Feb 8, 8:54 am, Hannes Dorbath <[EMAIL PROTECTED]> wrote:
> On 07.02.2007 17:59, [EMAIL PROTECTED] wrote:
>
> > Is there any form of manual for DBI-link out there?
>
> > Any link is greatly appreciated!
>
> I think all available documentation comes with it in various README
> files. Do you have a specific problem?
>
> --
> Regards,
> Hannes Dorbath

I worked out all the problems that I had (and there were some), but now I
have to pass the information on to a developer who has no experience with
dbi-link. I'll just try to explain to him what I know already and pass him
the implementation.txt file.

thanks!

Jo De Haes.




[GENERAL] line folding versus shop line

2007-02-08 Thread [EMAIL PROTECTED]
Hi,

I'm using PostgreSQL 7.4. If I execute a SQL SELECT statement, the
records wrap across lines.

I would like to turn this off, so that lines do not fold.

I know the option in less that one has to set: "-S". I have
exported the variable and it works with text files.

However, if I use psql, lines are folded even though the environment
variable LESS is set to S.

How do I turn line folding off, so that records do not wrap around?

Fritz




  1   2   3   >