Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-24 Thread Willem Buitendyk
I did create the schemas with PgAdmin.  As a test I also created another schema 
in psql, and it has the same problem: the function does not work.  This time I 
also created the function without relying on search_path, and even altered the 
function and table names slightly in case there was some kind of conflict.  
Consequently there is no longer a search_path error in the log, but again the 
function will not work, even though it appears to go through the motions of 
working.

select test2._crab_set_process_month_trial('2012-01-01');


CREATE OR REPLACE FUNCTION test2._crab_set_process_month_trial(date)
  RETURNS void AS
$BODY$

BEGIN

update test2.activity_trial set action_month = $1;


END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION test2._crab_set_process_month_trial(date)
  OWNER TO postgres;


CREATE TABLE test2.activity_trial
(
  action_month date NOT NULL,
  CONSTRAINT idkeymonth PRIMARY KEY (action_month )
)
WITH (
  OIDS=FALSE
);
ALTER TABLE test2.activity_trial
  OWNER TO postgres;
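
For anyone reproducing this, a minimal sanity check (assuming only the table and function above, and a superuser session in the right database) would be:

```sql
-- Seed a row, call the function, then inspect the table directly:
INSERT INTO test2.activity_trial VALUES (DATE '2011-12-01');
SELECT test2._crab_set_process_month_trial(DATE '2012-01-01');
SELECT * FROM test2.activity_trial;
```

If the final SELECT still shows 2011-12-01, the update genuinely isn't reaching this table, which would point at a different table of the same name being updated elsewhere.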




On 2012-02-23, at 6:04 PM, Adrian Klaver wrote:

 On Thursday, February 23, 2012 1:53:42 pm Willem Buitendyk wrote:
 Both via psql and PgAdmin.
 
 Yes only one database cluster.
 
 
 Another thought.
 Did you CREATE the schema using PgAdmin and if so, might you have inadvertently 
 put in a trailing or leading space?  I ask because, if I remember correctly, 
 PgAdmin by default quotes object names and that would trap the space character.
 
 I know  you showed this previously:
 
 crabby;crabdata;postgres
 
 On the chance that spaces were trimmed out of the above, what does the query 
 below show?:
 
 SELECT length(schema_name), schema_name from information_schema.schemata;
 
 -- 
 Adrian Klaver
 adrian.kla...@gmail.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-24 Thread Willem Buitendyk
OK, I must be doing something wrong.  I tried the same test on my old server 
running 8.3, which has had no problem with anything up till now (but has also 
only been working within the public schema).  So I am obviously not working 
with schemas correctly.  I will read the manual in the hope of a deeper 
understanding.







Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-24 Thread Willem Buitendyk
I tried as you suggested and my results are:

crabby=# SELECT length(schema_name), schema_name FROM information_schema.schemata;
 length |    schema_name
--------+--------------------
      8 | pg_toast
      9 | pg_temp_1
     15 | pg_toast_temp_1
     10 | pg_catalog
      6 | public
     18 | information_schema
      8 | crabdata
(7 rows)


So it seems the crabdata schema does not contain an extra space character or 
anything of the sort.  Likewise, a schema I created earlier in a test (called 
test) from psql exhibited the same behaviour.
I've spent a whole week migrating from 8.3 to 9.1 and am loath to repeat the 
process because I might have a broken installation.  In all other respects 
everything seems to work fine.  Failing all else I can try a re-installation.  
If I go down this road, are there any suggestions for wiping the slate clean to 
give myself the best fighting chance of having this work?  This is Windows 7 
64-bit with PostgreSQL 9.1 32-bit and PostGIS.  I am also making sure to 
operate from the correct database.

Here are the two problems as such:

1) Setting the search_path to another schema produces this error in the server 
log:

2012-02-24 11:32:59.456 PST @[3868]: WARNING:  invalid value for parameter 
search_path: crabdata, public
2012-02-24 11:32:59.456 PST @[3868]: DETAIL:  schema crabdata does not exist  

As noted at the beginning of this post, crabdata is clearly present and does 
not contain any extraneous characters.

2) Schema-qualifying functions and tables still fails to work correctly.  For 
example:

select crabdata._crab_set_report_month('2012-01-01');

CREATE OR REPLACE FUNCTION crabdata._crab_set_report_month(date)
  RETURNS void AS
$BODY$

BEGIN

update activity_month set action_month = $1;
perform * from _crab_pop_tag_day_over();

END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION crabdata._crab_set_report_month(date)
  OWNER TO postgres;
GRANT EXECUTE ON FUNCTION crabdata._crab_set_report_month(date) TO public;
GRANT EXECUTE ON FUNCTION crabdata._crab_set_report_month(date) TO postgres;


CREATE TABLE crabdata.activity_month
(
  action_month date NOT NULL,
  CONSTRAINT idkeymonth PRIMARY KEY (action_month )
)
WITH (
  OIDS=FALSE
);
ALTER TABLE crabdata.activity_month
  OWNER TO postgres;
GRANT ALL ON TABLE crabdata.activity_month TO postgres;
GRANT ALL ON TABLE crabdata.activity_month TO public;
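
One hedge against search_path trouble, supported in 9.1, is to pin a search_path on the function itself so it no longer depends on the caller's setting. A sketch using the function above:

```sql
-- The function will now always resolve unqualified names against
-- crabdata first, regardless of the session's search_path:
ALTER FUNCTION crabdata._crab_set_report_month(date)
  SET search_path = crabdata, public;
```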





[GENERAL] Upgrade to 9.1 causing function problem

2012-02-23 Thread Willem Buitendyk
I have recently upgraded my database from 8.4 to 9.1.  In the process I have 
moved everything to a different schema; PostGIS is occupying the public schema.  
Everything is working fine except for some of my own functions.  Here is a 
small function and table that will not update when I run the following: 
select _me_set_process_month('2012-01-01');  It runs, but the underlying 
table is not updated.  Any ideas?


CREATE OR REPLACE FUNCTION _me_set_process_month(date)
  RETURNS void AS
$BODY$

BEGIN

update activity_month set action_month = $1;


END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION _me_set_process_month(date)
  OWNER TO postgres;



CREATE TABLE activity_month
(
  action_month date NOT NULL,
  CONSTRAINT idkeymonth PRIMARY KEY (action_month )
)
WITH (
  OIDS=FALSE
);
ALTER TABLE activity_month
  OWNER TO postgres;
GRANT ALL ON TABLE activity_month TO public;
GRANT ALL ON TABLE activity_month TO postgres;


Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-23 Thread Willem Buitendyk
Here are the log entries:

2012-02-23 11:31:44 PST WARNING invalid value for parameter search_path: crabdata
2012-02-23 11:31:44 PST DETAIL  schema crabdata does not exist

Bizarre, because I did set my search_path to the schema crabdata, but now it is 
saying that crabdata doesn't exist.  It clearly does.  I'm at a loss.

Willem



On 2012-02-23, at 11:41 AM, Adrian Klaver wrote:

 On Thursday, February 23, 2012 11:18:46 am Willem Buitendyk wrote:
 I have recently upgraded my database from 8.4 to 9.1.  In the process I
 have moved everything to a different schema.  Postgis is occupying the
 public schema.  Everything is working fine except for some of my own
 functions. Here is a small function and table that will not update when I
 perform the following code: select _me_set_process_month('2012-01-01');
 It will run but the resulting table will not update.  Any ideas?
 
 What does the Postgres log say?
 What is your search_path set to?
 Is there more than one activity_month?
 
 My guess is that your search_path is limiting the visibility of the table. 
 Without an explicit schema qualification of the table, Postgres will use the 
 search_path to locate a table. The logs should say something, or you can change 
 the function to point to a schema-qualified table name, if you don't want to 
 change the search_path.
 
 -- 
 Adrian Klaver
 adrian.kla...@gmail.com




Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-23 Thread Willem Buitendyk
crabby=# \dv
             List of relations
  Schema  |        Name        | Type |  Owner
----------+--------------------+------+----------
 crabdata | Total XXX X by XXX | view | postgres
(1 row)


also:

crabby=# show search_path;
 search_path
-------------
 crabdata
(1 row)


On 2012-02-23, at 12:16 PM, Rob Sargent wrote:

 
 Who owns/owned the crabdata schema?
 
 




Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-23 Thread Willem Buitendyk
I have it set in postgresql.conf and I've also used:

alter user postgres set search_path = crabdata,public;

looking at the logs after issuing the above alter user command I get:

2012-02-23 13:03:09 PST WARNING invalid value for parameter search_path: 
crabdata, public   
2012-02-23 13:03:09 PST DETAIL  schema crabdata does not exist

If I look in the schemata table in the catalogs, I see the crabdata schema is there:

crabby;pg_toast;postgres
crabby;pg_temp_1;postgres
crabby;pg_toast_temp_1;postgres
crabby;pg_catalog;postgres
crabby;public;postgres
crabby;information_schema;postgres
crabby;crabdata;postgres


I should note this is on Windows 7 64-bit, using the 32-bit installation of 
PostgreSQL 9.1.
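
Given the suspicion of a second cluster, a quick way to confirm which server and database a session is actually talking to (all of these are long-standing built-ins):

```sql
-- Confirm which server, port, and database this session is connected to
SELECT current_database(), inet_server_addr(), inet_server_port(), version();
-- And which schemas are actually visible via the effective search_path
SELECT current_schemas(true);
```

If the port or version() here differs from what you expect, psql and PgAdmin may be talking to two different clusters.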



On 2012-02-23, at 12:57 PM, Adrian Klaver wrote:

 On 02/23/2012 11:49 AM, Willem Buitendyk wrote:
 
 Where did you set the search_path: in postgresql.conf or from a SET command?
 
 
 Willem
 
 
 
 -- 
 Adrian Klaver
 adrian.kla...@gmail.com




Re: [GENERAL] Upgrade to 9.1 causing function problem

2012-02-23 Thread Willem Buitendyk
Both via psql and PgAdmin.  

Yes only one database cluster.


On 2012-02-23, at 1:32 PM, Adrian Klaver wrote:

 
 How are you doing these changes, via psql or PgAdmin or both?
 Is the previous 8.4 instance of Postgres still up and running?
 Are you sure you are only working with one database cluster?
 
 
 
 
 
 
 -- 
 Adrian Klaver
 adrian.kla...@gmail.com




[GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk
I have 8.2 installed on 64-bit Windows 7.  I have no problem making a local 
connection.  However, when I make changes to pg_hba.conf, such as adding:

local all all trust

I still cannot connect through a VPN.  On a hunch that my pg server was not 
using the config files in C:\Program Files (x86)\PostgreSQL\8.3\data, I 
changed the port in postgresql.conf to 5433 and restarted the server.  After 
doing this I am still able to connect to the server using psql -h localhost -U 
postgres -d xxx.  I am assuming (perhaps incorrectly) that I shouldn't be able 
to do this.  So now I'm completely stumped.  I've searched my computer and 
can't find any other conf files.  I recently set $PGDATA to C:\Program Files 
(x86)\PostgreSQL\8.3\data\ and put the same installation's bin directory in my 
$PATH.  I do notice that lib is not in my $PATH, but I assume that would not 
affect my connection.  Any ideas?
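
As later replies note, on Windows even "local" connections arrive over TCP/IP, so they match host lines, not local lines. A sketch of the kind of entry a VPN connection would need; the subnet below is a placeholder for your actual VPN address range:

```
# pg_hba.conf -- allow TCP connections from the VPN subnet (placeholder addresses)
host    all    all    10.0.1.0/24    md5
```

The server must also be listening on a non-loopback interface, e.g. listen_addresses = '*' in postgresql.conf, and be restarted after either change.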





Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk

On 2012-01-19, at 10:10 AM, Raymond O'Donnell wrote:

 On 19/01/2012 17:27, Willem Buitendyk wrote:
 I have 8.2 installed on 64bit windows 7.  I have no problem making a
 local connection.  However, when I make changes to pg_hba.conf such
 as add:
 
 local all all trust
 
 What is the exact error message you're getting?
 
 Did you restart the server after changing pg_hba.conf?
 
 Also, I don't think local rules do anything on windows - you need to
 add a host rule as the connections are over TCP/IP (though I could be
 wrong).
 
 I still cannot connect through a VPN.  On a hunch that my pg server
 was not using the config files in C:\Program Files
 (x86)\PostgreSQL\8.3\data  I changed the port in postgresql.conf to
 5433 and restarted the server.  After doing this I am still able to
 connect the server using psql -h localhost -U postgres -d xxx  I am
 assuming (perhaps incorrectly) that I shouldn't be able to do this.
 
 That does seem odd - you should need the -p option for anything other
 than the standard port.
 
 Is there any chance that you have more than one installation running on
 the machine, and the other one is listening on port 5432?

There is only one service listed.  If I try the following:

C:\Users\Willem> postgres -D "C:\Program Files (x86)\PostgreSQL\8.3\data"

I get:

2012-01-19 10:48:06 PST LOG:  loaded library $libdir/plugins/plugin_debugger.dll
2012-01-19 10:48:06 PST LOG:  could not bind IPv4 socket: No error
2012-01-19 10:48:06 PST HINT:  Is another postmaster already running on port 5433? If not, wait a few seconds and retry.
2012-01-19 10:48:06 PST WARNING:  could not create listen socket for 10.0.1.7

There appears to be no other instance of postgresql running on my system other 
than that one.
I will try a restart without the service starting automatically and try a 
manual start next.

 
 Ray.
 
 -- 
 Raymond O'Donnell :: Galway :: Ireland
 r...@iol.ie




Re: [GENERAL] Cannot connect remotely to postgresql

2012-01-19 Thread Willem Buitendyk
I tried manually starting, without the service running automatically, using 
pg_ctl start -D c:\program files (x86)\etc etc, which reported back that I 
might have another postmaster running.  I then did pg_ctl reload -D c:\program 
files (x86)\etc etc; it sent a signal and voila, it worked.  I have since put 
everything back to having the PostgreSQL service start automatically on machine 
startup, and it's back to not working.  In fact, when I run pg_ctl status from 
a fresh boot with the service starting automatically, I get the return message: 
pg_ctl: no server running.

So perhaps there is something specific to 8.3 on 64-bit Windows whereby the 
configuration files are loaded from somewhere else.  Very peculiar behaviour.  
I have some relief from my madness: at least I can manually start the server 
and have it running properly.
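
When it's unclear which configuration a running server actually loaded, the server itself can be asked from any psql session. These SHOW variables have been available throughout the 8.x series:

```sql
SHOW data_directory;  -- the data directory the postmaster is actually using
SHOW config_file;     -- full path of the postgresql.conf that was read
SHOW hba_file;        -- full path of the pg_hba.conf that was read
SHOW port;            -- the port this server is really listening on
```

If config_file points somewhere other than C:\Program Files (x86)\PostgreSQL\8.3\data, that would explain edits there having no effect.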






[GENERAL] Extract Week

2009-05-07 Thread Willem Buitendyk
Any ideas how to extract a non-ISO week from a timestamp?  In other words, 
weeks that start on Sunday and end on Saturday.  We have the dow function, 
which returns the non-ISO day of the week, so why not a non-ISO week function?
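
For what it's worth, one sketch of a Sunday-based week number, built from day-of-year and the weekday of January 1st (this is not a built-in; verify it against your own calendar conventions, particularly how the partial first week should be counted):

```sql
-- Sunday-based week number: January 1's (possibly partial) week is week 1
SELECT floor((extract(doy from d) - 1
              + extract(dow from date_trunc('year', d))) / 7)::int + 1
       AS sunday_week
FROM (SELECT DATE '2009-05-07' AS d) AS s;
```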


Cheers,

Willem



[GENERAL] SPI_ERROR_CONNECT

2008-02-11 Thread Willem Buitendyk
I am receiving a SPI_ERROR_CONNECT error.  From what I'm reading, I could 
fix this in C using SPI_push().  How does one fix this in PL/pgSQL?


Return error:
---
NOTICE:  current day = 1

ERROR:  SPI_connect failed: SPI_ERROR_CONNECT
CONTEXT:  PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows


** Error **

ERROR: SPI_connect failed: SPI_ERROR_CONNECT
SQL state: XX000
Context: PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows

---

Here is my function:
---
CREATE OR REPLACE FUNCTION pop_tag_day_over()
  RETURNS void AS
$BODY$
DECLARE
    current_row RECORD;
    trans_day integer;
BEGIN

    trans_day := 0;

    truncate table day_over;

    FOR i IN 1..(extract('day' from (last_day(process_month()))) - 1)::integer LOOP

        execute 'CREATE OR REPLACE VIEW temp_tags_18 AS SELECT datetime,
        tagnum, tagtype, vrn
        FROM tag
        WHERE datetime <= process_month()
        AND datetime > (process_month() - 18 + ' || trans_day || ')
        ORDER BY vrn, tagnum, datetime';

        FOR current_row IN SELECT * from temp_tags_18_counted
        LOOP
            IF current_row.day_count = 1 THEN
                insert into day_over (vrn, process_day) values
                    (current_row.vrn, (1 + trans_day));
            END IF;
        END LOOP;
        raise notice 'current day = %', trans_day + 1;
        trans_day := i;

    END LOOP;

END;

$BODY$
  LANGUAGE 'plpgsql';
---

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] SPI_ERROR_CONNECT

2008-02-11 Thread Willem Buitendyk



Tom Lane wrote:

That's a fairly bad workaround (assuming that the function is a
 legitimate candidate to be IMMUTABLE) because it defeats potential
 optimizations.

 What I'd suggest you do instead is rethink your apparently widespread
 habit of whacking your view definitions around on-the-fly.  This would
 never have worked at all before PG 8.3 (and as you can see we still have
 some bugs left in supporting it in 8.3 :-().  Even when it does work,
 there is a whole lot of frantic paddling going on just under the
 surface.  We may sail serenely on like the swan, but not very speedily.

Yep, I've already started reorganizing so that I don't have to hack away at 
the views so much.  So far I've been able to do without the functions that 
would only work as volatile.

cheers,

willem



Re: [GENERAL] SPI_ERROR_CONNECT

2008-02-11 Thread Willem Buitendyk

The problem was with the following:

  FOR current_row IN SELECT * from temp_tags_18_counted

The select from the temp_tags_18_counted view is made up of 3 cross joins.  
When I simplify and remove the joins, everything works.  I tried this with 
some test data with only a few rows and the joins in place, and it works too.
In the production data table there are about 250K rows.  Is it possible that 
query calls are colliding here, or not giving each other enough time before 
being whisked around to the next call in the FOR loop?


cheers,

willem

Tom Lane wrote:

Willem Buitendyk [EMAIL PROTECTED] writes:
  

ERROR:  SPI_connect failed: SPI_ERROR_CONNECT
CONTEXT:  PL/pgSQL function pop_tag_day_over line 17 at FOR over SELECT rows



Hm, what PG version is this?  And could we have a complete test case,
not just the function?  (I don't feel like trying to reverse-engineer
your tables and views...)

regards, tom lane






Re: [GENERAL] SPI_ERROR_CONNECT

2008-02-11 Thread Willem Buitendyk

Thanks Tom,

I sent you a test case.  The problem has since been resolved by changing 
one of my functions to VOLATILE instead of IMMUTABLE.  This has caught 
me twice now in the last few days.  I hope my learning of this will be a 
little more IMMUTABLE :)


cheers,
willem
PG 8.3
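
The underlying rule of thumb: a function whose result can change with table contents (as process_month() in this thread presumably does, since it tracks the month being processed) must not be declared IMMUTABLE; STABLE is the strongest label that is still safe for such a function. A hedged sketch:

```sql
-- Assuming process_month() reads its answer from a table:
-- STABLE promises a constant result within one statement, but not across
-- statements, so the planner will not bake the value into cached plans.
ALTER FUNCTION process_month() STABLE;
```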

Tom Lane wrote:

Willem Buitendyk [EMAIL PROTECTED] writes:
  



No.  Please provide a test case instead of speculating.  And, again,
what is the PG version?

regards, tom lane






Re: [GENERAL] pg_restore seems slow

2008-02-10 Thread Willem Buitendyk
I did use the -d switch, but I didn't use the -C switch, so I'm not sure a 
database was actually created.  Anyway, after I used the correct switches it 
all worked fast, really fast: about 1M records per minute.  I was able to peek 
at the server processes to see the current COPY commands in effect.  Would 
still like to see a progress indicator though :)


Willem

Gurjeet Singh wrote:
On Feb 9, 2008 10:42 AM, Willem Buitendyk [EMAIL PROTECTED] wrote:




By any chance, are you using -d or -D option while doing pg_dump?

Best regards,
--
[EMAIL PROTECTED]
[EMAIL PROTECTED] gmail | hotmail | indiatimes | yahoo }.com

EnterpriseDB  http://www.enterprisedb.com

17° 29' 34.37N,   78° 30' 59.76E - Hyderabad
18° 32' 57.25N,   73° 56' 25.42E - Pune
37° 47' 19.72N, 122° 24' 1.69 W - San Francisco *

http://gurjeet.frihost.net

Mail sent from my BlackLaptop device 





[GENERAL] Mechanics of Select

2008-02-10 Thread Willem Buitendyk
I have the following function that returns the first day of the next month 
from whatever date is passed in.  If I use it as part of a select statement, 
the query takes almost twice as long.  Is this because the function is called 
for each scanned record?  If so, any ideas how I could make it run only once?

For instance:

select * from track where datetime >= '2007-04-01' and datetime < 
'2007-05-01'; takes about 30 ms to return 650K rows.

select * from track where datetime >= '2007-04-01' and datetime < 
first_day_next_month('2007-04-01'); takes about 55 ms to return 650K rows.


CREATE OR REPLACE FUNCTION first_day_next_month(inputdate date)
  RETURNS date AS
$BODY$
declare
    inputmonth1 integer;
    inputyear1 integer;
    inputmonth2 integer;
    inputyear2 integer;
    resultdate date;
BEGIN
    inputmonth1 = extract(month from inputdate)::integer;
    inputyear1 = extract(year from inputdate)::integer;

    if inputmonth1 = 12 then
        inputyear2 = inputyear1 + 1;
    else
        inputyear2 = inputyear1;
    end if;

    if inputmonth1 = 12 then
        inputmonth2 = 1;
    else
        inputmonth2 = inputmonth1 + 1;
    end if;

    resultdate = (inputyear2)::text || '-' || (inputmonth2)::text || '-' || '01';

    resultdate = to_date(resultdate::text, 'YYYY-MM-DD');

    RETURN resultdate;
END;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE
  COST 100;
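
For comparison, the same result can be had from a one-line SQL function; marking it IMMUTABLE lets the planner fold the call to a constant once per query rather than evaluating it per row. A sketch, assuming the same intended semantics as the plpgsql version above:

```sql
CREATE OR REPLACE FUNCTION first_day_next_month(inputdate date)
  RETURNS date AS
$$ SELECT (date_trunc('month', $1::timestamp) + interval '1 month')::date $$
  LANGUAGE sql IMMUTABLE;
```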





Re: [GENERAL] Mechanics of Select

2008-02-10 Thread Willem Buitendyk
As others have suggested, my big problem with the function I wrote was that I 
had made it VOLATILE instead of IMMUTABLE (it is no doubt suffering from code 
bloat as well).  That made all the difference.  Curiously, though, when I tried 
it with just the date_trunc function it was as slow as my old VOLATILE function.

select * from track where datetime >= '2007-04-01' and datetime < 
date_trunc('month','2007-04-01'::timestamp)+interval '1 month'; was about 55s
select * from track where datetime >= '2007-04-01' and datetime < 
first_day_next_month('2007-04-01'); was about 36s


cheers

Greg Smith wrote:

On Sun, 10 Feb 2008, Willem Buitendyk wrote:

I have the following function that returns the first day of the next 
month from whatever date is inserted.


See if you can do this with date_trunc instead to avoid calling a 
function, which avoids the whole thing.  The first day of next month is:


select date_trunc('month',now())+interval '1 month';

I'd be curious how the runtime using that compares with the plpgsql 
version you've done.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD






[GENERAL] pg_restore seems slow

2008-02-09 Thread Willem Buitendyk
I'm trying to restore my database from 8.26 into 8.3 (win32) but find 
the process to be exceedingly slow.  The database has about 60M records. 
I realize there will be differences based on hardware, available memory, 
complexity of records but when I first tried a restore with the verbose 
option I was able to calculate based on the index incrementing that it 
was inserting about 6500 records per minute.
At that rate it would take 153 hours to restore my db.  I then tried 
minimizing the verbosity window and would open it only after a minute 
and the speed was improved to about 20K records per minute.  I'm 
hoping without the verbose option that the speed increases to at least 
200K records per minute which would be a fairly reasonable 5 hours.  
So is there any way besides using verbose to calculate the speed at 
which pg_restore is inserting records?  It would be great to have a 
'progress' option so that a person could time going out for a sail in 
the morning and then return at just the right time.  Guess you know what 
I'd rather be doing instead of staring at the command prompt :) 
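Releases of that era have no built-in progress reporting, but a rough substitute (a sketch, with track standing in for whichever large table is being restored) is to watch the table's on-disk size from a second session, since it grows even before the loading COPY commits:

```sql
-- Run from a second psql session while pg_restore works.  The
-- relation's size grows during the COPY, so repeating this gives a
-- crude progress indicator; row counts would stay at 0 until commit.
SELECT pg_size_pretty(pg_total_relation_size('track')) AS loaded_so_far;
```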




Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk

Hey Adam,

I tried your sequence method this morning on an unsorted table and for 
some reason the order by's aren't working.  If I create a sorted view 
(client_id, datetime) on the 'all_client_times' table and then use that 
view with your sequence method all works fine.  The strange thing is 
that my table which has about 750K rows only ends up returning 658 rows 
with your sequence method using the unsorted table.  In fact, when I 
tried the same thing with the lagfunc() method you wrote earlier on an 
unsorted table the same thing occurs - only returning 658 rows instead 
of the 750K. Again, all works well with lagfunc() if I use it on a 
sorted view and I remove the order by in the function.  This is not too 
much of a problem as I can use a sorted view first but I don't 
understand why this is happening.  Perhaps this is a bug?


As well, I am finding that the lagfunc() is consistently faster than the 
sequence method.


cheers,

Willem

Adam Rich wrote:

I'm trying to replicate the use of Oracle's 'lag' and 'over
partition by' analytical functions in my query.  I have a table
(all_client_times) such as:
and I would like to create a new view that takes the first table and
calculates the time difference in minutes between each row so that
the result is something like:
  


I thought of another way of doing this.  In my tests, it's a little
faster, too.

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.datetime, a.datetime as previousTime, (b.datetime -
a.datetime) as difftime from
(select nextval('seq1') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, datetime from all_client_times
order by client_id, datetime OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id







Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk
The 'all_client_times' table has 753698 rows.  The lagfunc() on the 
sorted view returns 753576 rows and appears to work exactly as needed.  
Using the function on an unsorted table returns only 686 rows and is 
missing a whole lot of data.  Running the count query returns 122, 
which is correct as that is the number of clients I have.  Each client 
has between 5K and 7K records.


The way I see it is for each client there will be one row, namely, the 
first in the series, that will not be included in the final results as 
it would not have a previous time.  With that in mind, if I take my 
table row count as 753698 and minus the amount of clients I have, 122,  
then I should get the number of results as 753576 which is correct when 
I use your methods on a sorted table but which is not correct when I  
use your methods on an unsorted table.


willem

Adam Rich wrote:

Hi Willem,

  

for some reason the order by's aren't working.



Could you provide more details?  Do you get a specific error message?

  

only returning 658 rows instead of the 750K.



You should not expect the same row count in both source table and
result set.  Even in your example -- you provided 8 source rows, and
4 result rows.  The correct number of results is the sum, over every
client_id having two or more records in all_client_times, of that
client's record count minus one.  It may be true that you have 750k
records but only 658 rows that satisfy this requirement.

What do you get for this query?

select count(*) from ( select client_id, count(*) as rows 
from all_client_times group by client_id having count(*) > 1 ) as x



Adam




  





Re: [GENERAL] Oracle Analytical Functions

2008-01-31 Thread Willem Buitendyk
Here is a little test example.  It seems that the second order by 
condition is not working - in this case arbnum.


create table arb_test (
client_id integer,
arbnum integer);

insert into arb_test values (2,1);
insert into arb_test values (2,33);
insert into arb_test values (2,6);
insert into arb_test values (2,76);
insert into arb_test values (2,111);
insert into arb_test values (2,10);
insert into arb_test values (2,55);
insert into arb_test values (7,12);
insert into arb_test values (7,6);
insert into arb_test values (7,144);
insert into arb_test values (7,63);
insert into arb_test values (7,87);
insert into arb_test values (7,24);
insert into arb_test values (7,22);
insert into arb_test values (1,14);
insert into arb_test values (1,23);
insert into arb_test values (1,67);
insert into arb_test values (1,90);
insert into arb_test values (1,2);
insert into arb_test values (1,5);
insert into arb_test values (5,8);
insert into arb_test values (5,42);
insert into arb_test values (5,77);
insert into arb_test values (5,9);
insert into arb_test values (5,89);
insert into arb_test values (5,23);
insert into arb_test values (5,11);

DROP SEQUENCE if exists seq1;
DROP SEQUENCE if exists seq2;
CREATE TEMPORARY SEQUENCE seq1 CACHE 1000;
CREATE TEMPORARY SEQUENCE seq2 CACHE 1000;

select a.client_id, b.arbnum, a.arbnum as previousarbnum, (b.arbnum -
a.arbnum) as diffarbnum from
(select nextval('seq1') as s, client_id, arbnum from arb_test
order by client_id, arbnum OFFSET 0) as a
inner join
(select nextval('seq2') as s, client_id, arbnum from arb_test
order by client_id, arbnum OFFSET 0) as b
on a.s=(b.s-1) where a.client_id=b.client_id;

--create or replace view arb_view as select * from arb_test order by 
client_id, arbnum;


Here are the results:

client_id | arbnum | previousarbnum | diffarbnum
-----------+--------+----------------+------------
1 | 23 | 14 |  9
1 | 67 | 23 | 44
1 | 90 | 67 | 23
1 |  2 | 90 |-88
1 |  5 |  2 |  3
2 | 33 |  1 | 32
2 |  6 | 33 |-27
2 | 76 |  6 | 70
2 |111 | 76 | 35
2 | 10 |111 |   -101
2 | 55 | 10 | 45
5 | 42 |  8 | 34
5 | 77 | 42 | 35
5 |  9 | 77 |-68
5 | 89 |  9 | 80
5 | 23 | 89 |-66
5 | 11 | 23 |-12
7 |  6 | 12 | -6
7 |144 |  6 |138
7 | 63 |144 |-81
7 | 87 | 63 | 24
7 | 24 | 87 |-63

When I used a sorted view:

create or replace view arb_view as select * from arb_test order by 
client_id, arbnum;


and redid it the results are:

client_id | arbnum | previousarbnum | diffarbnum
-----------+--------+----------------+------------
1 |  5 |  2 |  3
1 | 14 |  5 |  9
1 | 23 | 14 |  9
1 | 67 | 23 | 44
1 | 90 | 67 | 23
2 |  6 |  1 |  5
2 | 10 |  6 |  4
2 | 33 | 10 | 23
2 | 55 | 33 | 22
2 | 76 | 55 | 21
2 |111 | 76 | 35
5 |  9 |  8 |  1
5 | 11 |  9 |  2
5 | 23 | 11 | 12
5 | 42 | 23 | 19
5 | 77 | 42 | 35
5 | 89 | 77 | 12
7 | 12 |  6 |  6
7 | 22 | 12 | 10
7 | 24 | 22 |  2
7 | 63 | 24 | 39
7 | 87 | 63 | 24
7 |144 | 87 | 57
(23 rows)


This works the way it should. 


--drop table arb_test;
--drop view arb_view;
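A plausible explanation for the unsorted-table behaviour: nextval() is evaluated as each row comes off the subquery's scan, and the planner may apply the ORDER BY after that, so the sequence values need not follow the sorted order.  Forcing the sort into an inner subquery, so numbering happens over already-sorted rows, sidesteps this (a sketch against the arb_test table above):

```sql
-- Sketch: assign sequence numbers only after sorting, by nesting the
-- ORDER BY one level deeper.  OFFSET 0 discourages the planner from
-- flattening the subquery away.
SELECT nextval('seq1') AS s, client_id, arbnum
FROM (SELECT client_id, arbnum
      FROM arb_test
      ORDER BY client_id, arbnum
      OFFSET 0) AS sorted;
```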

willem
[GENERAL] Oracle Analytical Functions

2008-01-30 Thread Willem Buitendyk
I'm trying to replicate the use of Oracle's 'lag' and 'over partition 
by' analytical functions in my query.  I have a table (all_client_times) 
such as:


client_id, datetime
122, 2007-05-01 12:00:00
122, 2007-05-01 12:01:00
455, 2007-05-01 12:02:00
455, 2007-05-01 12:03:00
455, 2007-05-01 12:08:00
299, 2007-05-01 12:10:00
299, 2007-05-01 12:34:00

and I would like to create a new view that takes the first table and 
calculates the time difference in minutes between each row so that the 
result is something like:


client_id,datetime, previousTime, difftime
122,2007-05-01 12:01:00, 2007-05-01 12:00:00, 1
455,2007-05-01 12:03:00, 2007-05-01 12:02:00, 1
455,2007-05-01 12:08:00, 2007-05-01 12:03:00, 5
299,2007-05-01 12:34:00, 2007-05-01 12:10:00, 24

In Oracle I can achieve this with:

CREATE OR REPLACE VIEW client_time_diffs AS SELECT client_id,datetime, 
LAG(datetime, 1) OVER (partition by client_id ORDER BY 
client_id,datetime) AS previoustime from all_client_times;


Any idea how I could replicate this in SQL from PG.  Would this be an 
easy thing to do in Pl/pgSQL?  If so could anyone give any directions as 
to where to start?


Appreciate the help,

Willem
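PostgreSQL 8.4 and later support window functions, so on current versions the Oracle view ports almost verbatim (a sketch, assuming the all_client_times table above):

```sql
-- PostgreSQL 8.4+: lag() over a partition, mirroring the Oracle view.
CREATE OR REPLACE VIEW client_time_diffs AS
SELECT client_id,
       datetime,
       lag(datetime) OVER w AS previoustime,
       datetime - lag(datetime) OVER w AS difftime
FROM all_client_times
WINDOW w AS (PARTITION BY client_id ORDER BY datetime);
```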




Re: [GENERAL] Oracle Analytical Functions

2008-01-30 Thread Willem Buitendyk

I tried this function but it keeps returning an error such as:

ERROR: invalid input syntax for integer: "2007-05-05 00:34:08"
SQL state: 22P02
Context: PL/pgSQL function lagfunc line 10 at assignment

I checked and there are no datetime values in the client_id field 
anywhere in my table 'all_client_times'


I have no idea what is going on here ...

Thanks for the code though - it has taught me a lot already, such as 
using OUT parameters and SETOF RECORD.


Willem

Adam Rich wrote:

and I would like to create a new view that takes the first table and
calculates the time difference in minutes between each row so that the
result is something like:

client_id,datetime, previousTime, difftime
122,2007-05-01 12:01:00, 2007-05-01 12:00:00, 1
455,2007-05-01 12:03:00, 2007-05-01 12:02:00, 1
455,2007-05-01 12:08:00, 2007-05-01 12:03:00, 5
299,2007-05-01 12:34:00, 2007-05-01 12:10:00, 24

Any idea how I could replicate this in SQL from PG.  Would this be an
easy thing to do in Pl/pgSQL?  If so could anyone give any directions
as to where to start?



You can create a set-returning function, that cursors over the table,
like this:


CREATE OR REPLACE FUNCTION lagfunc(
	OUT client_id INT, 
	OUT datetime timestamp, 
	OUT previousTime timestamp, 
	OUT difftime interval)
RETURNS SETOF RECORD as $$ 
DECLARE

thisrow RECORD;
last_client_id INT;
last_datetime timestamp;
BEGIN

FOR thisrow IN SELECT * FROM all_client_times ORDER BY client_id,
datetime LOOP
IF thisrow.client_id = last_client_id THEN
client_id := thisrow.datetime;
datetime := thisrow.datetime;
previousTime := last_datetime;
difftime = datetime-previousTime;
RETURN NEXT;
END IF;
last_client_id := thisrow.client_id;
last_datetime := thisrow.datetime;
END LOOP;

   RETURN;
END;
$$ LANGUAGE plpgsql;

select * from lagfunc() limit 10;
select * from lagfunc() where client_id = 455;


Here I used an interval, but you get the idea.










Re: [GENERAL] Oracle Analytical Functions

2008-01-30 Thread Willem Buitendyk

Thanks Reece,

I got this to work for me.  The only problem was with the ORDER BY 
clause which did not seem to work properly.  I took it out and instead 
used a sorted view for the data table.


Cheers,

Willem

Reece Hart wrote:

create table data (
client_id integer,
datetime timestamp not null
);
create index data_client_id on data(client_id);

copy data from STDIN DELIMITER ',';
122,2007-05-01 12:00:00
122,2007-05-01 12:01:00
455,2007-05-01 12:02:00
455,2007-05-01 12:03:00
455,2007-05-01 12:08:00
299,2007-05-01 12:10:00
299,2007-05-01 12:34:00
\.

CREATE OR REPLACE FUNCTION visits (
OUT client_id INTEGER,
OUT datetime_1 TIMESTAMP,
OUT datetime_2 TIMESTAMP,
OUT dur INTERVAL )
RETURNS SETOF RECORD
LANGUAGE plpgsql
AS $_$
DECLARE
rp data%ROWTYPE;-- previous data table record
r data%ROWTYPE; -- data table record, more recent than
rp
BEGIN
rp = (NULL,NULL);
FOR r IN SELECT * FROM data ORDER BY client_id,datetime LOOP
   IF rp.client_id = r.client_id THEN
  client_id = r.client_id;
  datetime_1 = r.datetime;
  datetime_2 = rp.datetime;
  dur = r.datetime-rp.datetime;
  RETURN NEXT;
END IF;
rp = r;
END LOOP;
RETURN;
END;
$_$;


[EMAIL PROTECTED] select * from visits() order by client_id,datetime_1;
 client_id | datetime_1  | datetime_2  |   dur
-----------+---------------------+---------------------+----------

   122 | 2007-05-01 12:01:00 | 2007-05-01 12:00:00 | 00:01:00
   299 | 2007-05-01 12:34:00 | 2007-05-01 12:10:00 | 00:24:00
   455 | 2007-05-01 12:03:00 | 2007-05-01 12:02:00 | 00:01:00
   455 | 2007-05-01 12:08:00 | 2007-05-01 12:03:00 | 00:05:00
(4 rows)


-Reece

  









Re: [GENERAL] Oracle Analytical Functions

2008-01-30 Thread Willem Buitendyk

Found the error:

client_id := thisrow.datetime;

should be

client_id := thisrow.client_id;

All works well now,

Thanks very much,

Willem

Willem Buitendyk wrote:

I tried this function but it keeps returning an error such as:

ERROR: invalid input syntax for integer: "2007-05-05 00:34:08"
SQL state: 22P02
Context: PL/pgSQL function lagfunc line 10 at assignment

I checked and there are no datetime values in the client_id field 
anywhere in my table 'all_client_times'


I have no idea what is going on here ...

Thanks for the code though - it has taught me a lot already, such as 
using OUT parameters and SETOF RECORD.


Willem

Adam Rich wrote:

and I would like to create a new view that takes the first table and
calculates the time difference in minutes between each row so that the
result is something like:

client_id,datetime, previousTime, difftime
122,2007-05-01 12:01:00, 2007-05-01 12:00:00, 1
455,2007-05-01 12:03:00, 2007-05-01 12:02:00, 1
455,2007-05-01 12:08:00, 2007-05-01 12:03:00, 5
299,2007-05-01 12:34:00, 2007-05-01 12:10:00, 24

Any idea how I could replicate this in SQL from PG.  Would this be an
easy thing to do in Pl/pgSQL?  If so could anyone give any directions
as to where to start?



You can create a set-returning function, that cursors over the table,
like this:


CREATE OR REPLACE FUNCTION lagfunc(
OUT client_id INT, OUT datetime timestamp, OUT 
previousTime timestamp, OUT difftime interval)

RETURNS SETOF RECORD as $$ DECLARE
thisrow RECORD;
last_client_id INT;
last_datetime timestamp;
BEGIN

FOR thisrow IN SELECT * FROM all_client_times ORDER BY client_id,
datetime LOOP
IF thisrow.client_id = last_client_id THEN
client_id := thisrow.datetime;
datetime := thisrow.datetime;
previousTime := last_datetime;
difftime = datetime-previousTime;
RETURN NEXT;
END IF;
last_client_id := thisrow.client_id;
last_datetime := thisrow.datetime;
END LOOP;

   RETURN;
END;
$$ LANGUAGE plpgsql;

select * from lagfunc() limit 10;
select * from lagfunc() where client_id = 455;


Here I used an interval, but you get the idea.














[GENERAL] Analyze Explanation

2008-01-29 Thread Willem Buitendyk
When I run Analyze I get the following notice repeated for many of my 
tables:


NOTICE:   no notnull values, invalid stats

Is this just referring to my empty tables?




Re: [GENERAL] Bulk Load Ignore/Skip Feature

2007-11-16 Thread Willem Buitendyk
My apologies.  I misinterpreted that last post.  I have not been able to 
try pgloader as I am using the windows platform. 


Martijn van Oosterhout wrote:

On Thu, Nov 15, 2007 at 08:09:46PM -0800, Willem Buitendyk wrote:
  
Damn - so the unique constraint is still an issue.  What gives?  Why is 
it so hard to implement this in Postgresql?  Sigh - if only I had more time.



Can you explain? The server ofcourse still generates error messages in
the logs, there's no way around that. However it looks to me that the
data ended up in the database correctly? Or did I miss something?

Have a nice day,
  





[GENERAL] Windows Installer Problems/Stall Solution Error 2769

2007-11-15 Thread Willem Buitendyk
I've just gone through the better part of a day trying to resolve the 
windows installer stalling out for Postgresql version 8.2.5


DEBUG: Error 2769:

The FAQ mentions a cygwin path problem, but I did not have cygwin 
installed.  I tried turning off all anti-virus and firewall software and 
reconfiguring the registry, and still it would not work.  Finally I 
temporarily removed my whole PATH (anti-virus and firewall were not 
turned off), issued PATH from the command line, and tried the msi 
installer again - voila, the installation worked!  There is probably 
something suspect in my PATH but that is another issue.


Hope this helps others avoid this frustration.

Cheers

Willem



Re: [GENERAL] Bulk Load Ignore/Skip Feature

2007-11-15 Thread Willem Buitendyk
Damn - so the unique constraint is still an issue.  What gives?  Why is 
it so hard to implement this in Postgresql?  Sigh - if only I had more time.



Ow Mun Heng wrote:

On Wed, 2007-11-14 at 00:02 -0800, Willem Buitendyk wrote:
  

Perfect - that appears to be exactly what I was looking for.



  

Reg Me Please wrote:


Il Wednesday 14 November 2007 05:50:36 Willem Buitendyk ha scritto:
  
  

Will Postgresql ever implement an ignore on error feature when bulk
loading data?  Currently it is my understanding that any record that
violates a unique constraint will cause the copy from command to halt
execution instead of skipping over the violation and logging it - as is
done in Oracle and DB2.


pgloader

http://pgfoundry.org/projects/pgloader/

  
  


I believe the last time I tried this, there was still some issues with
it. See attached email. (if it makes it to the list)








Subject:
PgLoader unable to handle pkey dups Was [Re: {Spam} [GENERAL] pgloader 
- Can't find textreader/csvreader]

From:
Ow Mun Heng [EMAIL PROTECTED]
Date:
Mon, 27 Aug 2007 18:01:54 +0800
To:
Dimitri Fontaine [EMAIL PROTECTED]

To:
Dimitri Fontaine [EMAIL PROTECTED]
CC:
pgsql-general@postgresql.org


On Mon, 2007-08-27 at 11:27 +0200, Dimitri Fontaine wrote:

  
We've just made some tests here with 2.2.1 and as this release contains the 
missing files, it works fine without any installation.



Yep.. I can confirm that it works.. I am using the csv example.

Goal : similar functionality much like mysql's mysqlimport --replace
(overwrite any rows which has duplicate primary keys)

$ psql pgloader  csv/csv.sql
$ ../pgloader.py -Tvc examples/pgloader.conf csv

pgloader=# alter table csv add primary key (a,b,c);
pgloader=# \d csv
Table public.csv
 Column | Type | Modifiers
--------+--------------+-----------
 a  | bigint   | not null
 b  | bigint   | not null
 c  | character(2) | not null
 d  | text |
Indexes:
csv_pkey PRIMARY KEY, btree (a, b, c)

pgloader=# select * from csv;
a |b | c  |   d
----------+----------+----+----------------
 33996344 | 33996351 | GB | United Kingdom
 50331648 | 68257567 | US | United States
 68257568 | 68257599 | CA | Canada
 68257600 | 68259583 | US | United States
 68259584 | 68259599 | CA | Canada

$cat csv/csv.data
2.6.190.56,2.6.190.63,33996344,33996351,GB,Error Kingdom
4.17.143.0,4.17.143.15,68259584,68259599,CA,new Country
(Note: only columns 3 to 6 are taken for loading.)

$ psql pgloader  csv/csv.sql
$ ../pgloader.py -vc pgloader.conf csv
Using pgloader.conf configuration file
Will consider following sections:
  csv

[csv] parse configuration
Notice: reject log in /tmp/csv.rej.log
Notice: rejected data in /tmp/csv.rej
[csv] data import
Notice: COPY csv data

Error: Please check PostgreSQL logs
HINT:  double check your client_encoding, datestyle and copy_delimiter
settings

$sudo tail -f /var/log/pglog/postgresxx-xx-xx.log
ERROR:  duplicate key violates unique constraint csv_pkey
CONTEXT:  COPY csv, line 1: 33996344,33996351,Error Kingdom,GB
STATEMENT:  COPY csv (a, b, d, c)  FROM stdin USING DELIMITERS ','

So.. doesn't really solve my issue.
Dang it..
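The mysqlimport --replace behaviour can be approximated outside pgloader (a sketch only; pre-9.5 PostgreSQL has no ON CONFLICT, and the data path here is a placeholder) by staging the load and replacing matching keys in one transaction:

```sql
-- Sketch of "--replace" semantics: stage the data, delete rows whose
-- key already exists, then insert the fresh copies.
BEGIN;
CREATE TEMP TABLE csv_stage (LIKE csv);
COPY csv_stage (a, b, d, c) FROM '/tmp/csv.data' USING DELIMITERS ',';  -- placeholder path
DELETE FROM csv
 USING csv_stage s
 WHERE csv.a = s.a AND csv.b = s.b AND csv.c = s.c;
INSERT INTO csv SELECT * FROM csv_stage;
COMMIT;
```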







Re: [GENERAL] Bulk Load Ignore/Skip Feature

2007-11-14 Thread Willem Buitendyk

Perfect - that appears to be exactly what I was looking for.

Cheers

Reg Me Please wrote:

Il Wednesday 14 November 2007 05:50:36 Willem Buitendyk ha scritto:
  

Will Postgresql ever implement an ignore on error feature when bulk
loading data?  Currently it is my understanding that any record that
violates a unique constraint will cause the copy from command to halt
execution instead of skipping over the violation and logging it - as is
done in Oracle and DB2.

Are there alternative ways of dealing with this scenario that won't
consume as much time?

Appreciate any help  - would love to migrate away from Oracle.

Cheers



pgloader

http://pgfoundry.org/projects/pgloader/

  





Re: [GENERAL] Windows x64 Port

2007-11-14 Thread Willem Buitendyk
Thanks Magnus.  Looking forward to 8.4, but I gather that will be some 
time coming.


Willem

Magnus Hagander wrote:

Willem Buitendyk wrote:
  

Is there any plan to port Postgresql to windows x64?  I can currently
run Postgresql as 32 bit inside Vista 64 - would I see better
performance if Postgresql was running under 64 bit.  My biggest concern
is memory - at 32 bit is not Postgresql limited to 4GB in windows?



It's something we hope will be worked on for 8.4, but there are no firm
plans.

It's limited to 2Gb, actually, but *per process*. Since each backend is
its own process, you can use way more than 2Gb RAM on a 64-bit system.
You can't use it for shared memory, but you can use it for local backend
memory (work_mem). But you'll need a lot of backends to do it, and you
will see other pieces of performance get worse with loads of backends.

Oh, and your RAM will still be used for disk cache, since that's managed
by the kernel.

//Magnus




  





[GENERAL] Windows x64 Port

2007-11-14 Thread Willem Buitendyk
Is there any plan to port Postgresql to windows x64?  I can currently 
run Postgresql as 32 bit inside Vista 64 - would I see better 
performance if Postgresql were running under 64 bit?  My biggest concern 
is memory - at 32 bit is not Postgresql limited to 4GB in windows?


Thanks,

Willem



[GENERAL] Bulk Load Ignore/Skip Feature

2007-11-13 Thread Willem Buitendyk
Will Postgresql ever implement an ignore on error feature when bulk 
loading data?  Currently it is my understanding that any record that 
violates a unique constraint will cause the copy from command to halt 
execution instead of skipping over the violation and logging it - as is 
done in Oracle and DB2.


Are there alternative ways of dealing with this scenario that won't 
consume as much time?
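The usual workaround (a sketch with placeholder names target_table, key_col, and a placeholder data path - not a built-in feature) is to COPY into a constraint-free staging table and then insert only the rows that don't collide with the unique key:

```sql
-- Sketch: emulate "ignore duplicates on load" via a staging table.
BEGIN;
CREATE TEMP TABLE stage (LIKE target_table);      -- no unique constraint here
COPY stage FROM '/path/to/data.csv';              -- placeholder path
INSERT INTO target_table
SELECT DISTINCT ON (key_col) *                    -- drop duplicates within the file
FROM stage s
WHERE NOT EXISTS (SELECT 1
                  FROM target_table t
                  WHERE t.key_col = s.key_col);
COMMIT;
```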


Appreciate any help  - would love to migrate away from Oracle.

Cheers


