[GENERAL] query a table from one database to another

2010-01-27 Thread AI Rumman
I am using Postgresql 8.3

I have two databases:
db1
db2

db1 holds a table tab1.

Is it possible to query the above table tab1 from the db2 database?
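(Editorial note: the standard answer on 8.3 is the contrib dblink module, which the digest does not show. A minimal sketch, assuming dblink is installed in db2 and inventing illustrative column names — replace `id`/`val` with tab1's actual columns:)

```sql
-- Run from db2; dblink opens a second connection to db1.
-- The AS t(...) column list must match the remote query's columns.
SELECT *
FROM dblink('dbname=db1',
            'SELECT id, val FROM tab1')
     AS t(id integer, val text);
```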

Regards


Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pierre Chevalier

Hello,
Pavel Stehule wrote:

...
you cannot get a crosstab via a SELECT statement. There is a workaround:
http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html
  


All right, I've just tried it: it works just fine in my case! Thanks a lot!

Except a few things, but I am not (yet) familiar at all with postgresql 
functions.

I have tried to customize your function a bit, to suit some of my needs:

- when I have NULL values in my EAV (Entity-Attribute-Value) table, I 
don't want zeroes to appear in the output table;
- the total at the right end does not make sense in my case; I replaced 
it with a COUNT;


therefore, I did as follows (changes are *inside stars*, I hope the 
formatting will work!):


BEGIN
FOR r IN EXECUTE 'SELECT DISTINCT '
 || dimx_name || '::text AS val ' || dimx_source
LOOP
col_list := array_append(col_list, 'SUM(CASE ' || dimx_name
|| ' WHEN ' || quote_literal(r.val) || ' THEN ' || expr
|| ' ELSE *NULL* END) AS ' || quote_ident(r.val) || '');
END LOOP;
query := 'SELECT ' || dimy_name || ', '
|| array_to_string(col_list, ',')
* || ', COUNT(' || expr || ') AS Count '*
|| dimy_source || ' GROUP BY ' || dimy_name;
OPEN result NO SCROLL FOR EXECUTE query;
RETURN result;
END;
$$ LANGUAGE plpgsql STRICT;
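(For orientation, a sketch of how such a cursor-returning function is consumed in psql — assuming, as the FETCH later in this message implies, that the function's cursor variable is initialized to the name `result`; the argument values are taken from the call attempted below:)

```sql
BEGIN;
SELECT do_cross_cursor('lab_pjcsa_analytecode', 'FROM lab_ana_results',
                       'sample_id', 'FROM lab_ana_results_sel', 'value_num');
FETCH ALL FROM result;  -- the cursor only exists inside this transaction
COMMIT;
```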


Now, I still have some issues: as far as I can remember, in m$ access 
(yes, I know...), a long time ago, I used to do PIVOT queries on EAV 
tables, where I could choose which operation was to be made on the 
variable: simply the value (without GROUPing), or a SUM, AVG, etc. I 
don't have any running acce$$ handy, so I can't check this, I'm afraid.
In the case of your function, if I understand well, the line with the 
GROUP BY does the trick. I will try to play with it. Later on.



Something else: I am quite familiar with strict SQL, I use postgreSQL a 
lot, but I am not familiar with functions and, also, cursors. So I am a 
bit surprised by the behaviour of the cursor: I am reading doc...
But what I would like to do is to redirect the output of the function 
(that is, the 'result' cursor) to a view, which will be used in other 
places. I thought something like FETCH INTO would do the trick, but it 
doesn't.



Also, I need, at some point, to export the output to some CSV file. I 
usually do a quick bash script as follows:


echo "COPY (SELECT * FROM dh_litho ORDER BY id, depto) TO stdout WITH 
CSV HEADER;" | psql bdexplo > somefile.csv


And then I can feed somefile.csv to whatever program I want. I tried to 
do this with the cursor and the FETCH ALL, but it didn't work out well, 
as I had guessed...


pie...@duran:~$ pie...@duran:~/fix_bd_amc$ echo COPY (
  SELECT do_cross_cursor('lab_pjcsa_analytecode', 'FROM 
lab_ana_results','sample_id',

'FROM lab_ana_results_sel ',
'value_num');
  FETCH ALL FROM result WITH CSV HEADER;
  ) TO stdout WITH CSV HEADER ; | psql bdexplo
bash: pie...@duran:~/fix_bd_amc$: Aucun fichier ou dossier de ce type
pie...@duran:~$ ERREUR:  erreur de syntaxe sur ou près de « ; »
bash: ERREUR: : commande introuvable
bash: » : commande introuvable
pie...@duran:~$ LIGNE 4 :   'value_num');
bash: Erreur de syntaxe près du symbole inattendu « ) »
pie...@duran:~$ ^
bash: ^ : commande introuvable

(sorry about the French!)


I could not do this trick: any idea of how I could do this? I guess I 
should wrap the whole transaction into a one-line statement to be fed 
to psql, but I can't figure out how to do it... Some help?


A+
Pierre

PS: I am used to normal mailing lists, but I got quite confused by the 
approach from grokbase: I thought I was posting on the grokbase list 
(http://grokbase.com/), and I see that the list 
pgsql-general@postgresql.org was the one I was posting to...

Sorry for the noise, I am RTFMing at the moment...

--

Pierre Chevalier
   Mesté Duran
   32100 Condom
 Tél+fax  :09 75 27 45 62
   05 62 28 06 83
06 37 80 33 64
 Émail: pierre.chevalier1967CHEZfree.fr
 icq# : 10432285
 http://pierremariechevalier.free.fr/
 Logiciels Libres dans le Gers: http://gnusquetaires.org/





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pavel Stehule
2010/1/27 Pierre Chevalier pierre.chevalier1...@free.fr:
 Hello,
 Pavel Stehule wrote:

 ...
 you cannot get crosstab via SELECT statement. There is workaround
 http://okbob.blogspot.com/2008/08/using-cursors-for-generating-cross.html


 All right, I've just tried it: it works just fine in my case! Thanks a lot!

 Except a few things, but I am not (yet) familiar at all with postgresql
 functions.
 I have tried to customize your function a bit, to suit some of my needs:

 - when I have NULL values in my EAV (Entity-Attribute-Value) table, I don't
 want zeroes to appear in the output table;
 - the total at the right end does not make sense in my case; I replaced it
 with a COUNT;

 therefore, I did as follows (changes are *inside stars*, I hope the
 formatting will work!):

 BEGIN
 FOR r IN EXECUTE 'SELECT DISTINCT '
  || dimx_name || '::text AS val ' || dimx_source
 LOOP
 col_list := array_append(col_list, 'SUM(CASE ' || dimx_name
 || ' WHEN ' || quote_literal(r.val) || ' THEN ' || expr
 || ' ELSE *NULL* END) AS ' || quote_ident(r.val) || '');
 END LOOP;
 query := 'SELECT ' || dimy_name || ', '
 || array_to_string(col_list, ',')
 * || ', COUNT(' || expr || ') AS Count '*
 || dimy_source || ' GROUP BY ' || dimy_name;
 OPEN result NO SCROLL FOR EXECUTE query;
 RETURN result;
 END;
 $$ LANGUAGE plpgsql STRICT;


 Now, I still have some issues: as far as I can remember, in m$ access (yes,
 I know...), a long time ago, I used to do PIVOT queries on EAV tables, where
 I could chose which operation was to be made on the variable: simply the
 value (without GROUPing), or a SUM, AVG, etc. I don't have any running
 acce$$ handy, so I can't check this, I'm afraid.
 In the case of your function, if I understand well, the line with the GROUP
 BY does the trick. I will try to play with it. Later on.


 Something else: I am quite familiar with strict SQL, I use postgreSQL a lot,
 but I am not familiar with functions and, also, cursors. So I am a bit
 surprised by the behaviour of the cursor: I am reading doc...
 But what I would like to do is to redirect the output of the function (that
 is, the 'result' cursor) to a view, which will be used in other places. I
 thought something like FETCH INTO would do the trick, but it doesn't.


 Also, I need, at some point, to export the output to some CSV file. I
 usually do a quick bash script as follows:

 echo "COPY (SELECT * FROM dh_litho ORDER BY id, depto) TO stdout WITH CSV
 HEADER;" | psql bdexplo > somefile.csv

 And then I can feed somefile.csv to whatever program I want. I tried to do
 this with the cursor and the FETCH ALL, but it didn't work out well, as I
 had guessed...


hmm... it cannot work :(. You cannot forward the FETCH ALL statement on
the server side without programming in C.

In this case you need a small application to read the cursor and
transform it to CSV.

Pavel

 pie...@duran:~$ pie...@duran:~/fix_bd_amc$ echo COPY (
  SELECT do_cross_cursor('lab_pjcsa_analytecode', 'FROM
  lab_ana_results','sample_id',
        'FROM lab_ana_results_sel ',
        'value_num');
  FETCH ALL FROM result WITH CSV HEADER;
  ) TO stdout WITH CSV HEADER ; | psql bdexplo
 bash: pie...@duran:~/fix_bd_amc$: Aucun fichier ou dossier de ce type
 pie...@duran:~$ ERREUR:  erreur de syntaxe sur ou près de « ; »
 bash: ERREUR: : commande introuvable
 bash: » : commande introuvable
 pie...@duran:~$ LIGNE 4 :       'value_num');
 bash: Erreur de syntaxe près du symbole inattendu « ) »
 pie...@duran:~$                             ^
 bash: ^ : commande introuvable

 (sorry about the French!)


 I could not do this trick: any idea of how I could do this? I guess I should
 wrap the whole transaction into a one-line statement to be fed to to psql,
 but I can't figure out how to do it... Some help?

 A+
 Pierre

 PS: I am used to normal mailing lists, but I got quite confused by the
 approach from grokbase: I thought I was posting on the grokbase list
 (http://grokbase.com/), and I see that the list pgsql-general@postgresql.org
 was the one I was posting to...
 Sorry for the noise, I am RTFMing at the moment...

 --
 
 Pierre Chevalier
   Mesté Duran
   32100 Condom
  Tél+fax  :    09 75 27 45 62
               05 62 28 06 83
                06 37 80 33 64
  Émail    :     pierre.chevalier1967CHEZfree.fr
  icq#     :     10432285
  http://pierremariechevalier.free.fr/
  Logiciels Libres dans le Gers: http://gnusquetaires.org/
 







Re: [GENERAL] create role in a pl/pgsql trigger

2010-01-27 Thread Keresztury Balázs
Thanks Craig, this one worked!

Balazs

-Original Message-
From: Craig Ringer [mailto:cr...@postnewspapers.com.au] 
Sent: Wednesday, January 27, 2010 12:34 AM
To: Keresztury Balázs
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] create role in a pl/pgsql trigger

On 27/01/2010 1:09 AM, Keresztury Balázs wrote:
 hi,

 I would like to write a trigger on a table which contains detailed
 information about users. If I add a new user to this table, I'd like this
 trigger to add the user to the database as well. Later I also want to
 implement triggers for updating and deleting, but I got stuck at the first
 part of this task.

 Everything went fine until the point I had to insert the username into the
 create role command. Appearently the command takes only parameters without
 the ' signs, and the language supports only substituting parameters with
the
 apostrophe.

 Any idea?

Use 'EXECUTE' (without USING because EXECUTE ... USING doesn't work 
for utility statements).
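(A minimal sketch of this suggestion; the trigger, table, and column names are assumptions, not from the thread:)

```sql
-- quote_ident() embeds the username safely as an identifier;
-- EXECUTE ... USING cannot pass parameters to utility statements
-- such as CREATE ROLE, hence the string concatenation.
CREATE OR REPLACE FUNCTION add_user_role() RETURNS trigger AS $$
BEGIN
    EXECUTE 'CREATE ROLE ' || quote_ident(NEW.username) || ' LOGIN';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_create_role AFTER INSERT ON users
    FOR EACH ROW EXECUTE PROCEDURE add_user_role();
```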

--
Craig Ringer




Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pavel Stehule
2010/1/27 Pierre Chevalier pierre.chevalier1...@free.fr:
 Pavel Stehule wrote:

 ...
 But what I would like to do is to redirect the output of the function
 (that
 is, the 'result' cursor) to a view, which will be used in other places. I
 thought something like FETCH INTO would do the trick, but it doesn't.


 Also, I need, at some point, to export the output to some CSV file. I
 usually do a quick bash script as follows:

 echo "COPY (SELECT * FROM dh_litho ORDER BY id, depto) TO stdout WITH CSV
 HEADER;" | psql bdexplo > somefile.csv

 ...


 hmm ...it cannot work :(. You cannot forward FETCH ALL statement on
 server side - without programming in C


 Ach! Too bad... Oh but... I used to program in C, long time ago, on HP-UX...

 in this case you need small application for reading cursor and
 transformation to CVS


 Actually, if the small application was reading cursor, and transforming it
 to a VIEW, this would solve both problems at once:
 something like:

 CREATE VIEW crosstabbed_thing AS
 (cursor_to_dataset(SELECT do_cross_cursor(...)));

no, it isn't possible. A VIEW has to have a fixed number of columns.

You can write a function that reads the cursor, creates a temp table,
stores the result, and then does a COPY from the temp table.
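(One way to act on this suggestion, sketched under the assumption that the crosstab function keeps its dynamic SQL in a variable named `query`, as in the function quoted above: materialize the query into a temp table instead of opening a cursor, after which plain COPY works in the same session:)

```sql
-- Inside the crosstab function, replace
--   OPEN result NO SCROLL FOR EXECUTE query;
-- with:
EXECUTE 'CREATE TEMP TABLE crosstab_result AS ' || query;
-- then, from the same session:
-- COPY crosstab_result TO STDOUT WITH CSV HEADER;
```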

There is one significant rule - any SELECT-based statement has to have a
known number of columns at planning time, so the number of columns must
not depend on the data. There is no workaround for this; you simply
cannot use a fixed SELECT statement for it (VIEWs too - a VIEW is a
stored SELECT).

look on SPI interface http://www.postgresql.org/docs/8.4/interactive/spi.html

http://www.postgresql.org/docs/8.4/interactive/spi-examples.html

but you have to use cursor based interface.

Pavel


 And then:
 echo "COPY (SELECT * FROM crosstabbed_thing) TO stdout WITH CSV HEADER;" |
 psql > youpi.csv

 And there we are!
 What about this plan? The cursor_to_dataset() should be written, in C if I
 understand well.
 I have to dig out my old C book, and browse through postgresql APIs, code
 examples,etc. I guess...

 A+
 Pierre

 --
 
 Pierre Chevalier
   Mesté Duran
   32100 Condom
  Tél+fax  :    09 75 27 45 62
               05 62 28 06 83
                06 37 80 33 64
  Émail    :     pierre.chevalier1967CHEZfree.fr
  icq#     :     10432285
  http://pierremariechevalier.free.fr/
  Logiciels Libres dans le Gers: http://gnusquetaires.org/
 







[GENERAL] Problem after installing triggering function

2010-01-27 Thread Yan Cheng Cheok
I have a stored procedure that executes the following code:

INSERT INTO unit(fk_lot_id, cycle)
VALUES(_lotID, _cycle) RETURNING  * INTO _unit;
raise notice 'AFTER INSERT INTO UNIT,  _unit.unit_id = %', _unit.unit_id ;

The unit_id column is an auto-generated primary key, so I always get a 
non-null value.

However, after I install a trigger function and create a table named unit_0 
that inherits from table unit, 

NOTICE:  AFTER INSERT INTO UNIT,  _unit.unit_id = NULL

will be printed.

The following is the trigger function :

Within the trigger function, it is able to detect that the row for the unit 
table (represented by NEW) has unit_id 28.

Why do I get NULL outside the trigger function?

Thanks!

DECLARE
unit_table_index bigint;
low bigint;
high bigint;
unit_table_name text;
BEGIN
unit_table_index = NEW.unit_id >> 20;
-- 2^20 = 1048576
low = unit_table_index * 1048576;
high = low + 1048575;
unit_table_name = 'unit_' || unit_table_index;

-- code to dynamically create unit_0, unit_1, unit_2 ...
IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
unit_table_name) THEN
EXECUTE 'CREATE TABLE ' || quote_ident(unit_table_name) || '
(
  PRIMARY KEY (unit_id),
  CHECK (unit_id between ' || low || ' and ' || high || '),

  CONSTRAINT fk_lot_id_' || unit_table_index || ' FOREIGN KEY 
(fk_lot_id)
  REFERENCES lot (lot_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE 
) INHERITS (unit);';

EXECUTE 'CREATE INDEX idx_unit_id_' || unit_table_index ||
  ' ON ' || quote_ident(unit_table_name) ||
  ' USING btree
  (unit_id);';
  
EXECUTE 'CREATE INDEX idx_fk_lot_id_' || unit_table_index ||
  ' ON ' || quote_ident(unit_table_name) ||
  ' USING btree
  (fk_lot_id);';
END IF;


-- NOTICE:  IN unit_insert_trigger, table is unit_0
-- NOTICE:  IN unit_insert_trigger, NEW.unit_id is 28
raise notice 'IN unit_insert_trigger, table is %', unit_table_name;
raise notice 'IN unit_insert_trigger, NEW.unit_id is %', NEW.unit_id;

EXECUTE 'INSERT INTO ' || quote_ident(unit_table_name) || 
'(unit_id, fk_lot_id, cycle) VALUES (' || 
NEW.unit_id || ',' || NEW.fk_lot_id || ',' || NEW.cycle || ')';

RETURN NULL;
END;
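(An editorial inference, not stated in the thread: if this is a BEFORE INSERT row trigger - which the redirect-then-RETURN NULL pattern implies - returning NULL cancels the insert into the parent unit table, so the caller's INSERT ... RETURNING produces no row and _unit stays NULL. A common workaround reads the generated key from the sequence instead; the sequence name below assumes the default serial naming convention:)

```sql
INSERT INTO unit(fk_lot_id, cycle) VALUES (_lotID, _cycle);
-- RETURNING yields nothing when the trigger returns NULL,
-- so fetch the generated key from the sequence directly:
_unit_id := currval('unit_unit_id_seq');
```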


Thanks and Regards
Yan Cheng CHEOK


  




Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pavel Stehule
2010/1/27 Pierre Chevalier pierre.chevalier1...@free.fr:
 Pavel Stehule wrote:

 ...
 Actually, if the small application was reading cursor, and transforming
 it
 to a VIEW, this would solve both problems at once:
 something like:

 CREATE VIEW crosstabbed_thing AS
 (cursor_to_dataset(SELECT do_cross_cursor(...)));


 no it isn't possible. VIEW have to have fixed numbers of columns.


 Ach, flute... ;-(

 You can write function that reads a cursor, create temp table, store
 result and will do a copy from temp table.


 Well... Not extremely elegant (it reminds me when I was stuck with access
 and I could not do nested queries...), but why not?
 Actually, if the table is a real temporary one (CREATE TEMPORARY TABLE), it
 should not induce too much mess in the database layout.

 There is one significant rule - any SELECT based statement have to
 have known number of columns in planner time - so number of colums
 must not depend on the data. There are no any workaround for it. You
 can do only don't use fixed SELECT statemens (VIEWS too - it is stored
 SELECT).


 All right, it makes sense now...
 But... Idea! (careful...) what about if we do, just like in a VIEW, a CREATE
 OR REPLACE, systematically when we do this kind of function? The only
 drawback I can think of is that we can't have anything dependant on the VIEW
 we generate.

no, you cannot do it. You cannot create a view and have the same
statement query that view at the top level.

If you want to understand why, you have to understand the processing
pipeline: parser, planner, optimizer, executor. If you understand these
stages, then you will understand what is possible and what is not.


 Another idea (more danger...): what about setting a sort of flag which says
 that this VIEW should *not* be included in the planner? And it will have
 unexpected number of columns? Would this be *absolutely* impossible to
 state?

:) sorry - you could do it, but not in pg - or you would have to rewrite
50% of the low-level code


 look on SPI interface
 http://www.postgresql.org/docs/8.4/interactive/spi.html
 http://www.postgresql.org/docs/8.4/interactive/spi-examples.html
 but you have to use cursor based interface.


 I'll try to Read The French Manual, rather than the one in English! I'll
 look for it...


 But the whole point is: this need of a generic cross-tab is really annoying
 for a large number of people, it has been there for a long time, and I know
 some people who just walk away from postgreSQL only because this feature is
 lacking, and they return happily to their m$-access, therefore ignoring the
 pure wealth of postgreSQL: sad, isn't it?...

MS Access doesn't have a client-server architecture. Generating a
crosstab is a client-side task - moreover, OLTP databases are not the
best tool for it. Better are OLAP databases with GUI clients - like
Excel or MS Access.

Regards
Pavel Stehule



 A+
 Pierre

 --
 
 Pierre Chevalier
   Mesté Duran
   32100 Condom
  Tél+fax  :    09 75 27 45 62
               05 62 28 06 83
                06 37 80 33 64
  Émail    :     pierre.chevalier1967CHEZfree.fr
  icq#     :     10432285
  http://pierremariechevalier.free.fr/
  Logiciels Libres dans le Gers: http://gnusquetaires.org/
 







Re: [GENERAL] 100% of CPU utilization postgres process

2010-01-27 Thread hubert depesz lubaczewski
On Tue, Jan 26, 2010 at 05:12:51PM -0500, Greg Smith wrote:
 Hashimoto Yuya wrote:
  Judging from the result, I could see that stats collector process
  caused this unusually high CPU utilization rate.
  I found similar problem at
  http://archives.postgresql.org/pgsql-general/2008-06/msg00934.php,
  although there seemed
  no clear cause proven nor the statement that it's because of postgres bug.
 
 Right, that thread concluded with
 http://archives.postgresql.org/pgsql-general/2008-06/msg01026.php where
 Tom suggested it looked like a FreeBSD bug on that version. I just poked
 around a bit, and there do seem to have been a number of bugs in their
 poll() implementation in various versions of that OS, so it seems
 reasonable this is just another one of those.
 
 Not sure if depesz is reading this list or not, just added him to the cc
 list here. Hubert, did you ever get anywhere with tracking this issue down?

No. The database belonged to a friend of a friend, and AFAIR they
upgraded, and afterwards they didn't contact me back.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Pavel Stehule
Hello

I added a sprintf function. Now I think we can add a new contrib module
(string functions) with both functions - format and sprintf. These
functions are fairly different, so they can exist separately.
Format is simpler and faster; sprintf is more powerful but slower.

postgres=# select pst.format('now is %', current_time);
  format
---
 now is 16:34:26.203728+01
(1 row)

postgres=# select pst.sprintf('now is %s', current_time);
 sprintf
--
 now is 16:34:45.24919+01

Regards
Pavel Stehule

2010/1/25 Vincenzo Romano vincenzo.rom...@notorand.it:
 2010/1/25 Pavel Stehule pavel.steh...@gmail.com:
 2010/1/25 Vincenzo Romano vincenzo.rom...@notorand.it:
 2010/1/23 Pavel Stehule pavel.steh...@gmail.com:
 2010/1/22 Vincenzo Romano vincenzo.rom...@notorand.it:
 2010/1/22 Tom Lane t...@sss.pgh.pa.us:
 Vincenzo Romano vincenzo.rom...@notorand.it writes:
 2010/1/22 Tom Lane t...@sss.pgh.pa.us:
 regression=# CREATE FUNCTION q( fmt text, variadic args any )

 And this would allow for a stdarg-like argument list?

 Yeah, it should work, given suitable C code.

 Great!


 I wrote this function year ago.

 look on content

 http://pgfoundry.org/projects/pstcollection/

 Pavel,
 that format() function should be included into official contribs.
 What about HOWTO compile?

 There is no consensus about the final semantics - some people prefer
 sprintf-like, some others PostgreSQL RAISE NOTICE-like.

 Whatever you prefer would be OK as far as it is documented.
 In my opinion, the main usage for such a function is in the dynamic SQL code
 generation in PL/PgSQL functions:

 EXECUTE pst.format(  );

 In this very case the sprintf-like syntax/semantics would be much more
 powerful, but
 the current one is OK if you think that there's nothing similar at the moment.

 Again, this function looks to be a badly missing one and including it
 at least into the
 default contrib collection would help a lot of users.

 --
 Vincenzo Romano
 NotOrAnd Information Technologies
 NON QVIETIS MARIBVS NAVTA PERITVS



pstcoll.tar.gz
Description: GNU Zip compressed data



[GENERAL] relation between records in main and toast tables

2010-01-27 Thread Igor Neyman
Hello,
 
Let's say TableA has toastable column, the contents of this column is
stored in let's say pg_toast_1234.
 
Is there a query to find which records (chunk_id, chunk_seq) in
pg_toast_1234 store data for specific record in TableA (i.e. with PK
column value eq. '567')?
 
Igor Neyman



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Vincenzo Romano
2010/1/27 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 I add sprintf function. Now I think, we can add new contrib module
 (string functions) with both function - format and sprintf. These
 functions are relative different, so they can exists separately.
 Format is simpler and faster. Sprintf is more powerful but slower.

 postgres=# select pst.format('now is %', current_time);
          format
 ---
  now is 16:34:26.203728+01
 (1 row)

 postgres=# select pst.sprintf('now is %s', current_time);
         sprintf
 --
  now is 16:34:45.24919+01

 Regards
 Pavel Stehule

Yeah!

But why still in a separate schema?
I'd rather put them all in the public one, so you don't need the pst. prefix anymore.
Just like (most of) all other contrib modules ...



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Pavel Stehule
2010/1/27 Vincenzo Romano vincenzo.rom...@notorand.it:
 2010/1/27 Pavel Stehule pavel.steh...@gmail.com:
 Hello

 I add sprintf function. Now I think, we can add new contrib module
 (string functions) with both function - format and sprintf. These
 functions are relative different, so they can exists separately.
 Format is simpler and faster. Sprintf is more powerful but slower.

 postgres=# select pst.format('now is %', current_time);
          format
 ---
  now is 16:34:26.203728+01
 (1 row)

 postgres=# select pst.sprintf('now is %s', current_time);
         sprintf
 --
  now is 16:34:45.24919+01

 Regards
 Pavel Stehule

 Yeah!

 But why still on  separate schema?
 I'd rather put them all in the public one, so you don't need the pst. 
 anymore.
 Just like (most of) all other contrib modules ...

if you like, you can set a search_path

it is cleaner than putting everything in the public schema. I prefer the
Ada module.function notation - it is more readable to me.

Regards
Pavel Stehule







[GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Hello,

 

I have a function with three parameters that would populate a table in
one schema from another table of the same name in another schema.  The
tables are dynamically selected at execution time.

 

CREATE OR REPLACE FUNCTION schema_1.getAllSnapShot(user_id text,
begin_dt date, end_dt date) RETURNS SETOF schema_1.snapshot_table AS
$BODY$
DECLARE
    r schema_1.snapshot_table%rowtype;
BEGIN
    FOR r IN SELECT * FROM schema_1.snapshot_table
    LOOP
        DECLARE whoami text := r;
        BEGIN
            EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM
schema_2.'||whoami||' where created_dt between $2 and $3;'
            USING user_id, begin_dt, end_dt;
        END;
        RETURN NEXT r;
    END LOOP;
    RETURN;
END
$BODY$
LANGUAGE 'plpgsql';

 

The snapshot_table has only one column for the table name.

 

The function call would look like this:

 

SELECT * FROM schema_1.getAllSnapShot('xyz9','2009-01-01','2010-01-01');

 

However, I get this error:

 

ERROR:  syntax error at or near (

LINE 1: SELECT *, $1, now() INTO schema_1.(table_A) FROM
schema_2.(table_A) where created_dt between $2 and $3;

 

I tried different escape characters for the row variable (whoami) but
get the same error as above.

 

I also tried other approaches, including using tabname::regclass for
the table names but nothing seem to work.

 

Any suggestion would be greatly appreciated.

 

Thanks,

Jeff

 



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Raymond O'Donnell
On 27/01/2010 15:40, Aycock, Jeff R. wrote:

 BEGIN
 
 FOR r IN SELECT * FROM schema_1.snapshot_table
 
 LOOP
 
  
 
 DECLARE whoami text := r;


I could be wrong, but I don't think that the DECLARE inside the loop is
correct. I think you have to declare whoami with the rest of your
variables in the DECLARE block at the top of the function, and then you
can assign to it inside the loop.

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Tom Lane
Vincenzo Romano vincenzo.rom...@notorand.it writes:
 But why still on  separate schema?
 I'd rather put them all in the public one, so you don't need the pst. 
 anymore.
 Just like (most of) all other contrib modules ...

If this were to get committed, it would definitely get made to look just
like all the other contrib modules; so forget the separate schema.

But what I'm wondering is whether it should be contrib or in core.
Is there some potential reason why someone might not want it installed?

regards, tom lane



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Vincenzo Romano
2010/1/27 Pavel Stehule pavel.steh...@gmail.com:
 2010/1/27 Vincenzo Romano vincenzo.rom...@notorand.it:
 But why still on  separate schema?
 I'd rather put them all in the public one, so you don't need the pst. 
 anymore.
 Just like (most of) all other contrib modules ...

 if you like, you can set a search_path

 it is cleaner than put all to public schema. I prefer ADA
 modul.function notation - it is more readable for me.

Correct, but then things like tablefunc should go in a separate schema.
I'd prefer to have consistency more than readability.
But that's just my opinion.

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Pavel Stehule
2010/1/27 Tom Lane t...@sss.pgh.pa.us:
 Vincenzo Romano vincenzo.rom...@notorand.it writes:
 But why still on  separate schema?
 I'd rather put them all in the public one, so you don't need the pst. 
 anymore.
 Just like (most of) all other contrib mudules ...

 If this were to get committed, it would definitely get made to look just
 like all the other contrib modules; so forget the separate schema.


I have no problem with it. The code can be modified further for a
contrib module. The current code is for pgfoundry, and there I prefer
some separation.

regards
Pavel

 But what I'm wondering is whether it should be contrib or in core.
 Is there some potential reason why someone might not want it installed?


                        regards, tom lane




Re: [GENERAL] Variadic polymorpic functions

2010-01-27 Thread Vincenzo Romano
2010/1/27 Tom Lane t...@sss.pgh.pa.us:
 Vincenzo Romano vincenzo.rom...@notorand.it writes:
 But why still on  separate schema?
 I'd rather put them all in the public one, so you don't need the pst. 
 anymore.
 Just like (most of) all other contrib modules ...

 If this were to get committed, it would definitely get made to look just
 like all the other contrib modules; so forget the separate schema.

 But what I'm wondering is whether it should be contrib or in core.
 Is there some potential reason why someone might not want it installed?

I'm currently using it to solve a number of problems with dynamic SQL code
in PL/PgSQL functions.

I'm using EXECUTE pst.format(...) in order to overcome a number of
limitations of the EXECUTE ... USING form I was not able to solve
otherwise.

Something known in the past as wishful thinking! :-)
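(This use case was eventually served in core: PostgreSQL 9.1 added a built-in format() with %I (identifier) and %L (literal) conversions for exactly this kind of dynamic SQL. A sketch with illustrative names:)

```sql
EXECUTE format('SELECT * FROM %I.%I WHERE created_dt BETWEEN %L AND %L',
               'schema_2', 'table_a', '2009-01-01', '2010-01-01');
```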

-- 
Vincenzo Romano
NotOrAnd Information Technologies
NON QVIETIS MARIBVS NAVTA PERITVS



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Raymond,

I tried your suggestion but the result is the same when whoami is
declared at the top of the function and assigned inside the loop.

Thanks for the suggestion anyway.

-Original Message-
From: Raymond O'Donnell [mailto:r...@iol.ie] 
Sent: Wednesday, January 27, 2010 11:00 AM
To: Aycock, Jeff R.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] syntax error during function call

On 27/01/2010 15:40, Aycock, Jeff R. wrote:

 BEGIN
 
 FOR r IN SELECT * FROM schema_1.snapshot_table
 
 LOOP
 
  
 
 DECLARE whoami text := r;


I could be wrong, but I don't think that the DECLARE inside the loop is
correct. I think you have to declare whoami with the rest of your
variables in the DECLARE block at the top of the function, and then you
can assign to it inside the loop.

Ray.

-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Pavel Stehule

 ERROR:  syntax error at or near "("

 LINE 1: SELECT *, $1, now() INTO schema_1.(table_A) FROM schema_2.(table_A)
 where created_dt between $2 and $3;



schema_1.(table_A) is nonsense. It has to be schema_1.table_A.

regards
Pavel Stehule



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Pavel,

Per your suggestion I modified one line below BEGIN to look like this:

EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM 
schema_2.'||whoami||' where created_dt between $2 and $3;'


However, it is still giving me the same syntax error as before.  I must be 
missing something here though.

Thanks for the suggestion, however.  

Regards,
Jeff Aycock


-Original Message-
From: Pavel Stehule [mailto:pavel.steh...@gmail.com] 
Sent: Wednesday, January 27, 2010 11:13 AM
To: Aycock, Jeff R.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] syntax error during function call


 ERROR:  syntax error at or near "("

 LINE 1: SELECT *, $1, now() INTO schema_1.(table_A) FROM schema_2.(table_A)
 where created_dt between $2 and $3;



schema_1.(table_A) is nonsense. It has to be schema_1.table_A.

regards
Pavel Stehule



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Adrian Klaver

On 01/27/2010 08:27 AM, Aycock, Jeff R. wrote:

Pavel,

Per your suggestion I modified one line below BEGIN to look like this:

EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM
schema_2.'||whoami||' where created_dt between $2 and $3;'


However, it is still giving me the same syntax error as before.  I must be 
missing something here though.


On a hunch try CURRENT_TIMESTAMP instead of now().



Thanks for the suggestion, however.

Regards,
Jeff Aycock





--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Andreas Kretschmer
Aycock, Jeff R. jeff.r.ayc...@saic.com wrote:

 Pavel,
 
 Per your suggestion I modified one line below BEGIN to look like this:
 
 EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM 
 schema_2.'||whoami||' where created_dt between $2 and $3;'
 ^^  

What is the correct table name?


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
If I was god, I would recompile penguin with --enable-fly.   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Problem after installing triggering function

2010-01-27 Thread Tom Lane
Yan Cheng Cheok ycch...@yahoo.com writes:
 I have a stored procedure execute the following code :
 INSERT INTO unit(fk_lot_id, cycle)
 VALUES(_lotID, _cycle) RETURNING  * INTO _unit;
 raise notice 'AFTER INSERT INTO UNIT,  _unit.unit_id = %', _unit.unit_id ;

 unit_id column, is an auto generated primary key. I will always get a 
 non-null value.

 However, after I install a trigger function, and create a table named unit_0 
 inherit from table unit, 

 NOTICE:  AFTER INSERT INTO UNIT,  _unit.unit_id = NULL

 will be printed.

If you installed it as a BEFORE trigger, the problem is here:

 RETURN NULL;

That's suppressing the INSERT action.
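To illustrate Tom's point, a minimal sketch of the distinction (the trigger and table names here are illustrative, not taken from the original post): a BEFORE row trigger that returns NEW lets the INSERT proceed, while RETURN NULL silently cancels it, so any RETURNING clause sees no row and the variable ends up NULL.

```sql
-- Hypothetical BEFORE trigger on the unit table.
CREATE OR REPLACE FUNCTION unit_before_insert() RETURNS trigger AS $$
BEGIN
    -- ... adjust NEW here if needed ...
    RETURN NEW;   -- keep the row; RETURN NULL would suppress the INSERT
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_unit_before_insert
    BEFORE INSERT ON unit
    FOR EACH ROW EXECUTE PROCEDURE unit_before_insert();
```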

regards, tom lane



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Adrian,

I tried that as well and got the same error result.

Regards,
Jeff Aycock


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Wednesday, January 27, 2010 11:33 AM
To: Aycock, Jeff R.
Cc: Pavel Stehule; pgsql-general@postgresql.org
Subject: Re: [GENERAL] syntax error during function call

On 01/27/2010 08:27 AM, Aycock, Jeff R. wrote:
 Pavel,

 Per your suggestion I modified one line below BEGIN to look like this:

 EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM
 schema_2.'||whoami||' where created_dt between $2 and $3;'


 However, it is still giving me the same syntax error as before.  I
must be missing something here though.

On a hunch try CURRENT_TIMESTAMP instead of now().


 Thanks for the suggestion, however.

 Regards,
 Jeff Aycock




-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] query a table from one database to another

2010-01-27 Thread Joshua Tolley
On Wed, Jan 27, 2010 at 02:52:58PM +0600, AI Rumman wrote:
I am using Postgresql 8.3
 
I have two databases:
db1
db2
 
db1 holds a table tab1.
 
Is it possible to get the value of the above tab1 in db2 database? 
 
Regards

You'll need something like the dblink contrib module to get one database to
talk to another.
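A minimal sketch of the dblink approach, assuming the contrib module is installed in db2; the connection string and the column list are illustrative and must match tab1's actual definition:

```sql
-- Run from db2: pull rows of tab1 living in db1 through dblink.
SELECT *
FROM dblink('dbname=db1 host=localhost user=postgres',
            'SELECT id, name FROM tab1')
     AS t(id integer, name text);
```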

--
Joshua Tolley / eggyknap
End Point Corporation
http://www.endpoint.com




Re: [GENERAL] syntax error during function call

2010-01-27 Thread Adrian Klaver

On 01/27/2010 08:41 AM, Aycock, Jeff R. wrote:

Adrian,

I tried that as well and got the same error result.

Regards,
Jeff Aycock




I went back to the original function and, assuming no cut/paste errors, 
there is a ';' missing after the last END.


END;

RETURN NEXT r;

END LOOP;

RETURN;

END

$BODY$

LANGUAGE 'plpgsql' ;


--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Thanks for the catch.  

However, this did not fix the syntax error.

-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@gmail.com] 
Sent: Wednesday, January 27, 2010 11:47 AM
To: Aycock, Jeff R.
Cc: Pavel Stehule; pgsql-general@postgresql.org
Subject: Re: [GENERAL] syntax error during function call

On 01/27/2010 08:41 AM, Aycock, Jeff R. wrote:
 Adrian,

 I tried that as well and got the same error result.

 Regards,
 Jeff Aycock



I went back to the original function and, assuming no cut/paste errors, 
there is a ';' missing after the last END.

 END;

 RETURN NEXT r;

 END LOOP;

 RETURN;

END

$BODY$

LANGUAGE 'plpgsql' ;


-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Adrian Klaver

On 01/27/2010 08:50 AM, Aycock, Jeff R. wrote:

Thanks for the catch.

However, this did not fix the syntax error.





Are you sure the function is being replaced with versions that have the 
changes? In other words, does \df+ show the changes?


--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Adrian Klaver

On 01/27/2010 08:50 AM, Aycock, Jeff R. wrote:

Thanks for the catch.

However, this did not fix the syntax error.



Looking back at this thread I second Andreas's suggestion. It seems the 
syntax is right but the names are wrong. What is the result when you do 
SELECT * FROM schema_1.snapshot_table?



--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Sam Mason
On Wed, Jan 27, 2010 at 10:40:17AM -0500, Aycock, Jeff R. wrote:
 EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM 
 schema_2.'||whoami||' where created_dt between $2 and $3;'

You'll also need to expand those other parameters.  The code is executed
in an independent scope and hence PG doesn't know what $1, $2 or $3 are.

The builtin function quote_literal is probably best to use here,
especially for the TEXT type.
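One hedged way to follow Sam's advice and splice everything into the dynamic string with the quoting functions (variable names are taken from the thread, but this is an untested sketch; the ::text casts make quote_literal applicable on older releases):

```sql
-- Inside the plpgsql function body: quote identifiers with quote_ident
-- and values with quote_literal instead of relying on $1..$3.
EXECUTE 'SELECT *, ' || quote_literal(user_id) || ', now()'
     || ' INTO schema_1.' || quote_ident(whoami)
     || ' FROM schema_2.' || quote_ident(whoami)
     || ' WHERE created_dt BETWEEN ' || quote_literal(begin_dt::text)
     || ' AND ' || quote_literal(end_dt::text);
```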

-- 
  Sam  http://samason.me.uk/



Re: [GENERAL] syntax error during function call

2010-01-27 Thread Aycock, Jeff R.
Osvaldo,

That did the trick!  Like you said, it had to do with the composite type.  I 
added the column name to the row variable assignment and it works now.

Thanks for the tip and thanks to everybody else for your assistance. 

Have a great day.

Regards,
Jeff Aycock


-Original Message-
From: Osvaldo Kussama [mailto:osvaldo.kuss...@gmail.com] 
Sent: Wednesday, January 27, 2010 11:57 AM
To: Aycock, Jeff R.
Subject: Re: [GENERAL] syntax error during function call

2010/1/27 Aycock, Jeff R. jeff.r.ayc...@saic.com:
 Hello,



 I have a function with three parameters that would populate a table in one
 schema from another table of the same name in another schema.  The tables
 are dynamically selected at execution time.



 CREATE OR REPLACE FUNCTION schema_1.getAllSnapShot(user_id text, begin_dt
 date, end_dt date) RETURNS SETOF schema_1.snapshot_table AS

 $BODY$

 DECLARE

     r schema_1.snapshot_table%rowtype;



 BEGIN

     FOR r IN SELECT * FROM schema_1.snapshot_table

r is a composite type.
http://www.postgresql.org/docs/current/interactive/rowtypes.html



     LOOP



     DECLARE whoami text := r;

I believe you need use: r.column_name
DECLARE whoami text := r.column_name;





     BEGIN



     EXECUTE 'SELECT *, $1, now() INTO schema_1.'||whoami||' FROM
 schema_2.'||whoami||' where created_dt between $2 and $3;'

     USING user_id, begin_dt, end_dt;



     END;

     RETURN NEXT r;

     END LOOP;

     RETURN;

 END

 $BODY$

 LANGUAGE 'plpgsql' ;



 The snapshot_table has only one column for the table name.



 The function call would look like this:



 SELECT * FROM schema_1.getAllSnapShot('xyz9','2009-01-01','2010-01-01');



 However, I get this error:



 ERROR:  syntax error at or near "("

 LINE 1: SELECT *, $1, now() INTO schema_1.(table_A) FROM schema_2.(table_A)
 where created_dt between $2 and $3;



 I tried different escape characters for the row variable (whoami) but get
 the same error as above.



 I also tried other approaches, including using tabname::regclass for the
 table names but nothing seem to work.



 Any suggestion would be greatly appreciated.



Osvaldo



[GENERAL] PL/R Windows binary for PostgreSQL 8.4.x available for testing

2010-01-27 Thread Joe Conway
Many people have been disappointed by the lack of a PL/R Windows binary
distribution since PostgreSQL 8.3 came out. Unfortunately the switch
from MinGW to MSVC as the Windows build system in Postgres 8.3
effectively rendered Windows an unsupported platform for PL/R.

I *finally* have made significant progress in this regard and have a
binary plr.dll available for testing with PostgreSQL 8.4.x.

Please find it here:
   http://www.joeconway.com/plr/plr.zip

This was compiled with VC 2005, against Postgres 8.4.x and R-2.10.x, and
is reported to work against somewhat older versions of R, but will *not*
work with earlier PostgreSQL versions. I'll have to build a binary for 8.3.x
specifically. Before I do anything more, however, I was hoping some
people might test it.

There is presently no installer. Drop plr.dll in your $libdir directory
-- typically that will be somewhere similar to:

C:\Program Files\PostgreSQL\8.4\lib

The zip also includes plr.sql which must be run to load plr into your
database; copy that somewhere similar to:

C:\Program Files\PostgreSQL\8.4\share\contrib

Finally there is a doc folder with html and pdf documentation.

Please let me know if you test -- successful or not.

Thanks,

Joe





[GENERAL] indexes problem

2010-01-27 Thread J Scanf
Hi all. I have a problem with two tables of same structure:
'orders_stat_pre' and 'orders_stat_pre_new'.
store=# \d orders_stat_pre
 Column | Type   | Modifiers
++---
 id | integer| not null
 user_name  | text   |
 category_name  | character varying(10)  |
 ctime  | timestamp without time zone |
Indexes:
orders_stat_pre_pkey PRIMARY KEY, btree (id)
orders_stat_pre_user_idx btree (user_name, category_name, ctime DESC)


store=# \d orders_stat_pre_new
 Column | Type   | Modifiers
++---
 id | integer|
 user_name  | text   |
 category_name  | character varying(10)  |
 ctime  | timestamp without time zone |
Indexes:
orders_stat_pre_new_user_idx btree (user_name, category_name, ctime
DESC)

I try to select last 10 orders from old table (i.e. 'orders_stat_pre'):

store=# explain select * from orders_stat_pre where user_name = 'Alex' and
category_name = 'Books' order by ctime desc limit 10;
  QUERY PLAN
---
 Limit  (cost=0.00..40.40 rows=10 width=335)
   ->  Index Scan using orders_stat_pre_user_idx on orders_stat_pre  (cost=0.00..15505.87 rows=3838 width=335)
         Index Cond: ((user_name = 'Alex'::text) AND ((category_name)::text = 'Books'::text))
(3 rows)


Then I do the same query on new table (i.e. 'orders_stat_pre_new'):

store=# explain select * from orders_stat_pre_new where user_name = 'Alex'
and category_name = 'Books' order by ctime desc limit 10;
  QUERY PLAN

 Limit  (cost=1719969.83..1719969.86 rows=10 width=563)
   ->  Sort  (cost=1719969.83..1719981.08 rows=4499 width=563)
         Sort Key: ctime
         ->  Seq Scan on orders_stat_pre_new  (cost=0.00..1719872.61 rows=4499 width=563)
               Filter: ((user_name = 'Alex'::text) AND ((category_name)::text = 'Books'::text))
(5 rows)


I'm confused on how can I optimize the last query? Or where I can find
corresponding info. Thank you!


Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pierre Chevalier

Pavel Stehule wrote:

...
Actually, if the small application was reading cursor, and transforming it
to a VIEW, this would solve both problems at once:
something like:

CREATE VIEW crosstabbed_thing AS
(cursor_to_dataset(SELECT do_cross_cursor(...)));


No, it isn't possible. A VIEW has to have a fixed number of columns.
  


Ach, flute... ;-(


You can write function that reads a cursor, create temp table, store
result and will do a copy from temp table.
  


Well... Not extremely elegant (it reminds me of when I was stuck with 
access and I could not do nested queries...), but why not?
Actually, if the table is a real temporary one (CREATE TEMPORARY TABLE), 
it should not induce too much mess in the database layout.



There is one significant rule - any SELECT-based statement has to have
a known number of columns at planning time - so the number of columns
must not depend on the data. There is no workaround for it. You can
only avoid fixed SELECT statements (VIEWs too - a view is a stored
SELECT).
  


All right, it makes sense now...
But... Idea! (careful...) What about doing, just like for a VIEW, a 
CREATE OR REPLACE systematically when we run this kind of function? The 
only drawback I can think of is that we can't have anything dependent on 
the VIEW we generate.


Another idea (more dangerous...): what about setting a sort of flag which 
says that this VIEW should *not* be included in the planner? And it will 
have an unexpected number of columns? Would this be *absolutely* impossible 
to state?



look on SPI interface http://www.postgresql.org/docs/8.4/interactive/spi.html
http://www.postgresql.org/docs/8.4/interactive/spi-examples.html
but you have to use cursor based interface.
  


I'll try to Read The French Manual, rather than the one in English! I'll 
look for it...



But the whole point is: this need for a generic cross-tab is really 
annoying for a large number of people, it has been there for a long 
time, and I know some people who just walk away from PostgreSQL only 
because this feature is lacking, and return happily to their 
m$-access, thereby ignoring the pure wealth of PostgreSQL: sad, isn't 
it?...


A+
Pierre

--

Pierre Chevalier
   Mesté Duran
   32100 Condom
 Tél+fax  :09 75 27 45 62
   05 62 28 06 83
06 37 80 33 64
 Émail: pierre.chevalier1967CHEZfree.fr
 icq# : 10432285
 http://pierremariechevalier.free.fr/
 Logiciels Libres dans le Gers: http://gnusquetaires.org/







Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Pierre Chevalier

Pavel Stehule wrote:

...
But what I would like to do is to redirect the output of the function (that
is, the 'result' cursor) to a view, which will be used in other places. I
thought something like FETCH INTO would do the trick, but it doesn't.


Also, I need, at some point, to export the output to some CSV file. I
usually do a quick bash script as follows:

echo COPY (SELECT * FROM dh_litho ORDER BY id, depto) TO stdout WITH CSV
HEADER; | psql bdexplo  somefile.csv

...



Hmm... it cannot work :(. You cannot forward a FETCH ALL statement on
the server side without programming in C.
  


Ach! Too bad... Oh but... I used to program in C, long time ago, on HP-UX...


In this case you need a small application to read the cursor and
transform the output to CSV.
  


Actually, if the small application was reading cursor, and transforming 
it to a VIEW, this would solve both problems at once:

something like:

CREATE VIEW crosstabbed_thing AS
(cursor_to_dataset(SELECT do_cross_cursor(...)));

And then:
echo COPY (SELECT * FROM crosstabbed_thing) TO stdout WITH CSV HEADER; 
| psql  youpi.csv


And there we are!
What about this plan? The cursor_to_dataset() function would have to be 
written in C, if I understand correctly.
I have to dig out my old C book, and browse through the postgresql APIs, 
code examples, etc., I guess...


A+
Pierre

--

Pierre Chevalier
   Mesté Duran
   32100 Condom
 Tél+fax  :09 75 27 45 62
   05 62 28 06 83
06 37 80 33 64
 Émail: pierre.chevalier1967CHEZfree.fr
 icq# : 10432285
 http://pierremariechevalier.free.fr/
 Logiciels Libres dans le Gers: http://gnusquetaires.org/







Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-27 Thread Alban Hertroys
On 27 Jan 2010, at 2:00, Yan Cheng Cheok wrote:
 However, whenever I insert row into measurement table,
 I realize its primary key value is going from 2, 4, 6, 8,
 10...
 
 May I know how can I prevent this?
 
 Apparently nextval on that sequence gets called multiple
 times in your queries.
 
 
 Do you have any idea why it happens so?

It's something you're doing in your queries that causes this. You haven't 
provided any info about that, so I can only guess.
The problem could for example be that you're selecting nextval() of the 
sequence and then don't fill in the value for that column in a subsequent 
INSERT, causing the default to call nextval() again.

 Also, is it necessary to create index for
 measurement_id found in measurement's child table? I am
 concern on the read speed.

 But...
 
 measurement table does contains
 
 CONSTRAINT pk_measurement_id PRIMARY KEY (measurement_id),
 
 when measurement_0 child table inherits from measurement table, isn't it will 
 inherit PRIMARY KEY (measurement_id)?
 
 Do I still have to create index for measurement_0.measurement_id ?


Ah I see, you were asking about table inheritance. To quote the documentation 
at http://www.postgresql.org/docs/8.4/interactive/ddl-inherit.html:

All check constraints and not-null constraints on a parent table are 
automatically inherited by its children. Other types of constraints (unique, 
primary key, and foreign key constraints) are not inherited.

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.







Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-27 Thread Alban Hertroys
On 27 Jan 2010, at 4:22, Yan Cheng Cheok wrote:

 Hello all,
 
 I solve my problem using the following. It seems that when inherit from 
 parent table, the parent table's constraint is not being carried over to 
 child table.
 
 CREATE OR REPLACE FUNCTION measurement_insert_trigger()
 RETURNS TRIGGER AS 
 $BODY$DECLARE
measurement_table_index bigint;
measurement_table_name text;
 BEGIN
-- 20 is just an example here right now. The true value will be 100,000,000
measurement_table_index = NEW.measurement_id % 20;
measurement_table_name = 'measurement_' || measurement_table_index;
 
-- Since measurement_id for parent table is already a bigserial
-- Do I still need to create index for child's measurement_id?
 
IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = 
 measurement_table_name) THEN
EXECUTE 'CREATE TABLE ' || quote_ident(measurement_table_name) || '
(
  CONSTRAINT pk_measurement_id_' || measurement_table_index || ' 
 PRIMARY KEY (measurement_id),
  CONSTRAINT fk_unit_id_' || measurement_table_index || ' FOREIGN KEY 
 (fk_unit_id)
  REFERENCES unit (unit_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE CASCADE
) INHERITS (measurement);';
EXECUTE 'CREATE INDEX ' || quote_ident(measurement_table_name) || 
 '_measurement_id ON ' || quote_ident(measurement_table_name) || 
 '(measurement_id);';  


I think you should actually add the constraints back in there, not just create 
an index.

EXECUTE 'ALTER TABLE ' || ... ||
' ADD PRIMARY KEY (measurement_id),' ||
' ADD FOREIGN KEY (fk_unit_id) REFERENCES unit (unit_id) MATCH SIMPLE ON UPDATE 
NO ACTION ON DELETE CASCADE;';

One thing to note though is that this primary key is not guaranteed to be 
unique across different partitions or in the parent table (as the rows aren't 
actually IN the parent table).

END IF;
 
EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name) || 
 '(measurement_id, fk_unit_id, v) VALUES (' || NEW.measurement_id || ',' || 
 NEW.fk_unit_id || ',' || quote_literal(NEW.v) || ')';
RETURN NULL;
 END;$BODY$
 LANGUAGE plpgsql;

Alban Hertroys

--
Screwing up is the best way to attach something to the ceiling.







[GENERAL] Google Summer of Code 2010 is on! (applications due March 9)

2010-01-27 Thread Selena Deckelmann
Hi!

I'm happy to facilitate this and get the details in for our
application. Seems like we have lots of things that we could get
students involved with, and of course, we tend to get interesting
projects pitched to us that we haven't thought of before.

I've attended the Mentor Summit after GSoC for the last two years, and
found it to be a great resource for learning how all projects are
recruiting and retaining new members. Many of these projects admire
Postgres for its stability over the years, and I've been able to
communicate some of the work that we've all done on the commit fest,
our review process and generally how our community works to many other
projects.

Those who would like to mentor and help administrate the program,
please get in touch with me directly sel...@postgresql.org. I'll set
up some lists to keep us in sync, and we'll report out what happens.

-selena

-- Forwarded message --
From: LH (Leslie Hawthorn) lho...@gmail.com
Date: Tue, Jan 26, 2010 at 4:10 PM
Subject: GSoC 2010 is on.
To: Google Summer of Code Mentors List
google-summer-of-code-mentors-l...@googlegroups.com


Hello folks,

If you're not following the program discussion list, you may have
missed this mail with some details about timing for GSoC 2010:

http://groups.google.com/group/google-summer-of-code-discuss/browse_thread/thread/d839c0b02ac15b3f

Cheers,
LH

---

Hello everyone,
Many of you were wondering if Google Summer of Code 2010 is on for
2010 and the answer is yes! We will begin accepting applications from
would-be mentoring organizations beginning March 8th at approximately
19:00 UTC, with applications closing on March 12th at 23:00 UTC.
Students can apply between 19:00 UTC on March 29th to 19:00 UTC on
April 9th.
We will be updating the program website [0], including the FAQs [1],
over the next few days. Not much is going to change except the
timeline, though there are a few FAQs that will be added. If you have
any questions in the interim please do send them to the list. Please
note that questions about which project you should select, etc., will
likely be responded to with depends on your skill set, so please
take the time to do some research [2] about which project is right for
you.
While mentoring organizations for 2010 will not be announced until March
18th, a great number of the projects that have participated in the
past [3] will return once again in 2010. Of course, that's not a
guarantee and we cannot accept every great project that applies. But
if you want to get a jump start on your participation in GSoC 2010,
consider lurking in the IRC channel of a past participating project
and seeing what useful things you can learn about their code base and
community. Check out project forums to see if it's a place you'd like
to spend your time.
I'll be sending a follow up message in a few minutes about what to
expect in the next few days/weeks and how you can help Google Summer
of Code 2010.
Google Summer of Code 2010 is on! Celebrate!
[0] - http://socghop.appspot.com
[1] - http://socghop.appspot.com/document/show/gsoc_program/google/gsoc2009...
[2] - http://delicious.com/gsoc2009
[3] - http://socghop.appspot.com/gsoc/program/accepted_orgs/google/gsoc2009
Cheers,
LH
-- 
Leslie Hawthorn
Program Manager - Open Source
Google Inc.

-- 
http://chesnok.com/daily - me
http://endpoint.com - work



Re: [GENERAL] indexes problem

2010-01-27 Thread Andy Colson

On 1/27/2010 7:32 AM, J Scanf wrote:

Hi all. I have a problem with two tables of same structure:
'orders_stat_pre' and 'orders_stat_pre_new'.
store=# \d orders_stat_pre
  Column | Type   | Modifiers
++---
  id | integer| not null
  user_name  | text   |
  category_name  | character varying(10)  |
  ctime  | timestamp without time zone |
Indexes:
orders_stat_pre_pkey PRIMARY KEY, btree (id)
orders_stat_pre_user_idx btree (user_name, category_name, ctime DESC)


store=# \d orders_stat_pre_new
  Column | Type   | Modifiers
++---
  id | integer|
  user_name  | text   |
  category_name  | character varying(10)  |
  ctime  | timestamp without time zone |
Indexes:
orders_stat_pre_new_user_idx btree (user_name, category_name, ctime DESC)

I try to select last 10 orders from old table (i.e. 'orders_stat_pre'):

store=# explain select * from orders_stat_pre where user_name = 'Alex'
and category_name = 'Books' order by ctime desc limit 10;
   QUERY PLAN
---
  Limit  (cost=0.00..40.40 rows=10 width=335)
    ->  Index Scan using orders_stat_pre_user_idx on orders_stat_pre
(cost=0.00..15505.87 rows=3838 width=335)
  Index Cond: ((user_name = 'Alex'::text) AND
((category_name)::text = 'Books'::text))
(3 rows)

Then I do the same query on new table (i.e. 'orders_stat_pre_new'):

store=# explain select * from orders_stat_pre_new where user_name =
'Alex' and category_name = 'Books' order by ctime desc limit 10;
   QUERY PLAN

  Limit  (cost=1719969.83..1719969.86 rows=10 width=563)
    ->  Sort  (cost=1719969.83..1719981.08 rows=4499 width=563)
  Sort Key: ctime
   ->  Seq Scan on orders_stat_pre_new  (cost=0.00..1719872.61
rows=4499 width=563)
Filter: ((user_name = 'Alex'::text) AND
((category_name)::text = 'Books'::text))
(5 rows)

I'm confused on how can I optimize the last query? Or where I can find
corresponding info. Thank you!



1) an explain analyze might give more useful info.
2) are your stats up do date?  run 'analyze orders_stat_pre_new' and try 
again
3) you might be indexing too much.  An index on just user_name might be 
enough.  The extra fields (category_name and ctime) may not help you as 
much as you think it will.  A more complicated index (multiple fields) 
makes it harder for PG to use.  Drop that index and create one on just 
user_name and compare the times.
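Suggestions 1) and 2) translate directly into the following (table and column names taken from the thread):

```sql
-- 2) Refresh planner statistics for the new table first:
ANALYZE orders_stat_pre_new;

-- 1) Then look at actual row counts and timings, not just estimates:
EXPLAIN ANALYZE
SELECT * FROM orders_stat_pre_new
WHERE user_name = 'Alex' AND category_name = 'Books'
ORDER BY ctime DESC
LIMIT 10;
```

With statistics up to date, the planner should again prefer orders_stat_pre_new_user_idx for this query, as it does on the old table.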


-Andy



[GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Mike Bresnahan
I have deployed PostgresSQL 8.4.1 on a Fedora 9 c1.xlarge (8x1 cores) instance
in the Amazon E2 Cloud. When I run pgbench in read-only mode (-S) on a small
database, I am unable to peg the CPUs no matter how many clients I throw at it.
In fact, the CPU utilization never drops below 60% idle. I also tried this on
Fedora 12 (kernel 2.6.31) and got the same basic result. What's going on here?
Am I really only utilizing 40% of the CPUs? Is this to be expected on virtual
(xen) instances?

[r...@domu-12-31-39-0c-88-c1 ~]# uname -a 
Linux domU-12-31-39-0C-88-C1 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20
17:48:28 EST 2009 x86_64 x86_64 x86_64 GNU/Linux

-bash-4.0# pgbench -S -c 16 -T 30 -h domU-12-31-39-0C-88-C1 -U postgres
Password: 
starting vacuum...end.
transaction type: SELECT only
scaling factor: 64
query mode: simple
number of clients: 16
duration: 30 s
number of transactions actually processed: 590508
tps = 19663.841772 (including connections establishing)
tps = 19710.041020 (excluding connections establishing)

top - 15:55:05 up  1:33,  2 users,  load average: 2.44, 0.98, 0.44
Tasks: 123 total,  11 running, 112 sleeping,   0 stopped,   0 zombie
Cpu(s): 18.9%us,  8.8%sy,  0.0%ni, 70.6%id,  0.0%wa,  0.0%hi,  1.7%si,  0.0%st
Mem:   7348132k total,  1886912k used,  5461220k free,    34432k buffers
Swap:        0k total,        0k used,        0k free,  1456472k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2834 postgres  15   0  191m  72m  70m S   16  1.0   0:00.66 postmaster
 2838 postgres  15   0  191m  66m  64m R   15  0.9   0:00.62 postmaster
 2847 postgres  15   0  191m  70m  68m S   15  1.0   0:00.59 postmaster
 2837 postgres  15   0  191m  72m  70m S   14  1.0   0:00.47 postmaster
 2842 postgres  15   0  191m  66m  64m R   14  0.9   0:00.48 postmaster
 2835 postgres  15   0  191m  69m  67m S   14  1.0   0:00.54 postmaster
 2839 postgres  15   0  191m  69m  67m R   14  1.0   0:00.60 postmaster
 2840 postgres  15   0  191m  68m  67m R   14  1.0   0:00.58 postmaster
 2833 postgres  15   0  191m  68m  66m R   14  1.0   0:00.50 postmaster
 2845 postgres  15   0  191m  70m  68m R   14  1.0   0:00.50 postmaster
 2846 postgres  15   0  191m  67m  65m R   14  0.9   0:00.51 postmaster
 2836 postgres  15   0  191m  66m  64m S   12  0.9   0:00.43 postmaster
 2844 postgres  15   0  191m  68m  66m R   11  1.0   0:00.40 postmaster
 2841 postgres  15   0  191m  65m  64m R   11  0.9   0:00.43 postmaster
 2832 postgres  15   0  191m  67m  65m S   10  0.9   0:00.38 postmaster
 2843 postgres  15   0  191m  67m  66m S   10  0.9   0:00.43 postmaster

[r...@domu-12-31-39-0c-88-c1 ~]# iostat -d 2 -x
Linux 2.6.21.7-2.ec2.v1.2.fc8xen (domU-12-31-39-0C-88-C1)   01/27/10

Device: rrqm/s   wrqm/s r/s w/s   rsec/s   wsec/s 

Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Jim Mlodgenski
On Wed, Jan 27, 2010 at 3:59 PM, Mike Bresnahan
mike.bresna...@bestbuy.comwrote:

 I have deployed PostgresSQL 8.4.1 on a Fedora 9 c1.xlarge (8x1 cores)
 instance
 in the Amazon E2 Cloud. When I run pgbench in read-only mode (-S) on a
 small
 database, I am unable to peg the CPUs no matter how many clients I throw at
 it.
 In fact, the CPU utilization never drops below 60% idle. I also tried this
 on
 Fedora 12 (kernel 2.6.31) and got the same basic result. What's going on
 here?
 Am I really only utilizing 40% of the CPUs? Is this to be expected on
 virtual
 (xen) instances?

I have seen behavior like this in the past on EC2. I believe your
bottleneck may be pulling the data out of cache. I benchmarked this a while
back and found that memory speeds are not much faster than disk speeds on
EC2. I am not sure if that is true of Xen in general or if it's just limited
to the cloud.
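One rough way to test that theory (a sketch, not a rigorous benchmark; dd's zero-to-null copy is only a crude proxy for memory bandwidth):

```shell
# Copy 4 GiB through memory in 1 MiB blocks; dd prints throughput on stderr.
# On typical bare-metal hardware this reports several GB/s, so a figure in
# the disk-speed range would support the slow-memory theory.
dd if=/dev/zero of=/dev/null bs=1M count=4096
```

Running it a few times, and comparing against a physical host of similar vintage, gives a rough feel for the virtualization penalty.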


 [r...@domu-12-31-39-0c-88-c1 ~]# uname -a
 Linux domU-12-31-39-0C-88-C1 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20
 17:48:28 EST 2009 x86_64 x86_64 x86_64 GNU/Linux

 -bash-4.0# pgbench -S -c 16 -T 30 -h domU-12-31-39-0C-88-C1 -U postgres
 Password:
 starting vacuum...end.
 transaction type: SELECT only
 scaling factor: 64
 query mode: simple
 number of clients: 16
 duration: 30 s
 number of transactions actually processed: 590508
 tps = 19663.841772 (including connections establishing)
 tps = 19710.041020 (excluding connections establishing)

 top - 15:55:05 up  1:33,  2 users,  load average: 2.44, 0.98, 0.44
 Tasks: 123 total,  11 running, 112 sleeping,   0 stopped,   0 zombie
 Cpu(s): 18.9%us,  8.8%sy,  0.0%ni, 70.6%id,  0.0%wa,  0.0%hi,  1.7%si,
  0.0%st
 Mem:   7348132k total,  1886912k used,  5461220k free,    34432k buffers
 Swap:        0k total,        0k used,        0k free,  1456472k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  2834 postgres  15   0  191m  72m  70m S   16  1.0   0:00.66 postmaster
  2838 postgres  15   0  191m  66m  64m R   15  0.9   0:00.62 postmaster
  2847 postgres  15   0  191m  70m  68m S   15  1.0   0:00.59 postmaster
  2837 postgres  15   0  191m  72m  70m S   14  1.0   0:00.47 postmaster
  2842 postgres  15   0  191m  66m  64m R   14  0.9   0:00.48 postmaster
  2835 postgres  15   0  191m  69m  67m S   14  1.0   0:00.54 postmaster
  2839 postgres  15   0  191m  69m  67m R   14  1.0   0:00.60 postmaster
  2840 postgres  15   0  191m  68m  67m R   14  1.0   0:00.58 postmaster
  2833 postgres  15   0  191m  68m  66m R   14  1.0   0:00.50 postmaster
  2845 postgres  15   0  191m  70m  68m R   14  1.0   0:00.50 postmaster
  2846 postgres  15   0  191m  67m  65m R   14  0.9   0:00.51 postmaster
  2836 postgres  15   0  191m  66m  64m S   12  0.9   0:00.43 postmaster
  2844 postgres  15   0  191m  68m  66m R   11  1.0   0:00.40 postmaster
  2841 postgres  15   0  191m  65m  64m R   11  0.9   0:00.43 postmaster
  2832 postgres  15   0  191m  67m  65m S   10  0.9   0:00.38 postmaster
  2843 postgres  15   0  191m  67m  66m S   10  0.9   0:00.43 postmaster

 [r...@domu-12-31-39-0c-88-c1 ~]# iostat -d 2 -x
 Linux 2.6.21.7-2.ec2.v1.2.fc8xen (domU-12-31-39-0C-88-C1)   01/27/10

 Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
 sda1              0.57    15.01    1.32    3.56    34.39   148.57    37.52     0.28   57.35   3.05   1.49
 sdb1              0.03   112.38    5.50   12.11    87.98   995.91    61.57     1.88  106.61   2.23   3.93

 Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
 sda1              0.00     0.00    0.00    1.79     0.00    28.57    16.00     0.00    2.00   1.50   0.27
 sdb1              0.00     4.46    0.00   14.29     0.00   150.00    10.50     0.37   26.00   2.56   3.66

 Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
 sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
 sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

 Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
 sda1              0.00     3.57    0.00    0.79     0.00    34.92    44.00     0.00    3.00   3.00   0.24
 sdb1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
--
Jim Mlodgenski
EnterpriseDB (http://www.enterprisedb.com)


[GENERAL] Memory Usage and OpenBSD

2010-01-27 Thread Jeff Ross
I'm not getting something about the best way to set up a server using 
PostgreSQL as a backend for a busy web server running drupal.


The postgresql performance folks
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
say that in a server with more than 1GB of RAM

a reasonable starting value for shared_buffers is 1/4 of the memory in your 
system.


Okay, this server has 4GB of ram.  pgtune suggests the following values for 
predominantly web-based usage:


maintenance_work_mem = 240MB # pgtune wizard 2010-01-27
checkpoint_completion_target = 0.7 # pgtune wizard 2010-01-27
effective_cache_size = 2816MB # pgtune wizard 2010-01-27
work_mem = 18MB # pgtune wizard 2010-01-27
wal_buffers = 4MB # pgtune wizard 2010-01-27
checkpoint_segments = 8 # pgtune wizard 2010-01-27
shared_buffers = 960MB # pgtune wizard 2010-01-27
max_connections = 200 # pgtune wizard 2010-01-27

Here is where I'm not doing something right.  With my shared_buffers at 960MB, 
I need to adjust kern.shminfo.shmmax to 1GB (1073741824) to get postgres to 
start.  I thought I'd also need to adjust the kern.shminfo.shmall value, but 
that seems to change automatically whenever I adjust kern.shminfo.shmmax.


$ sysctl -a | grep kern.s

kern.securelevel=1
kern.saved_ids=1
kern.somaxconn=128
kern.sominconn=80
kern.sysvmsg=1
kern.sysvsem=1
kern.sysvshm=1
kern.stackgap_random=262144
kern.splassert=1
kern.seminfo.semmni=256
kern.seminfo.semmns=2048
kern.seminfo.semmnu=30
kern.seminfo.semmsl=60
kern.seminfo.semopm=100
kern.seminfo.semume=10
kern.seminfo.semusz=100
kern.seminfo.semvmx=32767
kern.seminfo.semaem=16384
kern.shminfo.shmmax=1073741824
kern.shminfo.shmmin=1
kern.shminfo.shmmni=128
kern.shminfo.shmseg=128
kern.shminfo.shmall=262144
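To make those values survive a reboot, the corresponding /etc/sysctl.conf entries would look like this (a sketch; the page arithmetic assumes the 4 KB page size implied by the values above):

```
# /etc/sysctl.conf (sketch)
# shmmax is in bytes: 1 GB leaves headroom above the 960MB of shared_buffers.
kern.shminfo.shmmax=1073741824
# shmall is in pages: 1073741824 / 4096 = 262144, matching the value the
# kernel derived automatically above.
kern.shminfo.shmall=262144
```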

At these values postgres will start and top shows a large amount of memory 
still free:

Memory: Real: 55M/465M act/tot  Free: 3433M  Swap: 0K/8197M used/tot

Running a simple select-only pgbench test against it will fail with an out of 
memory error as it tries to vacuum --analyze the newly created database with 
7,500,000 tuples.


pgbench -i -s 75 -h varley.openvistas.net -U _postgresql pgbench
vacuumdb --analyze -h varley.openvistas.net -U _postgresql pgbench

When I run this and have top refreshing every second, I never see the free 
memory drop below 3400M, so I'm not sure what memory we are running out of. 
systat -i shows similar amounts of memory still free.


The postgresql logs can perhaps shed some light on this for someone more 
knowledgeable than myself.  Here is what is in the log immediately prior to 
the out of memory error:


2010-01-27 14:07:26.326319500 TopMemoryContext: 60712 total in 7 blocks; 4488 
free (8 chunks); 56224 used
2010-01-27 14:07:26.326374500   TopTransactionContext: 8192 total in 1 blocks; 
5408 free (0 chunks); 2784 used
2010-01-27 14:07:26.326389500   Operator class cache: 8192 total in 1 blocks; 
3848 free (0 chunks); 4344 used
2010-01-27 14:07:26.326412500   MessageContext: 8192 total in 1 blocks; 5488 
free (1 chunks); 2704 used
2010-01-27 14:07:26.326434500   smgr relation table: 8192 total in 1 blocks; 
2816 free (0 chunks); 5376 used
2010-01-27 14:07:26.326440500   TransactionAbortContext: 32768 total in 1 
blocks; 32752 free (0 chunks); 16 used
2010-01-27 14:07:26.326462500   Portal hash: 8192 total in 1 blocks; 3912 free 
(0 chunks); 4280 used
2010-01-27 14:07:26.326469500   PortalMemory: 8192 total in 1 blocks; 8040 
free (0 chunks); 152 used
2010-01-27 14:07:26.326490500 PortalHeapMemory: 15360 total in 4 blocks; 
7944 free (12 chunks); 7416 used
2010-01-27 14:07:26.326496500   ExecutorState: 8192 total in 1 blocks; 
7928 free (0 chunks); 264 used
2010-01-27 14:07:26.326517500 ExprContext: 0 total in 0 blocks; 0 free 
(0 chunks); 0 used
2010-01-27 14:07:26.326539500   TupleSort: 24600 total in 2 blocks; 7584 
free (0 chunks); 17016 used
2010-01-27 14:07:26.326562500   TupleSort: 92266520 total in 17 blocks; 
8379568 free (9 chunks); 83886952 used
2010-01-27 14:07:26.326584500   Relcache by OID: 8192 total in 1 blocks; 2856 
free (0 chunks); 5336 used
2010-01-27 14:07:26.326624500   CacheMemoryContext: 667696 total in 20 blocks; 
235240 free (7 chunks); 432456 used
2010-01-27 14:07:26.326646500 pgbench_accounts_pkey: 1024 total in 1 
blocks; 344 free (0 chunks); 680 used
2010-01-27 14:07:26.326652500 pg_constraint_contypid_index: 1024 total in 
1 blocks; 304 free (0 chunks); 720 used
2010-01-27 14:07:26.326674500 pg_constraint_conrelid_index: 1024 total in 
1 blocks; 304 free (0 chunks); 720 used
2010-01-27 14:07:26.326681500 pg_constraint_conname_nsp_index: 1024 total 
in 1 blocks; 240 free (0 chunks); 784 used
2010-01-27 14:07:26.326703500 pg_shdepend_reference_index: 1024 total in 1 
blocks; 240 free (0 chunks); 784 used
2010-01-27 14:07:26.326724500 pg_index_indrelid_index: 1024 total in 1 
blocks; 304 free (0 chunks); 720 used
2010-01-27 14:07:26.326730500 pg_inherits_relid_seqno_index: 1024 

Re: [GENERAL] Memory Usage and OpenBSD

2010-01-27 Thread Tom Lane
Jeff Ross jr...@wykids.org writes:
 Running a simple select-only pgbench test against it will fail with an out of
 memory error as it tries to vacuum --analyze the newly created database with
 7,500,000 tuples.

Better look at the ulimit values the postmaster is started with;
you shouldn't be getting that out-of-memory error AFAICS, unless
there's a very restrictive ulimit on what an individual process can
allocate.
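A quick way to see what the postmaster actually inherits (a sketch; the rc-script comment is hypothetical) is to print the limits from the shell that launches it:

```shell
#!/bin/sh
# Print the per-process limits this shell, and anything it execs, runs under.
echo "datasize:  $(ulimit -d)"   # data segment: the limit suspected here
echo "stacksize: $(ulimit -s)"
echo "openfiles: $(ulimit -n)"
# e.g. drop these echoes into the rc script immediately before the line that
# runs: su -l _postgresql -c '.../pg_ctl start ...'
```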

regards, tom lane



Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Mike Bresnahan
Jim Mlodgenski jimmy76 at gmail.com writes:
 I have seen behavior like this in the past on EC2. I believe your bottleneck
may be pulling the data out of cache. I benchmarked this a while back and found
that memory speeds are not much faster than disk speeds on EC2. I am not sure if
that is true of Xen in general or if its just limited to the cloud.  

When the CPU is waiting for a memory read, are the CPU cycles not charged to the
currently running process?




Re: [GENERAL] Memory Usage and OpenBSD

2010-01-27 Thread Jeff Ross

Tom Lane wrote:

Jeff Ross jr...@wykids.org writes:

Running a simple select-only pgbench test against it will fail with an out of
memory error as it tries to vacuum --analyze the newly created database with
7,500,000 tuples.


Better look at the ulimit values the postmaster is started with;
you shouldn't be getting that out-of-memory error AFAICS, unless
there's a very restrictive ulimit on what an individual process can
allocate.

regards, tom lane



Thanks!

OpenBSD makes a _postgresql user on install and it is in the daemon class with 
the following values:


daemon:\
:ignorenologin:\
:datasize=infinity:\
:maxproc=infinity:\
:openfiles-cur=128:\
:stacksize-cur=8M:\
:localcipher=blowfish,8:\
:tc=default:

The OpenBSD-specific readme suggests making a special postgresql login class 
and bumping openfiles-cur to 768, but I don't see how that helps here.  Would 
bumping stacksize also help?


Jeff Ross




Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Greg Smith

Mike Bresnahan wrote:

top - 15:55:05 up  1:33,  2 users,  load average: 2.44, 0.98, 0.44
Tasks: 123 total,  11 running, 112 sleeping,   0 stopped,   0 zombie
Cpu(s): 18.9%us,  8.8%sy,  0.0%ni, 70.6%id,  0.0%wa,  0.0%hi,  1.7%si,  0.0%st
Mem:   7348132k total,  1886912k used,  5461220k free,    34432k buffers
Swap:        0k total,        0k used,        0k free,  1456472k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2834 postgres  15   0  191m  72m  70m S   16  1.0   0:00.66 postmaster
 2838 postgres  15   0  191m  66m  64m R   15  0.9   0:00.62 postmaster


Could you try this again with top -c, which will label these 
postmaster processes usefully, and include the pgbench client itself in 
what you post?  It's hard to sort out what's going on in these 
situations without that style of breakdown.


--
Greg Smith2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.com




Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread John R Pierce


I have seen behavior like this in the past on EC2. I believe your 
bottleneck may be pulling the data out of cache. I benchmarked this a 
while back and found that memory speeds are not much faster than disk 
speeds on EC2. I am not sure if that is true of Xen in general or if 
its just limited to the cloud.  


that doesn't make much sense.

more likely, he's disk IO bound, but it's hard to say as that iostat output 
only showed a couple of 2-second slices of work.  the first output, which 
shows the average since system startup, seems to show relatively high 
average wait times of around 100 ms, yet the samples below only show 
0, 2, and 3 ms await.






Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Mike Bresnahan
John R Pierce pierce at hogranch.com writes:
 more likely, he's disk IO bound, but hard to say as that iostat output 
 only showed a couple 2 second slices of work.   the first output, which 
 shows average since system startup, seems to show the system has had 
 relatively high average wait times of 100ms on the average, yet the 
 samples below only show 0, 2, 3mS await.

I don't think the problem is disk I/O. The database easily fits in the available
RAM (in fact there is a ton of RAM free) and iostat does not show a heavy load.
 







Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Mike Bresnahan
 Could you try this again with top -c, which will label these 
 postmaster processes usefully, and include the pgbench client itself in 
 what you post?  It's hard to sort out what's going on in these 
 situations without that style of breakdown.

I had run pgbench on a separate instance last time, but this time I ran it on
the same machine. With the -c option, top(1) reports that many of the postgres
processes are idle.

top - 18:25:23 up 8 min,  2 users,  load average: 1.52, 1.32, 0.55
Tasks: 218 total,  15 running, 203 sleeping,   0 stopped,   0 zombie
Cpu(s): 32.3%us, 17.5%sy,  0.0%ni, 49.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.6%st
Mem:   7358492k total,  1620500k used,  5737992k free,    11144k buffers
Swap:        0k total,        0k used,        0k free,  1248388k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1323 postgres  20   0 50364 2192 1544 R 56.7  0.0   0:03.19 pgbench -S -c 16 -T 30
 1337 postgres  20   0  197m 114m 112m R 25.4  1.6   0:01.35 postgres: postgres postgres [local] SELECT
 1331 postgres  20   0  197m 113m 111m R 24.4  1.6   0:01.16 postgres: postgres postgres [local] idle
 1335 postgres  20   0  197m 114m 112m R 24.1  1.6   0:01.30 postgres: postgres postgres [local] SELECT
 1340 postgres  20   0  197m 113m 112m R 22.7  1.6   0:01.28 postgres: postgres postgres [local] idle
 1327 postgres  20   0  197m 114m 113m R 22.1  1.6   0:01.26 postgres: postgres postgres [local] idle
 1328 postgres  20   0  197m 114m 113m R 21.8  1.6   0:01.32 postgres: postgres postgres [local] SELECT
 1332 postgres  20   0  197m 114m 112m R 21.8  1.6   0:01.11 postgres: postgres postgres [local] SELECT
 1326 postgres  20   0  197m 112m 110m R 21.4  1.6   0:01.10 postgres: postgres postgres [local] idle
 1325 postgres  20   0  197m 112m 110m R 20.8  1.6   0:01.28 postgres: postgres postgres [local] SELECT
 1330 postgres  20   0  197m 113m 111m R 20.4  1.6   0:01.21 postgres: postgres postgres [local] idle
 1339 postgres  20   0  197m 113m 111m R 20.4  1.6   0:01.10 postgres: postgres postgres [local] idle
 1333 postgres  20   0  197m 114m 112m S 20.1  1.6   0:01.08 postgres: postgres postgres [local] SELECT
 1336 postgres  20   0  197m 113m 111m S 19.8  1.6   0:01.10 postgres: postgres postgres [local] SELECT
 1329 postgres  20   0  197m 113m 111m S 19.1  1.6   0:01.21 postgres: postgres postgres [local] idle
 1338 postgres  20   0  197m 114m 112m R 19.1  1.6   0:01.28 postgres: postgres postgres [local] SELECT
 1334 postgres  20   0  197m 114m 112m R 18.8  1.6   0:01.00 postgres: postgres postgres [local] idle
 1214 root      20   0 14900 1348  944 R  0.3  0.0   0:00.41 top -c








Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Mike Bresnahan
Greg Smith greg at 2ndquadrant.com writes:
 Could you try this again with top -c, which will label these 
 postmaster processes usefully, and include the pgbench client itself in 
 what you post?  It's hard to sort out what's going on in these 
 situations without that style of breakdown.

As a further experiment, I ran 8 pgbench processes in parallel. The result is
about the same.

top - 18:34:15 up 17 min,  2 users,  load average: 0.39, 0.40, 0.36
Tasks: 217 total,   8 running, 209 sleeping,   0 stopped,   0 zombie
Cpu(s): 22.2%us,  8.9%sy,  0.0%ni, 68.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
Mem:   7358492k total,  1611148k used,  5747344k free,    11416k buffers
Swap:        0k total,        0k used,        0k free,  1248408k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1506 postgres  20   0  197m 134m 132m S 29.4  1.9   0:09.27 postgres: postgres postgres [local] idle
 1524 postgres  20   0  197m 134m 132m R 29.4  1.9   0:05.13 postgres: postgres postgres [local] idle
 1509 postgres  20   0  197m 134m 132m R 27.1  1.9   0:08.58 postgres: postgres postgres [local] SELECT
 1521 postgres  20   0  197m 134m 132m R 26.4  1.9   0:05.77 postgres: postgres postgres [local] SELECT
 1512 postgres  20   0  197m 134m 132m S 26.1  1.9   0:07.62 postgres: postgres postgres [local] idle
 1520 postgres  20   0  197m 134m 132m R 25.8  1.9   0:05.31 postgres: postgres postgres [local] idle
 1515 postgres  20   0  197m 134m 132m S 23.8  1.9   0:06.94 postgres: postgres postgres [local] SELECT
 1527 postgres  20   0  197m 134m 132m S 21.8  1.9   0:04.46 postgres: postgres postgres [local] SELECT
 1517 postgres  20   0 49808 2012 1544 R  5.3  0.0   0:01.02 pgbench -S -c 1 -T 30
 1507 postgres  20   0 49808 2012 1544 R  4.6  0.0   0:01.70 pgbench -S -c 1 -T 30
 1510 postgres  20   0 49808 2008 1544 S  4.3  0.0   0:01.32 pgbench -S -c 1 -T 30
 1525 postgres  20   0 49808 2012 1544 S  4.3  0.0   0:00.79 pgbench -S -c 1 -T 30
 1516 postgres  20   0 49808 2016 1544 S  4.0  0.0   0:01.00 pgbench -S -c 1 -T 30
 1504 postgres  20   0 49808 2012 1544 R  3.3  0.0   0:01.81 pgbench -S -c 1 -T 30
 1513 postgres  20   0 49808 2016 1544 S  3.0  0.0   0:01.07 pgbench -S -c 1 -T 30
 1522 postgres  20   0 49808 2012 1544 S  3.0  0.0   0:00.86 pgbench -S -c 1 -T 30
 1209 postgres  20   0 63148 1476  476 S  0.3  0.0   0:00.11 postgres: stats collector process








Re: [GENERAL] Memory Usage and OpenBSD

2010-01-27 Thread Tom Lane
Jeff Ross jr...@wykids.org writes:
 Tom Lane wrote:
 Better look at the ulimit values the postmaster is started with;

 OpenBSD makes a _postgresql user on install and it is in the daemon class 
 with 
 the following values:

 daemon:\
  :ignorenologin:\
  :datasize=infinity:\
  :maxproc=infinity:\
  :openfiles-cur=128:\
  :stacksize-cur=8M:\
  :localcipher=blowfish,8:\
  :tc=default:

 The OpenBSD specific readme suggests making a special postgresql login class 
 and bumping openfiles-cur to 768 but I don't see how that helps here.  Would 
 bumping stacksize also help?

Huh --- that looks okay to me.  The stacksize is not what you're hitting
(and 8MB is plenty sufficient anyway, at least unless you like recursive
functions).  I concur with the readme that 128 open files isn't much,
but that's not what you're hitting either.  The only thing that comes to
mind is that on Linux there are several different ulimit values that are
related to maximum per-process data space.  I don't know BSD very well
so I can't say if datasize is the only such value for BSD, but it'd be
worth checking.  (Hmm, on OS X which is at least partly BSDish, I see
-m and -v in addition to -d, so I'm suspicious OpenBSD might have these
concepts too.)

regards, tom lane



[GENERAL] How much work is it to add/drop columns, really?

2010-01-27 Thread A B
Hello there.

I read http://www.postgresql.org/docs/current/static/sql-altertable.html
and find it interesting that

 Adding a column with a non-null default or changing the type of an
existing column will require the entire table to be rewritten. This
might take a significant amount of time for a large table; and it will
temporarily require double the disk space.

So adding a new column WITHOUT any default value is actually quite a cheap
operation then? Some quick tests seem to indicate that.
So if you can live with having null values there until the value is
set (or you let a cron job run and set the value to a desired
default value for one row at a time), then adding columns will not
be a real problem? No serious locking for a long time?
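The batched-backfill idea from the question can be sketched like this (table and column names are hypothetical):

```sql
-- Catalog-only change: no table rewrite, existing rows simply read the
-- new column as NULL.
ALTER TABLE accounts ADD COLUMN status text;

-- Backfill a slice at a time (e.g. from a cron job); repeat until the
-- statement reports UPDATE 0. Each run locks only the rows it touches.
UPDATE accounts SET status = 'active'
WHERE id IN (SELECT id FROM accounts WHERE status IS NULL LIMIT 1000);
```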

And dropping a column seems even quicker

The DROP COLUMN form does not physically remove the column, but
simply makes it invisible to SQL operations. Subsequent insert and
update operations in the table will store a null value for the column.
Thus, dropping a column is quick but it will not immediately reduce
the on-disk size of your table, as the space occupied by the dropped
column is not reclaimed. The space will be reclaimed over time as
existing rows are updated. 

So that is really quick then?

Will autovacuum or other tools try to be clever and optimize, causing a
total rewrite of the table?

Any other problems with adding/dropping columns that I'm unaware of?

Best wishes.



Re: [GENERAL] Problem after installing triggering function

2010-01-27 Thread Yan Cheng Cheok

Thanks and Regards
Yan Cheng CHEOK


--- On Thu, 1/28/10, Tom Lane t...@sss.pgh.pa.us wrote:

 From: Tom Lane t...@sss.pgh.pa.us
 Subject: Re: [GENERAL] Problem after installing triggering function
 To: Yan Cheng Cheok ycch...@yahoo.com
 Cc: pgsql-general@postgresql.org
 Date: Thursday, January 28, 2010, 12:34 AM
 Yan Cheng Cheok ycch...@yahoo.com writes:
  I have a stored procedure execute the following code:
      INSERT INTO unit(fk_lot_id, cycle)
      VALUES(_lotID, _cycle) RETURNING * INTO _unit;
      raise notice 'AFTER INSERT INTO UNIT, _unit.unit_id = %', _unit.unit_id;
 
  unit_id column is an auto-generated primary key. I will always get a
  non-null value.
 
  However, after I install a trigger function and create a table named
  unit_0 inheriting from table unit,
 
  NOTICE:  AFTER INSERT INTO UNIT, _unit.unit_id = NULL
 
  will be printed.
 
 If you installed it as a BEFORE trigger, the problem is here:

You are right. I am installing it as a BEFORE trigger.

CREATE TRIGGER insert_unit_trigger
BEFORE INSERT ON unit
FOR EACH ROW EXECUTE PROCEDURE unit_insert_trigger();


      RETURN NULL;
 
 That's suppressing the INSERT action.
 
 

But... I am not implementing table partition. I want to ensure my parent table 
unit is empty and unit_0 is being filled, but all my queries can be 
performed through the parent table unit.

I am referring to 
http://www.if-not-true-then-false.com/2009/11/howto-create-postgresql-table-partitioning-part-1/
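For reference, the usual shape of that redirecting trigger function, as a sketch assuming unit_0 has the same column layout as unit. Note this is also exactly why the caller's "RETURNING * INTO _unit" comes back NULL: the insert into the parent itself is suppressed.

```sql
CREATE OR REPLACE FUNCTION unit_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Redirect the incoming row into the child table...
    INSERT INTO unit_0 VALUES (NEW.*);
    -- ...and return NULL so the row is never stored in the parent,
    -- keeping "unit" empty.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```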

        
 regards, tom lane
 
 







Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table Partition

2010-01-27 Thread Yan Cheng Cheok

Thanks and Regards
Yan Cheng CHEOK


--- On Thu, 1/28/10, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:

 From: Alban Hertroys dal...@solfertje.student.utwente.nl
 Subject: Re: [GENERAL] Primary Key Increment Doesn't Seem Correct Under Table 
 Partition
 To: Yan Cheng Cheok ycch...@yahoo.com
 Cc: pgsql-general@postgresql.org
 Date: Thursday, January 28, 2010, 2:50 AM
 On 27 Jan 2010, at 4:22, Yan Cheng
 Cheok wrote:
 
  Hello all,
  
  I solve my problem using the following. It seems that
 when inherit from parent table, the parent table's
 constraint is not being carried over to child table.
  
  CREATE OR REPLACE FUNCTION measurement_insert_trigger()
  RETURNS TRIGGER AS
  $BODY$DECLARE
     measurement_table_index bigint;
     measurement_table_name text;
  BEGIN
     -- 20 is just an example here right now. The true value will be 100,000,000
     measurement_table_index = NEW.measurement_id % 20;
     measurement_table_name = 'measurement_' || measurement_table_index;
 
     -- Since measurement_id for parent table is already a bigserial
     -- Do I still need to create index for child's measurement_id?
 
     IF NOT EXISTS(SELECT * FROM information_schema.tables WHERE table_name = measurement_table_name) THEN
         EXECUTE 'CREATE TABLE ' || quote_ident(measurement_table_name) || '
         (
           CONSTRAINT pk_measurement_id_' || measurement_table_index || ' PRIMARY KEY (measurement_id),
           CONSTRAINT fk_unit_id_' || measurement_table_index || ' FOREIGN KEY (fk_unit_id)
               REFERENCES unit (unit_id) MATCH SIMPLE
               ON UPDATE NO ACTION ON DELETE CASCADE
         ) INHERITS (measurement);';
         EXECUTE 'CREATE INDEX ' || quote_ident(measurement_table_name) || '_measurement_id ON ' || quote_ident(measurement_table_name) || '(measurement_id);';
 
 
 I think you should actually add the constraints back in
 there, not just create an index.
 

Thanks. The example I've seen here doesn't use ALTER TABLE

http://www.if-not-true-then-false.com/2009/11/howto-create-postgresql-table-partitioning-part-1/

But I assume both do the same thing.

 EXECUTE 'ALTER TABLE ' || ... ||
 ' ADD PRIMARY KEY (measurement_id),' ||
 ' ADD FOREIGN KEY (fk_unit_id) REFERENCES unit (unit_id)
 MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;';
 
 One thing to note though is that this primary key is not
 guaranteed to be unique across different partitions or in
 the parent table (as the rows aren't actually IN the parent
 table).
 
      END IF;
 
      EXECUTE 'INSERT INTO ' || quote_ident(measurement_table_name) || '(measurement_id, fk_unit_id, v)
          VALUES (' || NEW.measurement_id || ',' || NEW.fk_unit_id || ',' || quote_literal(NEW.v) || ')';
      RETURN NULL;
  END;$BODY$
  LANGUAGE plpgsql;
 
 Alban Hertroys
 
 --
 Screwing up is the best way to attach something to the
 ceiling.
 
 
 
 
 
 







Re: [GENERAL] Problem after installing triggering function

2010-01-27 Thread Yan Cheng Cheok
Sorry. Some correction.

Change

But... I am not implementing table partition

to

But... I am *now* implementing table partition

Thanks and Regards
Yan Cheng CHEOK








Re: [GENERAL] Amazon EC2 CPU Utilization

2010-01-27 Thread Jim Mlodgenski
On Wed, Jan 27, 2010 at 6:37 PM, Mike Bresnahan
mike.bresna...@bestbuy.com wrote:

 Greg Smith greg at 2ndquadrant.com writes:
  Could you try this again with top -c, which will label these
  postmaster processes usefully, and include the pgbench client itself in
  what you post?  It's hard to sort out what's going on in these
  situations without that style of breakdown.

 As a further experiment, I ran 8 pgbench processes in parallel. The
 result is about the same.

 Let's start from the beginning. Have you tuned your postgresql.conf file?
What do you have shared_buffers set to? That would have the biggest effect
on a test like this.
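As a reference point for the shared_buffers question, a common rule of thumb on a dedicated machine with ~7 GB of RAM is around 25% of memory for shared_buffers (the values below are illustrative, not from the thread; on 8.3 the kernel's SHMMAX may also need raising to allow this):

```
# postgresql.conf -- illustrative starting values for a ~7 GB instance
shared_buffers = 1800MB        # ~25% of RAM is a common starting point
effective_cache_size = 4GB     # estimate of what the OS will cache
```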


 top - 18:34:15 up 17 min,  2 users,  load average: 0.39, 0.40, 0.36
 Tasks: 217 total,   8 running, 209 sleeping,   0 stopped,   0 zombie
 Cpu(s): 22.2%us,  8.9%sy,  0.0%ni, 68.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.3%st
 Mem:   7358492k total,  1611148k used,  5747344k free,    11416k buffers
 Swap:        0k total,        0k used,        0k free,  1248408k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  1506 postgres  20   0  197m 134m 132m S 29.4  1.9   0:09.27 postgres: postgres postgres [local] idle
  1524 postgres  20   0  197m 134m 132m R 29.4  1.9   0:05.13 postgres: postgres postgres [local] idle
  1509 postgres  20   0  197m 134m 132m R 27.1  1.9   0:08.58 postgres: postgres postgres [local] SELECT
  1521 postgres  20   0  197m 134m 132m R 26.4  1.9   0:05.77 postgres: postgres postgres [local] SELECT
  1512 postgres  20   0  197m 134m 132m S 26.1  1.9   0:07.62 postgres: postgres postgres [local] idle
  1520 postgres  20   0  197m 134m 132m R 25.8  1.9   0:05.31 postgres: postgres postgres [local] idle
  1515 postgres  20   0  197m 134m 132m S 23.8  1.9   0:06.94 postgres: postgres postgres [local] SELECT
  1527 postgres  20   0  197m 134m 132m S 21.8  1.9   0:04.46 postgres: postgres postgres [local] SELECT
  1517 postgres  20   0 49808 2012 1544 R  5.3  0.0   0:01.02 pgbench -S -c 1 -T 30
  1507 postgres  20   0 49808 2012 1544 R  4.6  0.0   0:01.70 pgbench -S -c 1 -T 30
  1510 postgres  20   0 49808 2008 1544 S  4.3  0.0   0:01.32 pgbench -S -c 1 -T 30
  1525 postgres  20   0 49808 2012 1544 S  4.3  0.0   0:00.79 pgbench -S -c 1 -T 30
  1516 postgres  20   0 49808 2016 1544 S  4.0  0.0   0:01.00 pgbench -S -c 1 -T 30
  1504 postgres  20   0 49808 2012 1544 R  3.3  0.0   0:01.81 pgbench -S -c 1 -T 30
  1513 postgres  20   0 49808 2016 1544 S  3.0  0.0   0:01.07 pgbench -S -c 1 -T 30
  1522 postgres  20   0 49808 2012 1544 S  3.0  0.0   0:00.86 pgbench -S -c 1 -T 30
  1209 postgres  20   0 63148 1476  476 S  0.3  0.0   0:00.11 postgres: stats collector process






-- 
--
Jim Mlodgenski
EnterpriseDB (http://www.enterprisedb.com)


Re: [GENERAL] How much work is it to add/drop columns, really?

2010-01-27 Thread Scott Marlowe
On Wed, Jan 27, 2010 at 5:45 PM, A B gentosa...@gmail.com wrote:
 Hello there.

 I read http://www.postgresql.org/docs/current/static/sql-altertable.html
 and find it interesting that

  Adding a column with a non-null default or changing the type of an
 existing column will require the entire table to be rewritten. This
 might take a significant amount of time for a large table; and it will
 temporarily require double the disk space.

 So adding a new column WITHOUT any default value is actually a quite
 cheap operation then? Some quick tests seem to indicate that.

You are correct.  It's the combination of nullability and no default value
that makes it cheap.  Adding an empty column is cheap.

 So if you can live with having null values there until the value is
 set (or you let a cron job run and set the value to a desired
 default value for one row at a time), then adding columns will not
 be a real problem? No serious locking for a long time?

Exactly.  In fact you can run a job that updates x rows at a time,
run vacuum, then update x rows again to keep bloat down.  As long
as x is about 1/10th or less of the total rows in the table you should
be able to keep it from bloating terribly.
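The add-then-backfill approach described above can be sketched like this (table and column names are hypothetical):

```sql
-- Instant: no table rewrite, existing rows simply read as NULL.
ALTER TABLE big_table ADD COLUMN status text;

-- Backfill in limited batches, vacuuming between rounds to
-- reclaim the dead row versions each UPDATE leaves behind.
UPDATE big_table
   SET status = 'active'
 WHERE id IN (SELECT id
                FROM big_table
               WHERE status IS NULL
               LIMIT 10000);
VACUUM big_table;
-- repeat until the UPDATE reports 0 rows affected
```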

 And dropping a column seems even quicker

 The DROP COLUMN form does not physically remove the column, but
 simply makes it invisible to SQL operations. Subsequent insert and
 update operations in the table will store a null value for the column.
 Thus, dropping a column is quick but it will not immediately reduce
 the on-disk size of your table, as the space occupied by the dropped
 column is not reclaimed. The space will be reclaimed over time as
 existing rows are updated. 

 So that is really quick then?

Ayup.

 Will autovacuum or other tools try to rewrite or be clever and
 optimize, causing a total rewrite of the table?

Nope

 Any other problems with adding/dropping columns that I'm unaware of?

The only thing I can think of is some issues with views on top of
those tables, or maybe other tables that reference it.



Re: [GENERAL] dynamic crosstab

2010-01-27 Thread Scott Marlowe
On Wed, Jan 27, 2010 at 2:14 AM, Pavel Stehule pavel.steh...@gmail.com wrote:

 hmm ... it cannot work :(. You cannot forward a FETCH ALL statement on the
 server side - without programming in C
 
 in this case you need a small application to read the cursor and
 transform it to CSV

If I'm understanding what you're doing, could you write a function to
return a set of record and then run the cursor inside the function?
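That suggestion could look roughly like the sketch below: a set-returning plpgsql function that opens the cursor-returning crosstab function from the thread and fetches it to completion. The function name `do_cross_cursor` and the column list are assumptions for illustration; with SETOF record, callers must supply the actual column definitions:

```sql
-- Sketch: wrap a cursor-returning function in a set-returning one,
-- so clients can read the result with a plain SELECT.
CREATE OR REPLACE FUNCTION crosstab_rows()
RETURNS SETOF record AS $$
DECLARE
    cur refcursor;
    rec record;
BEGIN
    cur := do_cross_cursor();  -- the cursor-returning function (name assumed)
    LOOP
        FETCH cur INTO rec;
        EXIT WHEN NOT FOUND;
        RETURN NEXT rec;       -- stream each fetched row to the caller
    END LOOP;
    CLOSE cur;
END;
$$ LANGUAGE plpgsql;

-- Because the result shape is dynamic, the caller names the columns:
-- SELECT * FROM crosstab_rows() AS t(name text, a numeric, b numeric);
```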



Re: [GENERAL] query a table from one database to another

2010-01-27 Thread AI Rumman
I am getting the error:


LINE 1: select dblink_connect('dbname=postgres');
               ^
HINT:  No function matches the given name and argument types. You might need
to add explicit type casts.


Please tell me why. I am using PostgreSQL 8.3.


On Wed, Jan 27, 2010 at 10:45 PM, Joshua Tolley eggyk...@gmail.com wrote:

  On Wed, Jan 27, 2010 at 02:52:58PM +0600, AI Rumman wrote:
 I am using Postgresql 8.3
 
 I have two databases:
 db1
 db2
 
 db1 holds a table tab1.
 
 Is it possible to get the value of the above tab1 in db2 database?
 
 Regards

 You'll need something like the dblink contrib module to get one database to
 talk to another.

 --
 Joshua Tolley / eggyknap
 End Point Corporation
 http://www.endpoint.com
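That "no function matches" error usually means the dblink contrib module has not been loaded into the database yet. On 8.3 there is no CREATE EXTENSION; the module's SQL script has to be run by hand. A sketch, with an illustrative script path and hypothetical column names for tab1:

```sql
-- In psql, connected to db2, load the contrib module first
-- (the path varies by installation):
-- \i /usr/share/postgresql/contrib/dblink.sql

-- Then query db1's table from db2; the alias column list
-- must match tab1's actual columns (id/name assumed here):
SELECT *
  FROM dblink('dbname=db1',
              'SELECT id, name FROM tab1')
       AS t(id integer, name text);
```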





[GENERAL] Function nesting issue

2010-01-27 Thread 张海峰
I have 2 functions, named a and b, both returning a result set (cursor)
and an integer.
a calls b

a:
CREATE OR REPLACE FUNCTION public.t_outer (out o_rs
pg_catalog.refcursor, out o_i integer) RETURNS record AS
...
select t_inner(o_rs, o_i);
...

b:
CREATE OR REPLACE FUNCTION public.t_inner (out o_rs
pg_catalog.refcursor, out o_i integer) RETURNS record AS
...

Compilation is OK, but when I call a, it says:
ERROR: function b(refcursor, integer) does not exist
No function matches the given name and argument types. You might need
to add explicit type casts.

So, my question is whether PostgreSQL supports this type of nesting?
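The error hints that the OUT parameters are being passed as call arguments; a function declared with only OUT parameters takes no arguments at the call site. A hedged sketch of a calling pattern that does work in plpgsql (assuming t_inner is defined as shown above):

```sql
-- Sketch: call a function whose outputs are OUT parameters.
-- t_inner() is invoked with no arguments; its two output columns
-- are captured into the caller's own OUT variables.
CREATE OR REPLACE FUNCTION public.t_outer(out o_rs refcursor, out o_i integer)
RETURNS record AS $$
BEGIN
    SELECT * INTO o_rs, o_i FROM t_inner();
END;
$$ LANGUAGE plpgsql;
```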

Thanks. If this is an old question, please forgive me.

regards

-- 
Alferd.



Re: [GENERAL] Memory Usage and OpenBSD

2010-01-27 Thread Scott Marlowe
On Wed, Jan 27, 2010 at 4:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 related to maximum per-process data space.  I don't know BSD very well
 so I can't say if datasize is the only such value for BSD, but it'd be
 worth checking.  (Hmm, on OS X which is at least partly BSDish, I see
 -m and -v in addition to -d, so I'm suspicious OpenBSD might have these
 concepts too.)

Isn't the usual advice here to log the ulimit setting from the pg
startup script, so you can see what it really is for the user at the
moment they're starting up the db? (I think some guy named Tom
mentioned doing that before.)
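That advice could be sketched as a few lines added near the top of the startup script (the log path and the pg_ctl command below are illustrative, not from the thread):

```shell
#!/bin/sh
# Sketch: snapshot the effective limits of the user who actually
# starts postgres, immediately before launching it.
LOG=/tmp/pg_ulimit.log
{
  echo "ulimit snapshot at $(date)"
  ulimit -a    # records datasize, openfiles, etc. for this user/session
} >> "$LOG"
# exec /usr/local/bin/pg_ctl start -D /var/postgresql/data  # actual start
```

Because the snapshot is taken inside the same shell that execs the server, it reflects exactly the limits the postmaster will inherit.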
