Re: [GENERAL] Using sequences in SQL text files

2008-02-20 Thread HHB

Thank you all for your help.
Let me summarize what I'm trying to do:
I have an empty database that I want to populate with data.
I created SQL text files: categories.sql, books.sql.
As I'm a Java guy, I use the Apache Ant SQL task to run those SQL text files.
So, I run the categories.sql file first, then books.sql.
It seems to me that Brian's solution assumes that all the SQL insert
statements are in one file, which isn't my case (actually, I have many SQL
files that I want to run, and merging them into one isn't practical).
So, is it possible to do the following:

categories.sql
-- don't insert an id here
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...); 
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...); 

books.sql
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...);
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...); 

Thank you all for your time.



brian ally wrote:
 
 HHB wrote:
 Hi.
 I have a sequence for each table in my database.
 In order to populate some data in the database, I created some SQL text
 files.
 ---
 insert into categories values (id value from sequence, '..', '...');
 insert into books values (id value from sequence, '..', '...', '..', fk to
 category id);
 ---
 Is it possible to do so?
 How to use a sequence in such text files?
 Thanks.
 
 I think it depends. If this is a new database then you can leave off the 
 SERIAL id values and let the sequence do its thing. To insert the 
 foreign key into books you can use currval() like so:
 
 -- don't insert an id here
 INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...);
 
 --
 INSERT INTO books (category_id, '..', ...) VALUES 
 (CAST(currval('categories_id_seq') AS INT), '..', ...);
 INSERT INTO books (category_id, '..', ...) VALUES 
 (CAST(currval('categories_id_seq') AS INT), '..', ...);
 
 
 INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...);
 INSERT INTO books (category_id, '..', ...) VALUES 
 (CAST(currval('categories_id_seq') AS INT), '..', ...);
 ...
 
 If the data is from a dump (and so the sequence IDs--and foreign key 
 relations--already exist) you'll need to use setval() afterwards to 
 reset where each sequence should begin.
 
 After all of your inserts (this time with the existing IDs):
 
 SELECT setval('books_id_seq', max(id)) FROM books;
 SELECT setval('categories_id_seq', max(id)) FROM categories;
 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match
 
 

-- 
View this message in context: 
http://www.nabble.com/Using-sequences-in-SQL-text-files-tp15561422p15584090.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] uninstalling tsearch2 error: gin_tsvector_ops does not exist for access method gin

2008-02-20 Thread Magnus Hagander
On Tue, Feb 19, 2008 at 07:58:20PM -0500, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  when i try to uninstall tsearch2 i get this error,
 
 Hmm, maybe you originally put tsearch2 into some other schema than
 public?  If so, try setting search_path to point to that schema
 before you run the uninstall script.

Or it could be the opposite situation. The tsearch2 install script in 8.2
is hardcoded to always put things in the public schema even if you try to
put them elsewhere (you have to edit the script to get it where you want),
but the uninstall script is not hardcoded. So if there's a different
default schema, tsearch2 will be installed in public but the uninstall
script will attempt to delete it from the other schema.

//Magnus

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Using sequences in SQL text files

2008-02-20 Thread Thomas




On 20 Feb. 08, at 08:57, HHB wrote:


Thank you all for your help.
Let me summarize what I'm trying to do:
I have an empty database that I want to populate with data.
I created SQL text files: categories.sql, books.sql.
As I'm a Java guy, I use the Apache Ant SQL task to run those SQL text
files.

So, I run the categories.sql file first, then books.sql.
It seems to me that Brian's solution assumes that all the SQL insert
statements are in one file, which isn't my case (actually, I have many SQL
files that I want to run, and merging them into one isn't practical).
So, is it possible to do the following:

categories.sql
-- don't insert an id here
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...);
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...);

books.sql
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...);
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...);

Thank you all for your time.



With this, all books will be created in only one category (the last one
created), and I think this is not what you want.


If you really want to separate books insertion from categories
insertion, your INSERT statements for books must be smarter and look
like:


INSERT INTO books (category_id, )
SELECT category_id, 
FROM categories WHERE category_name = '';

ex:
INSERT INTO books (category_id, name)
SELECT category_id, 'Lord of the rings'
FROM categories WHERE category_name = 'Fantasy';

(assuming you have a 'Fantasy' category)

Hope this helps.

Tom


[GENERAL] longest prefix match

2008-02-20 Thread Dragan Zubac

Hello

Anybody got any ideas/experiences/links for a 'longest prefix match'
solution in PostgreSQL?
Basically, put some telephone prefixes in some kind of trie, and be able
to perform fast lookups?


Sincerely

Dragan Zubac



Re: [GENERAL] Regex query not using index

2008-02-20 Thread Postgres User
a final question: why does this syntax do a seq scan + filter:

select * from tablea where fielda = fielda   -or-  select * from
tablea where fielda in (fielda)

while this syntax results in no filter, just a seq scan:

select * from tablea where 1 = 1

It seems that both WHERE clauses should be ignored by the optimizer,
or am I missing something?

On Feb 19, 2008 9:45 PM, Tom Lane [EMAIL PROTECTED] wrote:
 Postgres User [EMAIL PROTECTED] writes:
  Yes that works, but the whole point of the exercise is replace many OR
  statements with 1 regex expression.  So it's not what I'm looking for.

 Unfortunately, Postgres is not as intelligent as you are.  There is
 no mechanism to rewrite a multi-branch regex condition into multiple
 indexscans.  I recommend going back to the OR's.

 regards, tom lane


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] SPI-functions and transaction control

2008-02-20 Thread Mikko Partio
On Feb 19, 2008 12:12 AM, Alvaro Herrera [EMAIL PROTECTED] wrote:

 Mikko Partio escribió:

  Now, I was wondering if a c function would be faster, and with the help
 of
  the manual I have written a function that can insert tuples from one
 table
  to another. As the manual states (
  http://www.postgresql.org/docs/8.3/interactive/spi.html), there is no
 way to
  catch the constraint violation error with SPI though. The manual still
  mentions that there is an undocumented way of doing this, has anybody
 ever
  done this? I was looking the 'exception' code at
  src/pl/plpgsql/src/pl_exec.c but it's quite frightening since I really
 don't
  have that much experience in c :)

 I think you should try DefineSavepoint, RollbackToSavepoint and
 ReleaseSavepoint.  Take a close look at plpgsql's callers for those
 functions, because I don't think it's all that straightforward.  OTOH
 you also need some PG_TRY blocks.  There are many fine details here,
 perhaps too many if you're not battered enough in C.

 I think I would suggest trying to do it purely with SQL, temp tables,
 etc.  If you can afford a table lock, it could be a lot easier and
 faster than setting a savepoint per tuple.



The import is done with a temp table, the bottleneck being the copying of
rows from the temp table to the actual table.

I managed to solve the issue with PG_TRY blocks (idea copied from
http://archives.postgresql.org/pgsql-hackers/2006-02/msg00836.php).  As you
said, I'm not battered with c so I guess not all the cornercases are handled
but for now it seems to work ok.
The c-version of the function is in fact 5 times faster than the original
plpgsql version.

Regards

Mikko


Re: [GENERAL] Regex query not using index

2008-02-20 Thread Martijn van Oosterhout
On Wed, Feb 20, 2008 at 12:56:54AM -0800, Postgres User wrote:
 a final question: why does this syntax do a seq scan + filter:
 
 select * from tablea where fielda = fielda   -or-  select * from
 tablea where fielda in (fielda)
 
 while this syntax results in no filter, seq scan only
 
 select * from tablea where 1 = 1
 
 it seems that both where clauses should be ignored by the optimizer-
 or am i missing something

WHERE fielda = fielda will only match non-null rows...
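To see why, note that in SQL `NULL = NULL` evaluates to NULL (unknown), not true, so a row whose fielda is NULL fails the test and the planner cannot simply discard the clause. A quick sketch (the table name is hypothetical):

```sql
-- NULL = NULL yields NULL, which WHERE treats like false:
SELECT NULL = NULL;                          -- returns NULL, not true

-- So this filter silently drops rows where fielda IS NULL:
SELECT * FROM tablea WHERE fielda = fielda;

-- which is equivalent to:
SELECT * FROM tablea WHERE fielda IS NOT NULL;
```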

Have a nice day,
-- 
Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
 Those who make peaceful revolution impossible will make violent revolution 
 inevitable.
  -- John F Kennedy


signature.asc
Description: Digital signature


Re: [GENERAL] Regex query not using index

2008-02-20 Thread Tino Wildenhain

Postgres User wrote:

I'm trying to allow the client to pass a varchar param into my
function, and want to avoid any parsing of the parameter inside the
function, or code to build a SQL string.

If the function can use this code, it will be compiled and optimized
(unlike a dynamic SQL string):

select * from mytable where fielda ~ p_param


No, you should never let users specify a raw regex: at best they can
bog down your server. A regex is a state machine, and a pathological
pattern can run effectively forever.

Maybe we can see the overall picture of your query?

Regards
Tino



Re: [GENERAL] longest prefix match

2008-02-20 Thread Jorge Godoy
On Wednesday 20 February 2008 05:55:07, Dragan Zubac wrote:

 Anybody got any ideas/experiences/links for a 'longest prefix match'
 solution in PostgreSQL ?
 Basically, put some telephone prefixes in some kind of trie, and be able
 to perform fast lookups ?

Prefix or suffix?

For a prefix you can use SELECT number FROM table WHERE number LIKE '123%'.

For a suffix you move the % to the beginning of the string, but then you lose 
the ability to use indexes.  (Unfortunately, using suffixes is really 
interesting for caller IDs, since you don't always receive the area code, 
country code, etc.)
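For the LIKE-on-a-prefix approach to use an index in a non-C locale, the index needs pattern operator classes. A minimal sketch (table and column names are hypothetical, not from the thread):

```sql
-- Hypothetical phone-number table
CREATE TABLE numbers (number varchar NOT NULL);

-- In a non-C locale a plain btree index cannot serve LIKE '123%';
-- varchar_pattern_ops makes anchored prefix matches indexable:
CREATE INDEX numbers_number_prefix_idx
    ON numbers (number varchar_pattern_ops);

-- Uses the index for the anchored prefix:
SELECT number FROM numbers WHERE number LIKE '123%';
```

The longest-prefix match itself can then be approximated by probing with successively shorter prefixes, or with ORDER BY length of the matching prefix against a prefix table.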

-- 
Jorge Godoy  [EMAIL PROTECTED]


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL]

2008-02-20 Thread Jorge Godoy
On Wednesday 20 February 2008 03:56:37, Scott Marlowe wrote:
 On Feb 19, 2008 11:39 PM, Tom Lane [EMAIL PROTECTED] wrote:

 http://www.postgresql.org/docs/8.3/static/plpgsql-cursors.html
 section 38.7.3.5.  But then I go to the index page for plpgsql at
 http://www.postgresql.org/docs/8.3/static/plpgsql.html and there are
 only entries for 38.7.5.1 through 3...

 Is there some problem with the doc rendering going on here?  Pic of
 what I'm seeing attached.

I believe that, to save space, just two levels of the index are being shown.  
Maybe the titles should allow guessing the contents better...  Or maybe some 
items should be promoted to an upper level :-)

It is very uncommon for a summary to go all the way down to the deepest level 
of an index.


-- 
Jorge Godoy  [EMAIL PROTECTED]




Re: [GENERAL] longest prefix match

2008-02-20 Thread Oleg Bartunov

On Wed, 20 Feb 2008, Jorge Godoy wrote:


On Wednesday 20 February 2008 05:55:07, Dragan Zubac wrote:


Anybody got any ideas/experiences/links for a 'longest prefix match'
solution in PostgreSQL ?
Basically, put some telephone prefixes in some kind of trie, and be able
to perform fast lookups ?


Prefix or suffix?

For prefix you can use SELECT number FROM table WHERE number LIKE '123%'.

For suffix you move the % to the beginning of the string, but then you lose
the ability to use indexes.  (Unfortunately, using suffixes is really
interesting for caller IDs since you don't always receive area code, country
code, etc.)


You can maintain an additional index on the terms stored backwards.
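That idea can be sketched with an expression index (a sketch only; the table name is hypothetical, and reverse() is not built in to 8.x, so it is assumed here as a small user-defined function):

```sql
-- Index the reversed string so suffix lookups become prefix lookups.
-- reverse() is assumed to exist (e.g. a one-line plperl function in 8.x):
CREATE INDEX numbers_number_rev_idx
    ON numbers (reverse(number) text_pattern_ops);

-- A suffix search for numbers ending in '6783' then becomes an
-- anchored, indexable prefix search on the reversed value:
SELECT number FROM numbers
WHERE reverse(number) LIKE reverse('6783') || '%';
```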







Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Regex query not using index

2008-02-20 Thread Postgres User
Tino,

My users are developers and the goal was to accept a simple
comma-delimited list of string values as a function's input parameter.
The function would then parse this input param into a valid regex
expression.

I was trying to write a function that lets me avoid using EXECUTE on a
string and instead write in-line SQL, with all the benefits of
pre-compilation and optimization.

Regex offers such a technique -- IF Postgres could take a regex that
represents a set of logical ORs and do an index scan (my rule is to
avoid seq scans).

An example of regex that allows you to use in-line SQL with a
condition equivalent to many OR conditions when using basic comparison
operators:

select * from table1
where name ~ '^Smith$|^Jones$';

And this works very well -- except for the seq scan instead of an index scan.



On Feb 20, 2008 2:31 AM, Tino Wildenhain [EMAIL PROTECTED] wrote:
 Postgres User wrote:
  im trying to allow the client to pass a varchar param into my
  function, and want to avoid any parsing of the parameter inside the
  function, or code to build a sql string.
 
  if the function can use this code, it will be compiled and optimized
  (unlike a dynamic sql stirng)
 
  select * from mytable where fielda ~ p_param

 No, you should never let users specify a raw regex: at best they can
 bog down your server. A regex is a state machine, and a pathological
 pattern can run effectively forever.

 Maybe we can see the overall picture of your query?

 Regards
 Tino




[GENERAL] is a unique key on null field bad?

2008-02-20 Thread Geoffrey
So, we are trying to track down some problems we're having with an 
implementation of slony on our database.  I've posted to the slony list 
about this issue, but I wanted to get a more generic response from the 
perspective of postgresql.


Is it a 'bad thing' to have a unique key on a field that is oftentimes 
null?  This application has been running along just fine for a couple of 
years now, but when we try to implement a slony replication solution, 
this one table consistently has inconsistent data between the primary 
node and the slave.


The problem we are having with slony seems to be related to a table that 
has just such a key, so we are trying to figure out if this is causing 
the problem.


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] is a unique key on null field bad?

2008-02-20 Thread Peter Childs
On 20/02/2008, Geoffrey [EMAIL PROTECTED] wrote:

 So, we are trying to track down some problems we're having with an
 implementation of slony on our database.  I've posted to the slony list
 about this issue, but I wanted to get a more generic response from the
 perspective of postgresql.

 Is it a 'bad thing' to have a unique key on a field that is often times
 null?  This application has been running along just fine for a couple of
 years now, but when we try to implement a slony replication solution,
 this one table consistently has inconsistent data between the primary
 node and the slave.

 The problem we are having with slony seems to be related to a table that
 has just such a key, so we are trying to figure out if this is causing
 the problem.


It's not a problem as such, but the key will not be exactly unique, as there
could be multiple records with null values in that column. So it can't be the
primary key (hence why Slony has a problem).

However, if you want to ensure that the field is either unique or null (i.e.
not known), then this is a good way of doing it; for example, with car number
plates, where the details are not yet known but must be unique once they are
known...
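The behavior can be seen in a few lines (a sketch with a hypothetical table, not the poster's schema):

```sql
CREATE TABLE plates (
    id    serial PRIMARY KEY,
    plate varchar UNIQUE          -- unique when known, may be NULL
);

-- Several NULLs coexist: NULL is never equal to NULL,
-- so the unique constraint does not reject them:
INSERT INTO plates (plate) VALUES (NULL);      -- ok
INSERT INTO plates (plate) VALUES (NULL);      -- ok again

INSERT INTO plates (plate) VALUES ('ABC-123'); -- ok
INSERT INTO plates (plate) VALUES ('ABC-123'); -- fails: duplicate key
```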

Regards

Peter.


Re: [GENERAL] dynamic crosstab

2008-02-20 Thread Balázs Klein

I always hope that somebody might have something similar but
 generic - e.g. create those columns automatically and just treat them all
 as text.

I came up with this amateurish one based on
http://www.ledscripts.com/tech/article/view/5.html.
Maybe someone can use it. It takes:
- a select statement
- a name for the resulting view
- the column name of the id
- the column name of the attribute
- the column name of the value
- the aggregate function to use

It recreates the view of the given name as a crosstab of the SQL specified.

CREATE OR REPLACE FUNCTION public.create_crosstab_view(
    eavsql_inarg varchar, resview varchar, rowid varchar,
    colid varchar, val varchar, agr varchar) RETURNS pg_catalog.void AS
$body$
DECLARE
  casesql varchar;
  dynsql varchar;
  r record;
BEGIN
  dynsql = '';

  -- drop the view if it already exists
  FOR r IN
    SELECT * FROM pg_views WHERE lower(viewname) = lower(resview)
  LOOP
    EXECUTE 'DROP VIEW ' || resview;
  END LOOP;

  -- build one aggregated CASE column per distinct attribute value;
  -- quote_literal/quote_ident guard against odd attribute values
  casesql = 'SELECT DISTINCT ' || colid || ' AS v FROM ('
            || eavsql_inarg || ') eav ORDER BY ' || colid;
  FOR r IN EXECUTE casesql LOOP
    dynsql = dynsql || ', ' || agr || '(CASE WHEN ' || colid || ' = '
             || quote_literal(r.v) || ' THEN ' || val
             || ' ELSE NULL END) AS ' || quote_ident(agr || '_' || r.v);
  END LOOP;

  dynsql = 'CREATE VIEW ' || resview || ' AS SELECT ' || rowid || dynsql
           || ' FROM (' || eavsql_inarg || ') eav GROUP BY ' || rowid;
  EXECUTE dynsql;
END
$body$
LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;
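A usage sketch (the table and data here are hypothetical, not from the post; integer attribute codes are used because the function interpolates attribute values directly into the generated CASE expression):

```sql
-- EAV-style source data with integer attribute codes
CREATE TABLE eav (entity int, attr int, value int);
INSERT INTO eav VALUES (1, 10, 180);
INSERT INTO eav VALUES (1, 20, 75);
INSERT INTO eav VALUES (2, 10, 165);

-- (Re)create a view with one <agr>_<attr> column per distinct attr:
SELECT create_crosstab_view(
    'SELECT entity, attr, value FROM eav',   -- source query
    'eav_crosstab',                          -- view to (re)create
    'entity', 'attr', 'value', 'max');

SELECT * FROM eav_crosstab;   -- columns: entity, max_10, max_20
```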




Re: [GENERAL] is a unique key on null field bad?

2008-02-20 Thread Geoffrey

Peter Childs wrote:

On 20/02/2008, Geoffrey [EMAIL PROTECTED] wrote:

So, we are trying to track down some problems we're having with an
implementation of slony on our database.  I've posted to the slony list
about this issue, but I wanted to get a more generic response from the
perspective of postgresql.

Is it a 'bad thing' to have a unique key on a field that is often times
null?  This application has been running along just fine for a couple of
years now, but when we try to implement a slony replication solution,
this one table consistently has inconsistent data between the primary
node and the slave.

The problem we are having with slony seems to be related to a table that
has just such a key, so we are trying to figure out if this is causing
the problem.



It's not a problem as such, but the key will not be exactly unique, as there
could be multiple records with null values in that column. So it can't be the
primary key (hence why Slony has a problem).


We aren't using this as the primary key, so would this still pose a 
problem for slony?


(indexes on this table)

Indexes:
tract_pkey primary key, btree (recid)
tract_order_num_key unique, btree (order_num)
tract_assigned btree (assigned)
tract_code btree (code)
tract_comments btree (comments)
tract_compound_1 btree (code, old_order_num)
tract_date_avail btree (date_avail)
tract_dest_state btree (dest_state)
tract_dest_zone btree (dest_zone)
tract_driver btree (driver)
tract_orig_state btree (orig_state)
tract_orig_zone btree (orig_zone)
tract_prebooked btree (prebooked)
tract_tractor_num btree (tractor_num)
tract_trailer_num btree (trailer_num)


However, if you want to ensure that the field is either unique or null (i.e.
not known), then this is a good way of doing it; for example, with car number
plates, where the details are not yet known but must be unique once they are
known...


--
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
 - Benjamin Franklin



Re: [GENERAL] Regex query not using index

2008-02-20 Thread Tom Lane
Postgres User [EMAIL PROTECTED] writes:
 My users are developers and the goal was to accept a simple
 comma-delimited list of string values as a function's input parameter.
  The function would then parse this input param into a valid regex
 expression.

Why are you fixated on this being a regex?  If you aren't actually
trying to expose regex capabilities to the users, you'll just be having
to suppress a bunch of strange behaviors for special characters.

ISTM that the best solution is to use an array-of-text parameter,
along the lines of

where name = any (array['Smith', 'Jones', ...])

For what you're doing, you'd not actually want the array[] syntax,
it would look more like

where name = any ('{Smith,Jones}'::text[])

This should optimize into an indexscan in 8.2 or later.
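A sketch of a function built around that suggestion (the function and table names here are hypothetical, not from the thread):

```sql
-- Hypothetical lookup wrapped in a function; a text[] parameter
-- replaces the comma-delimited string + regex approach:
CREATE OR REPLACE FUNCTION find_people(p_names text[])
RETURNS SETOF table1 AS $$
  SELECT * FROM table1
  WHERE name = ANY (p_names);   -- can use the index on name (8.2+)
$$ LANGUAGE sql STABLE;

-- Call with an array instead of 'Smith,Jones':
SELECT * FROM find_people(ARRAY['Smith', 'Jones']);
SELECT * FROM find_people('{Smith,Jones}'::text[]);
```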

regards, tom lane



Re: [GENERAL] Order of SUBSTR and UPPER in statement

2008-02-20 Thread Carlo Stonebanks


Hermann Muster [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]

Hi,

I encountered something I can't really explain. I use the following 
statement in my application:


COALESCE(UPPER(SUBSTR(Y.Firma,1,7)),'')

This returns ERROR:  syntax error at end of input


However, using the following statement is fine:

COALESCE(SUBSTR(UPPER(X.Firma), 1, 7), '')


The fieldtype of Firma is character varying.

The only difference is the order of UPPER and SUBSTR. Is it possible that 
this changed during some PostgreSQL version update? By the way, right now 
I'm using 8.2.


Regards,
Hermann 





[GENERAL] select...into non-temp table raises 'duplicate key ... pg_type_typname_nsp_index'

2008-02-20 Thread Carlo Stonebanks
When performing a SELECT ... INTO ..., an app crashed out with the error 
'duplicate key violates unique constraint pg_type_typname_nsp_index'.

I looked in the mailing lists and I see this error associated with TEMPORARY 
tables, but the into-table is not a temp table. A previous 
drop_table_if_exists function makes sure that no other table of the same 
name exists (and I imagine I would otherwise get a 'table already exists' 
error).


It's the first time I've seen this error in our PG 8.2.4 Win2K X64 
environment.


Has this issue been resolved for the temp tables, and is my problem related?

TIA

Carlo






Re: [GENERAL] Regex query not using index

2008-02-20 Thread Postgres User
Tom,

I was looking for another approach but didn't come across that array
syntax in my searches (perhaps because it's newer).  Thanks for the
solution.

Now to end my fixation, one last item.  What about the case of a null
or empty param value: is there a way to write a condition that
Postgres will ignore when processing the query?

This syntax results in a seq scan:   WHERE fielda = COALESCE(param, fielda)
because it applies only to non-nulls.

Is there another way to write this, perhaps using your array syntax on
an empty array?  Basically, I'd like PG to ignore the condition just as it
ignores   WHERE 1 = 1


On Wed, Feb 20, 2008 at 8:31 AM, Tom Lane [EMAIL PROTECTED] wrote:
 Postgres User [EMAIL PROTECTED] writes:

  My users are developers and the goal was to accept a simple
   comma-delimited list of string values as a function's input parameter.
The function would then parse this input param into a valid regex
   expression.

  Why are you fixated on this being a regex?  If you aren't actually
  trying to expose regex capabilities to the users, you'll just be having
  to suppress a bunch of strange behaviors for special characters.

  ISTM that the best solution is to use an array-of-text parameter,
  along the lines of

 where name = any (array['Smith', 'Jones', ...])

  For what you're doing, you'd not actually want the array[] syntax,
  it would look more like

 where name = any ('{Smith,Jones}'::text[])

  This should optimize into an indexscan in 8.2 or later.

 regards, tom lane




[GENERAL] pgplsql and arrays

2008-02-20 Thread Dan Str
Hi List !

I have a problem with defining an array in pl/pgsql using the type of an
already defined column in a table.
Is it possible to define an array ( v_var1 ) as in my example below?

If it is not possible now, it would be really nice to have in a future
release; maybe something for the 8.4 wishlist?

create table test1
(
test_id int not null,
amount int not null,
constraint pk_test_id primary key (test_id)
);

create or replace function
test1_func( t_id test1.test_id%TYPE )
returns int as $$
declare
v_var1 test1.amount%TYPE[];
-- v_var1 int[];
v_var2 test1.amount%TYPE;
begin
v_var1[1] := 1;
v_var2 := 1;
return 1;
end;
$$ language plpgsql;
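As a workaround sketch (not from this thread): `%TYPE[]` is not accepted in pl/pgsql declarations, but a scalar can copy the column's type while the array names its element type explicitly:

```sql
-- Workaround: %TYPE for the scalar, an explicit element type for the array
create or replace function
test1_func( t_id test1.test_id%TYPE )
returns int as $$
declare
    v_elem test1.amount%TYPE;   -- copies the column's type
    v_var1 int[];               -- array must name the type explicitly
begin
    v_elem := 1;
    v_var1[1] := v_elem;
    return v_var1[1];
end;
$$ language plpgsql;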


Best Regards
DS


[GENERAL] Vacuous errors in pg_dump ... | pg_restore pipeline

2008-02-20 Thread Kynn Jones
Hi.  I've written a Unix shell (zsh) script to streamline the process of
duplicating a database.  At the heart of this script I have the following
pipeline:

  pg_dump -U $OWNER -Fc $FROM | pg_restore -U $OWNER -d $TO

As far as the shell is concerned, this pipeline fails, due to three errors
emitted by pg_restore.  All these errors are vacuous, as far as I can tell.
 Following are the excerpts from the dump responsible for the errors, along
with the text of the errors:

COMMENT SCHEMA public postgres;
ERROR:  must be owner of schema public

CREATE PROCEDURAL LANGUAGE plpgsql;
ERROR:  must be superuser to create procedural language

CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler
AS '$libdir/plpgsql', 'plpgsql_call_handler'
LANGUAGE c;
ERROR:  permission denied for language c

In all cases, the culprit code is generated automatically by pg_dump,
irrespective of the database being dumped.  Therefore the above pipeline
will fail no matter what.  (If I get rid of the -U $OWNER flag in the
pg_restore command, I get the same errors plus several new ones triggered by
ALTER ... OWNER TO ... statements.)

Granted, with the form of pg_restore I'm using above, these three errors do
not prevent the database from being restored.  The errors are simply skipped
over.

But these vacuous errors make it much more difficult for the script to
determine whether the pg_restore command succeeded.  This seems to me a
totally gratuitous difficulty.  Is there any way around it?

(At the moment, as a stopgap, I have the kluge

  pg_dump -U $OWNER -Fc $FROM | ( pg_restore -U $OWNER -d $TO || true )

...to ignore all the pg_restore errors, but this is unacceptable.)

Alternatively, is there a better way to streamline the duplication of a
database?

BTW, while on this subject: is there a way to modify the pipeline above to
minimize the number of warnings and notices generated during pg_restore?
 I'm aware of \set VERBOSITY 'terse', but I don't see how to inject this
setting into the restore.

TIA!

Kynn


[GENERAL] Suggestions for schema design?

2008-02-20 Thread cluster

I really need some input:

In a system that handles money transfers I have a table to store each 
money transfer. A number of different events can result in a money 
transfer but the events are so different that each event type is stored 
in its own table.


So we have a schema of the form:

  TRANSFERS (TRANSFER_ID, col2, col3, col4, ...)
  EVENT_TYPE_A (EVENT_A_ID, TRANSFER_ID, col3, col4, ...)
  EVENT_TYPE_B (EVENT_B_ID, TRANSFER_ID, col3, col4, ...)
  EVENT_TYPE_C (EVENT_C_ID, TRANSFER_ID, col3, col4, ...)
  ...

With this design it is easy to map a specific event to the corresponding 
transfer (if any). However, if I want to create a list of transfers and 
for each transfer also give the corresponding event ID (if any) the only 
way is to left join *all* the EVENT-tables with the TRANSFERS table. 
This is slow.


Can I modify the design to make a more direct link between transfers and 
events?


Of course I could add EVENT_A_ID, EVENT_B_ID, etc. to the TRANSFERS 
table but I think this would soil the otherwise clean TRANSFERS table. 
What do you think?


One could also introduce a third table:
   TRANSFER_EVENTS(TRANSFER_ID, EVENT_A_ID, EVENT_B_ID, ...)
which only requires a single join operation to create the list, but adds an 
extra INSERT statement to the complexity.


Any ideas? What would you do?

Thanks!



Re: [GENERAL] Vacuous errors in pg_dump ... | pg_restore pipeline

2008-02-20 Thread Douglas McNaught
On 2/20/08, Kynn Jones [EMAIL PROTECTED] wrote:

 Alternatively, is there a better way to streamline the duplication of a
 database?

How about:

CREATE DATABASE newdb TEMPLATE olddb;

(don't remember the exact syntax, but it'll be in the docs for CREATE
DATABASE).  I think the 'createdb' program also has an option for
choosing the template.



Re: [GENERAL] Using sequences in SQL text files

2008-02-20 Thread intelforum

HHB wrote:

Thank you all for your help.
Let me summarize what I'm trying to do:
I have an empty database that I want to populate with data.
I created SQL text files: categories.sql, books.sql.
As I'm a Java guy, I use the Apache Ant SQL task to run those SQL text files.
So, I run the categories.sql file first, then books.sql.
It seems to me that Brian's solution assumes that all the SQL insert
statements are in one file, which isn't my case (actually, I have many SQL
files that I want to run, and merging them into one isn't practical).
So, is it possible to do the following:

categories.sql
-- don't insert an id here
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...); 
INSERT INTO categories ('..', '..' ...) VALUES ('..', '..', ...); 


books.sql
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...);
INSERT INTO books (category_id, '..', ...) VALUES
(CAST(currval('categories_id_seq') AS INT), '..', ...); 


Thank you all for your time.




No, because you'll be retrieving the same (last) value for 
categories_id_seq every time. If your library is limited to a single 
category, you might be good to go, but anyway ...
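The failure mode is the same in any engine that exposes a session-wide "last id" function; a small sketch using Python's sqlite3, with last_insert_rowid() standing in for currval() (illustrative only, not PostgreSQL itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE categories "
             "(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

# categories.sql equivalent: three inserts, assigned ids 1, 2, 3
for name in ("fiction", "science", "politics"):
    conn.execute("INSERT INTO categories (name) VALUES (?)", (name,))

# books.sql equivalent, run afterwards: every lookup sees only the LAST id,
# so all books would end up pointing at the final category
for _ in range(3):
    print(conn.execute("SELECT last_insert_rowid()").fetchone()[0])  # 3 each time
```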



If you're going to be using INSERT, rather than COPY, you might as well
follow each INSERT INTO category ... with the books for that 
category. You say you want to keep things separated a bit (and I can 
relate to that--haha) but you could take that a step further, reading 
the categories line by line and running the appropriate books file for 
each. Or something like that.


Or you could use Ant to create a bunch of COPY blocks after each INSERT
statement. (I guess--I'm sure it can do that)

Or you can use the following mechanism to store the category IDs as
session variables.

CREATE FUNCTION get_id(name text) RETURNS integer
AS $_X$
return $_SHARED{$_[0]};
$_X$
LANGUAGE plperl STABLE;  -- not IMMUTABLE: it reads per-session state


CREATE FUNCTION set_id(name text, val integer) RETURNS text
AS $_X$

if ($_SHARED{$_[0]} = $_[1])
{
return 'ok';
}
else
{
return "can't set shared variable $_[0] to $_[1]";
}

$_X$
LANGUAGE plperl;


Usage:

INSERT INTO categories ...
SELECT set_id('fiction', CAST(currval('categories_id_seq') AS INT));
INSERT INTO categories ...
SELECT set_id('science', CAST(currval('categories_id_seq') AS INT));
INSERT INTO categories ...
SELECT set_id('politics', CAST(currval('categories_id_seq') AS INT));

...
INSERT INTO books (category_id, ...) VALUES (CAST(get_id('fiction') AS
INT4), ...
INSERT INTO books (category_id, ...) VALUES (CAST(get_id('politics') AS
INT4), ...
INSERT INTO books (category_id, ...) VALUES (CAST(get_id('fiction') AS
INT4), ...



Personally, I'd work out some way to do each INSERT into categories 
followed by a COPY block with all the books for that category, simply 
because it will be quicker and is much neater.
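Yet another way to keep categories.sql and books.sql as separate files, with no dependence on currval() or session variables, is to resolve each foreign key through a unique natural key (this assumes categories carry a unique name column). A sketch of the idea using Python's sqlite3 as a stand-in for PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE);
CREATE TABLE books (id INTEGER PRIMARY KEY AUTOINCREMENT,
                    category_id INTEGER REFERENCES categories(id),
                    title TEXT);
""")

# categories.sql equivalent: let the sequence assign the ids
conn.executemany("INSERT INTO categories (name) VALUES (?)",
                 [("fiction",), ("science",)])

# books.sql equivalent: resolve the FK through the unique name, so the two
# files can run in separate sessions and rows can come in any order
conn.executemany(
    "INSERT INTO books (category_id, title) "
    "VALUES ((SELECT id FROM categories WHERE name = ?), ?)",
    [("fiction", "Dune"), ("science", "Cosmos")])

row = conn.execute(
    "SELECT c.name FROM books b JOIN categories c ON c.id = b.category_id "
    "WHERE b.title = 'Dune'").fetchone()
print(row[0])  # fiction
```

The same subselect-on-a-natural-key pattern works verbatim in a PostgreSQL INSERT statement.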


b



Re: [GENERAL] Suggestions for schema design?

2008-02-20 Thread brian

cluster wrote:

I really need some input:

In a system that handles money transfers I have a table to store each 
money transfer. A number of different events can result in a money 
transfer but the events are so different that each event type is stored 
in its own table.


So we have a schema of the form:

  TRANSFERS (TRANSFER_ID, col2, col3, col4, ...)
  EVENT_TYPE_A (EVENT_A_ID, TRANSFER_ID, col3, col4, ...)
  EVENT_TYPE_B (EVENT_B_ID, TRANSFER_ID, col3, col4, ...)
  EVENT_TYPE_C (EVENT_C_ID, TRANSFER_ID, col3, col4, ...)
  ...

With this design it is easy to map a specific event to the corresponding 
transfer (if any). However, if I want to create a list of transfers and 
for each transfer also give the corresponding event ID (if any) ...


I think you'd better decide now if you want to let a transfer occur 
without any corresponding event. That might be a recipe for pain.



Can I modify the design to make a more direct link between transfers and 
events?


Of course I could add EVENT_A_ID, EVENT_B_ID, etc. to the TRANSFERS 
table but I think this would soil the otherwise clean TRANSFERS table. 
What do you think?


One could also introduce a third table:
   TRANSFER_EVENTS (TRANSFER_ID, EVENT_A_ID, EVENT_B_ID, ...)
which only results in the need of a single join operation to create the
list but adds an INSERT statement to the complexity.


Any ideas? What would you do?



CREATE TABLE transfer_events (
  id SERIAL NOT NULL PRIMARY KEY,
  -- shared columns
);
CREATE TABLE transfer_events_a (
  integer NOT NULL,
  ...
) INHERITS (transfer_events);

CREATE TABLE transfer_events_b (
  integer NOT NULL,
  ...
) INHERITS (transfer_events);

CREATE TABLE transfer_events_c (
  integer NOT NULL,
  ...
) INHERITS (transfer_events);

CREATE TABLE transfers (
  id SERIAL NOT NULL PRIMARY KEY,
  -- put the foreign key in transfers because it's the event
  -- that causes the transfer, not vice versa
  transfer_event_id integer NOT NULL,
  ...
);

ALTER TABLE transfer_events_a ALTER COLUMN id
SET DEFAULT nextval('transfer_events_id_seq');
CREATE UNIQUE INDEX transfer_events_a_pk ON transfer_events_a (id);

ALTER TABLE transfer_events_b ALTER COLUMN id
SET DEFAULT nextval('transfer_events_id_seq');
CREATE UNIQUE INDEX transfer_events_b_pk ON transfer_events_b (id);

ALTER TABLE transfer_events_c ALTER COLUMN id
SET DEFAULT nextval('transfer_events_id_seq');
CREATE UNIQUE INDEX transfer_events_c_pk ON transfer_events_c (id);

ALTER TABLE ONLY transfers ADD CONSTRAINT fk_transfers_transfer_events
FOREIGN KEY (transfer_event_id)
REFERENCES transfer_events (id) ON DELETE CASCADE;


This allows one to INSERT directly into any of the inheriting tables 
without specifying an ID. The child table will pick up the nextval() 
properly, ensuring that all of the child table IDs will be unique. This, 
then, is passed to the transfers table as the FK.


To get all events for some criteria and have some indication of which 
child table a row is from:


SELECT te.id, te.created, pgc.relname
FROM transfer_events AS te, pg_class AS pgc
WHERE te.tableoid = pgc.oid
AND ...


1 | 2008-02-20 14:56:14.194147-05 | transfer_events_a
2 | 2008-02-20 14:56:14.194147-05 | transfer_events_b
3 | 2008-02-20 14:56:14.194147-05 | transfer_events_a
etc.


Go over the docs on inheritance carefully, though. There are a bunch of 
limitations to inheritance (and some would say to avoid it, altogether).


http://www.postgresql.org/docs/8.3/interactive/ddl-inherit.html


b



Re: [GENERAL] Vacuous errors in pg_dump ... | pg_restore pipeline

2008-02-20 Thread Tom Lane
Kynn Jones [EMAIL PROTECTED] writes:
 As far as the shell is concerned, this pipeline fails, due to three errors
 emitted by pg_restore.  All these errors are vacuous, as far as I can tell.

They're hardly vacuous --- they're telling you that the destination
database will be missing plpgsql, which was in the source database.

The only really good way around that pre-8.3 is to run the restore as
a database superuser.  As of 8.3 it should be sufficient to be the
database's owner.

If you don't have any actual use for plpgsql in this database, maybe
you should remove it from the source DB?

 BTW, while on this subject: is there a way to modify the pipeline above to
 minimize the number of warnings and notices generated during pg_restore?

... | PGOPTIONS="--client_min_messages=warning" pg_restore ...

regards, tom lane



Re: [GENERAL] Vacuous errors in pg_dump ... | pg_restore pipeline

2008-02-20 Thread SunWuKung
On Feb 20, 8:12 pm, [EMAIL PROTECTED] (Douglas McNaught) wrote:
 On 2/20/08, Kynn Jones [EMAIL PROTECTED] wrote:

  Alternatively, is there a better way to streamline the duplication of a
  database?

 How about:

 CREATE DATABASE newdb TEMPLATE olddb;


Do these methods also recreate the schema path of the database copied?
B.




[GENERAL] Error in PlPython procedure

2008-02-20 Thread mars_osrp

Hi All,

I am using a PL/Python procedure to insert data in a table:

plan = plpy.execute("insert into test(id, name) values (1, 'test_py')")

I am getting the error:
ERROR:  invalid input syntax for integer: "<PLyResult object at 0x374a5a70>"

In fact, preparing a plan and passing a list of values for the columns also
gives the same error.

What is the correct input syntax for an integer in PL/Python?

Mars_osrp
-- 
View this message in context: 
http://www.nabble.com/Error-in-PlPython-procedure-tp15601869p15601869.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] ts_headline

2008-02-20 Thread Stephen Davies
I am a bit puzzled by the output of ts_headline (V8.3) for different queries.

I have one record in a test documentation table and am applying different 
queries against that table to check out the ts_headline outputs.

The document in question has 2553 words which generate 519 tokens in the 
ts_vector.
For most queries, ts_headline returns a string starting with one of the 
criterion words and with all criterion words highlit - as I would expect.

However, some other queries return a string which seems to always start at the 
beginning of the document and contains no highlit terms.

It seems that the difference is in the number of occurrences of the criterion 
words. If the number of hits is less than some number, the ts_headline result 
is correct but if the number of hits exceeds that limit, the result is just 
the first MinWords of the document.

I have seen cases with up to 20 hits succeed but cases with 35 hits miss.
The spread of hits does not seem to be relevant.

Is this a bug or am I missing some configuration option?

TIA,
Stephen Davies
-- 

This email is for the person(s) identified above, and is confidential to
the sender and the person(s).  No one else is authorised to use or
disseminate this email or its contents.

Stephen Davies Consulting          Voice: 08-8177 1595
Adelaide, South Australia.         Fax: 08-8177 0133
Computing & Network solutions.     Mobile: 0403 0405 83



Re: [GENERAL] Error in PlPython procedure

2008-02-20 Thread brian

mars_osrp wrote:

Hi All,

I am using PlPython procedure to insert data in a table:

plan = plpy.execute("insert into test(id,name) values(1, 'test_py')")

I am getting error:
ERROR:  invalid input syntax for integer: "<PLyResult object at 0x374a5a70>"

In fact preparing a plan and passing a list of values for columns is also
giving same error.

What is correct valid input syntax for integer in PlPython?



Shouldn't that be plpy.prepare(...) instead of the execute() method? I 
think what you want is:


plan = plpy.prepare("INSERT INTO test (id, name) VALUES ($1, $2)",
    ["integer", "text"])


res = plpy.execute(plan, [1, "test_py"])

Or something like that. pl/Python error messages are definitely pretty 
cryptic, though.


b



[GENERAL] Install problem w/8.3 on windows 2000 : application failed to initialize properly (0xc0000022)

2008-02-20 Thread Dee
Hi. This is my first post to this list.  So please let me know if my question 
is in the wrong area.

I am trying to install version 8.3.0.1 on an old windows 2000 box I inherited.
Firewall s/w is installed but disabled.  I am logged in as a user with
Administrative rights.  When the installer gets to the step "Starting
Services", it halts with the error:

ERROR:
The application failed to initialize properly (0xc0000022). Click on OK to 
terminate the application.

LOG SNIPPET:
==
StartServices: Service: PostgreSQL Database Server 8.3
MSI (c) (04:00) [22:12:14:687]: Note: 1: 2205 2:  3: Error 
MSI (c) (04:00) [22:12:14:687]: Note: 1: 2228 2:  3: Error 4: SELECT `Message` 
FROM `Error` WHERE `Error` = 2888 
DEBUG: Error 2888:  Executing the TextStyle view failed
The installer has encountered an unexpected error installing this package. This 
may indicate a problem with this package. The error code is 2888. The arguments 
are: TextStyle, , 
Action 22:13:06: Cancel. Dialog created
==

It is an old box with lots of programs installed.  So it may well be a 
configuration problem.  Anyone have any suggestions how I might go about 
diagnosing the problem?   Btw, I am not a w2k admin, but if you need more 
information, please let me know.  Any suggestions would be appreciated!


 

   

Re: [GENERAL] Error in PlPython procedure

2008-02-20 Thread Erik Jones


On Feb 20, 2008, at 5:53 PM, mars_osrp wrote:



Hi All,

I am using PlPython procedure to insert data in a table:

plan = plpy.execute("insert into test(id,name) values(1, 'test_py')")

I am getting error:
ERROR:  invalid input syntax for integer: "<PLyResult object at  
0x374a5a70>"


In fact preparing a plan and passing a list of values for columns  
is also

giving same error.

What is correct valid input syntax for integer in PlPython?


Can you show us the function you're using and how you're using it?  I  
don't see anything wrong with that on its own.  Your error refers  
to using a PLyResult object somewhere an integer is expected --  
plpy.execute() returns PLyResult objects, not integers -- so the  
query itself looks fine.  My guess is that you're expecting to be  
able to use the return value of plpy.execute() directly, i.e. I think  
your query is running fine, and it's how you're accessing the result  
that is the problem.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com






[GENERAL] How to view temp tables

2008-02-20 Thread pc
My php code is creating a temporary table named mytemp, but when I run a
"select * from mytemp" I cannot see the table. How can I see the table
from the postgresql command prompt?



Re: [GENERAL] How to make update rapidly?

2008-02-20 Thread hewei
Hi, Scott Marlowe:

You said that "As for processing them in order versus randomly, that's a
common problem."
Do you know why? How does Postgres work in this scenario?

On Wed, Feb 20, 2008 at 3:07 PM, Scott Marlowe [EMAIL PROTECTED]
wrote:

 On Feb 19, 2008 9:38 PM, hewei [EMAIL PROTECTED] wrote:
  Hi, everybody;
  I have a table that contains 100,000 rows and has a primary key (int).
  Now, I need to execute sql commands like update .. where id=* (id
  is the primary key).
  I expect to execute 1200-1600 sql commands per second (1200-1600/s).
  In tests, when the id increases steadily across the sql commands, I can
  reach that speed (1600/s);
  but when the ids in the sql commands are in no particular order, the
  speed is very slow, just 100/s.

 Assuming that you're updating a non-indexed field, you should really
 look at migrating to 8.3 if you haven't already.  Its performance on
 such issues is reportedly much faster than 8.2.

 As for processing them in order versus randomly, that's a common
 problem.  right sizing shared_buffers so that all of the table can fit
 in ram might help too.  As would a caching RAID controller.



Re: [GENERAL] How to view temp tables

2008-02-20 Thread Chris

pc wrote:

My php code is creating a temporary table named mytemp, but when I run a
"select * from mytemp" I cannot see the table. How can I see the table
from the postgresql command prompt?


temp tables are only visible to the session that created them, and are
automatically deleted when the connection is closed.

make it a non-temp table :)
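The session-scoped behaviour is easy to demonstrate with any engine that treats temp tables this way; a small sketch using Python's sqlite3, where a second connection stands in for a fresh psql prompt (illustrative, not PostgreSQL itself):

```python
import os
import sqlite3
import tempfile

# two connections to the same database file = two separate sessions
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn_a = sqlite3.connect(path)
conn_a.execute("CREATE TEMP TABLE mytemp (id INTEGER)")
conn_a.execute("INSERT INTO mytemp VALUES (1)")
print(conn_a.execute("SELECT count(*) FROM mytemp").fetchone()[0])  # 1

conn_b = sqlite3.connect(path)  # like opening a new psql prompt
try:
    conn_b.execute("SELECT count(*) FROM mytemp")
    print("visible from the second session")
except sqlite3.OperationalError:
    print("no such table in the second session")
```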

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [GENERAL] How to view temp tables

2008-02-20 Thread Scott Marlowe
On Wed, Feb 20, 2008 at 7:30 PM, pc [EMAIL PROTECTED] wrote:
 My php code is creating a temporary table named mytemp, but when I run a
  "select * from mytemp" I cannot see the table. How can I see the table
  from the postgresql command prompt?

Got a code sample that shows this problem?  I.e. create table
statement, php script that tries to hit it.  Error codes are nice too.



Re: [GENERAL] How to make update rapidly?

2008-02-20 Thread Scott Marlowe
On Thu, Feb 21, 2008 at 1:07 AM, hewei [EMAIL PROTECTED] wrote:
 Hi, Scott Marlowe:

 You said that  As for processing them in order versus randomly,that's a
 common problem. 
 do you know why? how postgres work in this scenario.

Pretty much the same way any database would.  It's likely that your
data in the table is in some order.  When you update one row, the
next n rows are read into memory as well.  Updating those is cheaper
because they don't have to be read, just flushed out to the write
ahead log.  If you have very random access on a table much larger than
your shared_buffers or OS cache, then it's likely that by the time
you get back to a row on page x it's already been flushed out of the
OS cache or pg's buffers and has to be fetched again.
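The effect of access order on cache hits can be sketched with a toy LRU page cache (hypothetical sizes; nothing here is PostgreSQL-specific):

```python
import random
from collections import OrderedDict

def hit_ratio(row_ids, cache_pages, rows_per_page=100):
    """Fraction of row accesses served by a toy LRU page cache."""
    cache = OrderedDict()
    hits = 0
    for row in row_ids:
        page = row // rows_per_page
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # mark as most recently used
        else:
            cache[page] = True
            if len(cache) > cache_pages:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(row_ids)

rows = list(range(100_000))                # 1000 pages; cache holds 100
print(hit_ratio(rows, cache_pages=100))    # in order: 0.99

random.seed(0)
random.shuffle(rows)
print(hit_ratio(rows, cache_pages=100))    # random: roughly 0.1
```

In-order updates hit the cached page 99 times out of 100; random updates to a table ten times larger than the cache hit only about one time in ten, which matches the roughly 16x slowdown reported above.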
