Re: [SQL] postgresql multiple insert slow

2004-06-06 Thread Harald Fuchs
In article <[EMAIL PROTECTED]>,
"Michael L. Hostbaek" <[EMAIL PROTECTED]> writes:

> Hello, 
> I've got a table in an oracle database with approx. 10 records, that
> I'd like to put into a table in a postgresql database. (This should be
> done a couple of times per week)

> I have written a short perl script, on a server that has remote access
> to both the oracle database as well as the postgresql database. I am
> running postgresql 7.4.1 on FreeBSD.

> My perl script looks something like this:

> [...]
> my $sth2 = $cnx2->prepare('SELECT * FROM oracle_table');
> my $res2 = $sth2->execute();

> while(my($field2,$field5,$field6) = ($sth2->fetchrow_array)) {
>   if(defined($field2)) {
>   my $sth = $cnx->prepare('INSERT INTO
> the_pg_table(field1, field2) VALUES(?,?)');
>   my $result = $sth->execute($field2,$field5);
>   $sth->finish;

>   }
> }
> [...]

> It runs fine - and I get no errors - but it takes almost 25 minutes to
> complete. I tried running the script while just grabbing the rows from
> the oracle database and writing them to a text file - and then it only
> takes a couple of minutes. So it must be the INSERT command that chokes -
> is there a better way to do it?

First of all, you should prepare the INSERT statement only once,
outside of the loop.  Then you could use fetchrow_arrayref instead of
fetchrow_array; this should eliminate a copy operation.

But the biggest win would be not to use INSERT at all.  Instruct
Oracle to dump the rows into a CSV file, and then do just

$cnx->do ("COPY the_pg_table FROM 'csv.file'")

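The COPY call would look something like this (file path and delimiter are illustrative; a server-side COPY reads the file as the backend user, so it must be readable there):

```sql
-- One COPY replaces thousands of single-row INSERTs.
COPY the_pg_table (field1, field2)
    FROM '/tmp/oracle_dump.csv'
    WITH DELIMITER ',';
```

With psql's \copy the file is read client-side instead, which avoids the server-permissions issue.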

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


[SQL] Formatting problems with negative intervals, TO_CHAR

2004-06-06 Thread Jeff Boes
This seems ... well, counter-intuitive at least:
(using Pg 7.4.1)
# select to_char('4 minutes'::interval -
'5 minutes 30 seconds'::interval, 'mi:ss');
 to_char
---------
 -1:-3
(1 row)
Why is the trailing zero lost? Why are there two minus signs?
I would expect '-1:30'.
Likewise,
# select to_char('4 minutes'::interval -
'4 minutes 30 seconds'::interval, 'mi:ss');
 to_char
---------
 00:-3
(1 row)
I would expect '-00:30'.
I ended up fixing this with a very convoluted expression:
... case when last.time_count > prev.time_count then '+' else '-' end ||
to_char((abs(extract(epoch from last.time_count) -
extract(epoch from prev.time_count)) ||
'seconds')::interval,'FMmi:ss.cc')
but I have to believe there is an easier way.
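For what it's worth, the same epoch/abs trick can be packaged as one self-contained expression (a sketch; the interval literals stand in for the real columns):

```sql
-- Compute the sign separately, then format the absolute difference.
SELECT CASE WHEN x.d < '0'::interval THEN '-' ELSE '' END
       || to_char((abs(extract(epoch FROM x.d)) || ' seconds')::interval,
                  'MI:SS') AS formatted
  FROM (SELECT '4 minutes'::interval
               - '5 minutes 30 seconds'::interval AS d) AS x;
-- formatted is '-01:30'
```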
--
(Posted from an account used as a SPAM dump. If you really want to get
in touch with me, dump the 'jboes' and substitute 'mur'.)

Jeffery Boes <>< [EMAIL PROTECTED]


[SQL] SQL DDL: FOREIGN KEY construct and field mapping: unexpected behavior

2004-06-06 Thread Neil Zanella
Hello,

Consider the following code run under PostgreSQL 7.3.4:

CREATE TABLE X (
  A INT,
  B INT,
  PRIMARY KEY (A, B)
);

CREATE TABLE Y (
  A INT,
  B INT,
  C INT,
  PRIMARY KEY (C),
  FOREIGN KEY (B, A) REFERENCES X
);

INSERT INTO X (A, B) VALUES (1, 2);
INSERT INTO Y (A, B, C) VALUES (1, 2, 3);

The second insert causes the database server to report the following
error:
ERROR:  $1 referential integrity violation
- key referenced from y not found in x

Upon examining the interpretation of PostgreSQL's
REFERENCES clause I find the following:

foodb=# \d x
      Table "public.x"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer | not null
 b      | integer | not null
Indexes: x_pkey primary key btree (a, b)

foodb=# \d y
      Table "public.y"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer |
 b      | integer |
 c      | integer | not null
Indexes: y_pkey primary key btree (c)
Foreign Key constraints: $1 FOREIGN KEY (b, a)
  REFERENCES x(a, b) ON UPDATE NO ACTION ON DELETE NO ACTION

It is as though PostgreSQL, instead of matching names, associated

field A in table Y with field B in table X
and
field B in table Y with field A in table X

whereas I was expecting the database server to match the names as in:

field A in table Y with field A in table X
and
field B in table Y with field B in table X

I wonder what the SQL standard has to say on this one and how the
REFERENCES clause with no field names on the right hand side
really works in spite of the unexpected results produced by
this very simple example...

Thanks,

Neil



Re: [SQL] SQL DDL: FOREIGN KEY construct and field mapping: unexpected behavior

2004-06-06 Thread Tom Lane
[EMAIL PROTECTED] (Neil Zanella) writes:
> CREATE TABLE X (
>   A INT,
>   B INT,
>   PRIMARY KEY (A, B)
> );

> CREATE TABLE Y (
>   A INT,
>   B INT,
>   C INT,
>   PRIMARY KEY (C),
>   FOREIGN KEY (B, A) REFERENCES X
> );

> whereas I was expecting the database server to match the names as in:

Why were you expecting that?  The SQL spec is perfectly clear that the
columns are matched in the order written in the primary key.  For
instance, SQL92 11.8 <referential constraint definition> saith:

 2) Case:

b) If the <referenced table and columns> does not specify a
   <reference column list>, then the table descriptor of the
   referenced table shall include a unique constraint that
   specifies PRIMARY KEY. Let referenced columns be the column
   or columns identified by the <unique column list> in that
   unique constraint and let referenced column be one such
   column. The <referenced table and columns> shall be
   considered to implicitly specify a <reference column list>
   that is identical to that <unique column list>.

 7) The <referencing columns> shall contain the same number of
<column name>s as the <reference column list>. The i-th column
identified in the <referencing columns> corresponds to the
i-th column identified in the <reference column list>. The
data type of each referencing column shall be the same as the
data type of the corresponding referenced column.

Nothing there about "try to match by name".
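Given the definitions above, matching by name means writing the referenced columns out explicitly, for instance:

```sql
CREATE TABLE Y (
  A INT,
  B INT,
  C INT,
  PRIMARY KEY (C),
  -- Explicit lists on both sides: A references X.A, B references X.B.
  FOREIGN KEY (A, B) REFERENCES X (A, B)
);
```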

regards, tom lane



Re: [SQL] Formatting problems with negative intervals, TO_CHAR

2004-06-06 Thread Tom Lane
Jeff Boes <[EMAIL PROTECTED]> writes:
> This seems ... well, counter-intuitive at least:
> (using Pg 7.4.1)

> # select to_char('4 minutes'::interval -
> '5 minutes 30 seconds'::interval, 'mi:ss');

>   to_char
> ---------
>   -1:-3
> (1 row)

> Why is the trailing zero lost? Why are there two minus signs?

> I would expect '-1:30'.

Yeah, me too.  The underlying interval value seems right:

regression=# select '4 minutes'::interval - '5 minutes 30 seconds'::interval;
 ?column?
-----------
 -00:01:30
(1 row)

so I think this is a to_char() bug.  Possibly it's platform-dependent
--- the roundoff behavior for division with a negative input varies
across machines.  However I do see the bug on HPUX 10.20 with CVS tip.

> Likewise,

> # select to_char('4 minutes'::interval -
> '4 minutes 30 seconds'::interval, 'mi:ss');

>   to_char
> ---------
>   00:-3
> (1 row)

regards, tom lane



Re: [SQL] ERROR: duplicate key violates unique constraint (SOLVED)

2004-06-06 Thread Khairul Azmi
The problem is solved. It is a silly mistake actually. The line
SELECT sid FROM conf_category WHERE
   category_name like temp_category and category_status=1;
returns duplicated values. Variable temp_category would be '%dos.rules' 
and there are entries 'dos.rules' and 'ddos.rules' in table 
conf_category, which I did not notice until today. So I added the keyword 
DISTINCT and it now runs perfectly.

 select * into temp_category from get_root_path(category);
   OPEN T1Cursor FOR
  SELECT DISTINCT sid FROM conf_category WHERE
   category_name like temp_category and category_status=1;
Josh Berkus wrote:
Khairul,
 

Need help on this problem. I've created two functions. The function
should perform a few sql processes. The problem is I got different
results when I call the function and when I manually run the sql command.
I'm using postgresql 7.4.2.
   

Hmmm ... that's odd.  I remember getting this issue early in the 7.2 series 
but not since.  I'd guess that you're missing something in your function, 
like the transposition of two fields or an unterminated loop.  You've made 
that likely because:

 

RETURNS integer AS '
DECLARE
   var_f0 alias FOR $1;-- rh_sign_id
   var_f1 alias FOR $2;-- rh_status
   var_f2 alias FOR $3;-- rh_action
   var_f3 alias FOR $4;-- proto
   

... this is a really bad way of dealing with function variables; I certainly 
can't parse the rest of the function and tell if you've accidentally swapped 
a var_f3 for a var_f4.   I'd strongly suggest naming your variables clearly, 
like, for example, calling it "v_proto" instead of "var_f3".  This is 
"programming 101".

 

I am porting this application from Oracle, so I am trying my best not to 
change the structure of the original code. I guess that is still not a 
good excuse for the way I named the variables :).  Anyway, thanks Josh.

Issue is
cews=> select
rule_header_add(999,1,'alert','ip','$EXTERNAL_NET','any','$HOME_NET','any',
'->','dos.rules',3,0); NOTICE:   INSERT INTO rule_header VALUES 999 1 alert
ip $EXTERNAL_NET any $HOME_NET any -> dos.rules 3 2004-06-04
15:21:30.448633
NOTICE:   INSERT INTO sensor_signature VALUES -1 999
CONTEXT:  PL/pgSQL function "rule_header_add" line 26 at perform
ERROR:  duplicate key violates unique constraint "sensor_signature_pkey"
CONTEXT:  PL/pgSQL function "update_sen_sig" line 16 at SQL statement
PL/pgSQL function "rule_header_add" line 26 at perform
I thought it might be caused by duplicated data. But ...
cews=> insert into rule_header values
(268,1,'alert','ip','$EXTERNAL_NET','any','$HOME_NET','any','->','dos.rules
',3,current_timestamp,0); INSERT 29393 1
And
cews=> insert into sensor_signature values (-1,268);
INSERT 29394 1
   

This isn't the same id you tested with the function.  Mind running the *exact 
same values* with both command line and function?

Also, I notice that update_sen_sig makes use of a cursor and a loop.   Best 
guess is that the cursor isn't returning what you think it is, and is looping 
several times ... thus attempting to insert the same value several times.

Good luck!
 




Re: [SQL] User defined types -- Social Security number...

2004-06-06 Thread Christopher Browne
In the last exciting episode, "Greg Patnude" <[EMAIL PROTECTED]> wrote:
> Thanks Josh -- I understand that there are valid and invalid SSN's -- 
> similar rules apply to zip codes and area codes...
>
> I tried this:
>
> SELECT to_char(123456789, '000-00-0000');
> which yields 123-45-6789 -- nicely, I might add...
>
> the trick is getting postgreSQL to do this without having to create an ON
> SELECT and ON UPDATE TRIGGER...
>
> an ON UPDATE, SELECT, and INSERT re-write RULE might do the trick...
>
> SSN's CAN in fact start with a leading 0 -- mine does -- "041-xx-xxxx" --
>
> I do agree that there are valid ranges -- my main concern is being able to
> store any leading zeros - I just need to make sure that something "looks"
> like a valid SSN in the formatting
> (nnn-nn-nnnn) and that I can store / retrieve it with the appropriate
> format -- what I am really trying to accomplish is an "input mask"...
>
> I hadn't considered using a Domain ... have to look at that

Strongly recommended; that allows applying the validation in many
places without having to repeat validation "code."
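A minimal sketch of such a domain, checking only the nnn-nn-nnnn shape (it does not validate assigned ranges; the names are illustrative):

```sql
CREATE DOMAIN ssn AS char(11)
  CHECK (VALUE ~ '^[0-9]{3}-[0-9]{2}-[0-9]{4}$');

-- Any column declared with the domain now enforces the mask:
CREATE TABLE employee (
    name   text,
    social ssn
);
```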

If you will be using really a lot of these values, and indexing on
them, it even may be worth looking at a custom type.

A performance "win" would come in using a compact data type.  For
instance, for 9 digit national ID numbers, you can do a LOT better
than an 11 byte string.  (Aside: anything 30 bits or wider would do,
demonstrating that it is a regrettable loss that 36 bit computer
systems went the way of the dodo...)
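A plain integer column plus an output mask is one concrete way to get under the 11-byte string (the FM prefix suppresses to_char's leading sign column; the value and mask here are illustrative):

```sql
-- 9 SSN digits fit in a 4-byte integer; the mask restores leading zeros.
SELECT to_char(41123456, 'FM000-00-0000');  -- 041-12-3456
```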
-- 
output = reverse("gro.gultn" "@" "enworbbc")
http://www.ntlug.org/~cbbrowne/internet.html
I found out why  cats drink out of the toilet. My  mother told me it's
because it's cold in there. And I'm like: How did my mother know THAT?
--Wendy Liebman
