[SQL] Constraint->function dependency and dump in 7.3

2004-07-08 Thread SZŰCS Gábor
Dear Gurus,

I just recently realized that our daily dump from a 7.3 (production) db to a
7.4 (development) server has problems. I have no idea where to search for an
answer, so please feel free to point me to the appropriate thread, doc or TFM
:)

Below are two portions of the dump, which seem to be in the wrong order (at
least to me). I'm not sure whether the order is also wrong for 7.3, and I'm
not in a position to easily test it; but 7.4 has problems creating the table
without the function (which is logical):

%--- cut here ---%
CREATE TABLE cim (
-- etc etc ...
orszag_kod integer,
CONSTRAINT cim_orszag_kod CHECK ((hely_fajta(orszag_kod) = 7))
);

-- ... several lines later:

CREATE FUNCTION hely_fajta (integer) RETURNS integer
AS ' ... '
LANGUAGE sql;
%--- cut here ---%

I checked pg_depend: the constraint cim_orszag_kod refers to the function
hely_fajta, but nothing (not even the table) refers to the constraint. I'm
just wondering whether that's OK...
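(For reference, the kind of check I mean is roughly this; just a sketch,
the exact query I ran may have differed:)

%--- cut here ---%
-- list dependencies recorded from constraints onto the function
SELECT co.conname  AS dependent_constraint,
       pr.proname  AS referenced_function,
       de.deptype
  FROM pg_depend     de
  JOIN pg_constraint co ON de.classid = 'pg_constraint'::regclass
                       AND de.objid   = co.oid
  JOIN pg_proc       pr ON de.refclassid = 'pg_proc'::regclass
                       AND de.refobjid   = pr.oid
 WHERE pr.proname = 'hely_fajta';
%--- cut here ---%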

1) is it normal that the table does not refer to its constraints?
2) if not, do you have any idea of the possible cause?
3) if so, is it normal for pg_dump to dump in this order?
4) if so, how can I change it?
5) might inserting into pg_depend solve the problem?

TIA,
G.
%--- cut here ---%
\end


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] Triggers - need help !!!

2004-07-08 Thread SZUCS Gábor
I'd like to add that a NULL value might mess things up. If CreateTime may be
null, try this:

  IF (OLD.CreateTime <> NEW.CreateTime) OR
     (OLD.CreateTime ISNULL <> NEW.CreateTime ISNULL) THEN ...

or this:

  IF COALESCE(OLD.CreateTime, '3001-01-01') <>
     COALESCE(NEW.CreateTime, '3001-01-01') THEN ...

(provided you can safely assume that CreateTime values remain in this
millennium ;) )

or maybe:

  IF COALESCE(OLD.CreateTime <> NEW.CreateTime,
              OLD.CreateTime ISNULL <> NEW.CreateTime ISNULL) THEN ...

However, I'd stay with the first one. It's quite simple and Y3K-safe ;)
It also seems to be the most efficient of the three, if that matters.
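For completeness, here is a minimal sketch of how the first form might sit
inside a trigger function (the function, trigger and table names are made
up for the example, and it assumes plpgsql is installed):

%--- cut here ---%
CREATE OR REPLACE FUNCTION check_createtime() RETURNS trigger AS '
BEGIN
    -- fires when CreateTime changed, including changes to or from NULL
    IF (OLD.CreateTime <> NEW.CreateTime) OR
       ((OLD.CreateTime ISNULL) <> (NEW.CreateTime ISNULL)) THEN
        RAISE NOTICE ''CreateTime changed'';
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER trg_check_createtime
    BEFORE UPDATE ON some_table   -- hypothetical table name
    FOR EACH ROW EXECUTE PROCEDURE check_createtime();
%--- cut here ---%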

G.
%--- cut here ---%
\end

- Original Message - 
From: "Richard Huxton" <[EMAIL PROTECTED]>
Sent: Wednesday, July 07, 2004 1:03 PM


> Pradeepkumar, Pyatalo (IE10) wrote:
> > Thanks a lot for your help.
> > In the trigger, I am checking if a field is updated or not. The syntax
> > I use is
> >
> > IF UPDATE(CreateTime) THEN
> > 
> > 
> > END IF;
> >
> > Is this syntax correct?
>
> No, and I don't recall seeing anything like it in the manuals.
>
> IF OLD.CreateTime <> NEW.CreateTime THEN
>   ...


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[SQL] Newbie (to postgres) question

2004-07-08 Thread Pedro B.
Hello all,
First of all, accept my apologies for what is surely a dumb question,
and yes, I have been reading extensively through all the documents, but I
really need to ask this... :)

I have recently started the migration of a large ex-MySQL database to
PostgreSQL, and I'm still "adapting" to the tweaks of this new (to me)
environment.

My question is basically... how does PostgreSQL deal with the equivalent
of MySQL's "persistent connections"?
Alongside the database, I have an extensive amount of .c code that used to
simply reuse sockets if they were already open (and open new ones only if
needed). It's an application that runs standalone, but many times per
minute, so connection reuse is indeed a must for me.

I have substituted the mysql_ping calls with PQconnectPoll calls just to see
if the behaviour would be similar, and right now that seems to work, but I'm
at a standstill regarding the sockets and persistent connection usage.

Any help/directions someone might give me will be deeply appreciated.
Regards,
\\pb
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


[SQL] best method to copy data across databases

2004-07-08 Thread ctrl
I need to copy data that I have in a table in one Postgres database
into another table, in a different database. The destination table is
not identical to the source table, so I need to do some mapping
between the two tables.
What would be the best (and quickest) way to transfer the data? (There
are over 500k rows.)
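Just to illustrate the mapping step involved, here is a sketch with made-up
table and column names, assuming the source rows have first been loaded into
a staging table on the destination database (e.g. via pg_dump/COPY):

-- src_staging holds the dumped source rows; dest is the differently
-- shaped destination table; remap/combine columns as needed
INSERT INTO dest (customer_id, full_name, created_on)
SELECT old_id,
       firstname || ' ' || lastname,
       entry_date
  FROM src_staging;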

thanks!

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] best method to copy data across databases

2004-07-08 Thread ctrl
Many thanks Tony and Tom,

since this was a "one time" process, speed wasn't an issue...
I just did a plain pg_dump with inserts and explicit column names in
the dump, then used vi to rename the columns and get rid of some of
them...
Not very scientific, but it worked :)

cheers.

[EMAIL PROTECTED] (Tom Lane) wrote in message news:<[EMAIL PROTECTED]>...
> [EMAIL PROTECTED] (Tony Reina) writes:
> > If the 2 tables have different arrangements, then I'm not sure if
> > there is a quick way. The safest way is probably to do a pg_dump
> > --attribute-inserts.
> 
> In recent versions (definitely 7.4, don't remember about 7.3),
> pg_dump will include a column list in its COPY commands, so a
> plain pg_dump should work.  The way with COPY will be a good bit
> faster than a pile of INSERT commands.
> 
>   regards, tom lane
> 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] [JDBC] Error in DatabaseMetaData.getColumns() with Views

2004-07-08 Thread Dario V. Fassi
I have found that the problem is worse than I thought.

In the attached sample you can see that the error arises at the time when
the view's SQL text is parsed and saved in the database catalog.
A generic NUMERIC type is then forced for every calculated column, without
regard to precision.
And at execution time the f2 column has a varying number of decimals (in
row 2 you can see 4 decimals while the other rows have 3); this is not
acceptable behaviour, this is an ERROR.

Precision calculation for NUMERIC fields is not so difficult.
This problem *must* be corrected in the CREATE VIEW statement,
and I offer my collaboration for that.
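(For what it's worth, the type that actually got saved for each view column
can be inspected directly in the catalog; a sketch, using the view from the
attached sample:)

-- atttypmod is -1 for a plain, unconstrained numeric column, i.e. no
-- precision/scale is stored for the calculated columns
SELECT attname, format_type(atttypid, atttypmod)
  FROM pg_attribute
 WHERE attrelid = 'public.v'::regclass
   AND attnum > 0;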

Regards all,
Dario Fassi.

Dario V. Fassi wrote:

Kris Jurka wrote:

On Fri, 2 Jul 2004, Dario Fassi wrote:

Hi, I wish to report erroneous information returned by the
DatabaseMetaData.getColumns() method.

This happens with the ResultSet columns (7) COLUMN_SIZE and (9)
DECIMAL_DIGITS, when DatabaseMetaData.getColumns() reports on a VIEW with
columns formed with COALESCE, CASE or numeric operations over DECIMAL
fields.

Suppose

CREATE TABLE A ( f1 DEC(6,3), f2 DEC(6,3) );
CREATE VIEW B AS SELECT ( f1 + f2 ) AS f FROM A;

Then DatabaseMetaData.getColumns() returns:

VIEW B
F   NUMERIC( 65535, -65531 )




The problem is that f1+f2 does not retain the numeric(6,3) size 
restriction, but turns into an unbounded plain "numeric" data type.  So 
when retrieving this data the precision/scale are unavailable and the 
unreasonable values you see are returned.  We could return NULL instead, 
but I'm not sure that would be much more helpful if the client is 
expecting real values.  Any other ideas?

Kris Jurka
  
  
Yes, a few.

In the tool named PgManage (which comes with the commercial version of
Mammoth), the values shown for this metadata are accurate, and I believe
they get those values from the pgsql catalog tables and not from the
DatabaseMetaData interface.

What's more, the engine resolves the view properly and returns data values
that are properly typed (the engine returns 999.999 for f1+f2 and 999.99
for f1*f2, as expected).
So the information is known or derived in some way by the engine, and/or
is contained somewhere in the catalog tables.
  
I'm working on code-generation tools for many DB engines (DB2, Oracle,
MS-SQL, PostgreSQL, etc.), and it's impossible not to use the generic
DatabaseMetaData interface to obtain metadata information. So this is a
very important problem for me.

DB2, for example, does data type escalation based on "error propagation
theory", which has rules (I'm not an expert in the field) like:

[ dec(6,a) + dec(6,b) ]                            -> [ dec(6, max(a,b)) ]
[ coalesce( dec(6,a), dec(6,b) ) ]                 -> [ dec(6, max(a,b)) ]
[ case( dec(6,a), dec(6,b), dec(6,c), dec(6,d) ) ] -> [ dec(6, max(a,b,c,d)) ]
[ dec(6,a) * dec(6,b) ]                            -> [ dec(6, a+b) ]
etc.
  
These rules are taken into account in the engine's data formation process,
but are they stored somewhere???
  
Thanks for your answer.
Dario V. Fassi

CREATE TABLE public.t ( 
 f1 numeric(6,3),
 f2 numeric(6,4),
 f3 numeric(8,2)
) WITHOUT OIDS;


insert into public.t values ( 123.123, 12.1234, 12345.12);
insert into public.t values ( null, 12.1234, 12345.12);
insert into public.t values ( 123.123, null, 12345.12);
insert into public.t values ( 123.123, 12.1234, null);


--
--  Create sql text 
--

create view public.v as 
select 
(f1+f2+f3) as fsum,
coalesce(f1,f2,f3) as fcoal,
(f1*f2*f3) as fprod
from public.t;

--
--  Parsed sql view from catalog
--

CREATE OR REPLACE VIEW public.v AS 
SELECT ((t.f1 + t.f2 ) + t.f3 ) AS fsum, 
CASE WHEN (t.f1 IS NOT NULL ) THEN t.f1 
WHEN (t.f2 IS NOT NULL ) THEN t.f2 
WHEN (t.f3 IS NOT NULL ) THEN t.f3 
ELSE NULL::numeric 
END  AS fcoal, 
((t.f1 * t.f2 ) * t.f3 ) AS fprod 
FROM t;


--
--  Returned Data 
--

select * from public.v;

fsum        fcoal    fprod
12480.3664  123.123  18427182.594204384
null        12.1234  null    << !!! does not look good !!!
null        123.123  null
null        123.123  null


--
--
--  WORKAROUND WORKAROUND WORKAROUND
--
--

--
--  Create sql text with forced datatype
--

create view public.v2 as 
select 
cast( (f1+f2+f3) as dec(9,4) )  as fsum,
cast( coalesce(f1,f2,f3) as dec(7,4) ) as fcoal,
cast( (f1*f2*f3) as dec(20,9) ) as fprod
from public.t;

--
--  Parsed sql view from catalog
--

CREATE OR REPLACE VIEW public.v2 AS 
SELECT 
(((t.f1 + t.f2 ) + t.f3))::numeric(9,4 ) AS fsum, 
( CASE 
WHEN (t.f1 IS NOT NULL ) THEN t.f1 
WHEN (t.f2 IS NOT NULL ) THEN t.f2 
WHEN (t.f3 IS NOT NULL ) THEN t.f3 
ELSE NULL::numeric END)::numeric(7,4 ) AS fcoal, 
(((t.f1 * t.f2 ) * t.f3))::numeric(20,9 ) AS fprod 
FROM t; 

--
--  Returned Data 
--

select * from public.v2;

Re: [SQL] Constraint->function dependency and dump in 7.3

2004-07-08 Thread Együd Csaba
Hi Gábor!

I had the same problem and someone pointed me in the right direction. I had
tried to define a table DEFAULT clause referring to a function. When
reloading the dump file, an error message was raised saying that the
referred function doesn't exist.
This is because when dumping the schema, pg_dump pushes out the table
definitions first and only then the functions (I don't know why it cannot
resolve these dependencies). You can keep the schema dump in a separate
file and move the referred functions in front of the table definitions.
After that, regularly dump out only the data. When restoring the db, start
with the schema file.
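In practice the reordered schema file simply has the function ahead of the
table that needs it; for example, with the definitions from your dump:

CREATE FUNCTION hely_fajta (integer) RETURNS integer
    AS ' ... '
    LANGUAGE sql;

CREATE TABLE cim (
    -- etc etc ...
    orszag_kod integer,
    CONSTRAINT cim_orszag_kod CHECK ((hely_fajta(orszag_kod) = 7))
);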

I hope I was clear. Another advantage of this method is that it is
absolutely Y3K-safe. :)

Bye,
  -- Csaba Együd

> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf Of SZŰCS Gábor
> Sent: July 8, 2004 13:10
> To: [EMAIL PROTECTED]
> Subject: [SQL] Constraint->function dependency and dump in 7.3




---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org