[SQL] Sending function parameters within EXECUTE ''SELECT...

2005-09-28 Thread Mario Splivalo
I've learned that one can't use temporary tables within a function
unless the SELECTs against that temp table are EXECUTE'd dynamically.

So, I have a function like this:

CREATE FUNCTION Foo1(int4, int4) RETURNS SETOF myType
AS
'
DECLARE
    aDataId ALIAS FOR $1;
    aBid ALIAS FOR $2;
    result myType;
    rec record;
BEGIN
    CREATE TEMP TABLE tmpTbl AS
        SELECT col1, col2 FROM t1 JOIN t2 ON t1.c1 = t2.c3
        WHERE t1.c4 = aDataId;

    FOR rec IN EXECUTE ''SELECT COUNT(col1) AS num FROM tmpTbl WHERE
col2 = aBid''
    LOOP
        result.num := rec.num;
    END LOOP;

    RETURN NEXT result;
    RETURN;
END
' LANGUAGE 'plpgsql';


Now, when I try to call that function, I get an error saying that aBid
is an unknown column name. How can I pass the aBid value to the SELECT
statement inside the EXECUTE?

I'm using a temp table because the tables I gather the data from are
huge. If I used views instead, it would take too much time. I thought
that storing only a small fraction of the data (1/1000 of the data goes
into the temp table) and then performing the calculations on that temp
table would be much faster. I just don't know how to pass parameters to
the EXECUTE'd SELECT.

Any help here would be appreciated.

Mike

P.S. I tried the above code on pg8.0.3 and 8.1beta2, with the same results.


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] Sending function parameters within EXECUTE ''SELECT...

2005-09-28 Thread Mario Splivalo
On Wed, 2005-09-28 at 10:01 -0700, codeWarrior wrote:
> FOR rec IN EXECUTE ''SELECT COUNT(col1) AS num FROM tmpTbl WHERE col2
= '' || aBid LOOP
> 
> 

Huh! :) Seems so obvious now :)

Is there a way to assign a variable the value returned by a query that
accesses the temporary table, in the way you explained above?

For instance, I can do:

myValue := col2 FROM tmpTbl WHERE someValue = somethingElse...

But then I run into the temp-table problem again. Can I do an EXECUTE'd
SELECT like you explained above, or do I need to fiddle with FOR as
explained above?
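For reference, codeWarrior's concatenation approach, applied inside the original function body, would look something like this (just a sketch; within the single-quoted function body '' stands for one quote character, and `result` is a hypothetical variable):

```sql
-- aBid is an int4, so it can be concatenated into the query directly;
-- text values should go through quote_literal() to keep quoting correct.
FOR rec IN EXECUTE ''SELECT COUNT(col1) AS num FROM tmpTbl
                     WHERE col2 = '' || aBid
LOOP
    result := rec.num;  -- result: hypothetical numeric variable
END LOOP;
```

The key point is that the variable is concatenated into the query string before EXECUTE sees it, instead of appearing inside the string literal where it reads as a column name.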

Mario




Re: [SQL] Sending function parameters within EXECUTE ''SELECT...

2005-09-29 Thread Mario Splivalo
On Thu, 2005-09-29 at 16:36 +1000, Stewart Ben (RBAU/EQS4) * wrote:
> > Is there a way to assign a variable the value returned by a query that
> > accesses the temporary table in the way you explained above?
> > 
> > For instance, I can do:
> > 
> > myValue := col2 FROM tmpTbl WHERE someValue = somethingElse...
> 
> If I'm parsing this correctly, all you need do is:
> 
> SELECT col2 INTO myValue FROM ...
> 

So, does that work when accessing temporary tables, like this:

EXECUTE ''SELECT col2 INTO myValue FROM tmpTable...''

I don't have Postgres handy right now. I'll check that ASAP, but somehow
I feel I'd run into the same problems as before.

I'll try it and post a reply here.

Mario
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."





[SQL] EXECUTE with SELECT INTO variable, or alternatives

2005-09-29 Thread Mario Splivalo
I can assign a value to a variable in several ways:

myVar := (SELECT col FROM table WHERE somecondition...)
myVar := col FROM table WHERE somecondtition...
SELECT col INTO myVar FROM table WHERE somecondition

How do I do any of the above using EXECUTE? I need to be able to assign
a variable a value returned by a query on a temporary table.

So far I have found a workaround like this:

rec record;

FOR rec IN EXECUTE ''SELECT col FROM table WHERE somecondition''
LOOP
myVar := rec.col;
END LOOP;

Obviously, the above SELECT query returns only one row. Still, it is
really inconvenient to need a FOR...LOOP...END LOOP construct just to
assign a variable a value 'read' from the temporary table.

Mario




[SQL] EXECUTE SELECT INTO... last one!

2005-09-30 Thread Mario Splivalo
PgSql8.1beta2 gives me this:

ERROR:  EXECUTE of SELECT ... INTO is not implemented yet
CONTEXT:  PL/pgSQL function "_rndbroj" line 5 at execute statement

Shall I forget about this once and for all (and use the FOR construct),
or is there a chance that it will be 'fixed'?

    Mario





Re: [SQL] EXECUTE SELECT INTO... last one!

2005-09-30 Thread Mario Splivalo
On Fri, 2005-09-30 at 02:34 -0600, Michael Fuhr wrote:
> On Fri, Sep 30, 2005 at 10:00:35AM +0200, Mario Splivalo wrote:
> > PgSql8.1beta2 gives me this:
> > 
> > ERROR:  EXECUTE of SELECT ... INTO is not implemented yet
> > CONTEXT:  PL/pgSQL function "_rndbroj" line 5 at execute statement
> 
> Could you post a simple example of what you're trying to do?  The
> following works for me in 8.1beta2 (from CVS):
> 
> CREATE FUNCTION mycount(text) RETURNS bigint AS $$
> DECLARE
> query  text;
> cnt    bigint;
> BEGIN
> query := 'SELECT count(*) FROM ' || quote_ident($1);
> EXECUTE query INTO cnt;
> RETURN cnt;
> END;
> $$ LANGUAGE plpgsql;

Yes, it's my fault. I was testing it on pg7.4. On 8.1beta2 it works as
you explained. Though it would be really nice to be able to do

var := col FROM tempTable WHERE condition

without EXECUTE'ing and stuff.
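On 8.1, Michael's EXECUTE ... INTO form also works against a temporary table, which removes the FOR-loop workaround entirely. A sketch with hypothetical names (it assumes a temp table tmpTbl with an integer column col2 already exists in the session):

```sql
CREATE OR REPLACE FUNCTION get_bid_count(aBid int4) RETURNS bigint AS $$
DECLARE
    cnt bigint;
BEGIN
    -- EXECUTE plans the query at run time, so the temp table is
    -- looked up by name rather than through a stale cached OID.
    EXECUTE 'SELECT count(col1) FROM tmpTbl WHERE col2 = ' || aBid
        INTO cnt;
    RETURN cnt;
END;
$$ LANGUAGE plpgsql;
```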

Sorry for the fuss I might have created.

Mario
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."





[SQL] Returning NULL results?

2005-10-11 Thread Mario Splivalo
_SQL := 'SELECT TmessageId FROM tmpBids WHERE TphoneNumber = ' 
|| quote_literal(phoneNumber) 
|| ' AND Tbid = ' || aBid;
FOR rec IN EXECUTE _SQL LOOP bidCount := rec._qv; END LOOP;

This works OK as long as the SELECT query returns rows. Of course, if it
returns multiple rows the bidCount variable will hold just the last
row's value, but the application is designed so that the query returns
either exactly one row or no rows at all.

Of course, if it returns no rows, I'm presented with an error, saying:

ERROR:  record "rec" has no field "_qv"

This is logical. My question is: is there a way around this without
first doing SELECT COUNT(*) FROM tmpBids WHERE ..., setting bidCount :=
NULL when COUNT(*) is zero, and only running the SELECT from the
beginning when it is not?

Mike





Re: [SQL] Returning NULL results?

2005-10-12 Thread Mario Splivalo
On Tue, 2005-10-11 at 15:39 +0100, Richard Huxton wrote:
> Mario Splivalo wrote:
> > 
> > Of course, if it returns no rows, I'm presented with an error, saying:
> > 
> > ERROR:  record "rec" has no field "_qv"
> > 
> > This is logical. My question is: is there a way around this without
> > first doing SELECT COUNT(*) FROM tmpBids WHERE ..., setting bidCount :=
> > NULL when COUNT(*) is zero, and only running the SELECT from the
> > beginning when it is not?
> 
> Could you check the FOUND variable? As in IF NOT FOUND THEN RETURN NULL
> 

That actually works, but it's a bit of a fuss to use. I need to have it
inside the FOR loop and then exit the loop if FOUND is NOT TRUE, which
makes the code even harder to read/maintain. Do you think that any time
soon we will be able to SELECT from temporary tables without EXECing
those SELECTs? And some qualifier for variables (@ in T-SQL, $ in PHP,
or something similar) would make the code even more readable. Just my
thought.

Btw, on Postgres 7.4 you can do SELECT something FROM ..., and if the
query returns no rows, the FOR loop is skipped. So, it's safe to do
something like:

someVar := NULL;
_SQL := 'SELECT someVar FROM someTable WHERE someCondition...';
FOR rec IN EXECUTE _SQL LOOP someVar := rec.someVar; END LOOP;

Now, if the SELECT returns no rows, the LOOP/END LOOP is skipped, and
someVar was set to NULL before the FOR.
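Wrapped up as a function, that default-to-NULL pattern might look like this (a sketch only; table and column names are taken from the earlier post and the function name is hypothetical):

```sql
CREATE OR REPLACE FUNCTION get_message_id(varchar, int4) RETURNS int4 AS '
DECLARE
    phoneNumber ALIAS FOR $1;
    aBid ALIAS FOR $2;
    _SQL varchar;
    rec record;
    result int4;
BEGIN
    result := NULL;  -- stays NULL if the query returns no rows
    _SQL := ''SELECT TmessageId FROM tmpBids WHERE TphoneNumber = ''
            || quote_literal(phoneNumber) || '' AND Tbid = '' || aBid;
    FOR rec IN EXECUTE _SQL LOOP
        result := rec.TmessageId;  -- loop body is skipped on zero rows
    END LOOP;
    RETURN result;
END
' LANGUAGE plpgsql;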

Mario





Re: [SQL] Consulta!

2005-10-18 Thread Mario Splivalo
On Tue, 2005-10-18 at 14:26 +, Adriana Marcela Aguirre wrote:
> Hello everyone!!...


Hi! :) That's a known Postgres 'problem', one which, I guess, every
pg-newbie encounters, especially if he/she came from another RDBMS. (I
personally came from MSSQL.)

> CREATE OR REPLACE FUNCTION "public"."f_test" () RETURNS SETOF
> "pg_catalog"."record" AS'
> BEGIN
> CREATE TEMPORARY TABLE test( x integer ) ON COMMIT DROP;
>
> INSERT INTO test values (1);
>  
> --RETURN THE RESULTS
> FOR res IN SELECT x  FROM test LOOP
> RETURN NEXT res;
> END LOOP;
> RETURN;
> END;
> 'LANGUAGE 'plpgsql' IMMUTABLE CALLED ON NULL INPUT SECURITY DEFINER;
>  

Postgres parsed your function and compiled it when it was first run.
When doing so it referenced your 'test' temp table via its OID. You do
drop the table at the end of the function, but the prepared query plan
for that function still references the OID the temp table had when it
existed. So, you need to stop Postgres from 'preparing' a query plan for
that table. You do so by EXECUTEing the desired query, in your case:

FOR res IN EXECUTE ''SELECT x FROM test'' LOOP
...


It is more convenient to build the SQL query in a variable, and then
EXECUTE that variable, like so:

_SQL := 'SELECT a, b FROM tmpTable WHERE colText=' ||
quote_literal(someParameter);

FOR res IN EXECUTE _SQL LOOP
...
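Put together, a fixed version of the function above might look like this (a sketch; it uses RETURNS SETOF integer instead of SETOF record to keep the example self-contained, and drops IMMUTABLE since the function creates a table):

```sql
CREATE OR REPLACE FUNCTION f_test() RETURNS SETOF integer AS '
DECLARE
    res record;
BEGIN
    CREATE TEMPORARY TABLE test (x integer) ON COMMIT DROP;
    INSERT INTO test VALUES (1);

    -- EXECUTE forces the SELECT to be planned at run time, so the
    -- temp table is resolved by name, not by a cached OID.
    FOR res IN EXECUTE ''SELECT x FROM test'' LOOP
        RETURN NEXT res.x;
    END LOOP;
    RETURN;
END;
' LANGUAGE plpgsql;
```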


Mike







[SQL] Field Separator not working?

2005-10-19 Thread Mario Splivalo
I'm having trouble setting the field separator. I tried this on pg7.4,
8.0 and 8.1beta, and I always get the same results.

So, from within psql I do:
pulitzer2=# select * from pg_user;
  usename  | usesysid | usecreatedb | usesuper | usecatupd |  passwd  |
valuntil | useconfig
---+--+-+--+---+--+--+---
 mario |1 | t   | t| t |  |
|
 postgres  |  100 | t   | t| t |  |
|
 pulitzer2 |  102 | f   | f| f |  |
infinity |
 101   |  101 | f   | f| f |  |
infinity |
(4 rows)

pulitzer2=# \t
Showing only tuples.
pulitzer2=# \f#
Field separator is "#".
pulitzer2=# select * from pg_user;
 mario |1 | t   | t| t |  |
|
 postgres  |  100 | t   | t| t |  |
|
 pulitzer2 |  102 | f   | f| f |  |
infinity |
 101   |  101 | f   | f| f |  |
infinity |

pulitzer2=#


psql does tell me that I changed the field separator to "#", but it
still uses "|" as the separator. I'm also driving the script from bash,
using psql and setting the field-separator option on the command line,
but I get similar results: the field separator is always "|", so I need
to awk it with -F, but sometimes I have problems because the data in my
tables often contains the "|" character.

I've been looking through the psql source code, but honestly, it's too
big a hassle for me right now :)

Am I doing something wrong?

Mike





Re: [SQL] Field Separator not working?

2005-10-19 Thread Mario Splivalo
On Wed, 2005-10-19 at 11:43 +0100, Richard Huxton wrote:
> Mario Splivalo wrote:
> > pulitzer2=# \t
> > Showing only tuples.
> > pulitzer2=# \f#
> > Field separator is "#".
> > pulitzer2=# select * from pg_user;
> >  mario |1 | t   | t| t |  |
> 
> > psql does tell me that I changed the field separator to "#", but it
> > still uses "|" as separator. I'm parsing the script from bash, using
> > psql, and setting the field separator options on the command line, but I
> > get similair results, the field separator is always "|", so I need to
> > awk-it with -F, but sometimes I have problems because data in my tables
> > often contain "|" charachter.
> 
> You'll want this too:
>\pset format unaligned
> 
> man psql or see the manual for full details

Hopla. It works perfectly, thanks. Sometimes we're too lazy to read
through the manuals :)
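For the record, the full sequence inside psql (or, from bash, something like psql -t -A -F'#') would be:

```
pulitzer2=# \t
Showing only tuples.
pulitzer2=# \pset format unaligned
Output format is unaligned.
pulitzer2=# \f #
Field separator is "#".
pulitzer2=# select usename, usesysid from pg_user;
```

The \f separator only takes effect in unaligned output mode; in the default aligned mode psql keeps drawing the "|"-separated table.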

Mario





[SQL] Postgres 7.4.9 slow!

2005-10-21 Thread Mario Splivalo
Hello! I'm not sure if this belongs to this mailing list, if not, please
tell me to redirect to where it belongs.

I have a query that does COUNT, LEFT JOIN and GROUP BY over two tables.
One has 85000 records, and the other has 100 records. I've been running
the tests on 7.4.3, with SET ENABLE_SEQSCAN TO OFF, and I get 9-11
seconds for that query. The query plan shows that Postgres is using the
indexes on both tables (one index is on the primary key, naturally, and
the other is manually set on the foreign key in the 'child' table). That
is acceptable.

But now I downloaded Postgres 7.4.9, and I'm running the very same
query on the very same database with all the indices and constraints
being the same (also SET ENABLE_SEQSCAN TO OFF), and I get around
90-110 seconds.

Has anyone noticed an extreme slowdown in Postgres 7.4.9?

    Mike





Re: [SQL] Postgres 7.4.9 slow!

2005-10-21 Thread Mario Splivalo
On Fri, 2005-10-21 at 14:01 +0200, Havasvölgyi Ottó wrote:
> Mike,
> 
> Please send the EXPLAIN ANALYZE of the two versions of the query.

There they are, they are both the same:

join_test=# select version();
   version
--
 PostgreSQL 7.4.9 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.2
20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)
(1 row)

join_test=# \timing
Timing is on.
join_test=# set enable_seqscan to off;
SET
Time: 0.715 ms
join_test=# explain select p.phone, count(*) from phones p left join
table_data d on p.phone = d.phone group by p.phone having count(*) > 1
order by count(*) desc;
 QUERY PLAN
-
 Sort  (cost=4993545.35..4993754.61 rows=83704 width=16)
   Sort Key: count(*)
   ->  GroupAggregate  (cost=0.00..4985814.87 rows=83704 width=16)
 Filter: (count(*) > 1)
 ->  Merge Left Join  (cost=0.00..4974843.57 rows=1379136
width=16)
   Merge Cond: (("outer".phone)::text =
("inner".phone)::text)
   ->  Index Scan using pk1 on phones p  (cost=0.00..2876.37
rows=83704 width=16)
   ->  Index Scan using "fki_fkTableData" on table_data d
(cost=0.00..4954515.15 rows=1379135 width=16)
(8 rows)

Time: 169.781 ms
join_test=#


And now the 7.4.8:

join_test=# select version();
 version
--
 PostgreSQL 7.4.8 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5
(Debian 1:3.3.5-8ubuntu2)
(1 row)

join_test=# \timing
Timing is on.
join_test=# set enable_seqscan to off;
SET
Time: 0.500 ms
join_test=# explain select p.phone, count(*) from phones p left join
table_data d on p.phone = d.phone group by p.phone having count(*) > 1
order by count(*) desc;
 QUERY PLAN
-
 Sort  (cost=4993545.35..4993754.61 rows=83704 width=16)
   Sort Key: count(*)
   ->  GroupAggregate  (cost=0.00..4985814.87 rows=83704 width=16)
 Filter: (count(*) > 1)
 ->  Merge Left Join  (cost=0.00..4974843.57 rows=1379136
width=16)
   Merge Cond: (("outer".phone)::text =
("inner".phone)::text)
   ->  Index Scan using pk1 on phones p  (cost=0.00..2876.37
rows=83704 width=16)
   ->  Index Scan using "fki_fkTableData" on table_data d
(cost=0.00..4954515.15 rows=1379135 width=16)
(8 rows)

Time: 31.510 ms
join_test=#

The plans are the same. It's just that when I run the query with
pg7.4.8 it takes 100% of the processor time while running; pg7.4.9
takes 2-10% while running. Disk activity is much more intense with
pg7.4.9.

Mike






Re: [SQL] Postgres 7.4.9 slow!

2005-10-21 Thread Mario Splivalo
On Fri, 2005-10-21 at 10:20 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > I have a query that does COUNT, LEFT JOIN and GROUP BY from two tables.
> > One has 85000 records, and other has 100 records. I've been running
> > the tests on 7.4.3,
> 
> Your later message shows 7.4.8.  Which is it?
> 
> > But, now I downloaded postgres 7.4.9, and i'm running the very same
> > query on the very same database with all the indices and constraints
> > beeing the same (also SET ENABLE_SEQSCAN TO OFF), and I get around
> > 90-110 seconds.
> 
> You sure it's the very same?  The version outputs suggest that these
> might be two different machines; certainly two very different compilers
> were used.  One thing I'd wonder about is whether both databases were
> initialized in the same locale.

Yes, I realized that the new Ubuntu distribution has gcc4 by default.
I'll compile both 7.4.8 and 7.4.9 again on my home PC, and see what
happens then.

My mention of 7.4.3 is a typo. It's 7.4.8 and 7.4.9.

Mike





[SQL] RETURNS SETOF primitive returns results in parentheses

2005-10-26 Thread Mario Splivalo
Consider this function:

CREATE OR REPLACE FUNCTION php_get_subfield_data_repeating(int4,
"varchar")
  RETURNS SETOF "varchar" AS
$BODY$
DECLARE
aRecordID ALIAS FOR $1;
aSubFieldId ALIAS FOR $2;

returnValue record;
subFieldNumber char(3);
subFieldLetter char(1);

BEGIN
subFieldNumber = substr(aSubFieldId, 1, 3);
subFieldLetter = substr(aSubFieldId, 4);

FOR returnValue IN SELECT "subfieldValue"::varchar
FROM "records_sub" 
WHERE "fieldTag" = subFieldNumber AND 
"subfieldTag" = subFieldLetter
AND "recordId" = aRecordId
LOOP
RETURN NEXT returnValue;
END LOOP;

RETURN;
END
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;



Now, when I do this:

biblio3=# select * from php_get_subfield_data_repeating(1,'606a');
 php_get_subfield_data_repeating1
--
 (Anđeli)
 (ofsajd)
(2 rows)

I have return values in parentheses. However, if I create a new type:

CREATE TYPE subfield_data_type AS (subfield_data varchar);

And then drop the function and recreate it like this:

CREATE OR REPLACE FUNCTION php_get_subfield_data_repeating(int4,
"varchar")
  RETURNS SETOF subfield_data_type AS
$BODY$
...

And then when I run the function, the results are ok:

biblio3=# select * from php_get_subfield_data_repeating(1,'606a');
 subfield_data
---
 Anđeli
 ofsajd
(2 rows)


Am I doing something wrong here? Why do I need to create a type with a
single varchar member to get the results without the parentheses?

Mike

P.S. The subFieldValue field in the records_sub table is of type
varchar(4096).








Re: [SQL] RETURNS SETOF primitive returns results in parentheses

2005-10-26 Thread Mario Splivalo
On Wed, 2005-10-26 at 08:54 -0400, Sean Davis wrote:
> > Now, when I do this:
> > 
> > biblio3=# select * from php_get_subfield_data_repeating(1,'606a');
> > php_get_subfield_data_repeating1
> > --
> > (Anđeli)
> > (ofsajd)
> > (2 rows)
> 
> Does:
> 
> select * from php_get_subfield_data_repeating(1,'606a') as s(a)
> 
> do what you want (single column)?
> 

Nope. I still get the results in parentheses. When I change the SETOF
from varchar to my defined type, your query [with as s(a)] gives normal
results, without parentheses. I have clearly solved the problem; I just
need to create a type containing a single varchar member and do
'RETURNS SETOF my_varchar_type' instead of 'RETURNS SETOF varchar'. I'm
just wondering whether this is like that 'by design', or a bug.

I've been reproducing this on postgres versions 7.4.8, 7.4.9, 8.0.3,
8.0.4 and 8.1beta3.

Mike





Re: [SQL] RETURNS SETOF primitive returns results in parentheses

2005-10-26 Thread Mario Splivalo
On Wed, 2005-10-26 at 10:40 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > biblio3=# select * from php_get_subfield_data_repeating(1,'606a');
> >  php_get_subfield_data_repeating1
> > --
> >  (Anđeli)
> >  (ofsajd)
> > (2 rows)
> 
> > I have return values in parentheses.
> 
> You're getting bit by plpgsql's perhaps-excessive willingness to convert
> datatypes.  Your returnValue variable is not a varchar, it is a record
> that happens to contain one varchar field.  When you do "RETURN NEXT
> returnValue", plpgsql has to coerce that record value to varchar, and
> it does that by converting the record value to text ... which produces
> the parenthesized data format specified at 
> http://www.postgresql.org/docs/8.0/static/rowtypes.html#AEN5604

Wo-ha, makes perfect sense. So, I'd get by with declaring the variable
as varchar instead of as a record. Which is what I should have done in
the first place.

Thnx for pointing that out. :)

Mike




Re: [SQL] RETURNS SETOF primitive returns results in parentheses

2005-10-26 Thread Mario Splivalo
On Wed, 2005-10-26 at 15:45 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Wo-ha, makes perfect sense. So, I'd go by with declaring the rec as
> > varchar, instead as of a record. Wich is what I should do in the first
> > place.
> 
> Or just return the correct field out of it.
> 
>   RETURN NEXT returnValue.fieldname;
> 
> I think you may have to do it this way because the FOR loop wants a
> record variable.

Yes, you're right. Funny implementation of the FOR loop :)
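Applying Tom's suggestion, the original function can keep RETURNS SETOF varchar by returning the record's field rather than the record itself (a sketch; the column is aliased so the field name used by RETURN NEXT is explicit):

```sql
CREATE OR REPLACE FUNCTION php_get_subfield_data_repeating(int4, varchar)
    RETURNS SETOF varchar AS
$BODY$
DECLARE
    aRecordId ALIAS FOR $1;
    aSubFieldId ALIAS FOR $2;
    returnValue record;
BEGIN
    FOR returnValue IN
        SELECT "subfieldValue"::varchar AS sv
        FROM "records_sub"
        WHERE "fieldTag" = substr(aSubFieldId, 1, 3)
          AND "subfieldTag" = substr(aSubFieldId, 4)
          AND "recordId" = aRecordId
    LOOP
        -- return the varchar field, not the one-column record,
        -- so plpgsql never converts the row to its "(...)" text form
        RETURN NEXT returnValue.sv;
    END LOOP;
    RETURN;
END
$BODY$ LANGUAGE plpgsql;
```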

Mike




Re: [SQL] why vacuum

2005-10-27 Thread Mario Splivalo
On Wed, 2005-10-26 at 10:19 -0500, Scott Marlowe wrote:

> But, the next time someone says that slony is a toy add on, and MySQL
> has REAL replication, point them to THIS page on the same blog:
> 
> http://ebergen.net/wordpress/?p=70
> 
> In short, it basically shows that MySQL replication is incredibly
> fragile, and not fit for production on any real system.  The lack of
> system wide transaction support, like postgresql has, makes the problem
> he outlines that much worse.
> 
> The hoops people will jump through to use their favorite toys...

I see no point in blatantly putting 'other' products down like that.
Pgsql offers no replication at all; you need to use Slony (which is
also a poor replacement for real replication), or some other commercial
products. What about 2PC? What about linking databases from different
servers?

I've been using MSSQL for over 6 years now; I started with MSSQL7 and
went on with MSSQL2000. The replication it offers is superb! It runs
smoothly, and you have a click-me-click interface with which you can
create publications and deploy them to subscribers with ease. OK, there
are gotchas (and they're MS-style funny), but it's all well documented
and works most of the time as expected. So what?

Btw, I 'ported' merge replication from MSSQL to Postgres. It basically
adds triggers to every table that is 'published' for replication. There
is a separate table to store and calculate the change differences from
several servers (so you can update on any of the servers and the change
will be propagated to the others). I'm missing 2PC badly here; I wrote
some stupid Python 'thingie' which should act as a 2PC serializer, but
that's slow as hell. And the triggers slow down Postgres quite a bit.

So, to end this 'my father has a bigger car than yours' debate: when
will Postgres have the two-phase commit protocol implemented? I presume
that should come after you allow something like SELECT someCol FROM
myServer.myDatabase[Schema].myTable...

Mike





Re: [SQL] why vacuum

2005-10-27 Thread Mario Splivalo
On Wed, 2005-10-26 at 12:09 -0400, Jan Wieck wrote:
> > 
> 
> You must have missed the FAQ and other side notes about replication in 
> the MySQL manual. Essentially MySQL replication is nothing but a query 
> duplicating system, with the added sugar of taking care of now() and 
> some other non-deterministic things, but not all of them.
> 
> Non-deterministic user defined procedures, functions and triggers will 
> simply blow MySQL's sophisticated replication apart.
> 

That is just not true. I haven't tried MySQL 4 and above, but in 3.5.x
replication was working as expected, and the load that replication
imposed was insignificant. The only TRUE problem was that replication
was unidirectional. Slony, and the other 'replication systems'
available for Postgres, have that SAME problem.

Mike





Re: [SQL] why vacuum

2005-10-27 Thread Mario Splivalo
On Thu, 2005-10-27 at 06:21 -0400, Andrew Sullivan wrote:
> On Thu, Oct 27, 2005 at 10:22:41AM +0200, Mario Splivalo wrote:
> > offers no replication at all, you need to use slony (wich is also a poor
> > replacement for a wannabe replication), or some other commercial
> > products. What about 2PC? What about linking the databases from
> 
> Slony is in fact a community-supported system; so I don't know why
> you think that amounts to "no replication at all".  And since this is
> a community-supported system, it'd be nice if you said why it's a
> "poor replacement for wannabe replication".  What's wrong with it?

Postgres itself offers no replication. You could achieve some sort of
replication by restoring parts of the WAL files, but that's rather
inconvenient. Then, if you want to replicate your data in any way, you
need to take Slony, or whatever else is out there, commercial or
open/free source.
I apologize here if I insulted the Slony developers; I meant nothing
like that :) I am sorry. I was just addressing the issue where
replication to some means 'just move my data here', and to others it
means 'merge my data'.

> > Btw, I 'ported' the merge replication from MSSQL to postgres. It
> > basicaly adds triggers to every table that is 'published' for
> > replication. There is a separate table to store and calculate the change
> > differences from several servers (so you could do update on any of the
> > servers and change will be propagated to the others). I'm missing 2PC
> > badly here, I wrote some stupid python 'thingie' wich should act as 2PC
> > serializer, but that's slow as hell. And triggers slow down postgres
> > quite a bit.
> 
> This is interesting.  Care to package it up for others, or write a
> proof-of-concept outline for the lists or General Bits or something
> like that?  This is a different sort of replication people are asking
> for.  Note that you get 2PC in the next Postgres release.

I'll be glad to. I'm just not that familiar (in fact, I'm not familiar
at all) with the 'proof-of-concept' or 'General Bits' terms, so if you
could http-redirect me, I'll be most thankful.

Mentioning 2PC, is it available in pg8.1beta4?

Mike





Re: [SQL] why vacuum

2005-10-28 Thread Mario Splivalo
On Thu, 2005-10-27 at 11:51 -0400, Andrew Sullivan wrote:

> Well, AFAIK Oracle itself offers no replication, either.  If you want
> it, you have to buy a license for it.  Which means it's an add-on. 
> Heck, most Linux distributions' kernels don't offer support for
> network cards: they're an add-on.  We call them modules.  (In case
> it's not clear, I don't buy the "itself/add-on" distinction.  The
> point is that the whole system works together.  PostgreSQL most
> definitely offers replication.  In fact, you can get warm-standby with
> WAL shipping, or read-only capabilities with Slony or some other
> tools.)

I wish I were Dumbo now. I could hide myself with my ears :)

> > I'll be glad to, I'm just not that familiar (in fact, i'm not familiar
> > at all) with the 'proof-of-cocept' or 'General Bits' terms, so if you
> > could http-redirect me, I'll be  most thankfull.
> 
> Proof of concept is just a description of what you did, how it
> worked, design and limitations, &c.  Post it to the lists (uh,
> -general or maybe -hackers, I suppose), or put it on a web page or
> whatever.  General Bits is a fairly regular column that Elein Mustain
> puts out.  I bet she'd include a submission on this topic, although
> you'd have to ask her.  You can find GB at
> <http://varlena.com/varlena/GeneralBits/>.

I'll go through my code (it's been a while since I touched it), write
some documentation, and inform the community. Thanks for the pointers.

Mike





Re: [SQL] PGSQL encryption functions

2005-11-02 Thread Mario Splivalo
On Tue, 2005-11-01 at 17:13 -0500, Tom Lane wrote:
> "Mark R. Dingee" <[EMAIL PROTECTED]> writes:
> > md5 works, but I've been able to 
> > brute-force crack it very quickly,
> 
> Really?  Where's your publication of this remarkable breakthrough?

I'd say you can't brute-force MD5, unless you're extremely lucky.
However, MD5 is easily broken; you just need to know how to construct
the hashes.

One could switch to SHA for 'increased' security.

Although I don't think he'd be having problems using MD5 as he
described it. I'd also like to see his example of brute-force
'cracking' an MD5 digest.

Mike





[SQL] Slow query - SELECTing one row from 'big' table.

2005-11-03 Thread Mario Splivalo
I have a procedure that goes something like this:
-- Function: create_message(varchar, varchar, varchar, int4)
-- DROP FUNCTION create_message("varchar", "varchar", "varchar", int4);

CREATE OR REPLACE FUNCTION create_message("varchar", "varchar",
"varchar", int4)
  RETURNS SETOF messages AS
'
DECLARE
aFrom ALIAS FOR $1;
aTo ALIAS FOR $2;
aRawText ALIAS FOR $3;
aDestinationId ALIAS FOR $4;
rec messages%ROWTYPE;

BEGIN
INSERT INTO messages ("from", "to", "raw_text", "destination_id")
   VALUES (aFrom, aTo, aRawText, aDestinationId);

 FOR rec IN
 SELECT *
 FROM messages
 WHERE (id = currval(''public.message_id_seq''::text))
 LOOP
 RETURN NEXT rec;
 END LOOP;

 RETURN;
END
' LANGUAGE 'plpgsql';

The table messages is like this:
CREATE TABLE messages
(
  id int4 NOT NULL DEFAULT nextval('public.message_id_seq'::text),
  "from" varchar(15) NOT NULL,
  "to" varchar(10) NOT NULL,
  receiving_time timestamptz(0) NOT NULL DEFAULT now(),
  raw_text varchar NOT NULL,
  keyword varchar,
  destination_id int4,
  vpn_id int4,
  service_id int4,
  status int4 NOT NULL DEFAULT 2,
  reply varchar,
  CONSTRAINT pk_messages PRIMARY KEY (id),
  CONSTRAINT fk_messages_destination_id FOREIGN KEY (destination_id)
REFERENCES destinations (id) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_messages_service_id FOREIGN KEY (service_id) REFERENCES
services (id) ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT fk_messages_vpn_id FOREIGN KEY (vpn_id) REFERENCES vpns
(id) ON UPDATE NO ACTION ON DELETE NO ACTION
) 

And extra indices on table messages are like this:

CREATE INDEX idx_messages_receiving_time ON messages USING btree
  (receiving_time);
CREATE INDEX idx_messages_vpn_id ON messages USING btree (vpn_id);
CREATE INDEX idx_service_id ON messages USING btree (service_id);

Now, the table messages has well above 700k rows. When I call the
function, it takes almost 5 seconds to run on a busy server; on my
laptop, with its poor I/O, it takes as long as 30 seconds (no users but
me). But when I run SELECT * FROM messages WHERE id = 12317 (or any
other random integer) directly, the response time is much shorter.

Here is the query plan:
test_db=# explain select * from messages where id = 742001;
  QUERY PLAN
--
 Index Scan using pk_messages on messages  (cost=0.00..6.01 rows=2
width=197) (actual time=0.030..0.034 rows=1 loops=1)
   Index Cond: (id = 742001)
 Total runtime: 0.086 ms
(3 rows)

Now I'm wondering why the SELECT inside the procedure takes that long.
I first tried extracting the return value of currval into a local
variable and then supplying that variable to the WHERE clause, but
nothing changed. It seems the SELECT itself is slow, not the currval.

Am I doing something wrong? Or is that just the way it is?

I have a workaround now: I declare a local variable of type
messages%ROWTYPE, fill it with the parameters received from the caller
of the function, and then do RETURN NEXT; RETURN. But still, why is
that SELECT so slow inside the function?
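A plausible culprit (an assumption, worth verifying with EXPLAIN on a
prepared version of the query): currval() is volatile and returns
bigint, so the plan prepared inside the function may not use the
pk_messages index on the int4 id column. Casting the result back to
int4 gives the planner a plain indexable comparison:

```sql
-- Sketch of the rewritten loop inside the function body (the doubled
-- quotes are needed because the body is a single-quoted string):
FOR rec IN
    SELECT *
    FROM messages
    WHERE id = currval(''public.message_id_seq'')::int4
LOOP
    RETURN NEXT rec;
END LOOP;
```

The cast is the important part; if a local variable is used instead, it
should be declared int4 for the same reason.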

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] RETURNS SETOF table; language 'sql'

2005-11-09 Thread Mario Splivalo
When I issue something like this:

SELECT * FROM ads WHERE id=1004;

i get:

 id  | vpn_id | service_id | ignore_length |   start_time   |
end_time|  ad_text
--+++---+++
 1004 |  1 |106 | f | 2005-01-01 00:00:00+01 |
2005-12-31 00:00:00+01 | Probna reklama numera una!


Now, I create a function that does that:

CREATE FUNCTION get_ads(int4)
RETURNS SETOF ads
AS
'SELECT * FROM ads WHERE id=$1'
LANGUAGE 'sql'

When I do:

SELECT * FROM get_ads(1004);

i get:

ERROR:  query-specified return row and actual function return row do not
match

Why is that?

Mike

P.S. That's run on Postgres 7.4.
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] RETURNS SETOF table; language 'sql'

2005-11-09 Thread Mario Splivalo
On Wed, 2005-11-09 at 13:49 -0500, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > When I do:
> > SELECT * FROM get_ads(1004);
> > i get:
> > ERROR:  query-specified return row and actual function return row do not
> > match
> 
> The example as given works fine for me in 7.4.9.  Maybe you need an
> update, or maybe you're not telling us the whole truth about what you
> did.

Oh, me, big liar :) 

I dunno what else could be wrong... Maybe because the id column in table
ads has default value taken from a sequence?

Here are the statements for creating the table, and INSERTs for the
data. This is the very same data I have on my server; I just removed
the default value for the id column, and the foreign keys to some
other tables:

CREATE TABLE ads
(
  id int4 NOT NULL, 
  vpn_id int4 NOT NULL,
  service_id int4,
  ignore_length bool NOT NULL,
  start_time timestamptz NOT NULL,
  end_time timestamptz,
  ad_text varchar(1024) NOT NULL
) 
WITHOUT OIDS;

INSERT INTO ads VALUES (1004, 1, 106, false, '2005-01-01 00:00:00+01',
'2005-12-31 00:00:00+01', 'Probna reklama numera una!');
INSERT INTO ads VALUES (1005, 1, 106, false, '2005-01-01 00:00:00+01',
'2005-12-31 00:00:00+01', 'Probna reklama numera una!');
INSERT INTO ads VALUES (1006, 1, 106, false, '2005-01-01 00:00:00+01',
'2005-12-31 00:00:00+01', 'Probna reklama numera dua!');
INSERT INTO ads VALUES (1008, 1, 106, false, '2005-01-01 00:00:00+01',
NULL, 'ProbaNull');
INSERT INTO ads VALUES (1007, 1, 106, false, '2005-01-01 00:00:00+01',
'2006-01-01 00:00:00+01', 'ProbaNull');


Now, here is the code for the function:

CREATE FUNCTION get_ad(int4)
RETURNS SETOF ads
AS
'
SELECT * FROM ads WHERE id=$1
'
LANGUAGE 'sql';

I'm doing all of this on Postgres 7.4.9:
pulitzer2=# select version();
   version
--
 PostgreSQL 7.4.9 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.2
20050808 (prerelease) (Ubuntu 4.0.1-4ubuntu9)
(1 row)


Now, I created a fresh database in the same database cluster, and
executed the CREATE for the table, the INSERTs for the data, and the
CREATE for the function. Then I ran the function:

fun_test=# select * from get_ad(1004);
  id  | vpn_id | service_id | ignore_length |   start_time   |
end_time|  ad_text
--+++---+++
 1004 |  1 |106 | f | 2005-01-01 00:00:00+01 |
2005-12-31 00:00:00+01 | Probna reklama numera una!
(1 row)

Works fine. I remind you again, this is on a newly created database.

So, I go back to the database I'm developing on, and I create the
function get_ad_test (I had created get_ad using plpgsql and ended up
adding some more logic):

CREATE FUNCTION get_ad_test(int4)
RETURNS SETOF ads
AS
'
SELECT * FROM ads WHERE id=$1
'
LANGUAGE 'sql';


When I run it, again the same:

pulitzer2=# select * from get_ad_test(1004);
ERROR:  query-specified return row and actual function return row do not
match
pulitzer2=#


I went to some other database I have in that same cluster, recreated
the table, and everything works fine. I haven't tried dropping the ads
table in the pulitzer database (the development database where the
issue appeared).

Maybe I could provide some more detailed log files, or something?


Mike



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [SQL] RETURNS SETOF table; language 'sql'

2005-11-09 Thread Mario Splivalo
On Wed, 2005-11-09 at 17:05 -0500, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Works fine. I remind you again, this is on a newly created database.
> 
> Yup, works fine for me too.
> 
> > When I run it, again the same:pulitzer2=# select * from
> > get_ad_test(1004);
> > ERROR:  query-specified return row and actual function return row do not
> > match
> > pulitzer2=#
> 
> Ah, I bet I know the problem:
> 
> alter table ads add column z int;
> << function still works ... >>
> alter table ads drop column z;
> << function no longer works... >>
> 
> 7.4 isn't too good about coping with dropped columns in rowtypes.
> This problem is fixed in 8.0 and up.

So, I should drop the table, recreate it, and then the function should
work ok?

I'll try ASAP, I'll just jump to a gas station for a beer :)
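For reference, a recreate sketch that should clear the dropped-column
residue Tom describes (an assumption, untested; the real table would
also need its primary key, defaults, indexes, and foreign keys
re-added, which are omitted here):

```sql
-- Rebuild ads without the hidden dropped-column entries (7.4 sketch):
BEGIN;
CREATE TABLE ads_clean AS SELECT * FROM ads;
DROP TABLE ads;
ALTER TABLE ads_clean RENAME TO ads;
-- re-create constraints and indexes here before committing
COMMIT;
```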

Mike


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Create Public Schema

2005-11-20 Thread Mario Splivalo
On Sat, 2005-11-19 at 20:03 -0700, [EMAIL PROTECTED] wrote:
> Hello all,
> 
> I'm trying to create a schema that contains the default tables,
> functions, etc. of the public schema.  Using pgAdmin, when I create a
> schema, it is blank - has no associated aggregates, tables, etc.  How
> would I create additional schems within a database that mirror the
> public schema.
> 

I guess your best bet would be to manually create scripts for all your
database/schema objects. Use pg_dump to create a 'database schema
snapshot', then use pg_restore -l to list its contents and pg_restore
-L with an edited list file to extract just the objects you desire.

Mike


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[SQL] JOIN query not working as expected

2005-12-06 Thread Mario Splivalo
How is this possible?

I have two tables, 'services' and 'messages'. Each message can be
assigned to one service, or it can be unassigned. Therefore the
'service_id' column in table 'messages' is not foreign-keyed to the
'id' column in the services table. services.id is the PK for services,
messages.id is the PK for messages.

Now, here goes:

pulitzer2=# select * from services where id = 1001;
  id  | keyword | type_id | vpn_id | start_time |end_time
| day_boundary | week_boundary | month_boundary | recurrence |
random_message_count
--+-+-++++--+---+++--
 1001 | cocker  |   1 |  1 || 2005-10-20 12:00:00+02
|  |   ||  1 |
(1 row)


Ok, I have a service with id 1001 which is called 'cocker'.

Now, I want all the messages for that service within certain period:

pulitzer2=# select * from messages where service_id = 1001 and
receiving_time between '2005-10-01' and '2005-10-30';
 id | from | to | receiving_time | raw_text | keyword | destination_id |
vpn_id | service_id | status | reply
+--+++--+-+++++---
(0 rows)

Ok, no such messages.


Now I want all services which didn't have any messages within certain
period:
pulitzer2=# select * from services where id not in (select distinct
service_id from messages where receiving_time between '2005-10-01' and
'2005-10-30');
 id | keyword | type_id | vpn_id | start_time | end_time | day_boundary
| week_boundary | month_boundary | recurrence | random_message_count
+-+-+++--+--+---+++--
(0 rows)

Why is that?



I 'discovered' above mentioned when I was transforming this query:

SELECT
services.id AS service_id,
(SELECT 
COUNT(id)
FROM
messages
WHERE
(messages.service_id = services.id)
AND (messages.receiving_time >= '2005-10-01')
AND (messages.receiving_time < '2005-10-30')
) AS "count",
services.keyword
FROM
services
WHERE
(services.vpn_id = 1)
AND
(
(services.start_time IS NULL OR services.start_time <= 
'2005-10-30')
AND
(services.end_time IS NULL OR services.end_time >= '2005-10-01')
)
GROUP BY
services.id,
services.keyword
ORDER BY
services.keyword

[this query shows correctly, for service 'cocker', that '"count"' column
has value 0]


I transformed query to this:

SELECT
services.id AS service_id,
count(messages.id) as "count",
services.keyword
FROM
services
LEFT OUTER JOIN messages
ON services.id = messages.service_id
WHERE
services.vpn_id = 1
AND messages.receiving_time BETWEEN '2005-10-01' AND '2005-10-30'
GROUP BY
services.id,
services.keyword
ORDER BY
services.keyword

This query runs MUCH faster, but it omits the 'cocker' row, as if I
had used an INNER JOIN.

Any clues? I'm stuck here...
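For what it's worth, the standard rewrite here (a sketch, untested
against this schema) is to move the filter on messages into the join
condition; a WHERE clause on the nullable side of a LEFT JOIN discards
the NULL-extended rows and effectively turns it back into an INNER
JOIN:

```sql
SELECT
    services.id AS service_id,
    count(messages.id) AS "count",
    services.keyword
FROM
    services
    LEFT OUTER JOIN messages
        ON services.id = messages.service_id
       AND messages.receiving_time BETWEEN '2005-10-01' AND '2005-10-30'
WHERE
    services.vpn_id = 1
GROUP BY
    services.id,
    services.keyword
ORDER BY
    services.keyword;
```

Services with no matching messages then survive the join with NULL
message columns, and count(messages.id) correctly reports 0 for them.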

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] JOIN query not working as expected

2005-12-06 Thread Mario Splivalo
On Tue, 2005-12-06 at 09:58 -0500, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Now I want all services which didn't have any messages within certain
> > period:
> > pulitzer2=# select * from services where id not in (select distinct
> > service_id from messages where receiving_time between '2005-10-01' and
> > '2005-10-30');
> > (0 rows)
> 
> > Why is that?
> 
> Probably, you've got some NULLs in the messages.service_id column ...
> try excluding those from the sub-select.

Yes, I discovered that later. Still, I'm confused by the 'biggies' at
the bottom of my initial mail.

I neglected to mention that I'm using PostgreSQL 7.4.8. I discovered
that I can create a FK on a column which allows NULL values, and I can
even insert rows with NULLs in the FK column, although the PK table
the FK points to does not allow NULLs. Is that by design, or...?
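For completeness, the sub-select rewritten to exclude the NULLs (NOT IN
against a set containing a NULL can never return true, because
id <> NULL evaluates to unknown):

```sql
SELECT *
FROM services
WHERE id NOT IN (
    SELECT service_id
    FROM messages
    WHERE service_id IS NOT NULL
      AND receiving_time BETWEEN '2005-10-01' AND '2005-10-30'
);
```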

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[SQL] PostgreSQL and uuid/guid

2006-01-02 Thread Mario Splivalo
Hello all.

While browsing the web I've seen that many people 'need' the ability
to create uuid/guid values from within PostgreSQL. Most of them are
switching from MSSQL, and they're missing the newid() function, which
in MSSQL creates a uuid/guid value.

Now I'm in need of such a function too. I'm finally putting together
MSSQL-like merge replication for PostgreSQL, and guid/uuid values
would come in more than handy here.

Let's pretend for a moment that I'm not satisfied with any other
solution (for instance, assigning IDs to each replication 'subscriber'
and then combining that ID with sequence-generated numbers to get
unique IDs within the replication system), and that I really need
guids/uuids.

I've found source code for generating uuids. There is a program called
uuidgen, which creates a uuid; uuidgen is part of the libuuid1 package
from Debian GNU/Linux. The sources are in the e2fsprogs packages,
where I found the source for uuidgen.

Now, it should be possible to create a postgresql function (in python,
for instance) which would call the uuidgen program, but that approach
seems rather slow.

I'd like to have postgresql function written in C that would call
uuid_generate
(http://www.die.net/doc/linux/man/man3/uuid_generate.3.html).
Considering what is said for the uuidgen manpage (concerning random and
pseudorandom generated numbers) I feel that uuids generated this way are
what I need.

What is the general feeling concerning the uuid issue I explained
above? I've never written a C PostgreSQL function, and any help (or
documentation pointers) would be greatly appreciated.

If I posted this to the wrong mailing list, please point me to the
correct one.
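In the meantime, one interim sketch that avoids forking uuidgen per
call (assumptions: plpythonu is installed, and the server runs a Linux
kernel exposing /proc/sys/kernel/random/uuid):

```sql
-- Sketch: read a kernel-generated random uuid per call
CREATE FUNCTION newid() RETURNS text AS '
return open("/proc/sys/kernel/random/uuid").read().strip()
' LANGUAGE plpythonu VOLATILE;

-- usage: SELECT newid();
```

A C function calling uuid_generate() would still be faster, but this
needs no compilation.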

Mario
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[SQL] Regular Expression Matching problem...

2006-01-04 Thread Mario Splivalo
I have a table containing regular expression patterns for matching
phone numbers to mobile operators.

For instance, my phone number is '+385911234567', and the regexp for
this mobile operator is: "^\+38591\d{7}$".

Now, when I do a regexp match in a single select, it behaves as
expected:

octopussy2=# select '+385911234567' ~ '^\\+38591\\d{7}$';
 ?column?
--
 t
(1 row)

Now, as I've mentioned, I have a table with operators and their patterns
for phone numbers:

octopussy2=# select * from operators;
 operator_id | operator_name | operator_phonenumber_pattern
-+---+--
   1 | FreeSMSC  | ^\\+38590\\d{6,7}$
   2 | VipNet| ^\\+38591\\d{7}$
   3 | T-Mobile  | ^\\+3859[9|8]\\d{6,7}$
   4 | Tele2 | ^\\+38595\\d{7}$
(4 rows)


Now, if I construct my query like this:

octopussy2=# select '+385911234567', operator_phonenumber_pattern,
'+385911234567' ~ operator_phonenumber_pattern from operators;

   ?column?| operator_phonenumber_pattern | ?column?
---+--+--
 +385911234567 | ^\\+38590\\d{6,7}$   | f
 +385911234567 | ^\\+38591\\d{7}$ | f
 +385911234567 | ^\\+3859[9|8]\\d{6,7}$   | f
 +385911234567 | ^\\+38595\\d{7}$ | f
(4 rows)


Why do I get all "f"s? I thought operator_id 2 should return "t",
especially since the regexp match seemed fine in the first query.

Or can't I do regexp matching against table columns?

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Regular Expression Matching problem...

2006-01-08 Thread Mario Splivalo
On Wed, 2006-01-04 at 10:00 -0700, Michael Fuhr wrote:

> What Andreas is saying is that the patterns in the table have too
> many backslashes.  The original query was
> 
> test=> select '+385911234567' ~ '^\\+38591\\d{7}$';
>  ?column? 
> --
>  t
> (1 row)
> 
> but if you select just the pattern you get
> 
> test=> select '^\\+38591\\d{7}$';
> ?column?
> 
>  ^\+38591\d{7}$
> (1 row)

I was a bit silly here. Dunno why I thought I needed to escape the
backslashes when entering regexps as data in the table. Thank you for
pointing that out.
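If the over-escaped patterns are already stored, a one-time cleanup
like this should fix them (a sketch, assuming the pre-8.1 string
literal escaping shown above, where '\\\\' in a literal denotes two
actual backslashes and '\\' denotes one):

```sql
-- Collapse doubled backslashes stored in the pattern column:
UPDATE operators
SET operator_phonenumber_pattern =
    replace(operator_phonenumber_pattern, '\\\\', '\\');
```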

Mike


---(end of broadcast)---
TIP 6: explain analyze is your friend


[SQL] Regexp group matching - extracting second group element?

2006-01-10 Thread Mario Splivalo
When doing grouped regexp matching, I can extract the first group
using the SUBSTRING function, like this:

octopussy2=# select substring('wizzgame auction 24' from '\\W*(\\w*)[\
\s]+(.*)');
 substring
---
 wizzgame
(1 row)

So I got 'wizzgame' as the first group. Is there a way to extract
'auction 24' as the second group?
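One workaround (a sketch): make the first group non-capturing with
(?:...), which PostgreSQL's advanced regexes support, so substring()
returns the only remaining capturing group:

```sql
SELECT substring('wizzgame auction 24' from '\\W*(?:\\w*)[\\s]+(.*)');
-- expected result: auction 24
```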

Mario
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] Changing the transaction isolation level within the stored procedure?

2006-01-25 Thread Mario Splivalo
Is it possible to change the transaction level within the procedure?

I'm using Postgres 8.1.2 with the default isolation level, but I would
like one stored procedure to execute at the serializable isolation
level. I have created my stored procedure like this:

CREATE OR REPLACE FUNCTION set_message_status("varchar", int4)
  RETURNS void AS
$BODY$
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE messages SET message_status = $2 WHERE message_id = $1 AND
message_status < $2;
INSERT INTO _update_log VALUES (now()::timestamp, $1, $2);$BODY$
LANGUAGE 'sql' VOLATILE SECURITY DEFINER;


But I'm getting error:

octopussy2=# select * from
set_message_status('b4c15204-123f-4cba-ad09-d423630c999d', 90);
ERROR:  SET TRANSACTION ISOLATION LEVEL must be called before any query
CONTEXT:  SQL function "set_message_status" statement 1


I get the same error if I write my stored procedure in plpgsql language.

What I want to achieve goes like this:

I have client code (java/jdbc) that inserts some messages into my
database and then processes them.

Basically, it goes like this:

One thread (thread A) does this:

1. java got the message via http (whatever)
2. java does: begin;
3. java does: select * from create_message(...)
4. java does some checking
5. java does: select * from set_message_status(...)
6. java does some more checing
7. java does commit; (under rare circumstances java does rollback).


Another thread (thread B) does this:

1. java got the update_status_request via http (whatever)
2. java does: begin;
3. java does: select * from set_message_status(...)
4. java does: commit;

As I've said, I'm using 'read committed', the default isolation level.

Now, sometimes steps 4 or 6 take more time, and thread B's steps are
executed before the steps in thread A have finished. So I would like
the UPDATE in set_message_status to 'hold' until the transaction that
previously called set_message_status has committed or rolled back.

Is there a way to do so within postgres, or do I need to do 'SET
TRANSACTION ISOLATION LEVEL SERIALIZABLE' before step 2 in both
thread A and thread B?

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] Changing the transaction isolation level within the

2006-01-25 Thread Mario Splivalo
On Wed, 2006-01-25 at 11:46 -0500, Jaime Casanova wrote:

> you need to set the transaction level after the begin and before every
> other statement... after the begin you have a select that invoke your
> function so that set is not the first statement...

But I can't do that inside a function, right?

Mario


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Changing the transaction isolation level within the

2006-01-25 Thread Mario Splivalo
On Wed, 2006-01-25 at 15:54 +0100, Markus Schaber wrote:
> Hi, Mario,
> 
> Mario Splivalo wrote:
> > Is it possible to change the transaction level within the procedure?
> 
> No, currently not, the PostgreSQL "stored procedures" really are "stored
> functions" that are called inside a query, and thus cannot contain inner
> transactions.

Is the above true for the newly introduced stored procedures? (Above,
when mentioning 'stored procedures' I was actually referring to
'functions'.)

> > I'm using Postgres 8.1.2 default isolation level. But, I would like one
> > stored procedure to execute as in serializable isolation level. I have
> > created my stored procedure like this:
> [...]
> > One thread (thread A) does this:
> > 
> > 1. java got the message via http (whatever)
> > 2. java does: begin;
> > 3. java does: select * from create_message(...)
> > 4. java does some checking
> > 5. java does: select * from set_message_status(...)
> > 6. java does some more checing
> > 7. java does commit; (under rare circumstances java does rollback).
> 
> So you even want to change the transaction serialization level within a
> running transaction? I'm sorry, this will not work, and I cannot think
> of a sane way to make it work.

I have some ideas, I just needed confirmation it can't be done this way.
Thank you! :)

> It is locically not possible to raise the isolation level when the
> transaction was started with a lower level and thus may already have
> irreversibly violated the constraits that the higher level wants to
> guarantee.

Yes, a thread will need to start a transaction; I'm just afraid that
create_message could lead me into deadlocks.

Thank you for your responses.

Mario


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] Changing the transaction isolation level within

2006-01-27 Thread Mario Splivalo
On Thu, 2006-01-26 at 10:42 +0100, Markus Schaber wrote:

> >>>Is it possible to change the transaction level within the procedure?
> >>No, currently not, the PostgreSQL "stored procedures" really are "stored
> >>functions" that are called inside a query, and thus cannot contain inner
> >>transactions.
> > Is above true for the newly introduced stored procedures? (Above, when
> > mentioning 'stored procedures' I was actualy reffering to 'functions').
> 
> I have to admit that I don't know what "newly introduced stored
> procedures" you're talking about? Is this an 8.2 feature?

And I have to cry 'Why am I not Dumbo, so I could cover myself with my
ears'. I was referring to the IN/OUT parameter ability for functions,
and confused that with 'stored procedures' in other RDBMSes.


> PostgreSQL (and e. G. Oracle) use a MVCC system for transactions, that
> doesn't need exclusive locks.

This has nothing to do with the thread, but even with MVCC you
sometimes need locking. One case is tracking stock by the FIFO method:
there, you need to serialize the transactions which take items from
stock, so they're ordered.
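The FIFO case can be sketched like this (hypothetical table and column
names):

```sql
BEGIN;
-- lock the oldest available batch; a concurrent taker blocks here
-- until this transaction ends, keeping takes strictly ordered
SELECT id, qty
FROM stock
WHERE item_id = 42 AND qty > 0
ORDER BY received_at
LIMIT 1
FOR UPDATE;
-- ...decrement qty for the locked row, then:
COMMIT;
```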

> I suggest you to read "Chapter 12. Concurrency Control" from the
> PostgreSLQ docs.
> 
> Its easy: if you need "read committed" guarantees, then run the entire
> transaction as "read committed". If you need "serializable", then run
> the entire transaction as "serializable". If you need real serialization
> and synchronization of external programs, use LOCK (or take a deep
> breath, redesign your application and use e. G. LISTEN/NOTIFY. Most
> times, the usage of LOCK is a good indicator of misdesign.)

But I need to control those from the client, right? For each action
I'd like the database to perform, I created functions as an interface
to the application. I'd like the database to take care of transactions
and everything. So there is one 'external' function that the client
application calls, and a dozen 'internal' functions that should never
be called by the application (those reside in a schema not visible to
the client application role, so the app developers can't easily get to
them).

> 
> I just re-read your original posting. You want to make thread B wait
> until thread A has committed. This will not be possible with the ACID
> levels. Even when using "serializable" for both threads. If thread B
> issues SELECT after thread A committed, then all works fine. If thread B
> issues SELECT before thread A commits, it sees the database in the state
> it was before thread A started its transaction (so even create_message
> has not been called). It cannot know whether thread A will COMMIT or
> ROLLBACK.

Well, what happens now is that thread A calls set_message_status and
sets the status for the message to, let's say, 20. Then the thread
continues to work and, if everything goes well, does COMMIT at the
end. But sometimes, if the calculations after set_message_status take
longer (actually, if the SMS gateway goes berserk, the reconnection
timeout causes thread A to execute a bit longer), thread B starts
doing something for that message, and it needs to set the message
status to 90. So it calls set_message_status and commits. But when
thread A commits, the status is back to 20. So I thought I'd take a
row lock while doing the UPDATE on table messages (that's what
set_message_status does), so that thread B has to wait until thread A
commits or rolls back. If it commits, thread B would do the update; if
it rolls back, thread B would see '0 rows updated' (because if thread
A issues ROLLBACK at the end, the create_message is also rolled back,
so there is nothing to update for thread B). I thought serialization
would help here.

Part of my problem comes from working with MSSQL for too long :) There
you can have nested transactions, and you can have different isolation
levels for them.

But apparently there was an error in the process design. Thread B is
fired up from within the calculations in thread A, yet thread B should
be started only if thread A commits.

> 
> Transaction isolation is about consistency guarantees, not for true
> serialization. The reason for this is that databases with high load will
> need to allow paralellism.
> 
> So for your case, threas A should issue "NOTIFY" before COMMIT, and then
> thread B should use LISTEN and then wait for the notification before
> beginning its transaction. Be shure to read the paragraph about how
> "NOTIFY interacts with SQL transactions" in the NOTIFY documentation.

I've been playing with that, but performance drops significantly with
those notifications. And, it would be a hack anyway.

Re: [SQL] How to find a temporary table

2006-01-27 Thread Mario Splivalo
On Fri, 2006-01-27 at 04:48 -0800, Emil Rachovsky wrote:
> Hi,
> I am using PostgreSQL 8.1.0 . How can I find a temp
> table from my session, having the name of the table?
> Can anyone show me what query should I execute? I've
> tried some things but I receive mixed results of
> tables from different sessions, which is strange.

What do you mean by 'temp table from your session'?

How did you create the temp table in the first place?

Mario


---(end of broadcast)---
TIP 6: explain analyze is your friend


[SQL] CREATE VIEW form stored in database?

2006-01-31 Thread Mario Splivalo
When I create a view, I like to define it like this (just representing
the form here):

CREATE VIEW vw_my_view
AS
SELECT
t1.col1,
t2.col2
FROM
t1
JOIN t2
ON t1.col1 = t2.col3
WHERE
t2.col4 = 'bla'


But, when I extracit it from postgres, it's somehow stored like this:

CREATE VIEW vw_my_view
AS
SELECT  t1.col1, t2.col2
FROMt1
JOINt2 ON t1.col1 = t2.col3
WHERE   t2.col4 = 'bla'

The latter is much harder to read, and when I need to change the view,
I get a rash and stuff :)

Is there a way to tell postgres NOT to format the 'source code' of my
views?

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] CREATE VIEW form stored in database?

2006-01-31 Thread Mario Splivalo
On Tue, 2006-01-31 at 12:30 +, Richard Huxton wrote:
> Mario Splivalo wrote:
> > When I create a view, I like to define it like this (just representing
> > the form here):
> [snip]
> > But, when I extracit it from postgres, it's somehow stored like this:
> [snip]
> > The later is much more hard to read, and when I need to change the view,
> > i get rash and stuff :)
> > 
> > Is there a way to tell postgres NOT to format the 'source code' of my
> > views?
> 
> I don't think it stores the "source code", but rather the structure of 
> the underlying query. So I'm afraid you lose the spacing.
> 
> I keep all my definitions in a set of files and read in updates with \i 
> my_filename.sql from psql. That lets me keep all my spaces and comments.
> 

Yes, I'm tied to pgadmin3 for the moment, so there's nothing I can do.
It's a pain to develop a database that way.

Mike
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[SQL] ORDER BY CASE ...

2006-02-13 Thread Mario Splivalo
Am I misusing ORDER BY with CASE, or what? :)

I have a table, messages, half dozen of columns, exposing here just
three of them:

pulitzer2=# select id, "from", receiving_time from messages where
service_id = 20 order by case when 5=5 then 2 else 3 end desc limit 5;
   id   | from  | receiving_time
+---+
 869585 | +385989095824 | 2005-12-08 16:04:23+01
 816579 | +385915912312 | 2005-11-23 17:51:06+01
 816595 | +38598539263  | 2005-11-23 17:58:21+01
 816594 | +385915929232 | 2005-11-23 17:57:30+01
 816589 | +385912538567 | 2005-11-23 17:54:32+01
(5 rows)


pulitzer2=# select id, "from", receiving_time from messages where
service_id = 20 order by case when 5=6 then 2 else 3 end desc limit 5;
   id   | from  | receiving_time
+---+
 869585 | +385989095824 | 2005-12-08 16:04:23+01
 816579 | +385915912312 | 2005-11-23 17:51:06+01
 816595 | +38598539263  | 2005-11-23 17:58:21+01
 816594 | +385915929232 | 2005-11-23 17:57:30+01
 816589 | +385912538567 | 2005-11-23 17:54:32+01
(5 rows)


I thought I'd get differently sorted data, since in the first query I
said 5=5, and in the second I said 5=6.

Is this the right way to use CASE in ORDER BY? I need to sort the data
in the function depending on a function parameter: if it's true,
randomize the sort; if not, sort by receiving_time, newest first.

So, can I do it with ORDER BY CASE ... END, or do I need two queries,
first checking the value of the parameter and then, according to that
value, running the SELECT which sorts randomly or by receiving_time?

Mario

P.S. The postgres is 8.1.2. 





Re: [SQL] ORDER BY CASE ...

2006-02-13 Thread Mario Splivalo
On Mon, 2006-02-13 at 16:39 +0100, Mathieu Arnold wrote:

> | 
> | 
> | I tought I'd get differently sorted data, since in the first query I
> | said 5=5, and in second I said 5=6. 
> 
> Well, no, in the first, the result of the CASE is 2, and in the second 3, it
> means that for every line, it'll sort using "2" as value for the first, and
> "3" for the second query.
> 

Yes, I realized just a second ago that inside a CASE in ORDER BY,
numbers aren't column positions but plain integer values.

It works like this:

ORDER BY (CASE WHEN 5=5 THEN "from"::varchar ELSE
receiving_time::varchar END) DESC

Is there a way to have DESC/ASC inside of a CASE?

Mario





Re: [SQL] ORDER BY CASE ...

2006-02-13 Thread Mario Splivalo
On Mon, 2006-02-13 at 17:10 +0100, Mathieu Arnold wrote:
> | It works like this:
> | 
> | ORDER BY (CASE WHEN 5=5 THEN "from"::varchar ELSE
> | receiving_time::varchar) DESC.
> | 
> | Is there a way to have DESC/ASC inside of a CASE?
> | 
> | Mario
> 
> No, you don't understand, you should do something like :
> 
> case when foo = bar then from else receiving_time desc end
> 

Can't do that, because receiving_time is timestamptz and "from" is
varchar:

pulitzer2=# select id, "from", receiving_time from messages order by
case when 2=3 then "from" else receiving_time end desc limit 5;
ERROR:  CASE types timestamp with time zone and character varying cannot
be matched

I need to explicitly cast receiving_time into varchar.

What I would like is to include ASC/DESC inside the CASE, but I guess
that's not possible.
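[Editor's note: since PostgreSQL won't take ASC/DESC inside a CASE, a common workaround is two CASE expressions in the ORDER BY list, each carrying its own direction. A sketch using the columns from this thread:]

```sql
-- Sketch: emulate a conditional sort direction with two CASE
-- expressions. When the condition is true the first key sorts by
-- "from" ascending; otherwise the first key is a constant NULL (a tie
-- for every row) and the second key sorts by receiving_time
-- descending. No cast to a common type is needed, because each CASE
-- has a single result type.
SELECT id, "from", receiving_time
FROM messages
ORDER BY
    CASE WHEN 5 = 5 THEN "from" END ASC,
    CASE WHEN 5 = 5 THEN NULL ELSE receiving_time END DESC
LIMIT 5;
```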

Mike




[SQL] 'locking' the SELECTs based on indices...

2006-02-22 Thread Mario Splivalo
I have a table, like this:

CREATE TABLE bla (id int4, code_id int4, code_value varchar, CONSTRAINT
bla_pk PRIMARY KEY (id))

And, i have index on the table:

CREATE INDEX bla_idx1 ON bla(code_id)

Now, when I do this from one connection (psql shell, for instance):

[A]BEGIN TRANSACTION;
[A]SELECT * FROM bla WHERE code_id = 1 FOR UPDATE;

and then, from another psql i do:
[B]SELECT * FROM bla WHERE code_id = 1 FOR UPDATE

the second SELECT will wait until I roll back or commit the first
transaction. That is cool. 

But, if I do second SELECT like this:

[C]SELECT * FROM bla WHERE code_id = 2 FOR UPDATE

I will get the rows.

If I erase the index bla_idx1, then the [C] select will wait, same as
the [B] select. Is there a way to have this behaviour with the index
on column code_id in place?

Mario





Re: [SQL] 'locking' the SELECTs based on indices...

2006-02-23 Thread Mario Splivalo
On Wed, 2006-02-22 at 13:58 -0500, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Now, when I do this from one connection (psql shell, for instance):
> 
> > [A]BEGIN TRANSACTION;
> > [A]SELECT * FROM bla WHERE code_id = 1 FOR UPDATE;
> 
> > and then, from another psql i do:
> > [B]SELECT * FROM bla WHERE code_id = 1 FOR UPDATE
> 
> > the second SELECT will wait untill I rollback or commit first
> > transaction. That is cool. 
> 
> > But, if I do second SELECT like this:
> 
> > [C]SELECT * FROM bla WHERE code_id = 2 FOR UPDATE
> 
> > I will get the rows.
> 
> Well, of course.  Why would you want something different?  Why do you
> think the table's indexes should have anything to do with it?
> 
> If you want a full-table lock then some form of LOCK TABLE seems like
> the proper answer.  SELECT FOR UPDATE is designed to lock the specified
> rows, no more.

Hm. But, is there a way to have just a part of the table locked, for
instance:

BEGIN;
SELECT * FROM bla WHERE code_id = 1 AND code_value = 'abla' FOR UPDATE

and then, in second connection:

BEGIN;
SELECT * FROM bla WHERE code_id = 1 AND code_value = 'eble' FOR UPDATE

I'd like to have second SELECT to hold. Now I'm doing it like this:

First connection:
BEGIN;
SELECT * FROM bla WHERE code_id = 1 FOR UPDATE -- this is just for lock
SELECT * FROM bla WHERE code_id = 1 AND code_value = 'abla' FOR UPDATE
...

Second connection:
BEGIN;
SELECT * FROM bla WHERE code_id = 1 FOR UPDATE -- here it'll wait
SELECT * FROM bla WHERE code_id = 1 AND code_value = 'eble' FOR UPDATE

Since I'm actually doing this inside a function, I'll use PERFORM for
the first select. The problem is that SELECT COUNT(*) FROM bla WHERE
code_id = 1 will return some 10M rows on live data (my test data has
some 100k rows, and I don't see any performance impact - yet).

Is this the right way to go? 

> > If I erase the index bla_idx1, then [C] select will wait, same as [B]
> > select will wait.
> 
> I don't think so.  If it does, it's a bug; please provide a reproducible
> test case.
> 

It's not a bug. My high fever must have something to do with it. I just
tried it, removed the index, and [C] isn't waiting.

Mike




Re: [SQL] 'locking' the SELECTs based on indices...

2006-02-23 Thread Mario Splivalo
On Thu, 2006-02-23 at 13:56 +0200, Achilleus Mantzios wrote:

> The intersection of  rows that satisfy BOTH 
> "code_id = 1 AND code_value = 'abla'"
> and
> "code_id = 1 AND code_value = 'eble'"
> is ZERO!!!
> 
> Why would you want irrelevant rows to wait for one another??

It was a bit silly representation of what I actually want :) I'll
explain it later, down there.

> > First connection:
> > BEGIN;
> > SELECT * FROM bla WHERE code_id = 1 FOR UPDATE -- this is just for lock
> > SELECT * FROM bla WHERE code_id = 1 AND code_value = 'abla' FOR UPDATE
> > ...
> > 
> > Second connection:
> > BEGIN;
> > SELECT * FROM bla WHERE code_id = 1 FOR UPDATE -- here it'll wait
> > SELECT * FROM bla WHERE code_id = 1 AND code_value = 'eble' FOR UPDATE
> > 
> > Since I'm actually doing this inside of a function, i'll use PERFORM for
> > the first select. The problem is that SELECT COUNT(*) FROM bla WHERE
> > code_id = 1 will return some 10M rows (on live data, my test data has
> > some 100k rows, and I don't see any performance impact - yet).
> > 
> > Is this a right way to go? 
> >
> 
> I'll repeat the first question...
> 
> Lets say you have 100 rows with id=1, and from them 10 have 
> code_value = 'eble' and 20 have code_value = 'able',
> 
> so you got 10 rows with id=1 and code_value = 'eble'
> and 20 rows with id=1 and code_value = 'able'
> 
> So in the first case you deal with 10 rows, in the second with 20 rows.
> Why take into account the rest 90 and 80 rows respectively???
> 
> If for some reason you want INDEED the 
> "SELECT * FROM bla WHERE code_id = 1 FOR UPDATE" effect,
> and the cardinalities are as you describe (more or less),
> then go with the LOCK TABLE solution as Tom said.

I have an actual table, called code_values, where I store codes
for the various prize-winning games. The table looks like this:

CREATE TABLE code_values (
code_id int4,
service_id int4,
code_value int4
)

code_id is, of course, PK for the table.

Now, when a 'player' sends an SMS or makes a phone call, I need to
check whether that code is valid and whether it has already been
played. If it's invalid, or if it has been played, the user is
discarded. Since the number of prize-winning games (or services) is
around 50, and each service has around 1M codes, I didn't want to lock
the entire table, making ALL the services wait, but just the portion of
the table that contains data for the desired service. So, when I check
for the code:

SELECT code_value FROM code_values WHERE service_id = 1 AND code_value =
'KFH1A' FOR UPDATE

I lock just that particular row, which is no good. I need to have all
the codes for service 1 locked, so that if two users send the very
same code, one of them fails. Therefore, from within plpgsql I
first do:

PERFORM * FROM code_values WHERE service_id = 1 FOR UPDATE

and then, later in code, i check for the actuall code.

Now, if some other user wants his prize, when checking his code: if he
sends a code for some service other than service 1, that's OK. If he
sends a code for service 1, the PERFORM will wait until I'm finished
with the previous user.

I could go with LOCK TABLE, but that seems too expensive. Or not?
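[Editor's note: one middle ground between locking millions of code rows and LOCK TABLE is to lock a single row in a small per-service parent table. A sketch, assuming such a services table exists - it is not part of the schema above:]

```sql
-- Hypothetical parent table: one row per prize-winning game.
CREATE TABLE services (service_id int4 PRIMARY KEY);

-- Inside the plpgsql function: serialize players of the same service
-- by locking that service's single parent row instead of every code
-- row. A second caller for service 1 blocks here; other services
-- proceed unhindered.
PERFORM * FROM services WHERE service_id = 1 FOR UPDATE;

-- Then check the actual code under that per-service lock.
SELECT code_value
FROM code_values
WHERE service_id = 1 AND code_value = 'KFH1A';
```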

Mario

> 
> P.S. 
> Ti si Hrvatski???

:) Yes! :) Ja sam Hrvat, govorim hrvatski (I am a Croat, I speak Croatian) :)

> Dobro Jutro druga!!!

Dobro jutro i tebi (good morning to you too)! :)




Re: [SQL] 'locking' the SELECTs based on indices...

2006-02-23 Thread Mario Splivalo
On Thu, 2006-02-23 at 17:35 +0100, PFC wrote:
> 
> > Now, If some other user want's his prize, when checking his code, if he
> > sends code for some other service then service 1, that's ok. If he's
> > sends code for the service 1 the PERFORM will wait untill I'm finished
> > with previous user.
> 
>   Sorry for the double post.
> 
>   If the rows in your table represent associations between codes and  
> services that are one-use only, you could simply use UPDATE or DELETE, to  
> mark the row in question as having been "consumed".
>   Then, you check how many rows were deleted or updated. If it's 1, good. 
>  
> If it's 0, the code has been used already.
> 
>   If the code itself is one-use only, you should have a codes table and a 
>  
> codes_to_services table, with an ON DELETE CASCADE so that, when you use a  
> code, you delete it from the codes table and it's "consumed" for all  
> services.

Thank you for the advice, I'll consider it too.

The original idea was to use UPDATE to mark codes which are used, but
the table with codes will have 10M rows in the beginning, for just
one game. Later we'll have more games, with something like 100M rows in
the table. UPDATEing such a table when the server is under normal load
(the server also hosts some other games) takes between 15 and 40
seconds. That is why I tried to eliminate UPDATEs and go with two
tables, INSERTing used codes into the second table.
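[Editor's note: in the two-table variant, the 'already played' check can ride on the spare table's primary key instead of an UPDATE. A sketch - the used_codes table is an assumption, not part of the original schema:]

```sql
-- Sketch of the two-table variant: the big codes table stays
-- untouched; playing a code inserts it into used_codes, whose primary
-- key rejects a second play of the same code.
CREATE TABLE used_codes (
    service_id int4,
    code_value varchar,
    PRIMARY KEY (service_id, code_value)
);

-- Inside the plpgsql function: the unique_violation tells us the code
-- was already played, without UPDATEing millions of rows.
BEGIN
    INSERT INTO used_codes (service_id, code_value)
    VALUES (1, 'KFH1A');
EXCEPTION WHEN unique_violation THEN
    RAISE NOTICE 'code already played';
END;
```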

Mike




[SQL] SELECTing into usertype, how to do it?

2006-04-25 Thread Mario Splivalo
I have a type, declared like this:

CREATE TYPE type_a AS (
member_a varchar,
member_b bool
);

There is also a table:

CREATE TABLE table_a (
col_a varchar,
col_b bool,
col_c int4
);

Now, in a function, I declared type_var variable of type type_a:

DECLARE type_var type_a;

And then, when I want to fill in the type_var, i do this:

type_var.member_a := col_a FROM table_a WHERE col_c = 5;
type_var.member_b := col_b FROM table_a WHERE col_c = 5;

Is there a way to fill in the type_var from just one statement?
Here I have two 'selects' on table_a, and that seems a bit expensive
when iterated many times.

I guess I could do:
SELECT col_a, col_b INTO type_var.member_a, type_var.member_b FROM
table_A WHERE col_c = 5;

but that is a bit hard to read :)

Are there more options on doing what I'd like to do?

    Mario





Re: [SQL] SELECTing into usertype, how to do it?

2006-04-25 Thread Mario Splivalo
On Tue, 2006-04-25 at 14:21 +0200, A. Kretschmer wrote:
> am  25.04.2006, um 14:03:07 +0200 mailte Mario Splivalo folgendes:
> > 
> > And then, when I want to fill in the type_var, i do this:
> > 
> > type_var.member_a := col_a FROM table_a WHERE col_c = 5;
> > type_var.member_b := col_b FROM table_a WHERE col_c = 5;
> > 
> > Is there a way to fill in the type_var, but from just one statement?
> > Here I have two 'selects' on table_a, and that seems as bit expensive
> > when iterated a lot of times.
> > 
> > I guess I could do:
> > SELECT col_a, col_b INTO type_var.member_a, type_var.member_b FROM
> > table_A WHERE col_c = 5;
> > 
> > but that is a bit hard to read :)
> > 
> > Are there more options on doing what I'd like to do?
> 
> select into type_var col_a, col_b from table_a ... ;
> 

Thnx... still, that SELECT INTO looks pretty annoying... it would be neat
to type just: 
type_var := col_a, col_b FROM ...

I guess the code would look much more readable...

Mario





[SQL] INSERTing values from usertypes how-to?

2006-04-27 Thread Mario Splivalo
Coming back after SELECTing into usertypes: is there a nice way to
insert data into a table from a usertype?

For instance, I have table like this:

CREATE TABLE tblA (
id int4,
key varchar,
value varchar
)

Then, in my function, I do this:

CREATE FUNCTION testInsert()
RETURNS void
AS
$BODY$
DECLARE
insert_tblA tblA;
BEGIN
insert_tblA.id := get_next_id_or_something();
insert_tblA.key := get_user_key_or_something();
insert_tblA.value := get_some_value();

INSERT INTO tblA (
id,
key,
value)
VALUES (
insert_tblA.id,
insert_tblA.key,
insert_tblA.value
);

END
$BODY$ LANGUAGE 'plpgsql';


Now, in this particular example it seems silly to first populate the
usertype and then insert the data from it into the tblA table, but in my
actual function that makes sense (the function is quite large for
posting here). Since I sometimes have 30 columns in a table, I
tried to find a way to simply 'pour' the usertype into the table. I
tried something like:

INSERT INTO tblA (insert_tblA);

but postgres complains (and I guess I expected that) with an error saying
that tblA.id is of type int4 and I'm trying to insert a value of type
tblA. So, is there a way to map all those type members so that
one could easily INSERT them into the table?
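[Editor's note: depending on the server version, a composite variable can be expanded in one statement instead of listing every column. A hedged sketch - the expansion syntax may not be available on older releases:]

```sql
-- Sketch: insert every field of the composite variable at once.
-- This relies on insert_tblA's fields lining up with tblA's columns;
-- on releases without composite expansion, fall back to listing the
-- columns explicitly as in the function above.
INSERT INTO tblA SELECT (insert_tblA).*;
```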

Mario




[SQL] Counting the rows INSERTed/UPDATEd?

2006-05-01 Thread Mario Splivalo
I have found, I think, in the pg manual the way to get the number of
rows inserted/updated from within plpgsql. I can't find it anymore;
is it still there, or did I misread something earlier?

Mario





Re: [SQL] Counting the rows INSERTed/UPDATEd?

2006-05-02 Thread Mario Splivalo
On Tue, 2006-05-02 at 09:13 +0100, Richard Huxton wrote:
> Mario Splivalo wrote:
> > I have found, I thinl, in the pg manual, the way to get the number of
> > rows inserted/updated, from within the plpgsql. I can't find it anymore,
> > is that still there, or I misread something earlier?
> 
> http://www.postgresql.org/docs/8.1/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-DIAGNOSTICS
> 

Yes, that is it. Thank y'all; I guess I was a bit tired and a bit lazy.
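[Editor's note: for the archives, the mechanism on that documentation page is GET DIAGNOSTICS. A minimal sketch; the table and column names here are made up:]

```sql
-- Sketch: ROW_COUNT holds the number of rows affected by the most
-- recent SQL statement. The table some_table is hypothetical.
CREATE OR REPLACE FUNCTION touch_rows() RETURNS integer AS $$
DECLARE
    n integer;
BEGIN
    UPDATE some_table SET touched = true WHERE id < 100;
    GET DIAGNOSTICS n = ROW_COUNT;  -- rows the UPDATE just changed
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```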

Mario





Re: [SQL] hi i need to encrypt one field in a table

2006-05-30 Thread Mario Splivalo
On Tue, 2006-05-30 at 15:42 +0530, Penchalaiah P. wrote:
> Hello,
> I want to create a table with an encrypted field for a password entry, so
> that you can see only .
> And can I encrypt fields which datatype is an integer or a timestamp?

No can do. I mean, you can't do something like that. You can either
grant (actually, revoke) that particular user/role permissions to read
the table with the passwords, or you could store hashed values in the
database. 

The latter is preferred, I'd say. You could use the md5() function to
create hashes, store those in the database, and when checking
credentials compare the hashes: you'd compare the user input passed
through md5() with the hash you have in the database. That way you
don't know what the user passwords are, and you can only set a new
password if a user forgets his/her password.

You don't need encrypted passwords. 
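[Editor's note: a minimal sketch of the hashed-password approach described above. The table and column names are illustrative, and unsalted md5 is weak by modern standards - treat this as an illustration of the idea, not a recommendation:]

```sql
-- Illustrative table: store the 32-character hex digest, never the
-- plain-text password.
CREATE TABLE app_users (
    username varchar PRIMARY KEY,
    pw_hash  char(32) NOT NULL   -- md5() returns 32 hex characters
);

INSERT INTO app_users VALUES ('someuser', md5('secret'));

-- Credential check: hash the supplied password and compare digests.
SELECT 1
FROM app_users
WHERE username = 'someuser'
  AND pw_hash = md5('secret');
```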

Mario





[SQL] Views and query planner

2006-06-30 Thread Mario Splivalo
Let's say I have a view like this:

CREATE VIEW v_small AS
SELECT c1, c2
FROM t1 JOIN t2 ON t1.c1 = t2.c2

and then I have another view

CREATE VIEW v_big AS
SELECT c1 FROM v_small WHERE c2 > 5

Now, when I do this:

SELECT * FROM v_big WHERE c1 < 1500

Is postgres going to make one query plan, or does each view have its
own query plan?

    Mike





Re: [SQL] Views and query planner

2006-06-30 Thread Mario Splivalo
On Fri, 2006-06-30 at 21:47 +0900, Michael Glaesemann wrote:
> On Jun 30, 2006, at 21:35 , Mario Splivalo wrote:
> 
> > Is postgres going to make one query plan, or each view has it own  
> > query
> > plan?
> 
> Have you taken a look at the EXPLAIN ANALYZE output of the queries?  
> You can see exactly which plan PostgreSQL will use.
> 

So, the performance will be the same whether I use a view on a view or
create one big select from those view definitions?

Mario





[SQL] Primary key constraint violation without error?

2006-07-24 Thread Mario Splivalo
Actually it's not a violation, but I get no error message on violating
it...

The story is like this. I have a few tables from which I extract
messageIds for particular users. Then I transfer the last 100 messages
for each user to a spare table, to do something with them. That spare
table shows strange behaviour.

I have two functions. The first one extracts the last 100 messageIds for
a particular user, and the second one finds all the users and then
inserts the last 100 messageIds for each of them. 

The table definition is like this:


CREATE TABLE mes_del
(
  "messageId" int4 NOT NULL,
  CONSTRAINT pk PRIMARY KEY ("messageId")
) 
WITHOUT OIDS;

And the two functions are like this:
-- this function returns last 100 messageIds for particular user
CREATE OR REPLACE FUNCTION punibrisitablica(int4)
  RETURNS SETOF mes_del AS
$BODY$SELECT
messages.id as messagesId
FROM
users
JOIN phone_numbers
ON users.id = phone_numbers.user_id
JOIN messages
ON messages.phone_number = phone_numbers.phone_number
where
users.id = $1
order by
messages.time desc
limit 100;$BODY$
  LANGUAGE 'sql' VOLATILE;


-- this function goes trough all the users and inserts messageIds
-- to table mes_del
CREATE OR REPLACE FUNCTION punimessages()
  RETURNS bool AS
$BODY$

declare
userId users%ROWTYPE;

begin

truncate table mes_del;

FOR userId IN
SELECT users.id FROM users ORDER BY users.id DESC limit 5
LOOP
INSERT INTO mes_del SELECT * FROM puniBrisiTablica(userId.id);
RAISE NOTICE 'Ubacili smo za usera %.', userId.id;
END LOOP;

return true;
end;
$BODY$
  LANGUAGE 'plpgsql' VOLATILE;

I apologize for the line breaks; Evolution is a bit of a stupid email
client...

In the second function there is LIMIT 5, because there are 4 users in my
database, and it's going to be easier to explain like this.

Now, here is the sequence of commands I run through psql:

First, I truncate the table mes_del:

l_netsms=# truncate table mes_del;
TRUNCATE TABLE

Then I run function punimessages() for filling the messageIds to the
mes_del table:

l_netsms=# select punimessages();
NOTICE:  Ubacili smo za usera 4162.
NOTICE:  Ubacili smo za usera 4161.
NOTICE:  Ubacili smo za usera 4160.
NOTICE:  Ubacili smo za usera 4159.
NOTICE:  Ubacili smo za usera 4158.
 punimessages
--------------
 t
(1 row)

l_netsms=# select count(*) from mes_del;
 count
-------
60
(1 row)

There are 60 messages for those five users. 

Now I run the function again:

l_netsms=# select punimessages();
NOTICE:  Ubacili smo za usera 4162.
NOTICE:  Ubacili smo za usera 4161.
NOTICE:  Ubacili smo za usera 4160.
NOTICE:  Ubacili smo za usera 4159.
NOTICE:  Ubacili smo za usera 4158.
 punimessages
--------------
 t
(1 row)

Shouldn't I get errors that I'm violating the primary key constraint
when INSERTing the same data again?

l_netsms=# select count(*) from mes_del;
 count
-------
60
(1 row)

l_netsms=#

If I execute INSERT statement from the second function, I get the error:

l_netsms=# INSERT INTO mes_del SELECT * FROM puniBrisiTablica(4158);
ERROR:  duplicate key violates unique constraint "pk"
l_netsms=#

This is expected; I'm just unsure why I'm not receiving that error when
running the punimessages() function.

Postgres is 8.1.2 running on linux 2.6.17.

Mike





[SQL] Find periods for a given... action set?

2009-06-12 Thread Mario Splivalo
I have a table where actions for users are logged. It's part of an MPI
system of some sort. For every user, the action type and the time of
the action are logged. There are many action types, but the ones of
interest to me are BEGIN and END. I need to find the durations of
all the periods between BEGINs and ENDs, for each user.


Here is what the table looks like:

CREATE TABLE actions (
   user_id integer,
   action_mark character varying,
   action_time timestamp
)

There are no PK constraints because those columns are an excerpt from an
action_log table; there, a message_id column is the PK, and user_id
and action_mark are FKs to the users and actions tables. Now that I look
at it, in the above table the PK would be (user_id, action_time), but
there are no PKs for now :)


Some example data: 

INSERT INTO actions VALUES (1, 'BEGIN', '2009-02-02 12:00:00');
INSERT INTO actions VALUES (1, 'ACT01', '2009-02-02 12:01:22');
INSERT INTO actions VALUES (2, 'BEGIN', '2009-02-02 12:02:01');
INSERT INTO actions VALUES (1, 'ACT02', '2009-02-02 13:10:00');
INSERT INTO actions VALUES (3, 'BEGIN', '2009-02-02 13:11:02');
INSERT INTO actions VALUES (1, 'END',   '2009-02-02 13:21:01');
INSERT INTO actions VALUES (1, 'BEGIN', '2009-02-02 14:01:01');
INSERT INTO actions VALUES (2, 'END',   '2009-02-02 16:11:21');
INSERT INTO actions VALUES (1, 'ACT-1', '2009-02-02 17:13:31');
INSERT INTO actions VALUES (2, 'BEGIN', '2009-02-02 18:11:12');
INSERT INTO actions VALUES (1, 'BEGIN', '2009-02-02 18:12:21');
INSERT INTO actions VALUES (2, 'END',   '2009-02-02 19:00:01');
INSERT INTO actions VALUES (1, 'END',   '2009-02-02 19:10:01');
INSERT INTO actions VALUES (2, 'BEGIN', '2009-02-02 10:11:01');



Now, for each user, I need to find all periods 'enclosed' by BEGIN/END 
action_type pairs. If I have a BEGIN and no END, then there is no 
period. So, if I have, for some user, BEGIN-END-BEGIN-END-BEGIN, there 
are only two periods.
Also, if there are consecutive BEGINs, only the last one counts: 
BEGIN-END-BEGIN-BEGIN-BEGIN-END-BEGIN - again, only two periods.


The results I would like to get look like this:

user_id   action_duration
1         01:21:01
1         00:57:40
2         04:09:20
2         00:48:49

User_id 3 has just the BEGIN - there is no period there, because I don't 
have an endpoint. Similarly, user_id 1 has BEGIN as the last action - 
just two periods for user_id 1, because the last BEGIN denotes a 'period 
in progress'.


Also, user_id 1 has the following actions, time ordered: BEGIN, 
END, BEGIN, BEGIN, END - only two periods, because the 'third' BEGIN 
cancels the second BEGIN and all the actions between the second and third BEGIN.


Now, using some imperative Python, Perl, C, whatever, it's not that 
complicated to get what I want, but I would really like to have it solved 
within plain SQL :)
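[Editor's note: with window functions (PostgreSQL 8.4 and later) this can be done in plain SQL. A sketch: filtering to BEGIN/END rows first and then looking one row back per user pairs each END with the action directly before it, which also makes only the last of several consecutive BEGINs count:]

```sql
-- Sketch: keep only BEGIN/END rows, then look one row back per user.
-- An END whose predecessor is a BEGIN closes a period; consecutive
-- BEGINs are discarded automatically, because only the row directly
-- before the END is considered. An unmatched trailing BEGIN yields
-- no row at all.
SELECT user_id,
       action_time - prev_time AS action_duration
FROM (
    SELECT user_id, action_mark, action_time,
           lag(action_mark) OVER w AS prev_mark,
           lag(action_time) OVER w AS prev_time
    FROM actions
    WHERE action_mark IN ('BEGIN', 'END')
    WINDOW w AS (PARTITION BY user_id ORDER BY action_time)
) s
WHERE action_mark = 'END'
  AND prev_mark = 'BEGIN'
ORDER BY user_id, action_time;
```

Against the example data above this yields the four durations listed (01:21:01 and 00:57:40 for user 1, 04:09:20 and 00:48:49 for user 2).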


So, if anyone has any suggestions, I would greatly appreciate them.

Mike

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


[SQL] date_trunc should be called date_round?

2009-06-29 Thread Mario Splivalo
It's stated in the docs that date_trunc is "conceptually similar to the
trunc function for numbers".

So, trunc(1.1) = 1, trunc(1.9) = 1, and so on.

But, date_trunc behaves like the round function: round(1.9) = 2.

Example:

fidel=# select date_trunc('milliseconds', '2009-01-01
12:15:00.000999+02'::timestamp with time zone);
         date_trunc
----------------------------
 2009-01-01 11:15:00.001+01
(1 row)

fidel=# select version();
 version
------------------------------------------------------------------------
 PostgreSQL 8.3.7 on x86_64-unknown-linux-gnu, compiled by GCC
gcc-4.3.real (Ubuntu 4.3.3-5ubuntu4) 4.3.3
(1 row)


Or am I again completely misreading something?

Mike



Re: [SQL] date_trunc should be called date_round?

2009-06-29 Thread Mario Splivalo
Tom Lane wrote:
> Mario Splivalo  writes:
>> But, date_trunc behaves like round function: round(1.9) = 2.
> 
> Hmm ... only for float timestamps, and only for the millisec/microsec
> cases.
> 
> case DTK_MILLISEC:
> #ifdef HAVE_INT64_TIMESTAMP
> fsec = (fsec / 1000) * 1000;
> #else
> fsec = rint(fsec * 1000) / 1000;
> #endif
> break;
> case DTK_MICROSEC:
> #ifndef HAVE_INT64_TIMESTAMP
> fsec = rint(fsec * 100) / 100;
> #endif
> break;
> 
> I wonder if we should change this to use floor() instead.
> 

I guess it's safe, since you don't have negative timestamps (right?) or
negative parts of timestamps (millisecs/microsecs), so floor() would act
as trunc.

Especially since for the other parts of a timestamp (days, hours, ...)
it's actually truncating, not rounding, i.e.:

date_trunc('minute', '2009-01-01 12:13:50'::timestamp)

would return '2009-01-01 12:13:00', not '2009-01-01 12:14:00'.

One would expect similar behavior for the milli/microsec part.

Now it truncates, except when dealing with milli/microseconds, where it
rounds.

Mike



[SQL] Trapping 'invalid input syntax for integer'

2009-09-10 Thread Mario Splivalo
Is there a way to trap this error in plpgsql code?

I have a function that accepts an integer and a character varying.
Inside that function I need to cast that varchar to an integer. Of
course, sometimes that is not possible.
When I run the function in such a case, I get this error:

fidel=# select * from get_account_info_by_tan(1, 'mario');
ERROR:  invalid input syntax for integer: "mario"
CONTEXT:  SQL statement "SELECT  user_id FROM user_tans WHERE user_tan =
 $1 ::bigint"
PL/pgSQL function "get_account_info_by_tan" line 8 at assignment

Now, I know I could change the SELECT so it looks like:

SELECT user_id FROM user_tans WHERE user_tan::varchar = $1

But, is there a way to trap the above error using the EXCEPTION WHEN
keyword in plpgsql?

Mike



Re: [SQL] Trapping 'invalid input syntax for integer'

2009-09-10 Thread Mario Splivalo
Marcin Krawczyk wrote:
> Hi, I believe you're looking for invalid_text_representation.
> 
> EXCEPTION WHEN invalid_text_representation THEN
> 

Thnx, that is it.
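[Editor's note: put together, a hedged sketch of the trap; the function name is made up:]

```sql
-- Sketch: trap the bad cast and fall back gracefully instead of
-- aborting the caller with "invalid input syntax for integer".
CREATE OR REPLACE FUNCTION safe_to_bigint(v varchar) RETURNS bigint AS $$
BEGIN
    RETURN v::bigint;
EXCEPTION WHEN invalid_text_representation THEN
    RETURN NULL;  -- or RAISE a friendlier error here
END;
$$ LANGUAGE plpgsql;
```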

Mike



[SQL] CHECK constraint on multiple tables

2009-09-14 Thread Mario Splivalo
I have two tables, tableA and tableB:

CREATE TABLE tableA (idA integer primary key, email character varying
unique);
CREATE TABLE tableB (idB integer primary key, email character varying
unique);

Now, I want to create a check constraint in both tables that would
disallow records in either table whose email is 'mentioned' in the other table.

If CHECK constraints supported SUBSELECTS, I could write:

ALTER TABLE tableA ADD CONSTRAINT tableA_chk CHECK
(email NOT IN (SELECT email FROM tableB));

Unfortunately, postgres won't allow me to do so.

Now, I could create a function, check_for_email, that would return TRUE
if an email is mentioned in either table, and then call that function
when creating a check constraint.

Or I could add separate table, emails, like this:

CREATE TABLE emails (email_id integer primary key, email character
varying unique)

And then replace the 'email' column in tableA and tableB with an
'email_id' that would be a foreign key reference to the emails table.

I could, also, write functions for inserting data to the tableA and
tableB tables.

What would be the best approach to solve the problem I have? Could I use
rules on insert to help me?
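[Editor's note: one way to spell the function-based check without a third table is a pair of triggers. A sketch for tableA - a mirror trigger is needed on tableB, and this is not race-free under concurrent inserts without extra locking:]

```sql
-- Sketch: CHECK constraints cannot contain subselects, but a trigger
-- function can look at the other table.
CREATE OR REPLACE FUNCTION check_email_a() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM tableB WHERE email = NEW.email) THEN
        RAISE EXCEPTION 'email % already exists in tableB', NEW.email;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tableA_email_chk
    BEFORE INSERT OR UPDATE ON tableA
    FOR EACH ROW EXECUTE PROCEDURE check_email_a();
```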

Mario



Re: [SQL] CHECK constraint on multiple tables

2009-09-14 Thread Mario Splivalo
How would you do it without creating a third table?

Mario

Ries van Twisk wrote:
> can't you solve it creating a reference between the tables?
> 
> Ries
> On Sep 14, 2009, at 8:24 AM, Mario Splivalo wrote:
> 
>> I have two tables, tableA and tableB:
>>
>> CREATE TABLE tableA (idA integer primary key, email character varying
>> unique);
>> CREATE TABLE tableB (idB integer primary key, email character varying
>> unique);
>>
>> Now, I want to create check constraint in both tables that would
>> disallow records to either table where email is 'mentioned' in other
>> table.
>>
>> If CHECK constraints supported SUBSELECTS, I could write:
>>
>> ALTER TABLE tableA ADD CONSTRAINT tableA_chk CHECK
>> (email NOT IN (SELECT email FROM tableB));
>>
>> Unfortunately, postgres won't allow me to do so.
>>
>> Now, I could create a function, check_for_email, that would return TRUE if
>> the email is mentioned in either table, and then call that function when
>> creating a check constraint.
>>
>> Or I could add a separate table, emails, like this:
>>
>> CREATE TABLE emails (email_id integer primary key, email character
>> varying unique)
>>
>> And then replace the 'email' column in tables tableA and tableB with
>> an 'email_id' that would be a foreign key reference to the emails table.
>>
>> I could, also, write functions for inserting data to the tableA and
>> tableB tables.
>>
>> What would be the best approach to solve the problem I have? Could I use
>> rules on insert to help me?
>>
>> Mario
>>
> 
> 
> regards, Ries van Twisk
> 
> -
> 
> tags: Freelance TYPO3 Glassfish JasperReports JasperETL Flex Blaze-DS
> WebORB PostgreSQL DB-Architect
> email: r...@vantwisk.nl   web: http://www.rvantwisk.nl/   skype: callto://r.vantwisk
> Phone: +1-810-476-4196   Cell: +593 9901 7694   SIP: +1-747-690-5133
> 




Re: [SQL] CHECK constraint on multiple tables

2009-09-14 Thread Mario Splivalo
Tom Lane wrote:
> Mario Splivalo  writes:
>> I have two tables, tableA and tableB:
>> CREATE TABLE tableA (idA integer primary key, email character varying
>> unique);
>> CREATE TABLE tableB (idB integer primary key, email character varying
>> unique);
> 
>> Now, I want to create check constraint in both tables that would
>> disallow records to either table where email is 'mentioned' in other table.
> 
> Have you considered refactoring so there's only one table?

Unfortunately I can't do that, due to the
object-relational-mapper-wrapper-mumbo-jumbo.

The only 'logical' idea I can think of is separating emails into a
third table, using a UNIQUE constraint on the email field in that
table, and then using FK constraints so that the email fields in tableA
and tableB point to the email in the emails table.

Mario



[SQL] CREATE INDEX on column of type 'point'

2009-09-16 Thread Mario Splivalo
As I have discovered, there is no way to simply create an index on a column 
of type 'point' - postgres complains about not knowing the default 
operator class, no matter what index type I use.


Now, my table looks like this:

CREATE TABLE places (
place_id integer primary key,
coordinates point,
value integer,
owner_id integer
);

owner_id is foreign-keyed to the owners table and there is an index on 
that column.


Now, my queries would search for places that are of a certain value, maybe 
owned by certain owner(s), within a specified circle. Something 
like this:


SELECT
*
FROM
places
WHERE
coordinates <@ '<(320,200),200>'::circle
AND value BETWEEN 27 AND 80;


I get a sequential scan on that table.

Reading through the mailing list archives I found a suggestion Tom Lane 
made, saying that I should create a functional index on the places table:


create index ix_coords on places using gist (circle(coordinates, 0));

And then change the WHERE part of my query like this:

WHERE circle(coordinates, 0) <@ '<(320,200),200>'::circle AND value 
BETWEEN 27 AND 80;



Am I better off using 'circle' as the data type for column 'coordinates'?

Are there any other options? I know there is PostGIS, but that seems 
like quite a big overhead. I'll only be checking whether some point is in 
or out of some circle.


Mario



[SQL] Using information_schema to find about function parameters?

2009-10-29 Thread Mario Splivalo
I looked at information_schema.routines to get information about
the functions in the database, but there doesn't seem to be a way to
extract parameter information about functions. Where would I look
for such information?
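One possible approach (a sketch): information_schema also has a parameters view, with one row per function parameter, which can be joined to routines on specific_name (the name that identifies a particular overload):

```sql
-- List each function's parameters, one row per parameter.
SELECT r.routine_name,
       p.ordinal_position,
       p.parameter_name,
       p.parameter_mode,
       p.data_type
FROM information_schema.routines r
JOIN information_schema.parameters p
  ON  p.specific_name = r.specific_name
  AND p.specific_schema = r.specific_schema
WHERE r.routine_schema = 'public'
ORDER BY r.routine_name, p.ordinal_position;
</imports>
```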

Mike



[SQL] Using CASE in plpgsql causes 'ERROR: cache lookup failed'

2010-04-14 Thread Mario Splivalo
I have an enum-type, like this:

CREATE TYPE type_enum_service_type AS ENUM
   ('Banner', 'Ticker', 'Memo');

Then I have a table, like this:

CREATE TABLE services (
  service_id integer NOT NULL,
  service_type type_enum_service_type NOT NULL,
  service_keyword character varying NOT NULL,
  service_time_created timestamp with time zone NOT NULL DEFAULT now()
);

And, I have a plpgsql function like this:

CREATE OR REPLACE FUNCTION service_something(a_service_id integer)
  RETURNS void AS
$BODY$
DECLARE
serviceType type_enum_service_type;
impressionsLeft integer;
messageId integer;
userId integer;

BEGIN
CASE service_type FROM services WHERE service_id = a_service_id
WHEN 'Banner' THEN
RAISE NOTICE 'It is Banner!';
WHEN 'Ticker' THEN
RAISE NOTICE 'It is Ticker!';
WHEN 'Memo' THEN
RAISE NOTICE 'It is Memo!';
ELSE
RAISE EXCEPTION 'It is strange!';
END CASE;

RETURN;

END
$BODY$
  LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER
  COST 100


Then I insert some data:

INSERT INTO services VALUES (1, 'Banner', 'kw-banner', now());
INSERT INTO services VALUES (2, 'Banner', 'kw-banner', now());
INSERT INTO services VALUES (3, 'Banner', 'kw-banner', now());


When I call 'service_something' function and provide nonexistent
service_id I get this error:

ERROR:  cache lookup failed for type 37

When I repeat the query (SELECT service_something(1);) the error is like
this:
ERROR:  cache lookup failed for type 0

Is this desired behavior, so that first I need to check whether the
service_id exists, or is this a bug? :)
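One way to sidestep the failure (a sketch, not a claim about the underlying bug): fetch the row into the declared variable first and raise explicitly when nothing was found, so the CASE only ever sees a real enum value:

```sql
-- Inside the function body, replacing the CASE ... FROM services form:
SELECT service_type INTO serviceType
FROM services WHERE service_id = a_service_id;

IF NOT FOUND THEN
    RAISE EXCEPTION 'No service with service_id %', a_service_id;
END IF;

CASE serviceType
    WHEN 'Banner' THEN RAISE NOTICE 'It is Banner!';
    WHEN 'Ticker' THEN RAISE NOTICE 'It is Ticker!';
    WHEN 'Memo'   THEN RAISE NOTICE 'It is Memo!';
END CASE;
```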

Mike

P.S. PostgreSQL 8.4.2 on x86_64-unknown-linux-gnu, compiled by GCC
gcc-4.3.real (Ubuntu 4.3.3-5ubuntu4) 4.3.3, 64-bit




[SQL] CHECK constraints with plpgsql functions - check 'fires' BEFORE data modification?

2010-04-19 Thread Mario Splivalo
The 'data integrity' rule for database I'm designing says that any
subject we're tracking (persons, companies, whatever) is assigned an
agreement that can be in several states: 'Approved', 'Unapproved' or
'Obsolete'. One subject can have only one (or none) 'Approved' or
'Unapproved' agreement, and as many (or none) 'Obsolete' agreements.

I was thinking on employing the CHECK constraint on agreements table
that would check that there is only one 'Approved' state per subject.

My (simplified) schema looks like this:

CREATE TYPE enum_agreement_state AS ENUM
   ('unapproved',
'approved',
'obsolete');

CREATE TABLE subjects
(
  subject_id serial NOT NULL,
  subject_name character varying NOT NULL,
  CONSTRAINT subjects_pkey PRIMARY KEY (subject_id)
);

CREATE TABLE agreements
(
  agreement_id serial NOT NULL,
  subject_id integer NOT NULL,
  agreement_state enum_agreement_state NOT NULL,
  CONSTRAINT agreements_pkey PRIMARY KEY (agreement_id),
  CONSTRAINT agreements_subject_id_fkey FOREIGN KEY (subject_id)
  REFERENCES subjects (subject_id) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT check_agreements_onlyone_approved CHECK
(check_agreements_onlyone_approved(subject_id))
);

CREATE FUNCTION check_agreements_onlyone_approved(a_subject_id integer)
RETURNS boolean AS
$$
SELECT
CASE COUNT(agreement_id)
WHEN 0 THEN true
WHEN 1 THEN true
ELSE false
END FROM agreements WHERE subject_id = $1 AND agreement_state = 
'approved';
$$ LANGUAGE 'sql';

Now, the above does not work because the CHECK function is fired BEFORE
the actual data modification takes place, so I can end up with two rows with
the 'approved' state for a particular subject_id. If I change the CASE...WHEN
conditions so that the function returns TRUE only when there are 0 rows with
state = 'approved', then I have problems with UPDATEing:

UPDATE agreements SET agreement_state = 'obsolete' where subject_id =
 AND agreement_state = 'approved'

That update will fail because the CHECK function is fired before the
actual update, and there is already a row with state = 'approved' in
the table.

Now, I know I could use triggers to achieve the desired functionality, but I
try to use triggers as seldom as I can. ETL scripts often disable
triggers, so I could end up with broken data integrity.

The 'proper' way to do this (as suggested by earlier posts on this
mailing list) is to use partial UNIQUE indexes, but I have problem with
that too: indexes are not part of DDL (no matter that primary key
constraints and/or unique constraints use indexes to employ those
constraints), and as far as I know there is no 'partial unique
constraint' in SQL?

Does anyone has better suggestion on how to employ the data-integrity
rules I have?

And, wouldn't it be better to have CHECK constraints check the data
AFTER the data modification? I also found no reference to using CHECK
constraints with user-defined functions in the postgres manual - there
should be a mention of the way the CHECK constraint works - that is,
that the function referenced by a CHECK constraint is fired BEFORE the
actual data modification occurs. The error message is also misleading;
for instance, when I run the aforementioned UPDATE:

 constraint_check=# update agreements set agreement_state = 'obsolete'
where subject_id = 1 and agreement_state = 'approved';
ERROR:  new row for relation "agreements" violates check constraint
"check_agreements_onlyone_approved"

Mario



Re: [SQL] Re: CHECK constraints with plpgsql functions - check 'fires' BEFORE data modification?

2010-04-19 Thread Mario Splivalo
Jasen Betts wrote:
> ...
> 
>> The 'proper' way to do this (as suggested by earlier posts on this
>> mailing list) is to use partial UNIQUE indexes, but I have problem with
>> that too: indexes are not part of DDL (no matter that primary key
>> constraints and/or unique constraints use indexes to employ those
>> constraints), and as far as I know there is no 'partial unique
>> constraint' in SQL?
> 
> huh?
> 
> create unique index agreements_approved_onlyone on agreements(subject_id) 
> where agreement_state='approved';

That will create a unique index - the way I'm doing it now. The difference
is just semantic - indexes are not part of DDL, they are an RDBMS's aid for
improving database performance.

>> And, wouldn't it be better to have CHECK constraints check the data
>> AFTER data-modification?
> 
> no.

Why not?

>> Now, the above does not work because CHECK function is fired BEFORE
>> actuall data modification takes place so I can end up with two rows with
>> 'approved' state for particular subject_id. If I change the CASE...WHEN
>> conditions so that function returns TRUE only when there is 0 rows for
>> the state = 'approved' then I have problems with UPDATEing:
>>
>> UPDATE agreements SET agreement_state = 'obsolete' where subject_id =
>>  AND agreement_state = 'approved'
>>
>> That update will fail because the CHECK function is fired before the
>> actuall update, and there is allready a row with state = 'approved' in
>> the table.
> 
> fix the check so that it knows what the new state will be. then it test
> if the proposed new state is compatible with the old state.

Yes, but for that I need to know whether the DML command is an INSERT or an
UPDATE, and a function called by a check constraint has no way of knowing
that. Yes, I know that I could use a trigger, even a constraint trigger, but
I prefer not to use triggers if I don't have to.

> (but seriously, first explain why the index doesn't work)

It does work, and the more I look at it, that seems to be the only proper
way of doing what I want. But, as I've said, indexes are not DDL, and
strictly speaking my data-integrity rule needs to be expressed within DDL,
not indexes. Since there is no other way, I will, of course, use indexes.

(Similarly, I'd encourage one to use a UNIQUE CONSTRAINT over a UNIQUE INDEX,
because CONSTRAINTS are part of DDL, while indexes are not. It doesn't
matter that postgres, and any other RDBMS for that matter, will implement a
UNIQUE CONSTRAINT using an index.)

> 
> ...
>CONSTRAINT check_agreements_onlyone_approved CHECK
>
> (check_agreements_onlyone_approved(subject_id,agreement_id,agreement_state))
>  );
> 
>  CREATE OR REPLACE FUNCTION check_agreements_onlyone_approved(the_subject_id
> integer, the_pkey integer, the_new_state enum_agreement_state)
>  RETURNS boolean AS
>  $$
>  SELECT
> CASE COUNT(agreement_id)
> WHEN 0 THEN true
> WHEN 1 THEN $3 != 'approved'
> ELSE false
> END FROM agreements WHERE subject_id = $1 AND agreement_state
> = 'approved' and agreement_id != $2 
>  $$ LANGUAGE 'sql';
> 
> still not perfect: if you need to change the agreement_id this will
> block you from doing that on approved agreements.

Yes, this seems more and more as a bad idea :)

Mario



Re: [SQL] Re: CHECK constraints with plpgsql functions - check 'fires' BEFORE data modification?

2010-04-20 Thread Mario Splivalo

Tom Lane wrote:
> I think what Mario is actually complaining about is that partial unique
> indexes are not part of the SQL standard, and he wants a solution that
> at least gives the illusion that it might be portable to some other
> RDBMS in the future.


Correct. As far as I can see there are no partial unique constraints 
defined within the standard, and there are check constraints, so... 
Although, especially after this:



> Unfortunately, an illusion is all it would be.  Even presuming that the
> other DBMS lets you run plpgsql-equivalent functions in CHECK
> constraints, the whole approach is broken by concurrency considerations.
> If you have two transactions simultaneously inserting rows that would be
> valid given the prior state of the table, but it's *not* valid for them
> both to be present, then a CHECK or trigger-based constraint is going to
> fail, because neither transaction will see the other's uncommitted row.
> At least that's how it works in Postgres.  In some other DBMS it might
> work differently, but you're right back up against the fact that your
> solution is not portable.
> 
> Unique constraints (partial or otherwise) deal with the race-condition
> problem by doing low-level things that aren't exposed at the SQL level.
> So there's simply no way to get the equivalent behavior in pure standard
> SQL.


Thank you for the clarification; it's easy to understand now why using 
check constraints is a bad idea for the purpose I wanted to use them for. 
It's also easy to see why the 'proper' way is using partial unique indexes.


Still, one has to wonder why there are no partial unique constraints 
defined in the SQL standard :)


Mario



[SQL] Get the max viewd product_id for user_id

2010-12-03 Thread Mario Splivalo
I have a log-table where I record when some user_id has viewed some 
product_id:


CREATE TABLE viewlog (
user_id integer,
product_id integer,
view_timestamp timestamp with time zone
)

Now, I would like to get a result that gives me, for each user_id, the 
product_id of the product he/she viewed the most times, along with the 
number of views.


The 'issue' is I need this running on postgres 8.0.

I went this way, but for a large number of user_id's, it's quite slow:

CREATE VIEW v_views AS
SELECT user_id, product_id, count(*) as views
FROM viewlog
GROUP BY user_id, product_id

SELECT
DISTINCT user_id,
	(SELECT product_id FROM v_views inn WHERE inn.user_id = out.user_id 
ORDER BY views DESC LIMIT 1) as product_id,
	(SELECT views FROM v_views inn WHERE inn.user_id = out.user_id ORDER BY 
views DESC LIMIT 1) as views

FROM
v_views out


Mario



Re: [SQL] Get the max viewd product_id for user_id

2010-12-05 Thread Mario Splivalo

On 12/03/2010 12:40 PM, Jayadevan M wrote:

Hello,


I went this way, but for a large number of user_id's, it's quite slow:

CREATE VIEW v_views AS
SELECT user_id, product_id, count(*) as views
FROM viewlog
GROUP BY user_id, product_id

SELECT
DISTINCT user_id,
(SELECT product_id FROM v_views inn WHERE inn.user_id = out.user_id
ORDER BY views DESC LIMIT 1) as product_id,
(SELECT views FROM v_views inn WHERE inn.user_id = out.user_id ORDER

BY

views DESC LIMIT 1) as views
FROM
v_views out


Does this work faster?
select x.user_id,y.product_id,x.count from
(select user_id, max(count ) as count from (select user_id,product_id,
count(*) as count from viewlog group by user_id,product_id) as x group by
user_id
) as x inner join
(select user_id,product_id, count(*) as count1 from viewlog group by
user_id,product_id ) as y
on x.user_id=y.user_id and x.count=y.count1



It does, yes. Actually, it was pretty silly of me not to implement it that 
way; thank you.


Since I already have the view, the query now looks like this:


select
x.user_id,
y.product_id,
x.views
from (
select
user_id,
max(views) as views
from
v_views
group by
user_id
) as x
inner join v_views as y
on x.user_id=y.user_id and x.views=y.views

And CTEs would also help here :)

Mario



Re: [SQL] Get the max viewd product_id for user_id

2010-12-05 Thread Mario Splivalo

On 12/03/2010 12:40 PM, Jayadevan M wrote:

Hello,


I went this way, but for a large number of user_id's, it's quite slow:

CREATE VIEW v_views AS
SELECT user_id, product_id, count(*) as views
FROM viewlog
GROUP BY user_id, product_id

SELECT
DISTINCT user_id,
(SELECT product_id FROM v_views inn WHERE inn.user_id = out.user_id
ORDER BY views DESC LIMIT 1) as product_id,
(SELECT views FROM v_views inn WHERE inn.user_id = out.user_id ORDER

BY

views DESC LIMIT 1) as views
FROM
v_views out


Does this work faster?
select x.user_id,y.product_id,x.count from
(select user_id, max(count ) as count from (select user_id,product_id,
count(*) as count from viewlog group by user_id,product_id) as x group by
user_id
) as x inner join
(select user_id,product_id, count(*) as count1 from viewlog group by
user_id,product_id ) as y
on x.user_id=y.user_id and x.count=y.count1


The issue with both approaches is that if I have two product_ids that are 
viewed the same number of times and share first place as the most viewed 
products by that user, I'll get only one of them (LIMIT 1 or MAX() can 
only return one row :).


I don't see how I can sort this out with elegance in SQL.
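For what it's worth, on PostgreSQL 8.4 or later (not an option for the 8.0 target here) a window function would keep the ties; a sketch:

```sql
SELECT user_id, product_id, views
FROM (
    SELECT user_id, product_id, count(*) AS views,
           rank() OVER (PARTITION BY user_id
                        ORDER BY count(*) DESC) AS rnk
    FROM viewlog
    GROUP BY user_id, product_id
) ranked
WHERE rnk = 1;  -- rank() gives equal counts the same rank, so ties survive
```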

Mario



Re: [SQL] Get the max viewd product_id for user_id

2010-12-05 Thread Mario Splivalo

On 12/05/2010 05:57 PM, Mario Splivalo wrote:

The issue in both approaches is that if I have two product_ids that are
viewed same number of times and share the first place as most viewed
products by that user, I'll get only one of them (LIMIT 1 OR MAX() can
only return one row :).



And then, to put my foot in my mouth again - your approach, Jayadevan, 
correctly gives me both product_ids if they're viewed the same number 
of times.


Mario



[SQL] Subselects to Joins? Or: how to design phone calls database

2011-12-10 Thread Mario Splivalo
I have a table called 'calls' which holds 'call detail records'. Let's
assume the table looks like this:

CREATE TABLE calls (
call_id serial,
phone_number text
);

And I have a table with country call prefixes, that looks like this:

CREATE TABLE prefixes (
prefix text,
country text
);

And now some test data:

INSERT INTO prefixes VALUES ('1', 'USA');
INSERT INTO prefixes VALUES ('44', 'UK');
INSERT INTO prefixes VALUES ('385', 'Croatia');
INSERT INTO prefixes VALUES ('387', 'Bosnia');
INSERT INTO prefixes VALUES ('64', 'New Zealand');
INSERT INTO prefixes VALUES ('642', 'New Zealand Mobile');
INSERT INTO calls VALUES (1, '11952134451');
INSERT INTO calls VALUES (2, '448789921342');
INSERT INTO calls VALUES (3, '385914242232');
INSERT INTO calls VALUES (4, '385914242232');
INSERT INTO calls VALUES (5, '645122231241');
INSERT INTO calls VALUES (6, '444122523421');
INSERT INTO calls VALUES (7, '64212125452');
INSERT INTO calls VALUES (8, '1837371211');
INSERT INTO calls VALUES (9, '11952134451');
INSERT INTO calls VALUES (10, '448789921342');
INSERT INTO calls VALUES (11, '385914242232');
INSERT INTO calls VALUES (12, '385914242232');
INSERT INTO calls VALUES (13, '645122231241');
INSERT INTO calls VALUES (14, '4441232523421');
INSERT INTO calls VALUES (15, '64112125452');
INSERT INTO calls VALUES (16, '1837371211');


Now, if I want to have a 'join' between those two tables, here is what I
am doing right now:

SELECT
call_id,
phone_number,
(SELECT
country
FROM
prefixes
WHERE   
calls.phone_number LIKE prefix || '%'
ORDER BY
length(prefix) DESC LIMIT 1
) AS country
FROM calls;


Is there a way I could use join here? I can do something like:

SELECT ... FROM calls JOIN prefixes ON calls.phone_number LIKE prefix || '%'

but I'd get duplicate rows there (for instance, for the New Zealand calls
in my test data).

Or should I add a 'prefix' field to the calls table, and then do an inner
join with the prefixes table?
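As a cross-check of the intended semantics, the longest-prefix lookup that the correlated subquery performs can be sketched outside SQL (Python here, purely illustrative):

```python
def longest_prefix_country(phone_number, prefixes):
    """Return the country of the longest prefix matching phone_number,
    or None when no prefix matches - mirroring the
    ORDER BY length(prefix) DESC LIMIT 1 subquery."""
    matches = [(prefix, country) for prefix, country in prefixes
               if phone_number.startswith(prefix)]
    if not matches:
        return None
    # Longest prefix wins, like ORDER BY length(prefix) DESC LIMIT 1.
    return max(matches, key=lambda pc: len(pc[0]))[1]
```

Within SQL itself, one way to keep a single best match per call is `SELECT DISTINCT ON (call_id) ... FROM calls JOIN prefixes ON phone_number LIKE prefix || '%' ORDER BY call_id, length(prefix) DESC` (DISTINCT ON being a PostgreSQL extension).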

Mario



[SQL] Postgres regexp matching failure?

2006-09-05 Thread Mario Splivalo
I have a regular expression which works fine in Java or python, but I
can't seem to make it work in postgres. The regular expression needs to
match everything beginning with '+' (plus sign) or the letters
'STOP', 'stop', 'StoP', or any other capitalization of 'stop'.

Here is the python example:

>>> import re
>>> p = re.compile(r'^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$')
>>> p.match('+mario').group(0)
'+mario'
>>> p.match('+mario works').group(0)
'+mario works'
>>> p.match('mario works').group(0)
Traceback (most recent call last):
  File "", line 1, in ?
AttributeError: 'NoneType' object has no attribute 'group'
>>> p.match('stop works').group(0)
'stop works'
>>>

Now, here is what happens if I try this in postgres:

pulitzer2=# select '+mario' ~ '^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$';
 ?column?
--
 t
(1 row)

This one is ok.



pulitzer2=# select '+mario works' ~ '^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$';
 ?column?
--
 t
(1 row)



This one is also ok.

pulitzer2=# select 'mario works' ~ '^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$';
 ?column?
--
 f
(1 row)

Same as this one, also ok.


pulitzer2=# select 'stop works' ~ '^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$';
 ?column?
--
 f
(1 row)

Here, postgres should return true, but it gives me false. 

I'd appreciate any hints on what is wrong here.

Thank you in advance,

Mario
-- 
Mario Splivalo
Mob-Art
[EMAIL PROTECTED]

"I can do it quick, I can do it cheap, I can do it well. Pick any two."



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [SQL] Postgres regexp matching failure?

2006-09-05 Thread Mario Splivalo
On Tue, 2006-09-05 at 10:11 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Now, here is what happens if I try this in postgres:
> 
> > pulitzer2=# select '+mario' ~ '^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$';
> 
> I'm thinking you've forgotten to double your backslashes.
> 

That was the first thing I tried :)

What confused me was the difference I get when using python, or perl, or
Java, or any other regexp 'matcher', therefore I asked for help here.

Mario





Re: [SQL] Postgres regexp matching failure?

2006-09-05 Thread Mario Splivalo
On Tue, 2006-09-05 at 08:42 -0500, Aaron Bono wrote:
> On 9/5/06, Mario Splivalo <[EMAIL PROTECTED]> wrote:
> 
> pulitzer2=# select 'stop works' ~ '^\s*(?:[\
> +|-]|(?:[sS][tT][oO][pP]\b)).*$';
> ?column?
> --
> f
> (1 row)
> 
> Here, postgres should return true, but it gives me false.
> 
>  
> \b is a back-space - is that what you are wanting there?  If I remove
> it I get true.

Actually, I'm not sure :) As I've mentioned, python/java/perl do as I
expected; postgres, on the other hand, doesn't. If \b were the backspace,
then I'd have trouble with '+mario test', and that one seems to be OK.

I also tried doubling the backslashes - all the same.

So, I guess it's obvious that postgres doesn't treat regular expressions
the same way as java/perl/python/php/awk/sed do...
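For comparison, here is what the Python side does (illustrative; in Python's `re`, \b is a word boundary, whereas PostgreSQL's regex engine - Henry Spencer's, as also used by Tcl - spells the word-boundary constraint \y and treats \b as backspace):

```python
import re

# The pattern from the original post; \b is a word boundary in Python.
p = re.compile(r'^\s*(?:[\+|-]|(?:[sS][tT][oO][pP]\b)).*$')

print(bool(p.match('+mario works')))  # True
print(bool(p.match('stop works')))    # True
print(bool(p.match('mario works')))   # False
```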

Mario





Re: [SQL] Postgres regexp matching failure?

2006-09-05 Thread Mario Splivalo
On Tue, 2006-09-05 at 10:21 -0400, Alvaro Herrera wrote:
> Mario Splivalo wrote:
> > On Tue, 2006-09-05 at 08:42 -0500, Aaron Bono wrote:
> > > On 9/5/06, Mario Splivalo <[EMAIL PROTECTED]> wrote:
> > > 
> > > pulitzer2=# select 'stop works' ~ '^\s*(?:[\
> > > +|-]|(?:[sS][tT][oO][pP]\b)).*$';
> > > ?column?
> > > --
> > > f
> > > (1 row)
> > > 
> > > Here, postgres should return true, but it gives me false.
> > > 
> > >  
> > > \b is a back-space - is that what you are wanting there?  If I remove
> > > it I get true.
> > 
> > Actually, I'm not sure :) As I've mentioned, python/java/perl do as I
> > expected, postgres on the other hand doesn't. If \b was the backspace,
> > then I'd have trouble with '+mario test', and that one seems to be OK. 
> 
> No, because the \b is inside the "stop" arm of the |.  You need to do
> *both*, double backslashes and get rid of \b (or at least understand
> what you actually mean with it ...)
> 

I know this might not be the right place for this question, but how
come (or better: why?) is the above regexp matched OK (OK as in 'the way I
expected') when employed from java/perl/python?

Mario




Re: [SQL] Postgres regexp matching failure?

2006-09-07 Thread Mario Splivalo
On Tue, 2006-09-05 at 11:22 -0400, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > So, I guess it's obvious that postgres doesn't treat regular expressions
> > the same way as java/perl/pyton/php/awk/sed do...
> 
> When you get into stuff as arcane as word-boundary constraints, you'll
> find that regexes are not NEARLY as well standardized as that flippant
> complaint suggests.  For penance, actually try it in all six of those
> languages and report back.

So I did, and I was mistaken the first time. Java, perl, python and php
treat regexps differently than awk/sed, and differently than tcl. I can
post source snippets and results here if anyone is interested.

> Postgres' regexp code is the same as Tcl's (it's Henry Spencer's package)
> and if you check TFM you will find out that \y, or possibly \M, is what
> you want.

Thanks, I missed the part about postgres having the same regexp code as Tcl.

Mike





[SQL] Using Temporary Tables in postgres functions

2007-01-25 Thread Mario Splivalo
When I try to use a TEMPORARY TABLE within postgres functions (using 'sql'
as the function language), I can't, because postgres can't find the
temporary table. Consider this example:

CREATE FUNCTION func1() RETURNS SETOF v_messages_full AS $BODY$
CREATE TEMPORARY TABLE tmpTbl
AS
SELECT 
message_id
FROM 
cached_messages
WHERE 
billing_status = 2;


UPDATE cached_messages SET billing_status = 1 WHERE message_id IN
(SELECT message_id FROM tmpTbl);

SELECT
*
FROM
v_messages_full
WHERE
message_id IN (SELECT message_id FROM tmpTbl);
$BODY$ LANGUAGE 'sql' VOLATILE SECURITY DEFINER;

When I try to execute above CREATE FUNCTION statement postgres gives me
this:
ERROR:  relation "tmptbl" does not exist
CONTEXT:  SQL function "func1"

If I rewrite func1() using 'plpgsql' I have no trouble creating
temporary tables, I just need to use EXECUTE when referencing those
temporary tables (which is cumbersome, but there it is).

Am I doing something wrong here, or is there no way of using temporary
tables within functions written in 'sql'?
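For reference, a plpgsql version of the same function, with every statement touching the temp table run through EXECUTE so no cached plan refers to it (a sketch; table and view names as above):

```sql
CREATE FUNCTION func1() RETURNS SETOF v_messages_full AS $BODY$
DECLARE
    rec v_messages_full;
BEGIN
    EXECUTE 'CREATE TEMPORARY TABLE tmpTbl AS
             SELECT message_id FROM cached_messages
             WHERE billing_status = 2';

    EXECUTE 'UPDATE cached_messages SET billing_status = 1
             WHERE message_id IN (SELECT message_id FROM tmpTbl)';

    FOR rec IN EXECUTE
        'SELECT * FROM v_messages_full
         WHERE message_id IN (SELECT message_id FROM tmpTbl)'
    LOOP
        RETURN NEXT rec;
    END LOOP;

    EXECUTE 'DROP TABLE tmpTbl';
    RETURN;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER;
```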

Mike





Re: [SQL] Using Temporary Tables in postgres functions

2007-01-26 Thread Mario Splivalo
On Thu, 2007-01-25 at 08:03 -0800, Stephan Szabo wrote:
> On Thu, 25 Jan 2007, Mario Splivalo wrote:
> 
> > When I try to use TEMPORARY TABLE within postgres functions (using 'sql'
> > as a function language), I can't because postgres can't find that
> > temporary table. Consider this example:
> >
> > CREATE FUNCTION func1() RETURNS SETOF v_messages_full AS $BODY$
> > CREATE TEMPORARY TABLE tmpTbl
> > AS
> > SELECT
> > message_id
> > FROM
> > cached_messages
> > WHERE
> > billing_status = 2;
> >
> >
> > UPDATE cached_messages SET billing_status = 1 WHERE message_id IN
> > (SELECT message_id FROM tmpTbl);
> >
> > SELECT
> > *
> > FROM
> > v_messages_full
> > WHERE
> > message_id IN (SELECT message_id FROM tmpTbl);
> > $BODY$ LANGUAGE 'sql' VOLATILE SECURITY DEFINER;
> 
> It seems like the sql function checker is unhappy with the above. Does it
> actually work if you turn off the check_function_bodies configuration
> variable, create the function and then call it?

Yes, when I do 'set check_function_bodies to false;' then I can create
the function but I'm unable to execute it:

testdb1=# select * from func1();
ERROR:  relation "tmptbl" does not exist
CONTEXT:  SQL function "func1" during startup

Bummer. In the end I wrote the function using plpgsql. Now, is there any
performance impact of using plpgsql instead of sql in simple scenarios
like the func1() example? I guess there should be some, as minimal as it
may be, but I have no way of actually knowing that.

Mike




Re: [SQL] Using Temporary Tables in postgres functions

2007-01-26 Thread Mario Splivalo
On Thu, 2007-01-25 at 11:00 -0500, Andrew Sullivan wrote:
> On Thu, Jan 25, 2007 at 03:39:14PM +0100, Mario Splivalo wrote:
> > When I try to use TEMPORARY TABLE within postgres functions (using 'sql'
> > as a function language), I can't because postgres can't find that
> > temporary table. Consider this example:
> 
> You need to build the temp table and EXECUTE the statement.  The
> problem is that the plan is cached for later re-use.  Since the
> cached plan has the id of a table that no longer exists, you get this
> error.  See the PL/pgSQL part of the manual for more on this.

Yes, I'm well aware of that, and have learned to live with it :) The only
drawback is that my plpgsql code looks really ugly, because of the
string variables holding the actual SQL code that deals with the
temporary tables. Therefore I try not to use temp tables as much as I
can. Table-type variables would be a really nice thing to have; I guess
they would exist only in memory, and for some complicated OLTP stuff
they could be really, really handy.

Just a wish, in a way :)

Mike




Re: [SQL] Using Temporary Tables in postgres functions

2007-01-26 Thread Mario Splivalo
On Thu, 2007-01-25 at 11:09 -0500, Tom Lane wrote:
> Mario Splivalo <[EMAIL PROTECTED]> writes:
> > Am I doing something wrong here, or there is no way of using temporary
> > tables within 'sql' written functions?
> 
> I believe the problem is that for a SQL function we parse the whole
> function body before executing any of it.  So you'd need to split this
> into two separate functions.

Having two functions complicates the interface between the application
and the database a bit. But, then again, if I split it into two
functions, the one with the UPDATE still can't find the temp table
referenced in the other function, right?

I have no problem writing the func1() example in plpgsql; it just seemed
to me that using sql instead of plpgsql (when I actually can use sql)
gives me a little performance improvement.

Mike




[SQL] Subselect strange behaviour - bug?

2008-03-16 Thread Mario Splivalo
I have two tables, 'configured' like this:

melem=# \d t1
   Table "public.t1"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 id     | integer           |
 value  | character varying |

melem=# \d t2
   Table "public.t2"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 id1    | integer           |
 value  | character varying |


And here is the data from both tables:

melem=# select * from t1;
 id | value
----+-------
  1 | 1
  2 | 2
  3 | 3
(3 rows)

melem=# select * from t2;
 id1 | value
-----+-------
   1 | 1
   2 | 2
   3 | 3
   4 | 4
(4 rows)


And here is the 'problematic' query:

melem=# select * from t2 where id1 in (select id1 from t1);
 id1 | value
-----+-------
   1 | 1
   2 | 2
   3 | 3
   4 | 4
(4 rows)

I would expect postgres to tell me that the column name id1 is
nonexistent in table t1.

Now, if I change the subselect to select the correct column name, everything is ok:

melem=# select * from t2 where id1 in (select id from t1);
 id1 | value
-----+-------
   1 | 1
   2 | 2
   3 | 3
(3 rows)


I have found that this 'error' pops up only when the column name in the
subselect matches a column in the outer SELECT's table. For instance:

melem=# select * from t2 where id1 in (select id2 from t1);
ERROR:  column "id2" does not exist
LINE 1: select * from t2 where id1 in (select id2 from t1);
  ^
melem=#


I have tested this on postgres 8.3.0, 8.2.6 and 8.2.4, and the results
are the same. I downloaded the sources for the three mentioned versions of
postgres and built them on Linux 2.6.22-14 (Ubuntu 7.10) using gcc 4.1.3,
as well as on Debian stable (Linux 2.6.22.1 with gcc 4.1.2).

Mario

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Subselect strange behaviour - bug?

2008-03-17 Thread Mario Splivalo

Tom Lane wrote:

Mario Splivalo <[EMAIL PROTECTED]> writes:
  

And here is the 'problematic' query:
melem=# select * from t2 where id1 in (select id1 from t1);

I guess postgres should tell me that column name id1 is nonexistant in

table t1.



No, it shouldn't, because that's a perfectly legal outer reference;
that is, what you wrote is equivalent to
select * from t2 where id1 in (select t2.id1 from t1);

  

Huh, you're right. Apologies for the noise :)
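One way to make such slips surface immediately is to table-qualify the column inside the subselect, so the silent outer reference can't kick in:

```sql
-- Qualifying the column forces an error instead of a legal outer reference:
SELECT * FROM t2 WHERE id1 IN (SELECT t1.id1 FROM t1);
-- ERROR:  column t1.id1 does not exist
```

The unqualified form is legal SQL by the standard, which is why no version of postgres flags it.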

   Mario



[SQL] Using UTF strings in pg8.3 - storing hexadecimal values in bytea columns

2008-11-10 Thread Mario Splivalo

I have this issue:

postgres=# select E'\xc5\x53\x94\x96\x83\x29';
ERROR:  invalid byte sequence for encoding "UTF8": 0xc553
HINT:  This error can also happen if the byte sequence does not match 
the encoding expected by the server, which is controlled by 
"client_encoding".

postgres=# show client_encoding ;
 client_encoding
-----------------
 UTF8
(1 row)

postgres=# show server_encoding ;
 server_encoding
-----------------
 UTF8
(1 row)

postgres=# select version();
                                           version
---------------------------------------------------------------------------------------------
 PostgreSQL 8.3.5 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.2.4 (Ubuntu 4.2.4-1ubuntu3)
(1 row)




On postgres 8.2 this worked:

postgres=# select version();
                                           version
---------------------------------------------------------------------------------------------
 PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
(1 row)

postgres=# select E'\xc5\x53\x94\x96\x83\x29';
 ?column?
--
 S)
(1 row)

postgres=# show client_encoding ;
 client_encoding
-----------------
 UTF8
(1 row)

postgres=# show server_encoding ;
 server_encoding
-----------------
 UTF8
(1 row)


I'm using the above mentioned string to store data into a bytea column.
I did a pg_dump of the database on postgres 8.2, then tried to restore
it on postgres 8.3 and got this error. The actual line that produces
the error is like this:


INSERT INTO vpn_payins_bitfield (vpn_id, payload_pattern, encription, 
encription_key, charset, amount_width, shop_width, counter_width) VALUES 
(3, E'\\W*(\\w+)(?:\\W+(.*))?', 'RC4', 
E'\xc5\x53\x94\x96\x83\x29'::bytea, 'ABCDEGHIKLMOPTWX', 16, 8, 16);


The error is:
ERROR: invalid byte sequence for encoding "UTF8": 0xc553


Now, I see that I can type: "SELECT E'\xFF'" in pg8.2, but can't do that 
in pg8.3.


So, my question is: how do I specify the hexadecimal value C5 to be
stored in a bytea column in an INSERT statement?


Mike



[SQL] Converting between UUID and VARCHAR

2008-11-10 Thread Mario Splivalo

I have a table, like this:

CREATE TABLE t1
(
  u1 character varying
)

And some data inside:

INSERT INTO t1 (u1) VALUES ('62d6b434-7dfd-4b3b-b1bf-87f6c20c10dd');
INSERT INTO t1 (u1) VALUES ('e3fee596-164b-4995-9e0d-7b2a79e83752');
INSERT INTO t1 (u1) VALUES ('37a42ec8-9000-44bc-bb06-13b5d4373a45');
INSERT INTO t1 (u1) VALUES ('fe160c33-846b-4843-999e-071cbc71260c');
INSERT INTO t1 (u1) VALUES ('4a8d9697-f26c-41a4-91cd-444226e075f7');
INSERT INTO t1 (u1) VALUES ('e21cddf9-9843-42a0-acb6-95933ed2d6ee');
INSERT INTO t1 (u1) VALUES ('b3c04c2d-3706-4fa2-a3f5-b15552eaaadb');
INSERT INTO t1 (u1) VALUES ('e73d128d-fcf4-427c-959e-ac989150f2c4');
INSERT INTO t1 (u1) VALUES ('c88ac916-efb6-4afe-a2e3-8f2f49316c67');
INSERT INTO t1 (u1) VALUES ('2014ab62-bee9-4a3a-b273-58859d1d8941');


Now, I can do this:

SELECT u1::uuid FROM t1;

But I can't do this:

ALTER TABLE t1 ALTER u1 TYPE uuid;

So, if I want column u1 to use the uuid type instead of varchar I
need to do this:


SELECT u1::uuid INTO _t1 from t1;
DROP TABLE t1;
ALTER TABLE _t1 RENAME TO t1;

That's a pain in the behind if I have several tables referenced with
foreign keys and tons of data.


Is there a more 'elegant' way of changing varchar data type to uuid?

My database consists of several tables which heavily use UUIDs, but
since we were on 8.2 we used varchar (actually char(36)); now pg8.3
supports UUID, and I need to 'upgrade' my database to use that new type.
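For what it's worth, ALTER TABLE accepts a USING clause that should let the conversion happen in place, without the SELECT INTO / DROP / RENAME dance (a sketch; with foreign keys referencing the column, the constraints may need to be dropped and re-created around it, and each referencing column altered the same way):

```sql
-- ALTER ... TYPE takes a USING expression to rewrite existing values,
-- so the column can be converted directly:
ALTER TABLE t1 ALTER COLUMN u1 TYPE uuid USING u1::uuid;
```

This rewrites the table once, but keeps the table's OID, indexes, and dependent objects intact.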


Mike



Re: [SQL] Using UTF strings in pg8.3 - storing hexadecimal values in bytea columns

2008-11-10 Thread Mario Splivalo

Tom Lane wrote:
I'm using the above mentioned string to store data into bytea column. I 
did pg_dump of the database on postgres 8.2, and then tried to restore 
it on postgres 8.3, and I got this error. The actuall line that produces 
error is like this:


INSERT INTO vpn_payins_bitfield (vpn_id, payload_pattern, encription, 
encription_key, charset, amount_width, shop_width, counter_width) VALUES 
(3, E'\\W*(\\w+)(?:\\W+(.*))?', 'RC4', 
E'\xc5\x53\x94\x96\x83\x29'::bytea, 'ABCDEGHIKLMOPTWX', 16, 8, 16);


Exactly what version of pg_dump are you using?  What I get from pg_dump
doesn't look like that.  Bytea fields with -D look more like this:

INSERT INTO foo (f1) VALUES ('\\305S\\224\\226\\203)');


Yes, I mistakenly used the pg8.2 pg_dump; when I use the pg8.3 pg_dump
I get what you get. Btw, what is that S after 305? Octal 305 is
hexadecimal C5. How do I enter hexadecimal C5 without UTF encoding errors?


Mike



Re: [SQL] Using UTF strings in pg8.3 - storing hexadecimal values in bytea columns

2008-11-11 Thread Mario Splivalo

Richard Huxton wrote:

Mario Splivalo wrote:

Richard Huxton wrote:

Mario Splivalo wrote:

I have this issue:

postgres=# select E'\xc5\x53\x94\x96\x83\x29';
ERROR:  invalid byte sequence for encoding "UTF8": 0xc553

I think you want to be using octal escapes. That's text you're
generating above.

CREATE TABLE bytea_test (b bytea);
INSERT INTO bytea_test (b) VALUES (E'\\305\\123\\224\\226');
SELECT * FROM bytea_test;
   b
---
 \305S\224\226
(1 row)

That's true, but I'd still like to use hexadecimal notation. Manual
states that I could say '\xC5', but then I get those encoding errors.


I think you're reading the "text" rather than "bytea" part of the manual.

4.1.2.1. String Constants
"...and \xhexdigits, where hexdigits represents a hexadecimal byte
value. (It is your responsibility that the byte sequences you create are
valid characters in the server character set encoding.)"

No mention of hex in the bytea section of the manual.



Hm, you're right. I guess that part of manual confuses me. Nevertheless, 
I'd still like to be able to enter hexadecimal values to the bytea 
fields. As Tom Lane suggested, I can do this:


test1=# select decode('C5', 'hex');
 decode
--------
 \305
(1 row)


But, if I want it other way around, I get the error:

test1=# select encode(E'\305', 'hex');
ERROR:  invalid byte sequence for encoding "UTF8": 0xc5
HINT:  This error can also happen if the byte sequence does not match 
the encoding expected by the server, which is controlled by 
"client_encoding".

test1=#

Is there a way to circumvent encoding when dealing with binary data? Or 
am I completely confused here and have no clue what I'm talking about?


Mike



Re: [SQL] Using UTF strings in pg8.3 - storing hexadecimal values in bytea columns

2008-11-11 Thread Mario Splivalo

Tom Lane wrote:

Mario Splivalo <[EMAIL PROTECTED]> writes:

Tom Lane wrote:

Exactly what version of pg_dump are you using?  What I get from pg_dump
doesn't look like that.  Bytea fields with -D look more like this:

INSERT INTO foo (f1) VALUES ('\\305S\\224\\226\\203)');


Yes, I mistakenly used the pg8.2 pg_dump; when I use the pg8.3 pg_dump 
I get what you get.


I was quoting the output of 8.2.latest pg_dump.  Maybe you have a very
old subrelease?  But no version of pg_dump would've put an explicit
cast to bytea in there.


[EMAIL PROTECTED]:~$ pg_dump -V
pg_dump (PostgreSQL) 8.2.4
[EMAIL PROTECTED]:~$

Since I need to have my servers running 24/7 with NO downtime, I seldom 
upgrade minor versions unless there is a major bug that affects me. 
This upgrade from 8.2 to 8.3 is planned, and I have the liberty of 
3-4 hours of downtime.





Btw, what is that S after 305?


Hex 53 is 'S' I believe.


Still don't get it :) If I have the hexadecimal value C5, that is octal 
305, and I don't see where that S came from.




305 octal is C5 hexadecimal. How 
do I enter hexadecimal C5 without UTF encoding errors?


bytea only supports octal, so \\305 is the way to do it.  The way you
were doing it was guaranteed to fail on corner cases such as \0 and \
itself, because you were converting at the string-literal stage not
byteain().


Ok, that makes sense. Since I just store that data in the database, 
maybe I could store it as strings (varchar) and then do the 
conversion on the client side (Java).
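For the dump/restore case, the decode() route can also be written straight into the INSERT, sidestepping the string-literal encoding check entirely (a sketch against the table from earlier in this thread; the hex digits are the same six bytes as the failing literal):

```sql
-- decode() builds the bytea value from hex digits, so client_encoding
-- validation of string literals never sees the raw 0xC5 byte.
INSERT INTO vpn_payins_bitfield (vpn_id, payload_pattern, encription,
                                 encription_key, charset,
                                 amount_width, shop_width, counter_width)
VALUES (3, E'\\W*(\\w+)(?:\\W+(.*))?', 'RC4',
        decode('c55394968329', 'hex'),
        'ABCDEGHIKLMOPTWX', 16, 8, 16);
```

From a JDBC client, setBytes() on a parameterized statement avoids the whole escaping question.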


Mike



Re: [SQL] Using UTF strings in pg8.3 - storing hexadecimal values in bytea columns

2008-11-11 Thread Mario Splivalo

Mario Splivalo wrote:

But, if I want it other way around, I get the error:

test1=# select encode(E'\305', 'hex');
ERROR:  invalid byte sequence for encoding "UTF8": 0xc5
HINT:  This error can also happen if the byte sequence does not match 
the encoding expected by the server, which is controlled by 
"client_encoding".

test1=#

Is there a way to circumvent encoding when dealing with binary data? Or 
am I completely confused here and have no clue what I'm talking about?




Now, this works:

test1=# select encode(decode('C5', 'hex'), 'hex');
 encode
--------
 c5
(1 row)


I see that my client encoding has a problem interpreting '\xC5':

test1=# select E'\xC5';
ERROR:  invalid byte sequence for encoding "UTF8": 0xc5
HINT:  This error can also happen if the byte sequence does not match 
the encoding expected by the server, which is controlled by 
"client_encoding".


So the only way is to use the decode function, right?

Mike



Re: [SQL] Postgres version of all_ind_cols

2008-11-11 Thread Mario Splivalo

Bart van Houdt wrote:

Hi all,

This might be a stupid question, but I wasn't able to find any information on 
it, so here goes:
Oracle has a view which contains information about indexed columns 
(all_ind_cols); is there something similar available in Postgres?

I want to be able to create a full overview of each indexed column with the 
following information:
- index name
- table name the index is on
- column name the index is on
- column position in the index
- sort order

Any help would be much appreciated :)



You should check this, for instance:
http://www.postgresql.org/docs/8.3/interactive/monitoring-stats.html#MONITORING-STATS-VIEWS

There is, also, information_schema in postgres, as well as pg_catalog, 
you may wish to check those.
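As a concrete starting point, most of the requested columns can be read from pg_catalog directly: an index is itself a relation, so its pg_attribute rows list the indexed columns in order (a sketch; sort order needs pg_index.indoption, available from 8.3 on, and expression-index columns show up under generated names):

```sql
-- index name, table name, column name, and position within the index
SELECT i.relname AS index_name,
       t.relname AS table_name,
       a.attname AS column_name,
       a.attnum  AS column_position
FROM pg_index x
JOIN pg_class i ON i.oid = x.indexrelid
JOIN pg_class t ON t.oid = x.indrelid
JOIN pg_attribute a ON a.attrelid = x.indexrelid
WHERE t.relkind = 'r'
ORDER BY t.relname, i.relname, a.attnum;
```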


Mike



[SQL] xpath_table, or something similar?

2009-03-30 Thread Mario Splivalo
Is there a way to get a recordset out of an XML document - something 
similar to contrib/xml2's xpath_table function, but for XML documents 
that are not stored in the database?


I wrote my own plpgsql function for converting XML document data to a 
recordset (table), but that requires more than a few xpath/generate_series 
calls, and I think it might get slow for rather large documents. Now 
I'm wondering if that is the only/right way, or if there are other 
tools I might not be aware of.
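To show the shape of the xpath/generate_series approach being described (hypothetical document structure; assumes 8.3's built-in xpath() function, with the element count hardcoded for brevity):

```sql
-- Unnest repeated elements by indexing into the xpath() result array;
-- the document here is inline, not stored in any table.
SELECT (xpath('/items/item[' || i::text || ']/name/text()', d.doc))[1] AS name
FROM (SELECT '<items><item><name>foo</name></item>
              <item><name>bar</name></item></items>'::xml AS doc) d,
     generate_series(1, 2) AS i;
```

One xpath() call per column per row is what makes this slow on big documents, which is the concern raised above.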


Thanks in advance,

Mike



[SQL] ENUM vs DOMAIN vs FKyed lookup table

2009-04-08 Thread Mario Splivalo
When you need to choose between enum types, domain types or lookup 
tables with foreign keys, what do you usually choose?


Only recently I started using domains, but that seems to be painful, as 
Joshua Drake explains here:


http://www.commandprompt.com/blogs/joshua_drake/2009/01/fk_check_enum_or_domain_that_is_the_question/

For instance, I have an application that tracks codes. I have a table 
that looks like this:


CREATE TYPE enum_code_type AS ENUM
   ('Unapproved',
    'ApprovedByEmail',
    'ApprovedByAdmin');

CREATE TABLE codes (
    code_id integer,
    code_value integer,
    code_type enum_code_type
);

Now, let's say I have around 5.000.000 codes in my table. If I want to 
add a new code type, I'm in deep pain. I can't do ALTER TYPE to change 
the enum definition; I need to do hocus-pocus to achieve what I want.


If I use a domain, I can do ALTER DOMAIN DROP CONSTRAINT, ALTER DOMAIN 
ADD... It takes a bit of time, but it is more flexible than ENUMs.
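The domain route is just two statements (sketched against a hypothetical domain domain_code_type with a CHECK constraint domain_code_type_check; the ADD re-validates every existing row, which is where the time goes on big tables):

```sql
-- Extending the allowed value set of a domain-typed column:
ALTER DOMAIN domain_code_type DROP CONSTRAINT domain_code_type_check;
ALTER DOMAIN domain_code_type ADD CONSTRAINT domain_code_type_check
    CHECK (VALUE IN ('Unapproved', 'ApprovedByEmail',
                     'ApprovedByAdmin', 'ApprovedButThenRevoked'));
```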


Joshua did some benchmarking of the enum/domain types, and it turns out 
that the domain type is the slowest one. I'll be posting some of my 
benchmarks soon.


I abandoned lookup tables because of the JOINing.

What are your practices, when do you use ENUMs and when Domains?

Mike




Re: [SQL] ENUM vs DOMAIN vs FKyed lookup table

2009-04-14 Thread Mario Splivalo

Dirk Jagdmann wrote:

When you need to choose between enum types, domain types or lookup tables
with foreign keys, what do you usualy choose?


When I have a column with valid values that I know when writing my
tables and that will *never* change I use an enum. For example a human
gender type (and remember that there are 4 values for human sex if you
want to model it completely).

Otherwise a simple table with a primary key of type 'text' that is
used as a foreign key in the other table, so I can change/alter the
valid values later. No join needed! Remember that PK/FK do not always
have to be of type 'serial'.

The reason is, that for a user of the SQL language there is hardly any
difference in using an ENUM or a text type, since they are both
strings which must be enclosed in single quotes. Of course under the
hood for the PostreSQL languange parser and interpreter there is a
difference, but we can ignore that.

To revisit your example I would do it this way:

CREATE TABLE code_type (
  t text not null primary key
);
insert into code_type values ('Unapproved'), ('ApprovedByEmail'),
('ApprovedByAdmin');

CREATE TABLE codes (
   code_id integer,
   code_value integer,
   code_type text not null references code_type
);


I did a little bit of testing, I created two types, one domain and one enum:

CREATE DOMAIN domain_code_type
  AS character varying
  NOT NULL
   CONSTRAINT domain_code_type_check CHECK (VALUE IN 
('ApprovedByAdmin', 'Unapproved', 'ApprovedByEmail'));


CREATE TYPE enum_code_types AS ENUM
   ('Unapproved',
'ApprovedByEmail',
'ApprovedByAdmin');

And two tables:

CREATE TABLE codes__enum
(
  code_id integer NOT NULL,
  code_type enum_code_types NOT NULL,
  CONSTRAINT codes__enum_pk PRIMARY KEY (code_id)
)


CREATE TABLE codes__domain
(
  code_id integer NOT NULL,
  code_type domain_code_type NOT NULL,
  CONSTRAINT codes_domain_pk PRIMARY KEY (code_id)
)


I filled them with 12.000.000 rows of the very same data. Here are the 
size differences:


enum_test=# select * from pg_size_pretty(pg_relation_size('codes__enum'));
 pg_size_pretty
----------------
 415 MB
(1 row)

enum_test=# select * from pg_size_pretty(pg_relation_size('codes__domain'));
 pg_size_pretty
----------------
 520 MB
(1 row)


Inserting data into the _domain table is a bit slower than into the _enum table.


I also created two tables 'linked' with primary keys; one used 
code_type::varchar as the FK field, while the other used 
code_type::integer as the FK field (but then, in the referenced table, 
I need to have code_type_description or whatever). I filled them with 
the same data.


The size of the table where the FK is varchar is the same as the 
_domain table's, while the table where the FK is integer is the same 
size as the _enum table.


Enums are a pain to administer - changing the enum type definition for 
a table where I have 12.000.000 rows takes almost forever! Domains are 
a bit more flexible, yes.


And, yes, 'person sex' is a very good example of when to use enums. My 
example is poor for enums and domains. A separate table referenced with 
a foreign key seems like the best solution because I have no idea 
whether the client might want to add 'ApprovedButThenRevoked' or 
'ApprovedWhenNotDrunk' code types in the future.


Mike



Re: [SQL] Ordering a name list and ignoring whitespace

2009-04-16 Thread Mario Splivalo

Mikel Lindsaar wrote:

Hi all,

Doing some googling and looking through the docs, I can't find an
obvious way to do this beside post processing after the query (which I
am trying to avoid).

I'm trying to select a list of names in alphabetical order but
ignoring the whitespace.

So for example, the name "La Combe" should come after "Lace" but
before "Lacs..."

Any ideas?


Could you do it like this:

SELECT
   replace(name_column, ' ', '') AS name_replaced
FROM
   your_table_name
ORDER BY
   name_replaced

This can get a bit slow if your table has quite a number of rows.
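If the whitespace-stripped ordering is frequent, an expression index on the same expression (hypothetical names, matching the query above) lets the planner read rows in order instead of sorting every time:

```sql
-- The index expression must match the ORDER BY expression exactly
-- for the planner to consider using the index instead of a sort.
CREATE INDEX your_table_name_name_nospace_idx
    ON your_table_name (replace(name_column, ' ', ''));
```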

Mike



Re: [SQL] Query with Parameters and Wildcards

2009-04-27 Thread Mario Splivalo

landsharkdaddy wrote:

I have not tried that but I will in the morning. The @ in SQL Server is 
used to indicate a parameter passed to the query. In PostgreSQL it seems 
that the : serves the same role as the @ in SQL Server. I tried something 
like:

SELECT * FROM Customers WHERE FirstName LIKE :custfirst + '%'; 


And it told me that the + could not be used. Not sure of the exact 
message, but I will check again tomorrow and post the results.


T-SQL defines that variables must start with @ (like, for instance, 
in PHP they start with $).


In postgres you have positional parameters, $1 for instance.

You could, for instance, write an SQL function in postgres that does 
what you need:


CREATE FUNCTION get_customers_with_like (a_name_part character varying)
RETURNS SETOF customers
AS
$$
SELECT * FROM customers WHERE firstname LIKE $1 || '%';
$$
LANGUAGE 'sql';


In postgres, you use '||' for string concatenation (instead of '+' in 
T-SQL).


Mario
