Re: [SQL] From with case

2013-03-26 Thread Ben Morrow
Quoth pavel.steh...@gmail.com (Pavel Stehule):
 On 25.3.2013 at 23:51, Ben Morrow b...@morrow.me.uk wrote:
 
  I would use a view for this:
 
  create view vale_any as
  select 'P'::text type, v.adiant, v.desc_per, v.cod
  from valepag v
  union all
  select 'R', v.adiant, v.desc_per, v.cod
  from valerec v;
 
  then
 
  for rSql in
  select a.adiant, a.desc_per
  from vale_any a
  where a.type = cTip and a.cod = 2
  loop
 
 This design has a performance problem. You read both tables everywhere -
 for large tables this can be bad.

You would think so, but, in general, Pg is cleverer than that. For the
simple case of queries with constants in (so, a client-submitted query
like

select * from vale_any a where a.type = 'P' and a.cod = 2

or the equivalent with bound placeholders) the planner won't even plan
the parts of the view which don't get used. Try some experiments with
EXPLAIN to see what I mean: the unused sections of the Append (that is,
the UNION ALL) are either omitted entirely or get replaced with

Result
One-Time Filter: false

(I'm not entirely sure what makes the difference, though it seems to be
to do with how complicated the individual parts of the UNION are).
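
You can see the pruning for yourself with something like the following
(the plan shapes here are illustrative, not verbatim output from any
particular version):

```sql
-- With a constant in the query, only the matching branch of the
-- UNION ALL view gets planned:
explain select * from vale_any a where a.type = 'P' and a.cod = 2;

-- Illustrative plan shape:
--  Append
--    ->  Seq Scan on valepag v
--          Filter: (cod = 2)
-- The 'R' branch is either omitted entirely or reduced to:
--    ->  Result
--          One-Time Filter: false
```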

PL/pgSQL is a bit more complicated, because (unless you use EXECUTE) it
pre-plans all its statements, so the condition on a.type is not constant
at planning time. However, if you PREPARE a statement like

prepare v as select * from vale_any a 
where a.type = $1 and a.cod = $2

and then run it with EXPLAIN ANALYZE EXECUTE v ('P', 2) you will see
that although the plan includes the parts of the view that don't get
used they are all marked '(never executed)' by EXPLAIN ANALYZE, because
the executor had enough information to work out they could never return
any rows. Skipping those parts of the plan at execute time does have a
small cost--for small tables you will see the total query time go up a
little for a prepared statement--but nothing like the cost of scanning a
large table. I would expect it's about the same as the cost of a
PL/pgSQL IF/THEN/ELSE.

It's worth noting at this point that if you know the rows of a UNION
will be distinct it's worth making it a UNION ALL, since otherwise Pg
has to add a sort-and-uniq step which can be expensive.
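
A quick way to see the extra step (again, plan shapes are illustrative
and vary by version):

```sql
-- UNION must deduplicate, so the plan gains a HashAggregate
-- (or Sort + Unique) node on top of the Append:
explain select cod from valepag union select cod from valerec;

-- UNION ALL is just the bare Append of the two scans:
explain select cod from valepag union all select cod from valerec;
```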

Ben



-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] From with case

2013-03-25 Thread Ben Morrow
Quoth c...@sygecom.com.br (Mauricio Cruz):
 
 I'm working in a PL/SQL and I'd like to use the same
 PL for 2 kinds of tables... 
 
 I have valepag and valerec both tables
 have the same columns, but one is for debit and the other one is for
 credit, the PL will work for both cases 
 
 with the unique diference for
 the name of the table... 
 
 So I thought to use something like this:
 ... 
 
 For rSql in select a.adiant,
  a.desc_per
  from case
  when
 cTip='P'
  then valapag
  else valerec
  end
  where cod=2 Loop 
 
 ... 
 
 But
 it just doesn't work... does someone have another solution for this case?

I would use a view for this:

create view vale_any as
select 'P'::text type, v.adiant, v.desc_per, v.cod
from valepag v
union all
select 'R', v.adiant, v.desc_per, v.cod
from valerec v;

then

for rSql in
select a.adiant, a.desc_per
from vale_any a
where a.type = cTip and a.cod = 2
loop

You need to cast the constant in the view definition, otherwise Pg
complains about its type being ambiguous. You should use the same type
as cTip will be.

Ben





Re: [SQL] ZIP function

2013-03-16 Thread Ben Morrow
Quoth ja...@xnet.co.nz (Jasen Betts):
 On 2013-03-16, Victor Sterpu vic...@caido.ro wrote:
 
  Is there a function that will give the resulting zip content for a
  string?
  Like SELECT zip('test data');?
 
 no. you could write one that calls gzip in one of the untrusted
 languages.  or in C you could call zlib.

You can call Compress::Zlib or Archive::Zip (depending on which sort of
'zip' you mean) from (trusted) PL/Perl, provided you load the modules
from plperl.on_init.
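
A sketch of what that might look like (the function name and body are
illustrative only; real code must deal with bytea escaping, for which
see encode_bytea/decode_bytea in the PL/Perl documentation):

```sql
-- postgresql.conf (needs a restart; trusted PL/Perl cannot 'use'
-- modules itself, so the module must be preloaded):
--   plperl.on_init = 'use Compress::Zlib ();'

create or replace function gzip(data text) returns bytea
language plperl as $$
    my ($data) = @_;
    # memGzip returns the gzip-compressed form of its argument;
    # encode_bytea makes the binary result safe to return.
    return encode_bytea(Compress::Zlib::memGzip($data));
$$;
```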

Ben





Re: [SQL] UPDATE query with variable number of OR conditions in WHERE

2013-03-14 Thread Ben Morrow
Quoth jorgemal1...@gmail.com (JORGE MALDONADO):
 
 I am building an UPDATE query at run-time and one of the fields I want to
 include in the WHERE condition may repeat several times, I do not know how
 many.
 
 UPDATE table1
 SET field1 = some value
 WHERE (field2 = value_1 OR field2 = value_2 OR .OR field2 = value_n)
 
 I build such a query using a programming language and, after that, I
 execute it. Is this a good approach to build such a query?

You can use IN for this:

UPDATE table1
SET field1 = some value
WHERE field2 IN (value_1, value_2, ...);
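
When the list is built at run time it can also be passed as a single
array parameter, which avoids interpolating values into the SQL text
(the `$1` placeholder style here assumes a driver that supports bound
parameters):

```sql
-- Bind one array parameter instead of building the list into the SQL:
UPDATE table1
SET field1 = 'some value'
WHERE field2 = ANY ($1);
-- e.g. bind $1 as the array '{value_1,value_2,value_3}'
```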

Ben





Re: [SQL] Need help revoking access WHERE state = 'deleted'

2013-03-02 Thread Ben Morrow
Quoth lists-pg...@useunix.net (Wayne Cuddy):
 On Thu, Feb 28, 2013 at 06:02:05PM +, Ben Morrow wrote:
  
  (If you wanted to you could instead rename the table, and use rules on
  the view to transform DELETE to UPDATE SET state = 'deleted' and copy
  across INSERT and UPDATE...)
 
 Sorry to barge in but I'm just curious... I understand this part
 transform DELETE to UPDATE SET state = 'deleted'. Can you explain a
 little further what you mean by copy across INSERT and UPDATE...?

I should first say that AIUI the general recommendation is to avoid
rules (except for views), since they are often difficult to get right.
Certainly I've never tried to use rules in a production system.

That said, what I mean was something along the lines of renaming the
table to (say) entities_table, creating an entities view which filters
state = 'deleted', and then

create rule entities_delete
as on delete to entities do instead 
update entities_table 
set state = 'deleted'
where key = OLD.key;

create rule entities_insert
as on insert to entities 
where NEW.state != 'deleted'
do instead
insert into entities_table 
select NEW.*;

create rule entities_update
as on update to entities 
where NEW.state != 'deleted'
do instead
update entities_table
set key = NEW.key,
state   = NEW.state,
field1  = NEW.field1,
field2  = NEW.field2
where key = OLD.key;

(This assumes that key is the PK for entities, and that the state
field is visible in the entities view with values other than 'deleted'.
I don't entirely like the duplication of the view condition in the WHERE
clauses, but I'm not sure it's possible to get rid of it.)

This is taken straight out of the 'Rules on INSERT, UPDATE and DELETE'
section of the documentation; I haven't tested it, so it may not be
quite right, but it should be possible to make something along those
lines work.

Ben





Re: [SQL] Need help revoking access WHERE state = 'deleted'

2013-02-28 Thread Ben Morrow
Quoth m...@summersault.com (Mark Stosberg):
 
 We are working on a project to start storing some data as soft deleted
 (WHERE state = 'deleted') instead of hard-deleting it.
 
 To make sure that we never accidentally expose the deleted rows through
 the application, I had the idea to use a view and permissions for this
 purpose.
 
 I thought I could revoke SELECT access to the entities table, but then
 grant SELECT access to a view:
 
   CREATE VIEW entities_not_deleted AS SELECT * FROM entities WHERE state
 != 'deleted';
 
 We could then find/replace in the code to replace references to the
 entities table with the entities_not_deleted table

(If you wanted to you could instead rename the table, and use rules on
the view to transform DELETE to UPDATE SET state = 'deleted' and copy
across INSERT and UPDATE...)

 However, this isn't working; I get 'permission denied' when trying to use
 the view. (as the same user that has had their SELECT access removed to
 the underlying table.)

Works for me. Have you made an explicit GRANT on the view? Make sure
you've read section 37.4 'Rules and Privileges' in the documentation,
since it explains the ways in which this sort of information hiding is
not ironclad.
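
A minimal sketch of the grants involved (the role name is an
assumption):

```sql
-- Hypothetical role app_user: take away direct table access,
-- then grant access through the filtering view instead.
revoke all on entities from app_user;
grant select on entities_not_deleted to app_user;

-- Note: a view runs with its owner's privileges, so the view's
-- owner (not app_user) is the one who needs SELECT on entities.
```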

Ben





Re: [SQL] Creating a new database with a TEMPLATE did not work

2013-02-25 Thread Ben Morrow
Quoth adrian.kla...@gmail.com (Adrian Klaver):
 On 02/25/2013 02:49 PM, mkumbale wrote:
  Hi, I am new to PostgreSQL.  I have an empty PostgreSQL DB containing tables
  but no data.  I issued the following command in PGADMIN SQL editor:
 
  CREATE DATABASE NewDefault
 WITH OWNER = postgres
  ENCODING = 'UTF8'
  TEMPLATE = Default
  TABLESPACE = pg_default
  LC_COLLATE = 'English_United States.1252'
  LC_CTYPE = 'English_United States.1252'
  CONNECTION LIMIT = -1;
 
  Although it created the NewDefault DB, it does not contain any of the tables
  in Default.  Default was disconnected when I executed this command.
 
  What am I doing something wrong?
 
 So you have a database named Default?
 
 Probably not a good name because:
 http://www.postgresql.org/docs/9.2/interactive/sql-createdatabase.html
 
 template
 The name of the template from which to create the new database, or 
 DEFAULT to use the default template (template1).
 
 So at a guess you are actually creating the new database from template1.

Also, SQL names are folded to lowercase unless they are quoted, so

TEMPLATE = Default

would refer to a database called "default" (if it weren't special syntax
for template1), but

TEMPLATE = "Default"

should find a database created with CREATE DATABASE "Default".

Ben





Re: [SQL] Summing Grouping in a Hierarchical Structure

2013-02-23 Thread Ben Morrow
Quoth parri...@gmail.com (Don Parris):
 
 Is it possible to use spaces in the ltree path, like so:
 TOP.Groceries.Food.Herbs & Spices
 
 Or do the elements of the path have to use underscores and dashes?

From the docs:

|  A label is a sequence of alphanumeric characters and underscores (for
|  example, in C locale the characters A-Za-z0-9_ are allowed). Labels
|  must be less than 256 bytes long. [...]
| 
| A label path is a sequence of zero or more labels separated by dots,
| for example L1.L2.L3, representing a path from the root of a
| hierarchical tree to a particular node. The length of a label path
| must be less than 65Kb, but keeping it under 2Kb is preferable.

If you need to store non-alphanumeric labels, one answer (as long as
they aren't too long) would be to use URL-encoding, like

TOP.Groceries.Food.Herbs_20_26_20Spices

Of course, you would need to encode _ as well, and you would need to be
sure the labels weren't going to come out too long. Another alternative
would be to MD5 each label and use (say) the first 10 bytes of that MD5
in hex as the ltree label. (Annoyingly there's only one
non-alphanumeric, so you can't use base64.) If you were going to do that
you would need to consider the possibility of an attacker arranging a
hash collision: I don't know where your labels come from, so I don't
know if this would be an issue.
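
A sketch of the hashing approach (the helper name is hypothetical;
10 bytes of MD5 = the first 20 hex characters):

```sql
-- Hypothetical helper: map an arbitrary label to a safe ltree label.
create function ltree_label(text) returns text
immutable language sql
as $$ select substring(md5($1) for 20) $$;

-- e.g. 'Herbs & Spices' becomes a fixed-length alphanumeric label:
-- select ltree_label('Herbs & Spices');
```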

Ben





Re: [SQL] Volatile functions in WITH

2013-02-20 Thread Ben Morrow
Quoth gray...@gmail.com (Sergey Konoplev):
 On Sat, Feb 16, 2013 at 11:58 PM, Ben Morrow b...@morrow.me.uk wrote:
  WITH exp AS ( -- as before
  ),
  subst AS (
  SELECT add_item(e.basket, e.nref, e.count)
  FROM exp e
  WHERE e.nref IS NOT NULL
  )
  SELECT DISTINCT e.msg
  FROM exp e
 
 Alternatively I suppose you can try this one:
 
  WITH exp AS (
 DELETE FROM item i
 USING item_expired e
 WHERE e.oref = i.ref
 AND i.basket = $1
 RETURNING i.basket, e.oref, e.nref, i.count, e.msg
 ),
 upd AS (
 UPDATE item SET count = e.count
 FROM exp e
 WHERE e.nref IS NOT NULL
 AND (basket, nref) IS NOT DISTINCT FROM (e.basket, e.nref)
 RETURNING basket, nref
 ),
 ins AS (
 INSERT INTO item (basket, ref, count)
 SELECT e.basket, e.nref, e.count
 FROM exp e LEFT JOIN upd u
 ON (basket, nref) IS NOT DISTINCT FROM (e.basket, e.nref)
 WHERE e.nref IS NOT NULL AND (u.basket, u.nref) IS NULL
 )
 SELECT DISTINCT e.msg
 FROM exp e

That's not reliable. A concurrent txn could insert a conflicting row
between the update and the insert, which would cause the insert to fail
with a unique constraint violation.

  then the planner sees that the results of subst are not used, and
  doesn't include it in the query plan at all.
 
  Is there any way I can tell WITH that add_item is actually a data-
  modifying statement? Adding FOR UPDATE doesn't seem to help (I didn't
  really expect it would.)
 
 In this regard I would like to listen to gugrus' opinion too.
 
 EXPLAIN ANALYZE WITH t AS (SELECT random()) SELECT 1;
  QUERY PLAN
 
  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.002..0.003
 rows=1 loops=1)
  Total runtime: 0.063 ms
 (2 rows)
 
 EXPLAIN ANALYZE WITH t AS (SELECT random()) SELECT 1 from t;
  QUERY PLAN
 
  CTE Scan on t  (cost=0.01..0.03 rows=1 width=0) (actual
 time=0.048..0.052 rows=1 loops=1)
CTE t
  ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual
 time=0.038..0.039 rows=1 loops=1)
  Total runtime: 0.131 ms
 (4 rows)
 
 I couldn't manage to come to any solution except faking the reference
 in the resulting query:
 
 WITH t AS (SELECT random()) SELECT 1 UNION ALL (SELECT 1 FROM t LIMIT 0);

Yes, I can do experiments too; the alternatives I gave before both work
on my test database. What I was asking was whether they are guaranteed
to work in all situations, given that the planner can in principle see
that the extra table reference won't affect the result.

Ben





Re: [SQL] Volatile functions in WITH

2013-02-20 Thread Ben Morrow
At  8AM -0800 on 20/02/13 you (Sergey Konoplev) wrote:
 On Wed, Feb 20, 2013 at 12:19 AM, Ben Morrow b...@morrow.me.uk wrote:
  That's not reliable. A concurrent txn could insert a conflicting row
  between the update and the insert, which would cause the insert to fail
  with a unique constraint violation.
 
 Okay I think I got it. The function catches exception when INSERTing
 and does UPDATE instead, correct?

Well, it tries the update first, but yes. It's pretty-much exactly the
example in the PL/pgSQL docs.

 If you got mixed up with plpgsql anyway what is the reason of making
 this WITH query constructions instead of implementing everything in a
 plpgsql trigger on DELETE on exp then?

I'm not sure what you mean. exp isn't a table, it's a WITH CTE. The
statement is deleting some entries from item, and replacing some of
them with new entries, based on the information in the item_expired
view. I can't do anything with a trigger on item, since there are
other circumstances where items are deleted that shouldn't trigger
replacement.

  Yes, I can do experiments too; the alternatives I gave before both work
  on my test database. What I was asking was whether they are guaranteed
  to work in all situations, given that the planner can in principle see
  that the extra table reference won't affect the result.
 
 From the documentation VOLATILE indicates that the function value can
 change even within a single table scan, so no optimizations can be
 made. So they are guaranteed to behave as you need in your last
 example.

Well, that's ambiguous. The return value can change even within a single
scan, so if you want 3 return values you have to make 3 calls. But what
if you don't actually need one of those three: is the planner allowed to
optimise the whole thing out? For instance, given

select * 
from (select j.type, random() r from item j) i
where i.type = 1

the planner will transform it into

select i.type, random() r
from item i
where i.type = 1

before planning, so even though random() is volatile it will only get
called for rows of item with type = 1. I don't know if this happens, or
may sometimes happen, or might happen in the future, for rows eliminated
because of DISTINCT.
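
One commonly used way to stop the planner pulling a subquery up and
pushing conditions into it is an OFFSET 0 fence; this relies on planner
behaviour that is conventional but not formally guaranteed, so it has
the same caveat as everything else in this thread:

```sql
-- OFFSET 0 prevents subquery pullup, so random() is evaluated for
-- every row of item before the outer filter is applied.
select *
from (select j.type, random() r from item j offset 0) i
where i.type = 1;
```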

(I think perhaps what I would ideally want is a PERFORM verb, which is
just like SELECT but says 'actually calculate all the rows implied here,
without pulling in additional filter conditions'. WITH would then have
to treat a top-level PERFORM inside a WITH the same as DML.)

Ben





Re: [SQL] Volatile functions in WITH

2013-02-20 Thread Ben Morrow
At 12PM -0800 on 20/02/13 you (Sergey Konoplev) wrote:
 On Wed, Feb 20, 2013 at 10:16 AM, Ben Morrow b...@morrow.me.uk wrote:
  If you got mixed up with plpgsql anyway what is the reason of making
  this WITH query constructions instead of implementing everything in a
  plpgsql trigger on DELETE on exp then?
 
  I'm not sure what you mean. exp isn't a table, it's a WITH CTE. The
 
 Sorry, I meant item of course, exp was a typo.

OK.

  statement is deleting some entries from item, and replacing some of
  them with new entries, based on the information in the item_expired
  view. I can't do anything with a trigger on item, since there are
  other circumstances where items are deleted that shouldn't trigger
  replacement.
 
 Okay, I see.
 
 If the case is specific you can make a simple plpgsql function that
 will process it like FOR _row IN DELETE ... RETURNING * LOOP ...
 RETURN NEXT _row; END LOOP;

Yes, I *know* I can write a function if I have to. I can also send the
whole lot down to the client and do the inserts from there, or use a
temporary table. I was hoping to avoid that, since the plain INSERT case
works perfectly well.

  select *
  from (select j.type, random() r from item j) i
  where i.type = 1
 
  the planner will transform it into
 
  select i.type, random() r
  from item i
  where i.type = 1
 
  before planning, so even though random() is volatile it will only get
  called for rows of item with type = 1.
 
 Yes, functions are executed depending on the resulting plan. A query
 using a volatile function will re-evaluate the function at every row
 where its value is needed.
 
  I don't know if this happens, or may sometimes happen, or might happen
  in the future, for rows eliminated because of DISTINCT.
 
 It is a good point. Nothing guarantees it in the future. The optimizer
 guarantees a stable result but not the way it is reached.

Well, it makes functions which perform DML a lot less useful, so I
wonder whether this is intentional behaviour.

Ben





Re: [SQL] upsert doesn't seem to work..

2013-02-17 Thread Ben Morrow
Quoth bier...@gmail.com (Bert):
 
 We continuously load data from flat files in our database.
 We first insert the data into unlogged tables (in the loadoltp schema), and
 then we use the 'upsert' statement to transfer the data from the load table
 into the tables we are going to use.
 
 The load tables are unlogged, and don't have indexes / pk's on them. All
 our 'real tables', which contains the data, always have a pk consisting out
 of 2 fields. In the example those are 'tick_server_id' and 'item_id'.
 
 At first everything seems to run ok, however it seems that new fields
 aren't always inserted as desired.
 
 
 This is an example query which causes troubles:

That query is basically equivalent to something like

create table st_item (
server_id   integer,
item_id integer,
item_desc   text,
primary key (server_id, item_id)
);
create table st_item_insert (
server_id   integer,
item_id integer,
item_desc   text
);

with upsert as (
update st_item et
set item_desc = e.item_desc
from st_item_insert e
where et.server_id = e.server_id
and et.item_id = e.item_id
returning et.server_id, et.item_id
)
insert into st_item 
(server_id, item_id, item_desc)
select et.server_id, et.item_id, et.item_desc
from st_item_insert et
where et.server_id not in (
select et.server_id
from upsert b) 
and et.item_id not in (
select et.item_id
from upsert b)

There are three problems here. The first is that the NOT IN subselect
selects from et instead of from b. In the context of this subselect et
is a table reference from outside the subselect, so it's treated as a
constant for each run of the subselect. That means that the subselect
will return the value you are testing against for every row in upsert,
so if there were any updates at all you will make no insertions.

The second is that you are making two separate subselects. This means
that a row in st_item_insert will not be inserted if there is a row in
upsert with a matching server_id and a row in upsert with a matching
item_id, *even if they are different rows*. For instance, suppose
st_item_insert has

2   1   foo
1   2   bar
2   2   baz

and the 'foo' and 'bar' entries get updated. The 'baz' entry will then
not get inserted, because the first subselect will find the 'foo' row
and the second will find the 'bar' row. What you need is a single row
subselect, like this:

where (et.server_id, et.item_id) not in (
select server_id, item_id
from upsert)

The third is that upsert is not as simple as you think. It isn't
possible (at least, not in Postgres) to take a lock on a row which
doesn't exist, so it's possible that a concurrent transaction could
insert a row with a conflicting key between the time the UPDATE runs and
the time the INSERT runs. You need to either lock the whole table or use
the retry strategy documented in the 'Trapping Errors' section of the
PL/pgSQL documentation. Annoyingly, even 9's serializable transactions
don't seem to help here, at least not by my experiments.
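
The retry strategy referred to looks roughly like the example in the
'Trapping Errors' documentation, adapted here to the st_item tables
(an untested sketch; the function name is mine):

```sql
create or replace function upsert_st_item(
    p_server integer, p_item integer, p_desc text
) returns void
language plpgsql as $$
begin
    loop
        -- try the update first
        update st_item set item_desc = p_desc
        where server_id = p_server and item_id = p_item;
        if found then return; end if;
        -- no row: try to insert, and if a concurrent transaction
        -- beat us to it, loop back and update instead
        begin
            insert into st_item (server_id, item_id, item_desc)
            values (p_server, p_item, p_desc);
            return;
        exception when unique_violation then
            -- fall through to retry the update
        end;
    end loop;
end
$$;
```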

Ben





Re: [SQL] Perform Function When The Rows Of A View Change

2013-02-17 Thread Ben Morrow
Quoth adam.mailingli...@gmail.com (Adam):
 
 I have a rather complicated view that is dependent upon multiple
 tables, consisting of several windowing and aggregate functions, as
 well as some time intervals. I would like to be able to perform a
 function, i.e. pg_notify(), whenever a row is added, changed, or
 removed from the view's result set.
 
 I think the kicker is the fact that the set of results returned by the
 view is dependent on the current time.
 
 Here's a simplified version of what's going on:
 
 CREATE VIEW view2 AS (
  SELECT view1.id, view1.ts
FROM view1
   WHERE view1.ts > (now() - '1 day'::interval)
 );
 
 As such, even if there are no inserts, deletes, or updates performed
 on any of the tables that view1 depends on, the data contained in
 view2 will change as a function of time  (i.e. rows will disappear
 from the view as time elapses).  I have been unable to come up with a
 trigger or rule that can detect this situation and provide the
 notification I'm looking for.
 
 I could just query the view over and over again, and look for changes
 as they occur. But I'm hoping to find a more elegant (and less
 resource-intensive) solution. Any ideas?

Well, in principle you could calculate the next time the view will
change assuming the tables don't change first, and have a client sit
there sleeping until that time. For instance, the view you posted will
next change at 

select min(t.ts)
from (
select view1.ts + '1 day'::interval ts
from view1
) t
where t.ts > now()

unless the tables view1 is based on change first.

Apart from the potential difficulty calculating that time, you would
need to be able to wake up that client early if one of the tables
changed. Setting triggers on the tables to send a notify to that client
(probably a different notify from the one that client then sends out to
other clients) should be sufficient, as long as that client uses
select(2) and PQconsumeInput to make sure it receives the notifications
in a timely fashion.
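
The table-side triggers could be as simple as the following sketch
(the channel, function, and table names are all assumptions):

```sql
-- Hypothetical: wake the watching client whenever a base table changes.
create function notify_view2_watcher() returns trigger
language plpgsql as $$
begin
    perform pg_notify('view2_base_changed', '');
    return null;  -- AFTER trigger; the return value is ignored
end
$$;

create trigger view1_base_notify
after insert or update or delete on some_base_table
for each statement execute procedure notify_view2_watcher();
```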

Ben





Re: [SQL] How to reject overlapping timespans?

2013-02-17 Thread Ben Morrow
Quoth maps...@gmx.net (Andreas):
 Am 17.02.2013 19:20, schrieb Andreas Kretschmer:
  Andreas maps...@gmx.net hat am 17. Februar 2013 um 18:02 geschrieben:
  I need to store data that has a valid timespan with start and enddate.
 
  objects ( id, name, ... )
  object_data ( object_id referencs objects(id), startdate, enddate, ... )
 
  nothing special, yet
 
  How can I have PG reject a data record where the new start- or enddate
  lies between the start- or enddate of another record regarding the same
  object_id?
 
  With 9.2 you can use DATERANGE and exclusion constraints
 
 though I still have a 9.1.x as productive server so I'm afraid I have to 
 find another way.

If you don't fancy implementing or backporting a GiST operator class for
date ranges using OVERLAPS, you can fake one with the geometric types.
You will need contrib/btree_gist to get GiST indexes on integers.

create extension btree_gist;

create function point(date) 
returns point
immutable language sql
as $$ 
select point(0, ($1 - date '2000-01-01')::double precision)
$$;

create function box(date, date)
returns box
immutable language sql
as $$ 
select box(point($1), point($2))
$$;

create table objects_data (
object_id   integer references objects,
startdate   date,
enddate date,
exclude using gist
(object_id with =, box(startdate, enddate) with &&)
);

You have to use 'box' rather than 'lseg' because there are no indexes
for lsegs. I don't know how efficient this will be, and of course the
unique index will probably not be any use for anything else.
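
With that in place, an overlapping insert for the same object should be
rejected with a conflict error (note that since boxes overlap at their
edges, spans that merely touch at an endpoint will also conflict):

```sql
insert into objects_data values (1, '2013-01-01', '2013-01-31');

-- Overlaps the row above for the same object_id, so the exclusion
-- constraint should reject it:
insert into objects_data values (1, '2013-01-15', '2013-02-15');

-- A different object_id is fine:
insert into objects_data values (2, '2013-01-15', '2013-02-15');
```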

Ben





[SQL] Volatile functions in WITH

2013-02-16 Thread Ben Morrow
Suppose I run the following query:

WITH exp AS (
DELETE FROM item i
USING item_expired e
WHERE   e.oref  = i.ref
AND i.basket= $1
RETURNING i.basket, e.oref, e.nref, i.count, e.msg
),
subst AS (
INSERT INTO item (basket, ref, count)
SELECT e.basket, e.nref, e.count
FROM exp e
WHERE e.nref IS NOT NULL
)
SELECT DISTINCT e.msg
FROM exp e

This is a very convenient and somewhat more flexible alternative to
INSERT... DELETE RETURNING (which doesn't work). However, the item
table has a unique constraint on (basket, ref), so sometimes I need to
update instead of insert; to handle this I have a VOLATILE function,
add_item. Unfortunately, if I call it the obvious way

WITH exp AS ( -- as before
),
subst AS (
SELECT add_item(e.basket, e.nref, e.count)
FROM exp e
WHERE e.nref IS NOT NULL
)
SELECT DISTINCT e.msg
FROM exp e

then the planner sees that the results of subst are not used, and
doesn't include it in the query plan at all.

Is there any way I can tell WITH that add_item is actually a data-
modifying statement? Adding FOR UPDATE doesn't seem to help (I didn't
really expect it would.)

Alternatively, are either of these safe (that is, are they guaranteed to
call the function once for every row returned by exp, even if the
DISTINCT ends up eliminating some of those rows)?

WITH exp AS ( -- as before
), subst AS ( -- SELECT add_item(...) as before
)
SELECT DISTINCT e.msg
FROM exp e
LEFT JOIN subst s ON FALSE

WITH exp AS ( -- as before
)
SELECT DISTINCT s.msg
FROM (
SELECT e.msg, CASE
WHEN e.nref IS NULL THEN NULL
ELSE add_item(e.basket, e.nref, e.count)
END subst
FROM exp e
) s

I don't like the second alternative much, but I could live with it if I
had to.

Ben





Re: [SQL] strangest thing happened

2010-07-08 Thread Ben Morrow
Quoth jo...@jfcomputer.com (John):
 On Wednesday 07 July 2010 03:14:40 pm Justin Graf wrote:
  I would be looking at the log files for the Inserts into that table as a
  means to track down what is the cause.  If there are no log files or
  don't have enough detail, crank up the logging level and wait for it to
  happen again???
 
 
 That is scary - let it happen again  I'm not keeping enough info in the 
 log.  I actually turned off most of the info the log files are gathering 
 because the system has been running for 6-7 months without an issue.  I just 
 got a call around noon telling me something was going wrong.  That's when I 
 discovered the sequences were the wrong values.  I'm sure there has to be 
 some sort of real explanation - but I don't know what it is.

There are several possible causes:

- Something somewhere is inserting values directly into the serial
  columns, without using the sequence. This can be prevented by
  REVOKEing INSERT and UPDATE on the relevant columns for all users.
  If this causes problems anywhere in your app, those are good
  places to start looking for bugs.

- Something somewhere is manipulating the sequence. This can be
  prevented by REVOKEing UPDATE on all sequences for all users. You
  may need some additional GRANTs of USAGE on sequences if parts of
  the app were relying on UPDATE to call nextval().

Obviously if your app routinely drops and creates tables you will need to
arrange for these permissions to be applied every time.

- The database has become corrupted, perhaps by a badly-done backup
  and restore. (I would not expect taking a backup alone to cause
  corruption, but if the backup isn't done right the backed-up copy
  may be corrupt.) Have you done a restore recently?

- Something I haven't thought of :).

- A bug in Pg. While this is *extremely* unlikely, it must be
  mentioned as a possibility.
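
Whatever the cause turns out to be, the immediate repair is usually to
resynchronise each sequence with its column (table, column, and
sequence names here are placeholders):

```sql
-- Hypothetical names: push the sequence past the current maximum key
-- so the next nextval() cannot collide with existing rows.
select setval('orders_id_seq', (select max(id) from orders));
```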

Ben




Re: [SQL] enforcing constraints across multiple tables

2010-06-25 Thread Ben Morrow
Quoth andrew.ge...@gmail.com (Andrew Geery):
 
 I have a question about checking a constraint that is spread across multiple
 (in the example below, two) tables.  In the example below, every food (food
 table) is in a food group (food_group table).  For every person (person
 table), I want to enforce the constraint that there can only be one food in
 a given food group (person_food link table) [think of it as every person may
 have a favorite food in a given food group].
 
 The problem seems to be that the link is in the person_food table, but the
 information that is needed to verify the constraint is also in the food
 table (i.e., what food group is the food in?).
 
 There are two problems here:
 (1) don't allow a food to be associated with a person if there is already a
 food in the same food group associated with the person; and
 (2) don't allow the food group for a food to be changed if this would
 violate (1)
 
 To enforce (1), I created a function to check whether a given food can be
 associated with a given person (is there already a food in the same food
 group associated with the person?) and added a check constraint to the
 person_food table.
 To enforce (2), I wasn't able to use a check constraint because the
 constraint was being checked with the existing data, not with the new data.
  I had to add an after trigger that called a function to do the check.
 
 My questions are:
 (A) Is there a way to check (2) above using a constraint and not a trigger?
 (B) Is there an easier way to solve this problem?  Does the complicated
 nature of the solution make the design poor?
 (C) Should I not worry about this constraint at the DB level and just
 enforce it at the application level?
 
 Below are the tables, functions and triggers I was using.
 
 Thanks!
 Andrew
 
 ===
 
 create table person (
id serial primary key,
name varchar not null
 );
 
 create table food_group (
id serial primary key,
name varchar not null
 );
 
 create table food (
id serial primary key,
food_group_id int not null references food_group,
name varchar not null
 );
 
 create table person_food (
person_id int not null references person,
food_id int not null references food,
primary key (person_id, food_id),
check (is_person_food_unique(person_id, food_id))
 );

Instead of this, try

create table person_food (
person_id int not null references person,
food_id int not null,
food_group_id int not null,

foreign key (food_id, food_group_id)
references food (id, food_group_id),
unique (person_id, food_group_id)
);

If you wish to move foods between groups, the foreign key above will
need to be ON UPDATE CASCADE.
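One detail glossed over above: a composite foreign key can only reference
columns covered by a unique constraint, and the primary key on food(id)
alone is not enough for a two-column reference. Something like this should
do it (the constraint name is made up):

    -- The FK references food (id, food_group_id), so food needs a
    -- unique constraint spanning both columns.
    alter table food
        add constraint food_id_group_key unique (id, food_group_id);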

Ben




[SQL] Casts in foreign schemas

2010-06-08 Thread Ben Morrow
Is this behaviour expected? (This is with 8.4.3 on FreeBSD.)

create schema one;
set search_path to one;

create type foo as (x integer);
create function foo (integer) returns foo
language plpgsql as $$
declare
y foo;
begin
y.x = $1;
return y;
end
$$;
create cast (integer as foo) with function foo (integer);

grant usage on schema one to public;
grant execute on function foo (integer) to public;

create schema two;

-- reconnect as a different user

set search_path to two;
select 3::one.foo;
ERROR:  type foo does not exist
CONTEXT:  compilation of PL/pgSQL function foo near line 2

set search_path to two, one;
select 3::foo;
 foo
-----
 (3)
(1 row)

My understanding of things was that PL/pgSQL functions were compiled at
CREATE FUNCTION time, using the SEARCH_PATH currently in effect. Is that
wrong? Is there some GRANT I'm missing that will make this work?

Ben




Re: [SQL] Rules and sequences

2010-05-27 Thread Ben Morrow
Quoth t...@sss.pgh.pa.us (Tom Lane):
 Ben Morrow b...@morrow.me.uk writes:
  I am trying to implement a fairly standard 'audit table' setup, but
  using rules instead of triggers (since it should be more efficient).
 
 Rules are sufficiently tricky that I would never, ever rely on them for
 auditing.  Use a simple AFTER trigger instead.

OK, thanks. 

Ben




[SQL] Rules and sequences

2010-05-26 Thread Ben Morrow
I am trying to implement a fairly standard 'audit table' setup, but
using rules instead of triggers (since it should be more efficient).
However, I'm running into problems when one of the audited tables has a
'serial' column that is allowed to default:

create table foo (id serial, bar text);
create table audit (ix bigserial, rec text);
create rule audit_insert as on insert to foo do also 
insert into audit (rec) values ((new.*)::text);
insert into foo (bar) values ('baz');
select * from foo;
 id | bar 
----+-----
  1 | baz
(1 row)
select * from audit;
 ix |   rec   
----+---------
  1 | (2,baz)
(1 row)

I can see why this is happening (the rule is essentially a macro, so the
NEW expression gets expanded twice, including the nextval call, so the
sequence is incremented twice), but is there any way to prevent it? Some
way of 'materialising' the NEW row so it is just plain values rather
than a list of expressions?
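For what it's worth, the trigger version of the same audit avoids the
problem entirely: an AFTER trigger sees NEW as an already-materialised
row, so the serial default is evaluated exactly once. A minimal sketch
(function and trigger names are made up):

    create function audit_foo() returns trigger
    language plpgsql as $$
    begin
        -- NEW here is plain values, not a list of expressions,
        -- so nextval() is not re-evaluated.
        insert into audit (rec) values ((new.*)::text);
        return null;  -- return value is ignored for AFTER triggers
    end
    $$;

    create trigger audit_foo_insert
        after insert on foo
        for each row execute procedure audit_foo();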

Ben




Re: [SQL] count function alternative in postgres

2010-04-06 Thread Ben Morrow
Quoth junaidmali...@gmail.com (junaidmalik14):
 
 Is there any alternative of mysql function COUNT(DISTINCT expr,[expr...]) in
 postgres. We get error if we 
 
 write count like this count(distinct profile.id, profile.name, profile.age)
 but it works well in mysql.

Pg does support COUNT(DISTINCT expr), but only for a single column. The best
I can come up with for multiple columns is

select count(distinct profile.tuple) from
(select (id, name, age) as tuple from profile)
as profile;

or alternatively

select count(*) from
(select distinct (id, name, age) as tuple from profile)
as profile;
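For completeness, I believe Pg will also accept a row constructor
directly inside the aggregate, which avoids the subquery altogether:

    -- DISTINCT over a row value; counts distinct (id, name, age) triples
    select count(distinct (id, name, age)) from profile;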

Ben




Re: [SQL] Help : insert a bytea data into new table

2010-03-11 Thread Ben Morrow
Quoth dennis den...@teltel.com:
 Dear Ben
 
 thanks for you anwser.
 I try to add function quote_literal on my sql statement .
 
 but it raise other error message (quote_literal not support bytea format):
   function quote_literal(bytea) does not exist

Which Postgres version are you using?

Ben




Re: [SQL] Help : insert a bytea data into new table

2010-03-11 Thread Ben Morrow
[quoting fixed]

Quoth dennis dennis.ma...@gmail.com:
 Ben Morrow wrote:
  Quoth dennisden...@teltel.com:
  Dear Ben
 
   thanks for you anwser.
  I try to add function quote_literal on my sql statement .
 
  but it raise other error message (quote_literal not support bytea format):
 function quote_literal(bytea) does not exist
 
  Which Postgres version are you using?

 Postgres : 8.1.4

Then I think you want 

create function quote_literal (bytea)
returns text
immutable strict
language plpgsql
as $$ 
begin
return 'E'''
|| replace(encode($1, 'escape'), E'\\', E'\\\\')
|| '''';
end;
$$;

Ben




Re: [SQL] Help : insert a bytea data into new table

2010-03-10 Thread Ben Morrow
Quoth dennis den...@teltel.com:
 Hi Ben
 
 here is my function , it's for fix missing chunk problem.
 It has same problem ,please take look
 
 
 thank for you help
 
 -table--
 
 
 db=# \d usersessiontable;
      Table "public.usersessiontable"
   Column   |          Type          | Modifiers
 -----------+------------------------+-----------
  serverid  | character varying(100) |
  sessionid | character varying(50)  |
  data      | bytea                  |
 Indexes:
     "usersessiontable_idx" btree (sessionid)
 db=#
 
 db=# \d usersessiontable_test;
      Table "public.usersessiontable"
   Column   |          Type          | Modifiers
 -----------+------------------------+-----------
  serverid  | character varying(100) |
  sessionid | character varying(50)  |
  data      | bytea                  |
 
 --function
 
 
 CREATE OR REPLACE FUNCTION check_missing_chunk_table_usersessiontable()
RETURNS integer AS
 $BODY$
 declare
 begin
 records = 0;
 OPEN curs1  FOR EXECUTE 'SELECT * FROM usersessiontable ORDER BY 
 sessionid';
 loop
  FETCH curs1 INTO rowvar;
  IF  NOT FOUND THEN
  EXIT;
  END IF;
  begin
  a_sql = 'insert into 
 usersessiontable_test(sessionid,serverid,data) 
 values('''||rowvar.sessionid||''','''||rowvar.serverid||''',E'''||rowvar.data||''')';--my
  

You are trying to concatenate ''',E''' (of type text) and rowvar.data
(of type bytea). This is where the error is coming from. (This actually
works in 8.4, so I presume you're using an earlier version?) In any
case, this is not a safe way to interpolate into an SQL string: you need
the quote_literal function.

a_sql = 'insert into usersessiontable (sessionid, serverid, data) '
|| 'values (' || quote_literal(rowvar.sessionid) || ', '
|| quote_literal(rowvar.serverid) || ', '
|| quote_literal(rowvar.data) || ')';

(Is there a function which will do %-interpolation the way RAISE does?
It would be much clearer in cases like this.)
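(Answering my own parenthetical: newer releases, 9.1 onward, grow a
format() function whose %L specifier applies quote_literal to its
argument. It is not available in the 8.x versions discussed in this
thread, but with it the statement above becomes:

    -- %L quote-literals each argument safely
    a_sql := format(
        'insert into usersessiontable (sessionid, serverid, data) '
        'values (%L, %L, %L)',
        rowvar.sessionid, rowvar.serverid, rowvar.data);

)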

Ben




Re: [SQL] Help : insert a bytea data into new table

2010-03-09 Thread Ben Morrow
Quoth dennis den...@teltel.com:
 here is example
 
 table name is mail:
 column|   type
 -
 sender|char
 subject   |char

I presume you mean 'varchar'?

 content   |bytea
 
 
 I want copy some record into new table 'mail_new'.
 
 sql:
 create table mail_new as select * from mail sender='dennis'

You omitted the WHERE. It's very hard to see what's actually going on
when you keep mis-typing the commands you used.

 result has an error:
 operator does not exist: text || bytea
 
 
 But if my sql statement has no column content
 the sql works.
 sql:
 create table mail_new as select sender,subject from mail sender='dennis'

No, it still doesn't give that error for me. Show us something you've
*actually* *tried*.

Ben




Re: [SQL] Help : insert a bytea data into new table

2010-03-08 Thread Ben Morrow
Quoth dennis den...@teltel.com:
 
 I need to copy some data to new table.
 But I encounter some error message.
 the table structure
 Table A:
   c1  char
   c2  bytea
 
 Table B:
   c1  char
   c2  bytea
 
 
 My sql command:
 insert into B as select * from a where c1=xxx

'AS' isn't valid there. What is xxx? Is it a field you haven't shown us,
or is it a quoted string?

 error:
 operator does not exist: text || bytea

That command (with 'xxx' quoted and the AS removed) doesn't give that
error with those table definitions, so you will need to show us your
actual query.

Ben

