Re: [SQL] uniqueness constraint with NULLs

2009-06-29 Thread Craig Ringer
On Mon, 2009-06-29 at 15:42 +1000, Robert Edwards wrote:

> Can anyone suggest a way that I can impose uniqueness on a and b when
> c is NULL?

One way is to add an additional partial unique index on (a,b):

CREATE UNIQUE INDEX bobtest_ab_unique ON bobtest (a, b) WHERE (c IS NULL);

... however, if you want to do the same sort of thing for all
combinations (a, null, null), (b, null, null), (c, null, null), (a, b,
null), (a, c, null), (b, c, null), (a, b, c), that'll be a LOT of indexes.

In that case you might be better off just using a trigger function like
(untested but should be about right):

CREATE OR REPLACE FUNCTION bobtest_unique_trigger() RETURNS trigger AS
$$
declare
  conflicting_id integer;
begin
  if TG_OP = 'INSERT' or TG_OP = 'UPDATE' then
    select id into conflicting_id from bobtest
    where (new.a IS NOT DISTINCT FROM a)
      and (new.b IS NOT DISTINCT FROM b)
      and (new.c IS NOT DISTINCT FROM c)
      -- on UPDATE, don't treat the row being updated as its own conflict
      and (TG_OP = 'INSERT' or new.id <> id)
    limit 1;
    if found then
      raise exception 'Unique violation in bobtest: inserted row conflicts with row id=%',
        conflicting_id;
    end if;
  end if;
  return new;
end;
$$ LANGUAGE plpgsql;

CREATE TRIGGER bobtest_unique_check
  BEFORE INSERT OR UPDATE ON bobtest
  FOR EACH ROW EXECUTE PROCEDURE bobtest_unique_trigger();

... which enforces uniqueness considering nulls.

> In the real app., c is a date field and I require it to be NULL for
> some rows.

Oh. Er, in that case, the partial unique index is your best bet (but 'a'
and 'b' should be NOT NULL, right?).

> in case I am missing some other solution that
> doesn't involve the use of triggers etc.

Sometimes a trigger is the right solution.

-- 
Craig Ringer


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] uniqueness constraint with NULLs

2009-06-29 Thread Craig Ringer
On Tue, 2009-06-30 at 10:22 +1000, Robert Edwards wrote:

> Would this be in addition to a unique constraint on (a, b, c) (for the
> cases where c is not null)?

That depends on your app's needs. Do you need to enforce uniqueness of
(a,b,c) ? Or of (a,b) where c is _not_ null? If so, then yes, that's in
addition to the existing unique index over a, b and c.

If you're happy to permit multiple identical (a,b,c) sets so long as c
is null, then there's no need for the additional index. 

> In the real app. a and b are not null ints and c is a date. The date
> indicates if and when a row has expired (there are other columns in the
> table). I am trying to avoid having separate columns for the "if" and
> the "when" of the expiry.

Sounds fairly sensible to me, though I have the unsubstantiated feeling
that sometimes storing a boolean + date can help the database optimise
queries better. 

If you do store a boolean and a date, add a check constraint to ensure
sanity:

CHECK ( is_expired = (expired_date IS NOT NULL) )

since you don't want a valid expired_date for an unexpired row, or a row
that expired at an unknown time.

> One alternate would be to use a date way off into the future (such as
> the famous 9/9/99 case many COBOL programmers used back in the 60's...)
> and to test on expired < now ().

Ick. Ick. Ick.

> Another option is to use a separate shadow table for the expired rows
> and to use a trigger function to "move" expired rows to that shadow
> table. Then need to use UNION etc. when I need to search across both
> current and expired rows.

That can work. It might be worth using inherited tables so you have a
single parent table that has rows from both history and live tables,
though.

Often a partial index on the main table works just as well, though.
That's another benefit to adding the index on (a,b) where c is null:
it's a smaller index that takes up less memory and can be scanned much
faster.

Unlike partitioning the data into separate tables, though, you'll still
have a mix of expired and unexpired rows in the table heap. The data of
interest will be scattered across more blocks, so more heap data will
have to be read in to satisfy a `where c is null' constrained query. So
a partial index isn't _as_ good as partitioning the data - but it's
quite a bit easier.

-- 
Craig Ringer


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Bit by "commands ignored until end of transaction block" again

2009-07-24 Thread Craig Ringer
On Thu, 2009-07-23 at 17:06 +1000, Chris wrote:
> Joshua Tolley wrote:
> > On Thu, Jul 23, 2009 at 02:04:53AM -0400, Glenn Maynard wrote:
> >> On Thu, Jul 23, 2009 at 1:31 AM, Richard Huxton wrote:
> >>>>  - Let me use SAVEPOINT outside of a transaction,
> >>> You are never outside a transaction. All queries are executed within a
> >>> transaction.
> >> "Transaction block", then, if you insist.
> >>
> >>> I think this is the root of your problem - all queries are within a
> >>> transaction so either:
> >>> 1. You have a transaction that wraps a single statement. If you get an 
> >>> error
> >>> then only that statement was affected.
> >>> 2. You have an explicit BEGIN...COMMIT transaction which could use a
> >>> savepoint.
> >> Savepoints can only be used inside transaction blocks.  My function
> >> has no idea whether it's being called inside a transaction block.
> >>
> >> From inside a transaction block, my function would need to call
> >> SAVEPOINT/RELEASE SAVEPOINT.
> >>
> >> If it's not in a transaction block, it needs to call BEGIN/COMMIT
> >> instead.  SAVEPOINT will fail with "SAVEPOINT can only be used in
> >> transaction blocks".
> > 
> > Have you tried this? I expect if you give it a shot, you'll find you don't
> > actually have this problem. Really, everything is always in a transaction.

[snip]

> You haven't explicitly started a transaction, therefore savepoints won't 
> work.

True. However, he's talking about code within a PL/PgSQL function. To a
PL/PgSQL function there is NO difference between:


begin;
select my_function();
commit;

and a standalone:

select my_function();

in both cases the statement executes in a transaction, and in both cases
individual statements within the function are within the same
transaction. That's why any function can use EXCEPTION blocks, etc., which
rely on savepoints.

-- 
Craig Ringer


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Bit by "commands ignored until end of transaction block" again

2009-07-24 Thread Craig Ringer
On Thu, 2009-07-23 at 03:39 -0400, Glenn Maynard wrote:

> I'm writing a Python library call.  It has no idea whether the caller
> happens to be inside a transaction already, and I don't want to
> specify something like "always run this inside a transaction".
> (Callers are equally likely to want to do either, and it's bad API to
> force them to start a transaction--the fact that I'm using the
> database at all should be transparent.)

Personally, I'd think about moving the function into the database, using
PL/PgSQL or even PL/PythonU if you have to.

Why should DB use be transparent when you're modifying the DB? In one
case you immediately make a change. In another case, you schedule a
change to be applied if/when the current transaction commits, so the
change may or may not occur at some point in the future. That is, IMO, a
big difference.

Most applications with this sort of thing will have app-level
transaction APIs that contain and manage the DB-level ones anyway.

> RELEASE SAVEPOINT would only COMMIT the transaction *if* the savepoint
> that it's releasing started it.

So, what you're really asking for boils down to nestable transactions?

-- 
Craig Ringer


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Bit by "commands ignored until end of transaction block" again

2009-07-26 Thread Craig Ringer
On Sun, 2009-07-26 at 19:15 -0400, Science wrote:

> FWIW, the way the Rails ORM ActiveRecord (another fairly damaged ORM) 
> handles this is by allowing you to open any number of transaction 
> blocks, but only the outer transaction block commits (in Pg):
> 
> Property.transaction { # SQL => 'BEGIN'
>User.transaction {
>  Foo.transaction {
>Foo.connection.execute('--some sql code') # SQL => '--some sql code'
>  }
>}
> } # SQL => 'COMMIT'

What happens if Foo.transaction does something that causes an error,
though, or issues a rollback? It's not creating savepoints, so if
Foo.transaction rolls back it throws out the work of User.transaction
and Property.transaction too.

Ugh.

That design would be quite good _IF_ it used savepoints:


Property.transaction {       # SQL => 'BEGIN'
  User.transaction {         # SQL => SAVEPOINT User
    Foo.transaction {        # SQL => SAVEPOINT Foo
      Foo.connection.execute('--some sql code') # SQL => '--some sql code'
    }                        # SQL => RELEASE SAVEPOINT Foo
  }                          # SQL => RELEASE SAVEPOINT User
}                            # SQL => 'COMMIT'

... so that inner transactions could ROLLBACK TO SAVEPOINT on error,
and so that asking for a rollback would issue a ROLLBACK TO SAVEPOINT
when the transaction is a subtransaction.
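The BEGIN/SAVEPOINT/RELEASE discipline above is easy to sketch. Here's an
illustrative Python stand-in (this is not ActiveRecord's actual API; the
savepoint naming scheme and the recorded-statements list are mine) showing
the SQL a savepoint-aware nesting implementation would issue:

```python
issued = []   # stands in for SQL statements sent over the wire
depth = 0

def transaction(body):
    """Run body() in a transaction; nested calls become savepoints."""
    global depth
    name = "sp_%d" % depth
    issued.append("BEGIN" if depth == 0 else "SAVEPOINT %s" % name)
    depth += 1
    try:
        body()
    except Exception:
        # an inner failure only rolls back to its own savepoint
        depth -= 1
        issued.append("ROLLBACK" if depth == 0
                      else "ROLLBACK TO SAVEPOINT %s" % name)
        raise
    depth -= 1
    issued.append("COMMIT" if depth == 0 else "RELEASE SAVEPOINT %s" % name)

transaction(lambda: transaction(lambda: issued.append("--some sql code")))
assert issued == ["BEGIN", "SAVEPOINT sp_1", "--some sql code",
                  "RELEASE SAVEPOINT sp_1", "COMMIT"]
```

The key property is in the except branch: a failed inner block discards only
its own work, leaving the outer transaction intact.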

-- 
Craig Ringer


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Multiple simultaneous queries on single connection

2009-08-18 Thread Craig Ringer

On 17/08/2009 8:49 PM, Yeb Havinga wrote:

Hello list,

We want to access a postgres database with multiple queries / result
sets that are read simultaneously (hence async). The documentation says
explicitly that no new PQsendQuery can be sent on the same channel
before PQgetResult has returned null. This means we need to use
multiple connections.

The problem is that multiple connections (from the same application
instance) do not share the same MVCC snapshot.

Is there a way to have two or more connections see each others
uncommitted things?


Not at present.

You should be able to use explicitly declared cursors and FETCH to 
interleave requests for results from one or more queries in the same 
transaction using the one connection, but only one FETCH may be active at 
a time.


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Hibernate, web application and only one sequence for all primary keys

2009-09-17 Thread Craig Ringer
Tom Lane wrote:
> rawi  writes:
>> Grails/Hibernate wishes per default one sequence for all tables-PKs and all
>> PKs as BigInt.
> 
> Redesign that software; this is fundamentally broken and stupid.

It's a pretty silly default, but it's clearly intended for simple /
small databases. In any real database you'll be using one sequence per
generated PK. I've had no problems with this with Hibernate, and am a
bit puzzled that the OP is trying to fit their DB to Hibernate rather
than doing the trivial configuration required to get Hibernate to fit
the DB.

The Hibernate documentation is pretty good, and covers this sort of
thing well.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Random Unique Id

2009-10-20 Thread Craig Ringer
Nahuel Alejandro Ramos wrote:
> Hi all,
>   I was searching for a sequence (for serials) that let me use a random
> unique number ID on a Primary Key or a simple index.
>   I have not found a solution so I have done it by myself. I would like to
> share it so here it is:

Here's what I'm using:

http://wiki.postgresql.org/wiki/Pseudo_encrypt

thanks to the incredibly helpful folks on this list, in this case
particularly Daniel Verite.
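For intuition: pseudo_encrypt is built on a Feistel network, and any Feistel
network is a bijection, so it maps a plain sequence to scrambled-looking but
still unique values. A minimal Python sketch (the round function and its
constants here are illustrative, not the wiki function's exact arithmetic):

```python
def feistel32(value, rounds=3):
    """Apply a toy 3-round Feistel network to a 32-bit integer."""
    def round_fn(x):
        # hypothetical round function; any deterministic 16-bit mix will do
        return ((x * 1366 + 150889) % 714025) & 0xFFFF
    left, right = (value >> 16) & 0xFFFF, value & 0xFFFF
    for _ in range(rounds):
        left, right = right, left ^ round_fn(right)
    return (left << 16) | right

# distinct inputs always map to distinct outputs (a Feistel net is invertible)
ids = [feistel32(i) for i in range(10000)]
assert len(set(ids)) == len(ids)
```

Because the construction is invertible by running the rounds backwards, there
can be no collisions, which is exactly the property you want when replacing a
sequence with "random-looking" unique IDs.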

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] @@Error equivalent in Postgresql

2009-10-23 Thread Craig Ringer
Pavel Stehule wrote:

> So
> programming based on returning state codes is very obsolete, and a
> little bit difficult. You can emulate it, but any protected block creates
> an inner transaction, and this has a negative effect on speed - and it
> adds some lines more.

I'd like to second and emphasise this. Attempting to use return codes
will cause you unnecessary grief. It might be worth it to maintain
compatibility in other code between Pg and MS-SQL, but I'm not convinced
- there are enough other differences that you'll probably need to
maintain fairly different versions of the functions anyway. In many
cases (mainly where no flow control is required) your T-SQL functions
can become simple SQL functions anyway.

I suspect in the long run you'll be better off handling errors by
letting exceptions bubble up so that the caller can handle it or let it
bubble up in turn. The exception will include an informative error code.
You can then get that error code via exception information if you trap
the exception in PL/PgSQL or via your application driver interface
(JDBC/ODBC/etc) as the SQLSTATE if it bubbles up to a top-level statement.

I've found PostgreSQL's exception-based error handling a real
life-saver, especially when working with programming languages that also
use exceptions. When I'm working in Java, for example, a sanity check
deep in some PL/PgSQL function may raise an exception that propagates
through to the top-level SQL statement, causing the JDBC driver to throw
a Java SQLException that in turn propagates up to code that's in a
position to do something about the problem.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Why don't I get a LATIN1 encoding here with SET ENCODING?

2009-11-03 Thread Craig Ringer
Bryce Nesbitt wrote:
> I'm tracking another bug, but wanted to verify stuff on the command line.  I 
> can't figure out why this did not work:

> dblack3-deleteme=> insert into bryce1 values(1,2,'test\375');
> ERROR:  invalid byte sequence for encoding "UTF8": 0xfd

I'd say the server is interpreting your query text as latin-1 and
converting it to the server encoding UTF-8 as it should, resulting in
the utf-8 string:

  insert into bryce1 values(1,2,'test\375');

which it *then* interprets escapes in. As test\xfd ('0x74 0x65 0x73 0x74
0xfd') isn't valid UTF-8, the server rejects it.
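A quick way to check that claim in Python (the octal escape \375 denotes the
single byte 0xfd):

```python
# \375 octal is byte 0xfd: a valid Latin-1 'ý', but not valid UTF-8 on its own
b = b'test' + bytes([0o375])
assert b == b'test\xfd'
assert b.decode('latin-1') == 'test\u00fd'   # decodes fine as Latin-1

try:
    b.decode('utf-8')                        # 0xfd is not a legal UTF-8 sequence
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
assert not decoded_ok

# the UTF-8 encoding of 'ý' is the two-byte sequence 0xc3 0xbd
assert '\u00fd'.encode('utf-8') == b'\xc3\xbd'
```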

If my understanding is right then the trouble is that the
client_encoding setting doesn't affect string escapes in SQL queries.
The conversion of the query text from client to server encoding is done
before string escapes are processed.

In truth, that's how I'd expect it to happen. If I ask for the byte 0xfd
in a string, I don't want the server to decide that I must've meant
something else because I have a different client encoding. If I wanted
encoding conversion, I wouldn't have written it in an escape form, I'd
have written 'ý' not '\375'.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Why don't I get a LATIN1 encoding here with SET ENCODING?

2009-11-03 Thread Craig Ringer
Bryce Nesbitt wrote:
> 
> 
> Craig Ringer wrote:
>> In truth, that's how I'd expect it to happen. If I ask for the byte 0xfd
>> in a string, I don't want the server to decide that I must've meant
>> something else because I have a different client encoding. If I wanted
>> encoding conversion, I wouldn't have written it in an escape form, I'd
>> have written 'ý' not '\375'.

> I've got a client encoding of LATIN1... so I'd expect to be able to
> present any valid LATIN1 character, not care how the backend stored it,
> then get the same character back from the database.

Yes - but you are *not* presenting a Latin-1 character. You're
presenting four Latin-1 characters:

  '\', '3', '7', '5'

The server *cannot* process those as an escape sequence before first
converting the SQL string from client to server encoding. It doesn't
know what the bytes you sent it mean until it converts the data sent by
the client to the server encoding. Not all encodings preserve the lower
128 characters - in shift-jis, for example, the bytes usually used for
the '\' and '~' characters mean '¥' and '‾' respectively. If the server
didn't do client-to-server encoding before escape processing, a user
with a shift-jis client encoding who sent:

   test¥041

would be very surprised when the server saw that as:

   test!

instead of literally test¥041 like it should.


Perhaps when processing escapes after doing the encoding conversion the
server could apply any client->server encoding transformation on escape
sequences too. That would achieve the result you wanted here, but it
would leave you very, very, very confused and frustrated the first time
you tried to insert an image into a `bytea' field or manipulate a BLOB,
because the server would 'helpfully' translate the byte escapes for you.

To come closer to what you want, the server would have to detect whether
the escape was in a string that was going to land up in a
character-typed field instead of a byte-typed field. But what about
casts, functions, etc? And how would you specify it if you really did
want exactly those bytes in a text field? It'd be a nightmare.

The server does the only sensible, consistent thing - when you give it a
byte sequence, it assumes you mean literally those bytes.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Why don't I get a LATIN1 encoding here with SET ENCODING?

2009-11-03 Thread Craig Ringer
Bryce Nesbitt wrote:
> Craig Ringer wrote:
>> Yes - but you are *not* presenting a Latin-1 character. You're
>> presenting four Latin-1 characters:
>>
>>   '\', '3', '7', '5'
> Well, then I have a different question.  If I can view a bytea column as
> so:
> 
>> select object from context_objects where context_key=100041;
>            object
> -----------------------------
> \005\007\021\007Article\003
> (1 row)
> 
> How can I modify the data and put it back into the column?

Presuming the working environment:



CREATE TABLE context_objects(
  id serial primary key,
  object bytea not null
);

-- to avoid E'' and double-escaping
SET standard_conforming_strings = on;

INSERT INTO context_objects (object)
VALUES ( '\005\007\021\007Article\003' ), ( 'test\375' );




I'd just SELECT the `object' value, receiving it in escape form from the
PostgreSQL protocol:

SELECT object FROM context_objects;
           object
-----------------------------
 \005\007\021\007Article\003

... which is a string of 27 characters in the local client_encoding. So
in Perl (or whatever) I'd have a string of length 27:

  \005\007\021\007Article\003

I'd then convert that by parsing the escape sequences. (As Perl uses \0
as the octal escape prefix instead of just \, it's necessary to write a
simple loop for that. I don't do Perl voluntarily and haven't used it
for a while so I won't write one inline here, but it should be easy enough.)
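Here is such a loop in Python rather than Perl, if that's any use (the
function name is mine):

```python
def decode_pg_escape(s):
    """Turn PostgreSQL's octal-escaped bytea text form into raw bytes."""
    out = bytearray()
    i = 0
    while i < len(s):
        if s[i] == '\\':
            if s[i + 1] == '\\':        # a doubled backslash is one 0x5c byte
                out.append(0x5C)
                i += 2
            else:                       # \NNN octal escape
                out.append(int(s[i + 1:i + 4], 8))
                i += 4
        else:
            out.append(ord(s[i]))       # ordinary character, one byte
            i += 1
    return bytes(out)

# the 27-character escape string from above decodes to 12 bytes
raw = decode_pg_escape(r'\005\007\021\007Article\003')
assert raw == b'\x05\x07\x11\x07Article\x03'
assert len(raw) == 12
```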

Once you have the byte string (length 12 bytes) you manipulate it
however you want, then convert it back to octal escape form for sending
to the Pg server.

client_encoding has no effect on any of this so long as you're using a
client encoding that preserves the lower 128 characters, ie basically
anything except shift-jis.



Now, if your 'object' is actually a string in a single known text
encoding, not just a sequence of bytes, then another approach is
possible. First, stop using `bytea' for text. If the byte sequences are
all known to be latin-1 encoded text, for example, use:

ALTER TABLE context_objects ALTER COLUMN object TYPE text
USING ( convert_from(object, 'LATIN1') );

--
-- Now retrieve the rows, which will be returned as text in
-- the client_encoding. Note that \375 is ý in latin-1.
--
testdb=> SELECT object FROM context_objects;
           object
-----------------------------
 \x05\x07\x11\x07Article\x03
 testý
(2 rows)

(Note: If they're all of different encodings, but you know the
encodings, you can make the encoding param of convert_from a column
reference instead).

Now you have 'object' as server-side utf-8 encoded text that'll be
automatically converted to and from the specified client_encoding . If
you want to get the raw server-side-encoded byte sequence you can always
cast to bytea in your query to get the utf-8 byte sequence in octal
escape form:

testdb=> SELECT object::bytea FROM context_objects;
           object
-----------------------------
 \005\007\021\007Article\003
 test\303\275


... but usually you'd just let the DB handle conversion between the
server encoding and your client_encoding for you.




> I'm trying to track down while a ten year old system no longer works
> after a Perl DBI upgrade.

Is it just Perl DBI? Or is it also Pg?

Could it be related to the standard_conforming_strings change that
affects string escaping?

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Why don't I get a LATIN1 encoding here with SET ENCODING?

2009-11-03 Thread Craig Ringer
Bryce Nesbitt wrote:
> Craig Ringer wrote:
>> Yes - but you are *not* presenting a Latin-1 character. You're
>> presenting four Latin-1 characters:
>>
>>   '\', '3', '7', '5'
> Well, then I have a different question.  If I can view a bytea column as
> so:
> 
>> select object from context_objects where context_key=100041;
>            object
> -----------------------------
> \005\007\021\007Article\003
> (1 row)
> 
> How can I modify the data and put it back into the column?

Oh: you might also find the encode(...) function useful. From:

http://www.postgresql.org/docs/current/static/functions-string.html

encode(data bytea, type text) returns text
Encode binary data to different representation. Supported types are:
base64, hex, escape. Escape merely outputs null bytes as \000 and
doubles backslashes.

so rather than struggling through Pg's octal-escaped strings you can ask
for them in hex or base64 form, eg:



craig=> \d context_objects
                            Table "public.context_objects"
 Column |  Type   |                          Modifiers
--------+---------+--------------------------------------------------------------
 id     | integer | not null default nextval('context_objects_id_seq'::regclass)
 object | bytea   | not null
Indexes:
    "context_objects_pkey" PRIMARY KEY, btree (id)

craig=> select * from context_objects;
 id |           object
----+-----------------------------
  1 | \005\007\021\007Article\003
  2 | test\375
(2 rows)

craig=> SELECT encode( object, 'hex' ) FROM context_objects;
          encode
--------------------------
 0507110741727469636c6503
 74657374fd
(2 rows)

craig=> SELECT encode( object, 'base64' ) FROM context_objects;
      encode
------------------
 BQcRB0FydGljbGUD
 dGVzdP0=
(2 rows)


Both the hex-string and base64 forms are trivial to convert to and from
a byte string in Perl. You can use the matching 'decode' function when
updating / inserting data, eg:

  UPDATE context_objects
  SET object = decode('BQcRB0FydGljbGUD', 'base64')
  WHERE id = 2;
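The client-side halves of those conversions are one-liners in most languages;
for example, in Python (using the values from the psql session above):

```python
import base64
import binascii

raw = b'\x05\x07\x11\x07Article\x03'   # the 12-byte object value

hex_text = binascii.hexlify(raw).decode('ascii')
b64_text = base64.b64encode(raw).decode('ascii')

# matches the output of encode(object, 'hex') / encode(object, 'base64')
assert hex_text == '0507110741727469636c6503'
assert b64_text == 'BQcRB0FydGljbGUD'

# ... manipulate the bytes, then re-encode for decode(..., 'base64') server-side
assert binascii.unhexlify(hex_text) == raw
assert base64.b64decode(b64_text) == raw
```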

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Profiling tool for postgres under win32

2009-11-26 Thread Craig Ringer
On 26/11/2009 5:48 PM, aymen marouani wrote:
> Hi,
> I'm using the Postgres database system version 8.3 for a J2EE application.
> I'd like to profile and monitor in "real time" the status of my queries
> because I notice some slow loading.
> Can anyone tell how to get a "good" profiling tool for the Postgres
> database system ?

I'm not aware of any tools that connect to the database to provide
profiling and monitoring.

What the right course of action is depends on what sort of slowdown
you're encountering. If it's particular activities within the program
that're a problem, then you should probably use logging in your
application to record database activity (including query runtimes) to
see what exactly it's doing.

If it's unpredictable slowdowns in operations that are normally fast,
then you need to look at the database end. Look into locking issues (
see: pg_catalog.pg_locks ), table bloat and VACUUM / autovacuum, and the
possibility of simple concurrent load spikes ( see
pg_catalog.pg_stat_activity ).

Typically what you'll do if you're seeing unpredictably slow queries is
use log_min_duration to log problem statements to the PostgreSQL log,
which you can then analyse. If you enable CSV logging, you can pull the
PostgreSQL log into a database, spreadsheet, or whatever for easier
analysis.

The new auto_explain module in 8.4 is excellent and very helpful too,
since it can help you find out what plans were used to execute problem
queries easily and conveniently.

Of course, this won't help you much if your problem is an application
issuing *huge* numbers of very small queries. You can set Pg to log
every query, but you'll rapidly have an appalling amount of data to troll
through, and it's hard to know which ones are related to user-perceived
slowdowns. Application logging is usually a better option for tracking
this sort of thing down.

Out of interest, how are you talking to PostgreSQL? Hand-written SQL via
JDBC? Hibernate / TopLink / some other ORM (possibly via JPA)?

If you're using Hibernate, just enable its query logging features via
log4j and watch what happens.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Profiling tool for postgres under win32

2009-11-30 Thread Craig Ringer

On 30/11/2009 4:42 PM, aymen marouani wrote:

Hi,
I'm sorry and thanks for the help, concerning your question :

"Out of interest, how are you talking to PostgreSQL? Hand-written SQL via
JDBC? Hibernate / TopLink / some other ORM (possibly via JPA)?"

I'm using JPA/TopLink to talk to PostgreSQL, and I asked my
question about profiling because of a slow simple query

"SELECT i FROM Item i"

which takes 4s to execute.

Cordially and best regards.


In my off-list reply, where I suggested that you follow up on the list 
instead, I pointed you to the EXPLAIN command. Also, the initial reply I 
made pointed you to the logging options like log_min_duration.


You might want to use those tools to see what's going on. Start with:

  EXPLAIN ANALYZE SELECT i FROM Item i;

... but I'd expect to see just a normal sequential scan of a table with 
lots of entries. If that's the case, options to make it faster include:


- don't do it if you don't have to, it's always going to be expensive

- Make sure your tables aren't bloated. See:

http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html

http://wiki.postgresql.org/wiki/Introduction_to_VACUUM%2C_ANALYZE%2C_EXPLAIN%2C_and_COUNT

 ... and use proper autovacuuming.

- Don't have tables that are too "wide", ie with too many fields. While 
they're supported fine, they can be slower to scan because there's just 
more data there. If you need crazy-fast sequential scans of the whole 
table for just a few points of data, consider splitting the table into 
two tables with a one-to-one relationship - but understand that that'll 
slow other things down. A materialized view is another alternative.


- Write your app to deal with the latency. Sometimes queries are slow, 
especially over slow links. Do your work in a background worker thread, 
and keep the UI responsive. (Doesn't make sense for web apps, but is 
important for normal GUI apps).


- Get faster disks, more RAM for caching, etc.

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Using || operator to fold multiple columns into one

2009-12-23 Thread Craig Ringer

On 23/12/2009 4:34 PM, Bryce Nesbitt wrote:

Dear experts,

This point is confusing me with the || operator. I've got a table with
"one column per data type", like so:


Dare I ask why? What problem are you trying to solve by doing this?


# select context_key,keyname,t_number||t_string||t_date||t_boolean as
value from context_keyvals;

But it is not working, the columns always come up empty.


Here's why:

psql> SELECT ('TEST'||NULL) IS NULL;
 ?column?
----------
 t
(1 row)

`||' applied to anything and null returns null, since it is unknown what 
the "null" part of the expression represents. It makes sense if you think 
of null as "unknown" or "undefined".


In your case, you should probably typecast each field to `text' and use 
`coalesce' to show the first non-null one. Alternately, you could use a 
CASE statement, eg:


SELECT context_key, keyname,
       COALESCE(t_number::text, t_string::text, t_date::text, t_boolean::text) AS value
FROM context_keyvals;
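COALESCE returns its first non-null argument, so the four type-specific
columns collapse to a single value. In Python terms (purely illustrative, not
part of the SQL above):

```python
def coalesce(*args):
    """Return the first argument that is not None, like SQL's COALESCE."""
    return next((a for a in args if a is not None), None)

# a row where only t_string is set; str() mirrors the ::text casts
t_number, t_string, t_date, t_boolean = None, "some text", None, None
cols = [None if c is None else str(c)
        for c in (t_number, t_string, t_date, t_boolean)]
assert coalesce(*cols) == "some text"

# all-null rows stay null rather than erroring out
assert coalesce(None, None, None) is None
```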


Also: You do have a CHECK constraint on the table that asserts that at 
most one of those entries may be non-null, right? If not, you're very 
likely to land up with entries with more than one t_something non-null 
sooner or later so I suggest adding one.


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] short-cutting if sum()>constant

2009-12-23 Thread Craig Ringer
Pavel Stehule wrote:

> The problem is massive cycle. Plpgsql really isn't language for this.
> This interpret missing own arithmetic unit. So every expression is
> translated to SELECT statement
> 
> IF a > c ~ SELECT a > c
> a = a + 1 ~ SELECT a + 1
> 
> these queries are executed in some special mode, but still it is more
> expensive than C a = a + 1

... and may have different rules, so you can't just write a simple "map
expressions to C equivalents" arithmetic evaluator.

It's also far from easy to just translate PL/PgSQL to directly
machine-executable code as you suggested, by the way. It'd really
require a just-in-time compiler akin to what Java uses, though the
ability to compile once and cache would help get rid of some of the
complexity of Java's.

It'd quickly become attractive to just use PL/Java instead, or write
your own C-language function and LOAD it.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Using || operator to fold multiple columns into one

2009-12-24 Thread Craig Ringer

On 24/12/2009 5:04 AM, Rosser Schwarz wrote:

On Wed, Dec 23, 2009 at 1:44 AM, Craig Ringer
  wrote:


SELECT COALESCE( t_number::text, t_string::text, t_date::text, t_boolean::text) 
AS value;


Your invocation of COALESCE is incorrect -- it is n-ary, but it
returns its first non-NULL argument.


Yes. That was the point.

I think we're assuming the OP wants different things. You're assuming 
they're trying to concatenate all fields, where more than one field in a 
given record may be non-null. I'm assuming that all but one field in 
each record will be null, and they want to show the "value" of the 
record - in other words, they're using the record as a sort of union 
type. It looks like that from the example they gave.


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Foreign key - Indexing & Deadlocking.

2009-12-29 Thread Craig Ringer

On 30/12/2009 6:59 AM, Andrew Hall wrote:

Hi,

I'm porting some Oracle software to PostgreSQL, & have a question about
best practices with foreign keys.

In the Oracle environment, you're basically obliged to index all foreign
keys (more specifically foreign key columns should appear on the leading
edge of an index) due to the way that it manages locks.

Failure to index foreign keys in this way in Oracle often causes
deadlock situations.

Although I could not find a mention of anything similar in the PostgreSQL
documentation, I just wanted to check whether or not the same was true
of PostgreSQL to be on the safe side.


It is not the case. It's quite common in PostgreSQL to leave foreign 
keys without indexes. Doing so does mean that referential integrity 
checks made on DELETE from the table with the referenced primary 
key will require a sequential scan of the referencing table(s) to check 
integrity, though.


PostgreSQL isn't smart enough (yet) to group up such checks into a 
single pass down the target table. So if you delete 100 rows from a 
table in a query, and that table is referenced by another table via a 
foreign key without an index, the referencing table will be sequentially 
scanned 100 times.


Needless to say, you want to add indexes to your foreign keys if you 
expect to delete from the parent, or (for some reason) update the 
primary key value for rows in the parent table.
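A minimal sketch of that advice (table and column names invented) - indexing the foreign key column so parent-side DELETEs don't force sequential scans of the child:

```sql
CREATE TABLE parent (
    id serial PRIMARY KEY
);

CREATE TABLE child (
    id        serial PRIMARY KEY,
    parent_id integer NOT NULL REFERENCES parent(id)
);

-- PostgreSQL does NOT create this automatically; without it, each row
-- deleted from parent triggers a sequential scan of child for the RI check.
CREATE INDEX child_parent_id_idx ON child (parent_id);
```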


--
Craig Ringer



Re: [SQL] [GENERAL] DataBase Problem

2009-12-29 Thread Craig Ringer

On 30/12/2009 9:49 AM, Premila Devi wrote:


Caused by: _java.sql.SQLException_: Couldn't perform the operation
rollback: You can't perform any operations on this connection. It has
been automatically closed by Proxool for some reason (see logs).


"see logs"

Look at your proxool logs and see why the connection was closed. The 
error message itself gives you the next step.


I suggest reading this:
  http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
for some hints in case you need to ask a more detailed follow-up.

--
Craig Ringer



Re: [SQL] indexes

2010-01-17 Thread Craig Ringer
Seb wrote:
> On Mon, 18 Jan 2010 08:59:56 +1100,
> Chris  wrote:
> 
>>> o Should all foreign keys have an index?
> 
>> Not necessarily, you might just want the db to enforce the restriction
>> but not actually use the data in it. For example, keep a userid (and
>> timestamp) column of the last person to update a row. You may need it
>> to say "aha - this was last changed on this date and by person X", but
>> you'll never generally use it.
> 
>> If you never have a where clause with that column, no need to index
>> it. If you're using it in a join all the time, then yes it would be
>> better to index it.
> 
> Thanks for all your pointers!
> 
> Do views use the indexes in the underlying tables, whenever say a SELECT
> operation is called on the view?  If so, indexes on views don't make any
> sense right?

A view WILL use an index(es) on the underlying table(s) if the planner
thinks the use of those indexes will be helpful. In general it's as if
you substituted the text of the view's query into the query using the
view and executed that composite query.

Sometimes views are very expensive to compute, and avoiding computing
values you're not interested in for a particular query would be very
handy. It's not always possible to tack on a WHERE clause that gets
applied as part of a big complex join; sometimes you land up computing a
lot of data then throwing the vast majority of it away. That's not
really desirable.

If you have very expensive views, a good way to handle this is to
maintain a materialized view and index the materialized view. Pg doesn't
currently have any support for creating and maintaining materialized
views automatically, but it's easy enough to do it with a few triggers
(and you can often do it more efficiently/intelligently than the
database could do in the general case).
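As a rough illustration of the trigger-maintained approach - all names are invented, and it's simplified to a single per-category aggregate:

```sql
-- Source table (hypothetical).
CREATE TABLE item (
    id       serial PRIMARY KEY,
    category text NOT NULL,
    price    numeric NOT NULL
);

-- Hand-maintained "materialized view" of totals per category, indexable
-- like any ordinary table.
CREATE TABLE item_totals (
    category text PRIMARY KEY,
    total    numeric NOT NULL
);
CREATE INDEX item_totals_total_idx ON item_totals (total);

CREATE OR REPLACE FUNCTION item_totals_refresh() RETURNS trigger AS $$
BEGIN
    -- Selectively recompute only the affected categories, not the whole view.
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        DELETE FROM item_totals WHERE category = OLD.category;
        INSERT INTO item_totals
            SELECT category, sum(price) FROM item
            WHERE category = OLD.category GROUP BY category;
    END IF;
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        DELETE FROM item_totals WHERE category = NEW.category;
        INSERT INTO item_totals
            SELECT category, sum(price) FROM item
            WHERE category = NEW.category GROUP BY category;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_totals_trg
AFTER INSERT OR UPDATE OR DELETE ON item
FOR EACH ROW EXECUTE PROCEDURE item_totals_refresh();
```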

I have a couple of materialized views in my schema that make queries
several orders of magnitude faster (!) when querying for current data by
eliminating the need for tree-like multiple self joins. Updates to these
views are cheap, because the triggers on the source tables can
selectively update only the mat.view entries they know are affected by a
given update/insert/delete.

--
Craig Ringer



Re: [SQL] kind of RULE on SELECT depending on existence of a WHERE condition

2010-03-02 Thread Craig Ringer

On 1/03/2010 9:38 PM, Richard Huxton wrote:

On 01/03/10 12:16, rawi wrote:



Not quite the way you suggest. You could build a series of views with
the WHERE conditions built in to them, and grant permissions on those
though.


Thank you very much for your help.

Unfortunately this is not what I hoped for...
The permissions will be granted dynamically by the application from the
user records and expressed in the WHERE conditions.
I'll need another approach...


You could write a set-returning function that takes either:
1. A list of conditions
2. The text for a WHERE clause


SELECT my_privileged_function('1=1');

You'll probably have to provide different functions for the use of 
different roles, or have your function check the current role (see 
INFORMATION_SCHEMA) and prepend something appropriate to the WHERE clause.


Even then you'll probably have to pre-filter the results in a subquery, 
otherwise it's hard to protect against the user appending 'OR 1=1' or 
the like to your WHERE clause.


Personally, I'd avoid any sort of textual query building - instead I'd 
provide my_function_for_admins(param1, param2), 
my_function_for_users(param1, param2) etc. Each one would substitute 
parameters into existing SQL using EXECUTE ... USING. Possibly-NULL params 
can be handled using COALESCE or CASE to avoid string-building.
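A sketch of that approach in PL/pgSQL (8.4+) - the table, columns, and function name are all invented for illustration:

```sql
-- Hypothetical per-role wrapper. Parameters are substituted safely with
-- EXECUTE ... USING, so no user-supplied text is pasted into the SQL.
CREATE OR REPLACE FUNCTION my_function_for_users(p_owner integer, p_since date)
RETURNS SETOF documents AS $$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT * FROM documents
         WHERE owner_id = $1
           AND created >= COALESCE($2, ''-infinity''::date)'
    USING p_owner, p_since;   -- a NULL p_since means "no date filter"
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```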


--
Craig Ringer



Re: [SQL]

2010-05-21 Thread Craig Ringer

On 21/05/2010 9:56 AM, Richard Broersma wrote:

On Thu, May 20, 2010 at 5:52 PM,  wrote:


I'm new to triggers in PostgreSQL. I have to create a trigger on insert to
increment a sequence to overcome MS-Access' limitation in acknowledging
serial "datatype".


Uh?  Access doesn't need to acknowledge the serial datatype.
At-least in recent versions auto increment is recognized by MS-Access
just fine (I'm guessing this is due to the Return clause which the
ODBC driver automatically calls).


Really?

I had problems with Access complaining that the object it just inserted 
had vanished, because the primary key Access had in memory (null) didn't 
match what was stored (the generated PK). I had to fetch the next value 
in the PK sequence manually and store it in Access's record before 
inserting it to work around this.


I wouldn't be surprised if this actually turned out to just require some 
bizarre ODBC driver parameter change, but I never figured it out and I 
couldn't find any info about it on the 'net.


For the original poster: I posted some information about this at the 
time I was working on it, so search the archives of this list for MS Access.


I eventually ditched Access entirely as the user who was demanding the 
use of MS Access relented (phew!), so I put together a simple web-app to 
do what they wanted in a day. Hopefully I'll never need to go near ODBC 
again, because it's a truly "special" way to talk to PostgreSQL.


--
Craig Ringer




Re: [SQL] Using SKYLINE command on PostgreSQL

2010-08-07 Thread Craig Ringer
On 05/08/10 00:26, LaboRecherche wrote:
> 
> 
> Hello,
> 
> I have just installed version 8.4 of PostGresql (on a Windows machine)
> in order to use SKYLINE command, but it seems that I get an error each
> time I try to use this command.
> 
> Is there a specific plugin to install to add SKYLINE command ? Can you
> help me on this point to get this command working ?

All I can find with a quick Google search for "SKYLINE postgresql" is
references to someone's masters thesis, plus a bit of mailing list
discussion.

There's no indication it was ever proposed and discussed as a serious
patch to the core of PostgreSQL, let alone applied. Have a look at this
thread, including follow-ups.

http://archives.postgresql.org/pgsql-hackers/2007-03/msg00192.php

The masters thesis page does contain a patch against PostgreSQL 8.3:
  http://skyline.dbai.tuwien.ac.at/
but this is something you have to apply to the source code, then compile
PostgreSQL into usable binaries. That's trivial on Linux, but I don't
recommend you try it on Windows, as compiling software on Windows is
much more complex.

They say they're porting the patch to 8.4devel, but I suspect that since
8.4 has been out for ages and 9.0 is almost released, they've probably
lost interest and dropped the work.

--
Craig Ringer



Re: [SQL] Jagged/ragged arrays

2010-09-20 Thread Craig Ringer

On 21/09/2010 3:42 AM, Dmitriy Igrishin wrote:

Hey Florian,

What do you mean by "ragged" arrays?


At a guess:

craig=> SELECT '{ {1}, {1,2}, {1}, {1,2,3} }'::integer[][];
ERROR:  multidimensional arrays must have array expressions with 
matching dimensions


(OP) Correct?

--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] Slow response in select

2010-09-21 Thread Craig Ringer

On 09/21/2010 08:25 PM, Gary Stainburn wrote:

Hi folks,

I have a view that seems to be very slow compared to similar views on
the same server.

I've had a look at running explain but I don't really understand it.

I've posted the view and the output from explain at

http://www1.ringways.co.uk/explain.txt


Woah. That's an interesting plan.

When concerned about execution time, it's probably best to post EXPLAIN 
ANALYZE rather than plain EXPLAIN results. EXPLAIN ANALYZE provides more 
timing information and information about how rowcount estimates differed 
from reality.


--
Craig Ringer



Re: [SQL] HowTo divide streetname from house-nr ?

2010-09-22 Thread Craig Ringer

On 23/09/2010 9:25 AM, Andreas wrote:

Hi,

how could I divide streetnames from housenumbers ?


You can't, reliably. There are papers written on the subject of 
addressing, address analysis, addressing in databases, etc.


How would you handle the address:

  Person's Name
  RD3 Clemo Rd
  Whangarei

? "RD3" is "Rural Delivery Area 3". The posties deliver within the area 
by named recipient. This is a real address scheme.


The world is full of weird and wacky addressing. IMO, unless you're 
willing to confine your schema to only handling addresses of a 
particular area you know about, don't try to normalize addresses. Even 
then, I wouldn't try to normalize addresses with text processing; I'd 
ask the user to do it during form entry or not do it at all.


If you're trying to find duplicate addresses, matching addresses, etc., 
then IMO you're better off using existing tools that do this with 
free-form addresses using national phone databases, postcode databases, etc.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] find and replace the string within a column

2010-09-24 Thread Craig Ringer

On 24/09/2010 5:15 PM, Nicholas I wrote:


Example:
table name person:

name

Samuel (S/o Sebastin )
-

remove the word within the brackets.
the output should be , Samuel.


Can't you just use regexp_replace, matching \([^)]+\) (in other words "an 
open parenthesis, then a sequence of one or more of any character other 
than a close parenthesis, followed by a close parenthesis") and replacing 
with an empty string?
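Something along these lines (untested sketch; the 'g' flag handles names with more than one parenthesised section, and btrim tidies the leftover space):

```sql
SELECT btrim(regexp_replace(name, E'\\([^)]+\\)', '', 'g')) AS name
FROM person;
-- 'Samuel (S/o Sebastin )'  becomes  'Samuel'
```

Note that the escaping shown uses an E'' string so it behaves the same regardless of the standard_conforming_strings setting.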


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] check files .backup

2011-01-24 Thread Craig Ringer
On 25/01/11 08:18, manuel antonio ochoa wrote:
> Do you know if exist a  function to check my file.backup created by
> pgdump. ?

What do you want to check? That it restores correctly? That it's
complete and not truncated? That it hasn't been modified since being
originally written? That it matches the data currently in the database?
What?

-- 
System & Network Administrator
POST Newspapers



Re: [SQL] check files .backup

2011-01-26 Thread Craig Ringer

On 01/25/2011 11:14 PM, manuel antonio ochoa wrote:

I would like to check that it's complete and not truncated, and that it
hasn't been modified since being originally written.


OK. First, check the exit code from pg_dump so you detect any 
failures. The exit code is reported by the shell on UNIX systems in the 
$? variable, but you can usually just do something like the following 
(untested):


if ! pg_dump  ; then
  logger -p crit "pg_dump failed"
  mail -s "Pg backup failed" someone@yourcompany <<__END__
  The latest backup failed. Check the logs for why, and
  fix it!
__END__
fi



Second: after writing the backup, calculate an md5sum or (preferably) 
digitally sign the backup using gpg. An md5sum is only really protection 
against corruption unless you store it somewhere separate and secure. I 
prefer to digitally sign my backups with detached gpg signatures.


--
Craig Ringer



Re: [SQL] PosgreSQL - TSQL2 question

2011-02-06 Thread Craig Ringer
On 06/02/11 06:26, Sarbu Anca wrote:

> Could you please tell me what I need to do to run TSQL2 on PostrgreSQL
> for Windows? What do I need to install? Where can I found it? How do I
> do the installation.

The temporal support extension is at :

  http://temporal.projects.postgresql.org/

and is on PgFoundry here:

  http://pgfoundry.org/projects/temporal/

but I have no idea how up-to-date it is or whether there are Windows
packages for the appropriate contrib modules. There's a question about
it on the PgFoundry help forum but it's unanswered.

http://pgfoundry.org/forum/forum.php?thread_id=1777&forum_id=1131


-- 
System & Network Administrator
POST Newspapers



Re: [SQL] Performance of NOT IN and <> with PG 9.0.4

2011-05-16 Thread Craig Ringer

On 05/16/2011 06:05 PM, Jasmin Dizdarevic wrote:

Hi,

is there a reason why NOT IN performance is so poor in 9.0.4 compared to
8.4?


Example queries?

EXPLAIN ANALYZE output?

http://wiki.postgresql.org/wiki/SlowQueryQuestions

--
Craig Ringer



Re: [SQL] client-side lo_import() provided by libpq ?

2011-05-18 Thread Craig Ringer

On 05/19/2011 04:51 AM, Emi Lu wrote:


About client-side lo_import(), is there an online doc about install
lo_import?


It's in the manual.

http://www.postgresql.org/docs/current/static/lo-examplesect.html

--
Craig Ringer



Re: [SQL] column type for pdf file

2011-05-18 Thread Craig Ringer

On 05/19/2011 05:21 AM, Julien Cigar wrote:

On 05/18/2011 23:00, Karsten Hilbert wrote:

On Wed, May 18, 2011 at 10:46:23PM +0200, Julien Cigar wrote:


Unless you've good reasons to do so it's best to store the file on
the file system

Why ?

If you suggest reasons are needed for storing the PDF in the
database I'd like to know the reasons for *not* doing so.



It increases the load and consumes connections, but the biggest drawback is
probably the memory consumption...

IMHO storing binary data in a database is almost always a bad idea .. it
could be OK to store things like avatars, small icons, etc, but
certainly not to store files of several MB ... file systems are done for
that !


_however_, you lose transactional properties when doing this. You can 
land up with a file that's been added/updated where the associated 
transaction in the DB failed or rolled back. Very careful application 
programming and the use of 2 phase commit can provide reliable 
behaviour, but it's not trivial.


This is one area where I envy Microsoft. As they control the file system 
and the database, they've been able to come up with a really cool system 
where the file system integrates into database transactions, so you kind 
of get the best of both worlds. Very cool. If reiser4 hadn't gone the 
way of the dodo such a thing might've become possible on Linux, but I'm 
not aware of any other Linux file systems that safely support transactions.


--
Craig Ringer



Re: [SQL] column type for pdf file

2011-05-19 Thread Craig Ringer

On 05/19/2011 09:53 PM, Karsten Hilbert wrote:

On Thu, May 19, 2011 at 09:39:54AM -0400, Emi Lu wrote:


All right, it seems that everyone thinks saving a pdf into postgresql
is not a good idea.


No.


Hardly everyone. You lose transaction safety when using file system 
storage outside the DB, you need another way to talk to the server than 
just the Pg connection, and most importantly your backups become more 
complicated because you have two things to back up.


It's not simple, and it depends a lot on how much the data changes, how 
big the files are, etc.


--
Craig Ringer



Re: [SQL] unnest in SELECT

2011-05-21 Thread Craig Ringer

On 05/21/2011 03:13 AM, Karl Koster wrote:

I have a couple of questions regarding unnest.

1) If I issue a select statement "select unnest(vector1) as v from
some_table", I cannot seem to use the column alias v in a WHERE or
HAVING clause. I can use it in an ORDER BY or GROUP by clause. Is this
the way it is supposed to work?


Yes, and it's what the SQL standard requires. Otherwise, how would this 
query work?


SELECT a/b FROM sometable WHERE b <> 0;

?

The SELECT list has to be processed only once the database has already 
decided which rows it applies to and how.


Use unnest in a FROM clause, eg:

SELECT v1.* FROM unnest(vector) ...

This may require a join and/or subquery to obtain 'vector'.
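For example (assuming a hypothetical some_table with an integer-array column vector1), the subquery form works on 8.4+ and makes the alias usable in the outer WHERE clause:

```sql
-- Unnest in a subquery, then filter on the alias in the outer query.
SELECT id, v
FROM (SELECT id, unnest(vector1) AS v FROM some_table) AS x
WHERE v > 10;
```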


2) If I issue a select statement "select unnest(vector1) as v1,
unnest(vector2) as v2 from some_table" and vector1 has a length of 3 and
vector2 has a length of 4, the result set will have 12 rows with the
data of vector1 repeating 4 times and vector2 repeating 3 times.
Shouldn't the content of the shorter array(s) simply return NULL in
its respective column and the result set be the size of the longest array?


unnest is a set-returning function, and it doesn't really make that much 
sense to have them in the SELECT list anyway. Few databases support it, 
and PostgreSQL's behavior is a historical quirk that I think most people 
here hope will go quietly away at some point.


Use unnest in a FROM clause.

--
Craig Ringer



Re: [SQL] extracting location info from string

2011-05-22 Thread Craig Ringer

On 23/05/2011 9:11 AM, Andrej wrote:

On 23 May 2011 10:00, Tarlika Elisabeth Schmitz
  wrote:

On Sun, 22 May 2011 21:05:26 +0100
Tarlika Elisabeth Schmitz  wrote:


A column contains location information, which may contain any of the
following:

1) null
2) country name (e.g. "France")
3) city name, region name (e.g. "Bonn, Nordrhein-Westfalen")
4) city name, Rg. region name (e.g. "Frankfurt, Rg. Hessen")
5) city name, Rg region name (e.g. "Frankfurt, Rg Hessen")



I also need to cope with variations of COUNTRY.NAME and REGION.NAME.


This is a hard problem. You're dealing with free-form data that might be 
easily understood by humans, but relies on various contextual 
information and knowledge that makes it really hard for computers to 
understand.


If you want to do a good job of this, your best bet is to plug in 3rd 
party address analysis software that is dedicated to this task. Most 
(all?) such packages are commercial, proprietary affairs. They exist 
because it's really, really hard to do this right.



Another thing of great import is whether the city can occur in the
data column all by itself; if yes, it's next to impossible to distinguish
it from a country.


Not least because some places are both, eg:

  Luxembourg
  The Vatican
  Singapore

(The Grand Duchy of Luxembourg has other cities, but still serves as an 
example).


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] Which version of PostgreSQL should I use.

2011-05-23 Thread Craig Ringer

On 23/05/2011 11:19 PM, jasmin.dizdare...@gmail.com wrote:

Just be careful with pg_dump if you have binary data stored in your 8.4 db. In 
default mode it just exports text.


Er ... what?

Can you elaborate on that?

Do you mean large objects? bytea fields? Something else?

--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] extracting location info from string

2011-05-23 Thread Craig Ringer

On 24/05/2011 6:39 AM, Tarlika Elisabeth Schmitz wrote:


Indeed. However, the situation is not quite as bleak as it appears:
- I am only dealing with 50 countries (Luxemburg and Vatican are not
amongst them)
- Only for two countries will city/region be displayed instead of
country.
- Ultimately, where the only important bit of imformation is the
country.
- The only really important persons are those from the two countries.


Ah, see that's critical. You've just been able to restrict the problem 
domain to a much simpler task with a smaller and well-defined range of 
possibilities. Most of the complexity is in the nasty corner cases and 
weirdness, and you've (probably) just cut most of that away.



Of 17000 historical records, 4400 don't match this simple pattern.
Of the 4400, 1300 are "USA" or "Usa" instead of "United States", 900
"North America" whatever that is! There are plenty of common +
valid region abbreviations.

I get about 1000 new records of this type per year.


I'd do this kind of analysis in a PL/Perl or PL/Python function myself. 
It's easier to write if-then-else logic in a readable 
form, and such chained tests are usually better suited to this sort of work. 
That also makes it easier to do a cleanup pass first, where you 
substitute common spelling errors and canonicalize country names.
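As a sketch of the cleanup-pass idea (table names and mappings invented), one simple variant keeps the substitutions in a lookup table, so non-programmers can extend them without touching code:

```sql
-- Lookup of common misspellings/synonyms mapped to a canonical country name.
CREATE TABLE country_alias (
    alias     text PRIMARY KEY,
    canonical text NOT NULL
);

INSERT INTO country_alias VALUES
    ('USA',           'United States'),
    ('Usa',           'United States'),
    ('North America', 'United States');

-- Applied during import; unmatched values pass through unchanged.
SELECT COALESCE(ca.canonical, r.location) AS location
FROM raw_import r
LEFT JOIN country_alias ca ON ca.alias = r.location;
```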



However, the import process has to be as automatic as possible in such
a way that inconsistencies are flagged up for later manual
intervention. I say later because, for instance, a person's data will
have to be imported with or without location info because other new
data will link to it.


That's another good reason to use a PL function for this cleanup work. 
It's easy to INSERT a record into a side table that flags it for later 
examination if necessary,  and to RAISE NOTICE or to issue a NOTIFY if 
you need to do closer-to-realtime checking.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/



Re: [SQL] Which version of PostgreSQL should I use.

2011-05-23 Thread Craig Ringer
On 24/05/11 14:30, jasmin.dizdare...@gmail.com wrote:
> We had trouble with our drupal site, because some settings are stored in bytea 
> columns. I think lobs are a problem too, but please see the pg_dump docs for 
> details.

Is it possible that you are referring to the `bytea_output' setting in
PostgreSQL 8.4 and above?

If so, the issue isn't that pg_dump somehow fails to dump the bytea
data. Rather, it's that drupal doesn't deal well with bytea data from
newer versions of PostgreSQL until the bytea_output setting is changed
to 'escape' because it doesn't understand the new hex format.

--
Craig Ringer



Re: [SQL] Which version of PostgreSQL should I use.

2011-05-24 Thread Craig Ringer
On 24/05/11 18:58, Jasmin Dizdarevic wrote:
> Alright, I've misunderstood this issue. Do you have to escape bytea
> columns during export or import? And how you would do this?

Some database drivers and some apps don't understand the new hex output
format for bytea columns.

IIRC, the format change should ONLY affect how bytea values are sent
from the server to the client using the standard text-based postgresql
protocol.

When using apps/drivers that aren't ready for the hex format yet, you must

  SET bytea_output TO 'escape';

This can be done at the postgresql.conf level (globally), by ALTERing
the database the app uses, by ALTERing the user ID the app connects
with, or by modifying the app so it knows to issue an explicit SET
before doing any work with a connection.
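For instance (database and role names invented):

```sql
-- Per database or per login role, so every new session for them inherits it:
ALTER DATABASE drupal_db SET bytea_output = 'escape';
ALTER ROLE drupal_user SET bytea_output = 'escape';

-- Or per session, issued explicitly by the application:
SET bytea_output TO 'escape';
```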

AFAIK there is no impact on dump/load, though you *could* see problems
if you used an application's own dump/load feature rather than pg_dump
and the app wasn't ready for the new bytea format.

--
Craig Ringer



Re: [SQL] Problems Pgdump

2011-05-24 Thread Craig Ringer
On 25/05/11 03:26, manuel antonio ochoa wrote:
> sorry, I couldn't finish describing my problem.
> 
> I am trying to run the following:
> 
> pg_dump -h 192.170.1.3 -U User --format custom --inserts --verbose
> --file \"/root/Desktop/$name .backup\" --table "$ESQUEMA.$Nametable" DB
> 
> My table name is detalle_Inegra, and the problem is that pg_dump
> always sends me a message like "COULD NOT FIND THE NAME OF TABLE", but
> the table exists.

It's almost certainly a case-folding issue. It'd help if you posted the
output of:

  \d

in the database of interest.

At a guess, what's happening is that you have a mixed-case table name.
Let's say it's called "Fred". You're trying to run:

  pg_dump --table "Fred" dbname

The shell consumes the double-quotes during argument processing, so what
pg_dump actually sees is three arguments:

  --table
  Fred
  dbname

Because of case-folding rules, table names that are not double-quoted
are folded to lower-case. This means that the table names:

  Fred
  FRED
  fred
  FrEd

are all interpreted as meaning the table "fred" (lower case).

If you need to preserve case, you need to protect the double quotes from
consumption by the shell, so you send the argument "Fred" to pg_dump. In
your command line above, you would change

  --table "$ESQUEMA.$Nametable"

to

  --table "\"$ESQUEMA\".\"$Nametable\""

... adding a pair of escaped double-quotes around both the table and
schema names that the shell does NOT consume. The outer double-quotes
need to be retained in case the table or schema names contain shell
meta-characters and/or spaces.

To learn more about postgresql's case folding, see:

http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS

http://sql-info.de/postgresql/postgres-gotchas.html

http://archives.postgresql.org/pgsql-hackers/2004-04/msg00818.php

--
Craig Ringer



Re: [SQL] best performance for simple dml

2011-06-27 Thread Craig Ringer
On 27/06/11 15:05, Pavel Stehule wrote:

> exec('begin');
> for(i = 0; i < 1000; i++)
>   exec("insert into foo values($1)", itoa(i));
> exec('commit');

You can probably also benefit from multi-valued INSERTs, though I
haven't verified this.

INSERT INTO foo VALUES
(1,'joe','dean'),
(4,'fred','bob'),
(11,'anne','smith');

There'll be a threshold above which the COPY protocol becomes faster,
though.
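For bulk loads past that threshold, COPY (or psql's client-side \copy) is the usual answer - a sketch with an invented file path, assuming the same foo table:

```sql
-- Server-side: the file must be readable by the PostgreSQL server process.
COPY foo FROM '/tmp/foo.csv' WITH CSV;

-- Client-side equivalent from psql, reading a file on the client machine:
-- \copy foo FROM 'foo.csv' WITH CSV
```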

--
Craig Ringer



Re: [SQL] Looking for a "show create table " equivalent

2011-07-14 Thread Craig Ringer

On 14/07/2011 6:22 PM, Jasen Betts wrote:

On 2011-07-12, B.Rathmann  wrote:

Hello,

I've been trying to find out how to find out which sql was run to create
a certain table.


That's like trying to find what change was used to create $1.83


Yep, and just like that question, sometimes any valid answer is fine. 
You might not care if it's $1 + .50 + .20 + .10 + .02 + .01 or 183 * 0.01, so 
long as the result is the same.


This is turning out to be a reasonably common question, where people 
want to be able to use functionality of pg_dump from _within_ the 
database system.


If the INFORMATION_SCHEMA isn't sufficient for the OP's purposes or is 
too clumsy for re-creating DDL from, there's another option. It's ugly, 
but you can add a plpythonu or plperlu server-side function that calls 
the server's copy of pg_dump and spits the resulting text back to the 
client.


--
Craig Ringer

POST Newspapers
276 Onslow Rd, Shenton Park
Ph: 08 9381 3088 Fax: 08 9388 2258
ABN: 50 008 917 717
http://www.postnewspapers.com.au/



Re: [SQL] (pgsql8.4) DATA Corruption

2011-08-25 Thread Craig Ringer

On 20/08/2011 4:27 AM, Mik Rose wrote:

Thanks for the suggestions Scott.

I was looking at the zero_damaged_pages option and enabled it in my 
postgresql.conf.  I saw that it detected bad pages and zeroed them out 
during a number of reindexes and a vacuum.


Now when I reindex, though, I am getting this error:

NOTICE:  table "pg_foreign_server" was reindexed
ERROR:  could not create unique index "pg_toast_57366_index"
DETAIL:  Table contains duplicated values.


You'll have to figure out which values are duplicated within each 
table and DELETE all but the correct one - if you can determine which 
that is. Then you should be able to re-create the index. If the 
DELETE fails you might find you have to DROP the index, do the 
DELETE(s), then CREATE the index again.
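To locate them, a query along these lines (with a hypothetical table and its uniquely-indexed column) is the usual starting point:

```sql
-- Find the values that appear more than once in the uniquely-indexed column.
SELECT some_col, count(*)
FROM some_table
GROUP BY some_col
HAVING count(*) > 1;

-- ctid then helps tell the physical duplicate rows apart when deciding
-- which one to DELETE.
SELECT ctid, * FROM some_table WHERE some_col = '<duplicated value>';
```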


If the index is a partial index (ie it has a WHERE clause) you need to 
find duplicates that match that WHERE clause, you can ignore ones that 
don't match.


I hope you took a file-system-level backup of your database before 
starting to mess with it. People here will normally advise you to do 
that first and I'm kind of surprised nobody did.


--
Craig Ringer


Re: [SQL] FW: Hi

2011-09-11 Thread Craig Ringer

On 11/09/2011 5:22 AM, Mandana Mokhtary wrote:



*From:* Mandana Mokhtary
*Sent:* 10 September 2011 23:21
*To:* pgsql-sql-ow...@postgresql.org
*Subject:* Hi

Hi All
I tried to import a shapefile into PostgreSQL using this command:

shp2pgsql -c -s 3006 -W LATIN1 c:/.. | psql -U postgres (database name)

I could import some but not all.
I got this error that I do not have any idea about:

current transaction is aborted, commands ignored until end of
transaction block.


There was another error before that one, look for it. You want the FIRST 
error.


--
Craig Ringer


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] how to temporally disable foreign key constraint check

2011-10-21 Thread Craig Ringer

On 10/21/2011 09:36 PM, Emi Lu wrote:

Good morning,


Is there a way to temporarily disable foreign key constraint checks, something
like

SET FOREIGN_KEY_CHECKS=0

When population is done, will set FOREIGN_KEY_CHECKS=1


If you really, really want to do this you can do it by disabling the 
triggers that enforce the checks on particular tables. This must be done 
table-by-table, there's no global way to do it.


Use ALTER TABLE ... DISABLE TRIGGER to do it. See:

http://www.postgresql.org/docs/current/static/sql-altertable.html
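A hedged sketch of what that looks like for a bulk load ("orders" is a hypothetical table; disabling ALL triggers, including the internal FK triggers, requires superuser privileges):

```sql
BEGIN;
ALTER TABLE orders DISABLE TRIGGER ALL;   -- also disables internal FK triggers
COPY orders FROM '/path/to/orders.csv' WITH CSV;
ALTER TABLE orders ENABLE TRIGGER ALL;
COMMIT;
```

While the triggers are disabled nothing validates the loaded rows, so you are responsible for guaranteeing referential integrity yourself.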

--
Craig Ringer


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Handling mutliple clients access with views

2011-10-24 Thread Craig Ringer
On 25/10/11 03:23, Brice André wrote:
> Hello everyone,
> 
> I am developping a web service where some tables are handling the data
> of different clients. Depending on configured rights, one client can
> have no access, or read access, or read and write access to other
> clients data.
> 
> In order to handle that, I am using views and, to ensure that a client
> cannot access data outside the view, all clients info is stored in a
> table where its postgresql user name is also stored. So, in order to
> limit access of view, I am using the postgresql special function
> "current_user()" and I am retrieving the id of my client like this.

That sounds ... roundabout.

Why not use roles and role inheritance? You can use SET ROLE to
temporarily change roles, log in as different roles, have one role be a
member of other roles, have role access permissions on tables/views at
the column or table level, etc.

>- My method requests that each user has its own postgresql user. But,
> in this case, my web server needs to establish a postgresql connection
> for each user, which will maybe cause problems.

Connect as a single user, then SET ROLE to the user you want in order to
control access.

Instead of using current_user() and programmatic security checking, use
GRANT and REVOKE for declarative access checking where possible.
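A minimal sketch of that pattern, with all role and table names hypothetical:

```sql
-- One login role for the web server, one NOLOGIN role per client:
CREATE ROLE webapp LOGIN PASSWORD 'secret';
CREATE ROLE client_42 NOLOGIN;
GRANT client_42 TO webapp;                    -- lets webapp SET ROLE to it
GRANT SELECT, UPDATE ON client_42_data TO client_42;

-- On the pooled connection, per request:
SET ROLE client_42;
-- ... statements now run with client_42's privileges ...
RESET ROLE;
```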

> So, I was guessing if I was not completely wrong by doing like that.
> Maybe is there a simpler way of doing what I try to do ? Or maybe am I a
> little bit too paranoïde, and maybe should I handle all my clients with
> a single postgresql user, handling all safety aspect in my php script ?

Nope, I heartily approve of doing security in-database, especially if
you can do it declaratively.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Handling mutliple clients access with views

2011-10-24 Thread Craig Ringer
On 25/10/11 11:36, David Johnston wrote:

> Except database roles cannot strictly enforce row-level security (i.e., 
> multiple-tenant) which is the goal of this setup.

Ah, yes, if they're relying on _row_ level security then that's very
much the case. I misread their post as suggesting that they had
different tables for different clients, rather than tables that mix
different clients' data.

Declarative row-level security (row ownership) would be really nice...
here's hoping the SELinux work can be extended to support a simpler,
OS-agnostic non-SELinux-based row-level RBAC mechanism.

I'd still use SET ROLE where possible, and rely on SECURITY DEFINER
stored procs in cases where clients' data is mixed in a table so you
need to filter it programmatically. Then I'd wrap those procs in a view
so they were transparent and looked like tables, so once row-level
security is implemented you could switch to that without the app caring.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to write sql to access another odbc source.

2011-10-25 Thread Craig Ringer
On 19/10/11 22:08, Rich wrote:
> I have a mumps database with an ODBC connection so I can write queries
> from other databases.  How can I write a sql in Postgresql to access
> this table to use in my Postgresql reports?

Use dbi-link to make an ODBC connection to the other database. Either
copy the data into a PostgreSQL table for processing, or make a view
over the dbi-link function call to query the data each time. The latter
option is slow, but requires no periodic refreshing of the copied table.

If you're on PostgreSQL 9.1 I'd recommend copying the data from the
other database via dbi-link into an unlogged table.

See: http://pgfoundry.org/projects/dbi-link and the documentation for
the dbi-link contrib module in your version of PostgreSQL.

Hopefully in 9.2 the odbc foreign data wrapper will be working and ready
for general use, in which case you'll be able to do it even more easily
using ODBC FDW (SQL/MED).



--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to implement Aggregate Awareness?

2011-11-05 Thread Craig Ringer

On 11/05/2011 05:03 AM, Olgierd Michalak wrote:


Simply put, when Transparent (to the reporting tool) Aggregate Navigator
recognizes that a query would execute faster using aggregates, it
automatically rewrites the query so that the database hits the smaller
aggregates rather than larger detail tables upon which the small
aggregates are pre-computed.


Assuming I understand what you want correctly: Before you could usefully 
do anything like that, I think you'd need to implement full support for 
materialized views.


Once that's in place, the optimizer could potentially be enhanced to 
recognise queries against a base table and rewrite it to use a 
materialized view of the table when it sees that the query only touches 
data collected by the materialized view.


Right now, there isn't really anything for query rewriting like this to 
/target/ .


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] pg_dump : problem with grant on table columns

2011-11-15 Thread Craig Ringer
On Nov 15, 2011 9:46 PM, "Brice André"  wrote:
>
> Hello,
>
> I sometimes define some access rights on table columns instead of whole
table. Everything works fine except when I perform a dump.
>
> When I dump a databse that contains such access rights, the pg_dump
utility generates commands like this one :
> GRANT SELECT("Login") ON TABLE "Clients" TO "AgendaSubscript";
>
> But, when I execute the content of this dump in a fresh database (for
backup and restore stuff), those commands are rejected and thus, my
restored database has not the proper access rights.

Please specify the version of Pg you are dumping from and the version you
are restoring to.

At a guess you are restoring to a version from before column grants were
added.

Please also give the full, exact text of any error message you get. See the
guide to reporting problems on the wiki.

>
> Is this a bug with the pg_dump tool or am I doing something wrong ?
>
> Regards,
>
> Brice André


Re: [SQL] No response from the backend

2011-12-07 Thread Craig Ringer

On 12/07/2011 07:27 PM, feng.zhou wrote:

Hi
There are some questions about using PostgreSQL 8.4.2. I'm not sure whether 
this problem is a bug. The details follow:
I have a client program that communicates with PostgreSQL via ODBC on 
Windows 7. After running for several months, the client program hung 
at 00:01 on 2011-11-26, and became OK again after 8 hours. When it 
recovered, the client program logged the following:
2011/11/26 08:38:05.527,  : SQL ERROR: GetSelectRowNum: 
No response from the backend;

No response from the backend
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: State:42601,Native:26
2011/11/26 08:38:05.527,   : SQL ERROR: GetSelectRowNum: 
Could not send Query(connection dead);

Could not send Query(connection dead)
I don't know how to solve this problem. Someone can explain this 
problem cause.Thanks


I'd say you had a connectivity drop-out, and it only recovered when your 
TCP/IP connection timed out. Reduce your TCP/IP timeouts and/or enable 
keepalives if you have an unreliable network connection between client 
and server.
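On the server side the keepalive knobs live in postgresql.conf; a client can usually set the equivalent options in its connection string. A sketch with illustrative values:

```
# postgresql.conf -- TCP keepalive tuning (values are illustrative;
# 0 means "use the operating system's default")
tcp_keepalives_idle = 60        # seconds idle before the first probe
tcp_keepalives_interval = 10    # seconds between probes
tcp_keepalives_count = 6        # lost probes before the connection is dropped
```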


If that's not the case, then more detail please. Where do the client and 
server run (same machine?)? How are they connected? etc.


--
Craig Ringer


Re: [SQL] No response from the backend

2011-12-07 Thread Craig Ringer

On 12/08/2011 09:38 AM, feng.zhou wrote:

Thanks for your answer.
The client and server run on the same machine (Windows 7). They are 
connected by ODBC. The client accesses the server by calling the ODBC 
API. Connection and query timeouts are not set in the client.




Since client and server are on the same host, it's unlikely to be 
network connectivity. Check the server logs.


--
Craig Ringer


Re: [SQL] partitions versus databases

2011-12-08 Thread Craig Ringer

On 12/08/2011 10:26 PM, chester c young wrote:

have an db with about 15 tables that will handle many companies.  no data 
overlap between companies.  is it more efficient run-time to use one database 
and index each row by company id, and one database and partition each table by 
company id, or to create a database for each company?

it is a web-based app using persistent connections.  no copying.



If you post a question on Stack Overflow and on the mailing list, please 
link to your stack overflow question from your mailing list post!


http://stackoverflow.com/questions/8432636/in-postgresql-are-partitions-or-multiple-databases-more-efficient/

That'll help avoid duplication of effort, and make it easier for people 
searching for similar topics later to find out more.


--
Craig Ringer


Re: [SQL] Query Timeout Question

2011-12-08 Thread Craig Ringer

On 12/09/2011 09:44 AM, feng.zhou wrote:

Hi
I set query timeout in code by using SQLSetStmtAttr ODBC API .
SQLSetStmtAttr( StatementHandle, SQL_ATTR_QUERY_TIMEOUT, (SQLPOINTER)timeout, 0 ) 
;

SQLExecute(StatementHandle);
But I find this setting has no effect. Before adding the timeout setting, 
SQLExecute took 47 seconds; after adding the timeout, it takes the same time.


First: You're using a very old patch release with known bugs. Update to 
8.4.10 .


As for the timeout: PostgreSQL doesn't support query timeouts. It 
supports a session-level statement timeout. I don't know whether the 
ODBC driver uses that or not. Examine the ODBC `mylog' output after 
enabling psqlODBC debugging, and examine the server log after turning on 
query logging, so you can see what the ODBC driver actually asks the 
server for when you set a query timeout.
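The session-level mechanism looks like this, and it is what a driver would have to issue on your behalf:

```sql
SET statement_timeout = '30s';
-- Any statement that now runs longer than 30 seconds is cancelled with:
--   ERROR:  canceling statement due to statement timeout
RESET statement_timeout;
```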


--
Craig Ringer


Re: [SQL] JDBC Statement.setQueryTimeout : is there plan to implement this?

2011-12-14 Thread Craig Ringer

On 14/12/2011 9:29 PM, Sylvain Mougenot wrote:
Even if the behaviour is not implemented, I guess it could be "mocked" 
if the call to Statement.setQueryTimeout(int seconds) generated the 
select statement "SELECT statement_timeout(SEC)".

I know this is not ideal but could solve the problem temporarily.
It could even be turned on by some driver settings.


Please search the mailing list archives for discussion on this topic:

http://www.postgresql.org/search/?m=1&q=setQueryTimeout+&l=&d=-1&s=d


IIRC there are server backend changes required to make it possible to 
implement setQueryTimeout, and nobody's come up with an acceptable 
patch. I haven't followed the issue so I could easily be mistaken, though.

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] JDBC Statement.setQueryTimeout : is there plan to implement this?

2011-12-14 Thread Craig Ringer

On 14/12/2011 9:29 PM, Sylvain Mougenot wrote:
Even if the behaviour is not implemented, I guess it could be "mocked" 
if the call to Statement.setQueryTimeout(int seconds) generated the 
select statement "SELECT statement_timeout(SEC)".

I know this is not ideal but could solve the problem temporarily.
It could even be turned on by some driver settings.



(following up on last post): See in particular this thread:

http://archives.postgresql.org/pgsql-jdbc/2010-10/msg00071.php

with posts like:

http://archives.postgresql.org/pgsql-jdbc/2010-10/msg00131.php
http://archives.postgresql.org/pgsql-jdbc/2010-10/msg00077.php


I'm a little concerned about the proposal to use a java.util.Timer, as 
IIRC there are issues with using a Timer in a JavaEE environment. I'm 
struggling to find more than vague references like it being 
"inappropriate for a managed environment" though.


Ah, here:

http://jcp.org/en/jsr/detail?id=236

"JavaTM Platform, Enterprise Edition (Java EE and formerly known as 
J2EETM) server containers such as the enterprise bean or web component 
container do not allow using common Java SE concurrency APIs such as 
java.util.concurrent.ThreadPoolExecutor, java.lang.Thread, 
java.util.concurrent.ScheduledThreadPoolExecutor or java.util.Timer 
directly."


and

"java.util.Timer, java.lang.Thread and the Java SE concurrency utilities 
(JSR-166) in the java.util.concurrency package should never be used 
within managed environments, as it creates threads outside the purview 
of the container."


I suspect that PgJDBC will have to get a timer from the container via 
JNDI and fall back to direct instantiation if it is in a Java SE 
environment. I'm not sure how to do that right now or whether it can be 
done in a container-independent way (*shudder*). I'm quite sure that 
using EJB timers is NOT the right way to do it - they're not supported 
by web profile containers and are really intended for "business level" 
timers that should be persistent across redeploy/relaunch of 
appserver/reboot.


I've CC'd David Fetter, the author of the JDBC patch.

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] JDBC Statement.setQueryTimeout : is there plan to implement this?

2011-12-14 Thread Craig Ringer

On 12/14/2011 11:30 PM, Sylvain Mougenot wrote:
Thank you Craig Ringer for the detailed list of post (I found some by 
myself).

Especially, I'm glad to see it is the #1 "TODO" on compliance matters.
http://jdbc.postgresql.org/todo.html#Compliance

As a reminder, I found post (on the net, not only on this forum) about 
this issue more than 3 years old!

And still : The driver doesn't implement it.

So,
Does anyone know if there is a plan to implement this feature?


I haven't heard of any active work on it recently.

Are you volunteering? Almost all the PgJDBC work is done on a volunteer 
basis, and the best way to make improvements happen is to do them 
yourself or contract someone who knows PgJDBC to do the work.


--
Craig Ringer


Re: [SQL] JDBC Statement.setQueryTimeout : is there plan to implement this?

2011-12-14 Thread Craig Ringer
Replied on pgsql-jdbc; please follow the discussion there.

--
Craig Ringer

-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] possible bug in psql

2012-05-29 Thread Craig Ringer

On 05/29/2012 07:48 PM, Jasen Betts wrote:

On 2012-05-29, chester c young  wrote:

do not know if right mailing list

in psql in Ubuntu, when beginning with a smaller terminal, eg, 80
col wide, then moving to a larger terminal, eg, 132 col wide, the
readline(?) editor in psql still treats like 80 cols, making it
impossible to edit longer text.

As far as I know (as this is the behavior in debian stable)
the ubuntu people use libgettext and not libreadline.

You can force it to use libreadline by using LD_PRELOAD. I recommend
that you do; I did this by editing /usr/bin/psql, which is a perl script.
In any case linux doesn't propagate SIGWINCH to the other processes
that are using the pty, so even if you are using readline you have to
force this by resizing the window again after exiting the pager, and
then press ctrl-l to clear the screen.
It'd be nice if `less` would kill(SIGWINCH) its ppid since it is so 
often invoked as a helper. I guess they probably have good reasons not to.


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Partitioned Tables

2012-06-12 Thread Craig Ringer

On 06/13/2012 08:05 AM, Wayne Cuddy wrote:

I'm using partitioned tables where a child table is chosen based on a
time stamp. One child table exists for each month.

I move records from a temp table to the partitioned table via something like
this:

INSERT INTO parent_table SELECT * FROM temp_table;

All works well but with when inserting to standard tables the insert
result contains the number of records inserted. I've found that when I
insert into partitioned tables the result is always zero.


Are you inserting directly into the partition? Or into the parent table?

If you're inserting into the parent, then it's true that no rows were 
inserted into the parent; a trigger redirected the write into the child 
table. Pg's handling of partitioning isn't yet sophisticated enough to 
recognise that those rows got written to a child and thus still appeared 
in the parent and report that.


As a workaround, ignore the insert count for partitioned tables, or 
insert directly into the appropriate partition(s).
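The zero count follows directly from how a typical redirect trigger is written — a minimal sketch (partition, trigger, and function names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION parent_table_insert_trigger() RETURNS trigger AS $$
BEGIN
    -- Write the row to the appropriate child partition instead:
    INSERT INTO parent_table_2012_06 VALUES (NEW.*);
    RETURN NULL;  -- suppresses the insert into the parent, hence "INSERT 0 0"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_table_insert
    BEFORE INSERT ON parent_table
    FOR EACH ROW EXECUTE PROCEDURE parent_table_insert_trigger();
```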


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] how to use schema with data type

2012-06-12 Thread Craig Ringer

On 06/13/2012 08:46 AM, John Fabiani wrote:

I have tried to use a user defined data type I created within a schema.
But I could not figure it out.

CREATE OR REPLACE FUNCTION xchromasun._chromasun_totals(date)
   RETURNS SETOF xchromasun.weekly_mpr AS

CREATE OR REPLACE FUNCTION xchromasun._chromasun_totals(date)
   RETURNS SETOF "xchromasun.weekly_mpr" AS

I had to move the user defined data type to "public".

Could someone explain how I might get that done - that is use a schema
data type from a function.


What exactly couldn't you work out?

How was it going wrong, and when? What error messges did you get?

The first form looks reasonable to me, though I haven't tested. If you 
need to quote the schema for caps reasons, you'd use:


   "xchromasun"."weekly_mpr"


--
Craig Ringer



--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Insane behaviour in 8.3.3

2012-06-19 Thread Craig Ringer

On 06/19/2012 05:17 PM, Achilleas Mantzios wrote:

We had another corruption incident on the very same machine, this time in the jboss 
subsystem (a "jar cvf" produced a corrupted .jar).
IMHO this means faulty RAM/disk.
If that is true, then i guess HW sanity checks are even more important than SW 
upgrades.


... and a lot more difficult :S

Log monitoring is often the most important part - monitoring for NMIs and 
other hardware notifications, checking the kernel log for odd issues or 
reports of unexpected segfaults from userspace programs, etc.


--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Is there a similarity-function that minds national charsets?

2012-06-20 Thread Craig Ringer

On 06/21/2012 12:30 AM, Andreas wrote:

Hi,

Is there a similarity-function that minds national charsets?

Over here we've got some special cases that screw up the results on 
similarity().


Our characters: ä, ö, ü, ß
could as well be written as:  ae, oe, ue, ss

e.g.

select similarity ( 'Müller', 'Mueller' )
results to:  0.363636

In normal cases everything below 0.5 would be too far apart to be 
considered a match.


That's not just charset aware, that's looking for awareness of 
language-and-dialect specific transliteration rules for representing 
accented chars in 7-bit ASCII. My understanding was that these rules and 
conventions vary and are specific to each language - or even region.


tsearch2 has big language dictionaries to try to handle some issues like 
this (though I don't know about this issue specifically). It's possible 
you could extend the tsearch2 dictionaries with synonyms, possibly 
algorithmically generated.


If you have what you consider to be an acceptable 1:1 translation rule 
you could build a functional index on it and test against that, eg:


CREATE INDEX blah ON thetable ( (flatten_accent(target_column)) );
SELECT similarity( flatten_accent('Müller'), flatten_accent(target_column) );

Note that the flatten_accent function must be IMMUTABLE and can't access 
or refer to data in other tables, columns, etc nor SET (GUC) variables 
that might change at runtime.
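A minimal sketch of such a flatten_accent function, covering only the four lowercase characters mentioned (the name and rule set are assumptions, not a complete transliteration):

```sql
CREATE OR REPLACE FUNCTION flatten_accent(text) RETURNS text AS $$
    SELECT replace(replace(replace(replace($1,
               'ä', 'ae'),
               'ö', 'oe'),
               'ü', 'ue'),
               'ß', 'ss');
$$ LANGUAGE sql IMMUTABLE STRICT;
```

With that in place, flatten_accent('Müller') yields 'Mueller', so comparing flattened values scores such pairs as exact matches rather than 0.36.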

--
Craig Ringer

--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] How to limit access only to certain records?

2012-06-23 Thread Craig Ringer

On 06/22/2012 07:36 PM, Andreas wrote:

Hi,

is there a way to limit access for some users only to certain records?

e.g. there is a customer table and there are account-managers.
Could I limit account-manager #1 so that he can only access customers 
according to a flag?


What you describe is called row-level access control, row level 
security, or label access control, depending on who you're talking to. 
It's often discussed as part of multi-tenant database support.


As far as I know PostgreSQL does not currently offer native facilities 
for row-level access control (except possibly via SEPostgreSQL 
http://wiki.postgresql.org/wiki/SEPostgreSQL_Introduction). There's 
discussion of adding such a feature here 
http://wiki.postgresql.org/wiki/RLS .


As others have noted the traditional way to do this in DBs without row 
level access control is to use a stored procedure (in Pg a SECURITY 
DEFINER function), or a set of access-limited views, to access the data. 
You then REVOKE access on the main table for the user so they can *only* 
get the data via the procedure/views.
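A hedged sketch of the procedure approach — table, column, and role names are all hypothetical:

```sql
-- The manager can no longer read the table directly:
REVOKE ALL ON customer FROM account_manager_1;

-- A SECURITY DEFINER function filters rows by the calling session's user.
-- Note session_user, not current_user: inside a SECURITY DEFINER function,
-- current_user reports the function owner.
CREATE OR REPLACE FUNCTION my_customers() RETURNS SETOF customer AS $$
    SELECT * FROM customer WHERE manager_name = session_user;
$$ LANGUAGE sql SECURITY DEFINER;

GRANT EXECUTE ON FUNCTION my_customers() TO account_manager_1;
```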


See:
http://www.postgresql.org/docs/current/static/sql-createview.html
http://www.postgresql.org/docs/current/static/sql-createfunction.html
http://www.postgresql.org/docs/current/static/sql-grant.html
http://www.postgresql.org/docs/current/static/sql-revoke.html


Hope this helps.

--
Craig Ringer


Re: [SQL] Simple Upgrade from PostgreSQL version 8.1.11 (With schemas)

2012-07-10 Thread Craig Ringer

On 07/10/2012 11:28 PM, Chris Preston wrote:

Hello all,
How far can I get to a higher version of PostgreSQL by just entering a
command line instruction to upgrade without any major effort?


Not very, unless your database and queries are very simple.

At minimum you'll have to deal with the removal of implicit casts to 
text ( 
http://www.postgresql.org/docs/current/static/release-8-3.html#AEN116407) and, 
if you use bytea, the change of default bytea format ( 
<http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#GUC-BYTEA-OUTPUT> 
http://www.postgresql.org/docs/current/static/release-9-0.html#AEN109764).


In general, you will want to read the "Migration to" notes for each version:

http://www.postgresql.org/docs/current/static/release-8-2.html
http://www.postgresql.org/docs/current/static/release-8-3.html
http://www.postgresql.org/docs/current/static/release-8-4.html
http://www.postgresql.org/docs/current/static/release-9-0.html
http://www.postgresql.org/docs/current/static/release-9-1.html


Alternately, you could try using PostgreSQL 9.1's pg_dump on your 8.1 
database, load the dump into a 9.1 instance, run your client against it 
and see what breaks. For simpler apps that can be a reasonable choice.


--
Craig Ringer


Re: [SQL] Selecting data from XML

2012-07-16 Thread Craig Ringer

On 07/17/2012 03:56 AM, Victor Sterpu wrote:

If I have a XML like this






can I write a query that will output the columns names and values like this?

code;validFrom;validTo
--
CLIN102;1980-02-23;
CLIN103;1980-02-23;2012-01-01


http://www.postgresql.org/docs/9.1/static/functions-xml.html#FUNCTIONS-XML-PROCESSING

You should be able to do it with some xpath expressions. It probably 
won't be fast or pretty. Consider using PL/Python, PL/perl, PL/Java, or 
something like that to do the processing and return the resultset.
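Since the XML sample was lost from the post, here is a hedged sketch against assumed element and attribute names:

```sql
SELECT (xpath('/*/@code',      n))[1]::text AS code,
       (xpath('/*/@validFrom', n))[1]::text AS "validFrom",
       (xpath('/*/@validTo',   n))[1]::text AS "validTo"
FROM unnest(xpath('//code',
    '<codes>
       <code code="CLIN102" validFrom="1980-02-23"/>
       <code code="CLIN103" validFrom="1980-02-23" validTo="2012-01-01"/>
     </codes>'::xml)) AS n;
```

A missing attribute simply yields NULL, which matches the empty validTo column in the desired output.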


--
Craig Ringer



Re: [SQL] Tablesample Function on Postgres 9.1

2012-08-29 Thread Craig Ringer

On 08/30/2012 04:07 AM, Tom Lane wrote:

"Mubarik, Aiysha"  writes:

I am trying to run a simple query to test out the tablesample function, but in 
postgres 9.1 it does not recognize the function. As from the wiki it seems like 
tablesample is supported 
(https://wiki.postgresql.org/wiki/TABLESAMPLE_Implementation)

Sorry, that wiki page is just blue-sky speculation.  If the feature were
supported, you would find it in the main documentation.

Wiki page updated to make that clearer.

--
Craig Ringer


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] ORDER BY COLUMN_A, (COLUMN_B or COLUMN_C), COLUMN_D

2012-09-16 Thread Craig Ringer

On 09/13/2012 07:20 AM, Rodrigo Rosenfeld Rosas wrote:


Thank you Gavin, I was looking for this LEAST function for a long time.
I have tried MIN but it didn't work with strings.


In SQL, "MIN" is an aggregate function. It actually does work with 
strings, but only when used as an aggregate:


regress=# SELECT min(x.a) FROM ( VALUES ('blah'),('blah2'),('') ) x(a);
 min
-----
 
(1 row)
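LEAST is the row-wise counterpart: it compares its arguments within a single row rather than aggregating across rows, and it does work with strings:

```sql
SELECT LEAST('blah', 'blah2', 'apple');
-- returns 'apple' under a typical collation
```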

--
Craig Ringer


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Calling the CTE for multiple inputs

2012-10-04 Thread Craig Ringer

On 10/05/2012 03:31 AM, air wrote:

I have a CTE that takes top left and bottom right latitude/longitude values
along with a start and end date and it then calculates the amount of user
requests that came from those coordinates per hourly intervals between the
given start and end date. However, I want to execute this query for about
2600 seperate 4-tuples of lat/lon corner values instead of typing them in
one-by-one. How would I do that? The code is as below:


Sometimes it's easiest to just wrap it in an SQL function.

CREATE OR REPLACE FUNCTION some_expr( lat_low IN integer, lat_high IN 
integer, lon_low IN integer, lon_high IN integer, calltime OUT 
timestamptz, zcount OUT integer) returns setof record as $$

-- ... query text here, using $1 through $4 to refer to parameters
$$ LANGUAGE 'SQL';

... then invoke with something like (untested, from memory):

SELECT (some_expr(lat_low, lat_high, lon_low, lon_high).*)
FROM table_containing_lat_lon_pairs;

Alternately you may be able to rephrase the `qqq` part as a `join` on a 
table containing the lat/lon pairs and include those pairs in `qqq`'s 
output as well as the rest. You then use those in the outer query where 
required. Without a schema to test with and some understanding of what 
the query does it's hard to say exactly.


Wrapping it in a function is likely to be less efficient, but probably 
easier.


--
Craig Ringer





WITH cal AS (
 SELECT generate_series('2011-02-02 00:00:00'::timestamp ,
'2012-04-01 05:00:00'::timestamp ,
'1 hour'::interval) AS stamp
),
qqq AS (
   SELECT date_trunc('hour', calltime) AS stamp, count(*) AS zcount
   FROM mytable
   WHERE calltime >= '2011-02-13 11:59:11'
 AND calltime <= '2012-02-13 22:02:21'
 AND (calltime::time >= '11:59:11'
 AND calltime::time <= '22:02:21')
 AND ((extract(DOW from calltime) = 3) /*OR (extract(DOW from calltime) =
5)*/)
 AND lat BETWEEN '40' AND '42'
 AND lon BETWEEN '28' AND '30'
  GROUP BY date_trunc('hour', calltime)
)
SELECT cal.stamp, COALESCE (qqq.zcount, 0) AS zcount
FROM cal
LEFT JOIN qqq ON cal.stamp = qqq.stamp
WHERE cal.stamp >= '2011-02-13 11:00:00'
   AND cal.stamp <= '2012-02-13 22:02:21'
   AND ((extract(DOW from cal.stamp) = 3) /*OR (extract(DOW from cal.stamp) =
5)*/)
   AND (
 extract ('hour' from cal.stamp) >= extract ('hour' from '2011-02-13
11:00:00'::timestamp) AND
 extract ('hour' from cal.stamp) <= extract ('hour' from '2012-02-13
22:02:21'::timestamp)
   )
ORDER BY stamp ASC;


And the sample output for the query above:

calltime  zcount
"2011-02-16 11:00:00"0
"2011-02-16 12:00:00"   70
"2011-02-16 13:00:00" 175
"2011-02-16 14:00:00"   97
"2011-02-16 15:00:00"  167
.
.
.





--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Calling-the-CTE-for-multiple-inputs-tp5726661.html
Sent from the PostgreSQL - sql mailing list archive at Nabble.com.






--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] find sudstring on text (data type) field failure

2012-11-11 Thread Craig Ringer
On 11/07/2012 04:36 PM, pantelis vlachos wrote:
> I was trying to find a substring on a text (data type) column like
> 'cat foo dog ...'.
> I use the query below
> SELECT id FROM table WHERE name LIKE '% foo %';
> Sometimes the query return with nTuples=0 but there are matching rows.
> On retry, the query returns the expected results. Any ideas?
>
> (postgres ver 9.2, libpq - C Library)

Nowhere near enough information. I'd say you have concurrency issues; at
the time you ran the query there weren't actually matching rows. See if
you can put together a test case that demonstrates the issue, or follow
up with a lot more detail.


-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] organizing cron jobs in one function

2012-11-18 Thread Craig Ringer
On 11/18/2012 12:19 AM, Louis-David Mitterrand wrote:
> Hi,
>
> I'm planning to centralize all db maintenance jobs from a single
> pl/pgsql function called by cron every 15 minutes (highest frequency
> required by a list of jobs).
It sounds like you're effectively duplicating PgAgent.

Why not use PgAgent instead?

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] organizing cron jobs in one function

2012-11-18 Thread Craig Ringer
On 11/19/2012 01:11 AM, Louis-David Mitterrand wrote:
> On Sun, Nov 18, 2012 at 07:27:54PM +0800, Craig Ringer wrote:
>> On 11/18/2012 12:19 AM, Louis-David Mitterrand wrote:
>>> Hi,
>>>
>>> I'm planning to centralize all db maintenance jobs from a single
>>> pl/pgsql function called by cron every 15 minutes (highest frequency
>>> required by a list of jobs).
>> It sounds like you're effectively duplicating PgAgent.
>>
>> Why not use PgAgent instead?
> Sure, I didn't know about PgAgent.
>
> Is it still a good solution if I'm not running PgAdmin and have no plan
> doing so?
>
It looks like it'll work. The main issue is that if your jobs run
over-time, you don't really have any way to cope with that. Consider
using SELECT ... FOR UPDATE, or just doing an UPDATE ... RETURNING
instead of the SELECT.
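
The UPDATE ... RETURNING idea can be sketched roughly like this (the
jobs table and all column names here are invented for illustration):

```sql
-- Hypothetical bookkeeping table for home-grown cron jobs.
-- A worker "claims" a due job atomically; if zero rows come back,
-- another worker got there first or the job is not due yet.
UPDATE cron_job
   SET last_started = now()
 WHERE job_name = 'purge_old_sessions'
   AND (last_started IS NULL
        OR last_started < now() - interval '15 minutes')
RETURNING job_name, last_started;
```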

I'd also use one procedure per job in separate transactions. That way if
your 4-hourly job runs overtime, it doesn't block your 5-minutely one.

Then again, I'd also just use PgAgent.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services





Re: [SQL] Select into

2008-03-20 Thread Craig Ringer

Gavin 'Beau' Baumanis wrote:


The copy is inside the same table, so I don't understand why it (the 
required query ) would require any joins.
Maybe you could use a FROM clause in the UPDATE that references a 
row-valued subquery?


craig=# create table x ( id serial, val integer );
NOTICE:  CREATE TABLE will create implicit sequence "x_id_seq" for 
serial column "x.id"

CREATE TABLE

craig=# insert into x ( val ) values ( 4 ) , ( 6 ) ;
INSERT 0 2

craig=# select * from x;
 id | val
----+-----
  1 |   4
  2 |   6
(2 rows)

craig=# update x set val = foundrow.val from ( select val from x where 
id = 2 ) as foundrow where id = 1 ;

UPDATE 1

craig=# select * from x;
 id | val
----+-----
  2 |   6
  1 |   6
(2 rows)

craig=# insert into x ( val ) select generate_series(0,1);
INSERT 0 10001

craig=# explain update x set val = foundrow.val from ( select val from x 
where id = 4123 ) as foundrow where id = 5912 ;

                               QUERY PLAN
-------------------------------------------------------------------------
Nested Loop  (cost=0.00..16.55 rows=1 width=14)
  ->  Index Scan using x_id_idx on x  (cost=0.00..8.27 rows=1 width=10)
Index Cond: (id = 5912)
  ->  Index Scan using x_id_idx on x  (cost=0.00..8.27 rows=1 width=4)
Index Cond: (public.x.id = 4123)
(5 rows)

Will that do the job?

--
Craig Ringer



Re: [SQL] Select into

2008-03-20 Thread Craig Ringer





craig=# update x set val = foundrow.val from ( select val from x 
where id = 2 ) as foundrow where id = 1 ;

UPDATE 1




Thinking about it, it'd actually be better written as:

UPDATE x SET val = foundrow.val FROM ( SELECT val FROM x AS x2 WHERE 
x2.id = 2 ) AS foundrow WHERE id = 1;


... because it's nicer to use a table alias for x within the subquery 
and eliminate any ambiguity for the reader about which "id" you're 
referring to. After all, it's also valid to reference the "id" field of 
the outer "x" from within the subquery, as in the following valid but 
rather nonsensical query:


UPDATE x SET val = (SELECT id+1) WHERE id = 1;

Using the table alias will not change the query plan at all, it just 
makes the reference to "id" within the subquery unambiguous to the reader.


Sorry for the repeat post.

--
Craig Ringer



Re: [SQL] Select into

2008-03-20 Thread Craig Ringer

Gurjeet Singh wrote:

Except that it doesn't work... Did you try to execute that query; I am
assuming not.
  
It does, or at least a query written to work the same way works fine for 
me. Not only that, but at least in the presence of a unique index the 
query planner optimises it to the same query plan as the one I proposed.


From my earlier test data:

craig=# update x set val = x2.val from x as x2 where x.id = 1000 and 
x2.id = 1024;

UPDATE 1
craig=# select * from x where id in (1000,1024);
  id  | val
------+------
 1024 | 1021
 1000 | 1021
(2 rows)

craig=# explain update x set val = x2.val from x as x2 where x.id = 1000 
and x2.id = 1024;

                                QUERY PLAN
---------------------------------------------------------------------------
Nested Loop  (cost=0.00..16.55 rows=1 width=14)
  ->  Index Scan using x_id_idx on x  (cost=0.00..8.27 rows=1 width=10)
Index Cond: (id = 1000)
  ->  Index Scan using x_id_idx on x x2  (cost=0.00..8.27 rows=1 width=4)
Index Cond: (x2.id = 1024)
(5 rows)

The above query actually executes slightly faster, presumably because 
the query planner has to do less work to reach the same point than it 
does with the subquery-based one I proposed. You should probably use 
this one instead of the subquery one.


--
Craig Ringer



Re: [SQL] Rollback locks table - why?

2008-03-20 Thread Craig Ringer

Jan Peters wrote:

If I run the transaction block from above again I get first a unique key violation (that is ok, because 
that's what I trying to check for) but there is NO rollback to sp1, only the "Unique Key" error 
message and after that I get the dreaded "current transaction is aborted" error message and the 
system reports an EXCLUSIVE LOCK on the table (in transaction).
  
Am I correct in saying that your issue is really that you want an error 
to trigger an automatic rollback to the last savepoint, but it does not 
do so ?


If you issue the ROLLBACK manually (after you get the constraint error 
or a "current transaction is aborted" message) does it work as you would 
expect?


--
Craig Ringer



Re: [SQL] Rollback locks table - why?

2008-03-20 Thread Craig Ringer

Jan Peters wrote:

Yes, you are correct: I just want to issue an insertion of a row at the very beginning of an 
application start. So, when the table "test" is empty,  the row with "runde = 
0"  should be inserted. If this row is already present, it should be updated.
  
This is quite common - you might find past discussions about 
alternatives to the SQL:2003 MERGE statement for PostgreSQL informative.
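
Those discussions usually settle on a PL/pgSQL function with an
exception block. A rough sketch against the poster's table (the "val"
column is invented; only "runde" appears in the original post):

```sql
CREATE OR REPLACE FUNCTION upsert_round(v integer) RETURNS void AS $$
BEGIN
    -- Try the update first; if a row with runde = 0 exists, we're done.
    UPDATE test SET val = v WHERE runde = 0;
    IF FOUND THEN
        RETURN;
    END IF;
    BEGIN
        INSERT INTO test (runde, val) VALUES (0, v);
    EXCEPTION WHEN unique_violation THEN
        -- A concurrent insert beat us to it; fall back to the update.
        UPDATE test SET val = v WHERE runde = 0;
    END;
END;
$$ LANGUAGE plpgsql;
```

The inner BEGIN ... EXCEPTION block establishes an implicit savepoint,
so the unique violation never aborts the surrounding transaction.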

How do I do a "manual" ROLLBACK?
  

To roll back to a particular savepoint:

ROLLBACK TO SAVEPOINT spname;


However, for what you're trying to do another option is to just issue a 
pair of statements that'll work anyway. You should probably test and see 
which works better, faster, etc. Assuming there's only one row in the 
table so I don't need any more specific WHERE clauses, I might try 
something like:


UPDATE my_unique_table SET col = some_col_val;
INSERT INTO my_unique_table ( col ) SELECT some_col_val WHERE NOT EXISTS 
(SELECT 1 FROM my_unique_table WHERE someotherconstraint = somevalue);


because one of those two is guaranteed to work whether or not there are 
any rows in the table (assuming I got it all right, at least). 
Presumably you're doing something like limiting app instances to one per 
user, in which case you'd need additional constraints in the above (you 
wouldn't be keeping just one row anymore, but one per user) and some 
more indexes but the idea's basically the same.



Maybe you should tell the readers of this list a little more about what 
you're trying to do and why?

--
Craig Ringer



[SQL] Deferrable NOT NULL REFERENCES ... for two-way referential relationship

2008-03-25 Thread Craig Ringer
Hi all

I'm running into an issue with my schema where I'm trying to establish a
mandatory two-way relationship between two entities, and I'm hoping for
some external wisdom. Using "customer" and "address" by way of example:

CREATE TABLE customer (
 id INTEGER PRIMARY KEY,
 address_id INTEGER NOT NULL
 REFERENCES address(id) DEFERRABLE INITIALLY DEFERRED
)

CREATE TABLE address (
 id INTEGER PRIMARY KEY,
 customer_id INTEGER NOT NULL
 REFERENCES customer(id)
)

Every customer must have one or more addresses, ie address_id must be
set to a valid address by commit time. It does not matter which address
it's set to (though in this particular app there is in fact a preferred
address).

If it could be written as a CHECK constraint I'd be trying to enforce:

CONSTRAINT address_id_exists CHECK
  ( EXISTS (SELECT 1 FROM address WHERE address.id = address_id) )
  DEFERRABLE INITIALLY DEFERRED;

... on the customer table.

PostgreSQL supports DEFERRABLE INITIALLY DEFERRED for the foreign key
constraint, but unless I'm missing something it doesn't appear to have
any direct way to make the NOT NULL check deferrable.

There are constraint triggers:

http://www.postgresql.org/docs/8.3/static/sql-createconstraint.html

and I could use one to enforce the NOT NULL on the table as a whole (at
an acceptable cost in this case). However, I've seen some posts like
this one:

http://archives.postgresql.org/pgsql-hackers/2005-01/msg00882.php

that suggest that constraint triggers are or have been deprecated.
However, their removal is no longer on the TODO as far as I can tell.

Are constraint triggers a safe and reasonably future proof way to
implement this, given that I'm quite OK with being tied to postgresql?

Is there some better way that I'm missing?

Is what I'm trying to do crazy for some reason I haven't spotted?

--
Craig Ringer



Re: [SQL] Deferrable NOT NULL REFERENCES ... for two-way referential relationship (SOLVED?)

2008-03-25 Thread Craig Ringer

Craig Ringer wrote:

Hi all

I'm running into an issue with my schema where I'm trying to establish a
mandatory two-way relationship between two entities, and I'm hoping for
some external wisdom. Using "customer" and "address" by way of example:

CREATE TABLE customer (
 id INTEGER PRIMARY KEY,
 address_id INTEGER NOT NULL
 REFERENCES address(id) DEFERRABLE INITIALLY DEFERRED
)

CREATE TABLE address (
 id INTEGER PRIMARY KEY,
 customer_id INTEGER NOT NULL
 REFERENCES customer(id)
)


OK, it looks like there are at least two ways to do it, and I'm hoping 
for some comments from the experts on what seems sanest/cleanest/most 
future-proof and best behaved in a concurrent environment. I also 
figure this post might be useful for the archives.




It looks like it's possible to avoid the use of a constraint trigger by
making address_id in the above NOT NULL and inserting a dummy value when
inserting the customer record. A DEFAULT(-1) clause on address_id will
have much the same effect of a deferred NOT NULL check when combined
with the deferred REFERENCES check.

An AFTER ... INSERT trigger on address then updates the associated
customer with the address_id, and an AFTER ... DELETE trigger ensures
that if an address is deleted another valid ID (if any exists, otherwise
null) is inserted into customer.address_id.

However, as above the schema will permit customer to reference an ID
that doesn't have a reference back to the same customer, so it'd have to
be something like:

CREATE TABLE customer (
 id SERIAL  PRIMARY KEY,
 address_id INTEGER NOT NULL DEFAULT(-1),
 name   TEXT
);

CREATE TABLE address (
 id SERIAL  PRIMARY KEY,
 CONSTRAINT address_pkey_not_negative CHECK(id >= 0),
 customer_id INTEGER NOT NULL
 REFERENCES customer(id) ON DELETE CASCADE,
 addr   TEXT,
 UNIQUE(id, customer_id)
);

ALTER TABLE customer ADD CONSTRAINT customer_address_fkey
 FOREIGN KEY (address_id, id)
 REFERENCES address(id, customer_id) DEFERRABLE INITIALLY DEFERRED;

CREATE OR REPLACE FUNCTION address_insert_trig_set_cust_address_id ()
RETURNS trigger AS $$
BEGIN
UPDATE customer
SET address_id = new.id
WHERE customer.id = new.customer_id
  AND customer.address_id = -1;
RETURN NULL;
END;
$$ LANGUAGE 'plpgsql' VOLATILE;

CREATE OR REPLACE FUNCTION address_delete_set_cust_address_id ()
RETURNS trigger AS $$
BEGIN
UPDATE customer
SET address_id =
(SELECT id
 FROM address
 WHERE address.customer_id = customer.id
 LIMIT 1)
WHERE customer.id = old.customer_id;
RETURN NULL;
END;
$$ LANGUAGE 'plpgsql' VOLATILE;

CREATE TRIGGER address_insert_cust_address_id AFTER INSERT ON address
FOR EACH ROW EXECUTE PROCEDURE address_insert_trig_set_cust_address_id();

CREATE TRIGGER address_delete_cust_address_id AFTER DELETE ON address
FOR EACH ROW EXECUTE PROCEDURE address_delete_set_cust_address_id();



An UPDATE trigger preventing changes to the id field of address is assumed.

As a result, the whole thing is transparent to users, though it does
mean that inserts/deletes to `address' result in a lock on the
associated customer record. I'm not 100% sure it's free from races or
locking issues, but at least with races in this case the worst outcome
should be an unexpected error on commit, right?


It works, though. The error from a delete removing the last record could 
be nicer, but that's easily fixed in the DELETE trigger.


Does this look like a better idea than just using a deferred constraint
trigger (on both customer and address, to catch customers inserted w/o
an address and to catch deleted addresses) to directly verify that at
commit time a customer has one or more addresses?




I guess a deferred constraint trigger for the same job, again assuming 
an UPDATE trigger is in place to prevent changes to address2.id and 
customer2.id, would look like:


CREATE TABLE customer2 (
   id SERIAL PRIMARY KEY,
   name TEXT
);

CREATE TABLE address2 (
   id SERIAL PRIMARY KEY,
   customer2_id INTEGER REFERENCES customer2(id) ON DELETE CASCADE,
   addr TEXT
);
CREATE INDEX address2_customer2id_idx ON address2(customer2_id);

CREATE OR REPLACE FUNCTION assert_customer_has_address(INTEGER)
RETURNS VOID AS $$
BEGIN
-- Make sure the customer record actually exists; if it does
-- not then consider the assertion to be passed.
IF EXISTS (SELECT 1 FROM customer2 WHERE id = $1) THEN
-- Customer exists. Verify that at least one address entry for
-- it exists.
PERFORM 1 FROM address2 WHERE customer2_id = $1 LIMIT 1;
IF NOT FOUND THEN
RAISE EXCEPTION 'Customer must have one or more related 
address entries';

END IF;
END IF;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION customer_inserted_addrcheck_tr()
RETURNS TRIGGER AS $$
BEGIN
PERFORM assert_customer_has_address(new.id);
RETURN

Re: [SQL] link to Jsp

2008-03-27 Thread Craig Ringer
Shavonne Marietta Wijesinghe wrote:
> Hello
> 
> I'm trying to connect to my postgresql db from a JSP page but my db is on one 
> computer and the jsp in another.. 
> 
> Has anyone done something like this before??
> 
>  private synchronized Connection getConnection() throws 
> ClassNotFoundException,
>   SQLException {
> final String DRIVER = "org.postgresql.Driver";
> final String URL = "jdbc:postgresql:FormAjax";

Read the documentation for the PostgreSQL JDBC layer, particularly the
part about the JDBC URL. I quote:


The JDBC URL

In JDBC all url's begin with jdbc:protocol: This is the standard. After
this is driver specific, and no two drivers are the same.

So our url? It's one of the following:

jdbc:postgresql:database
jdbc:postgresql://host/database
jdbc:postgresql://host:port/database

where database is the database to connect to, host the server to connect
to, and port the port number.

If left out, host defaults to localhost (not 127.0.0.1 see applets!) and
port to 5432 (configurable at build time).
---

--
Craig Ringer



Re: [SQL] Removing redundant itemsets

2008-03-31 Thread Craig Ringer
Allan Kamau wrote:
> Hi all,
> I have a list of purchases (market basket) and I would like to select
> non redundant longest possible patterns by eliminating
> (creating/populating other table to contain only non redandant itemsets)
> purchases having item lists which are fully included in at least one
> other purchase.

Here's a possibly slow and surely ugly solution (I think it's right,
though I haven't done more than passing testing):



CREATE VIEW togo_as_arr AS
  SELECT a.tid,
ARRAY(SELECT item FROM togo b WHERE b.tid = a.tid ORDER BY item)
AS items
  FROM togo a GROUP BY tid;

SELECT arr_a.tid AS redundant_tid, arr_b.tid AS contained_by
FROM togo_as_arr arr_a CROSS JOIN togo_as_arr arr_b
WHERE arr_a.tid <> arr_b.tid AND arr_a.items <@ arr_b.items;



(the view isn't necessary, but does improve the readability of the query).

It groups the purchases up with item lists as arrays, then finds any
purchases with items arrays wholly contained by other item arrays from
other purchases.

I'm *sure* there's a smarter way to do this that avoids the use of
arrays, but I don't seem to be able to come up with one right now. It's
interesting, though, so I might keep fiddling.

--
Craig Ringer



Re: [SQL] Removing redundant itemsets

2008-03-31 Thread Craig Ringer
Craig Ringer wrote:
> Allan Kamau wrote:
>> Hi all,
>> I have a list of purchases (market basket) and I would like to select
>> non redundant longest possible patterns by eliminating
>> (creating/populating other table to contain only non redandant itemsets)
>> purchases having item lists which are fully included in at least one
>> other purchase.
> 
> Here's a possibly slow and surely ugly solution (I think it's right,
> though I haven't done more than passing testing):
> 
> 
> 
> CREATE VIEW togo_as_arr AS
>   SELECT a.tid,
> ARRAY(SELECT item FROM togo b WHERE b.tid = a.tid ORDER BY item)
> AS items
>   FROM togo a GROUP BY tid;
> 
> SELECT arr_a.tid AS redundant_tid, arr_b.tid AS contained_by
> FROM togo_as_arr arr_a CROSS JOIN togo_as_arr arr_b
> WHERE arr_a.tid <> arr_b.tid AND arr_a.items <@ arr_b.items;

Alternately:

-- Helps with the massively repeated subquery below
CREATE INDEX togo_by_tid_and_item ON togo(tid,item);

-- Find any `a' for which `item_from_a_is_in_b' is
-- true for all items in `a'
SELECT a_tid AS is_redundant, b_tid AS contained_by
FROM (
  -- For every item in every pair of purchases,
  -- determine whether the item in purchase `a'
  -- was also in purchase `b'.
  SELECT
a.tid AS a_tid,
b.tid AS b_tid,
a.item AS item,
EXISTS(
  -- Was this item from `a' also in the `b' purchase?
  SELECT 1 FROM togo x WHERE x.tid = b.tid AND x.item = a.item
) AS item_from_a_is_in_b
  FROM togo a INNER JOIN togo b ON (a.tid <> b.tid)
  GROUP BY a.tid, b.tid, a.item) AS item_containment
GROUP BY a_tid, b_tid
HAVING every(item_from_a_is_in_b);


... which avoids the array building, but is actually considerably slower
on the trivial test data. That's not too surprising given that this
approach requires a subquery:

   SELECT 1 FROM togo x WHERE x.tid = b.tid AND x.item = a.item

for EVERY item to be tested. Twice, actually, as each record appears in
both `a' and `b' positions.

I'd be very interested to see what happened on real world test data,
especially compared to doing the array accumulation based query off a
temporary table instead of a view.

I suspect it'll depend on the average number of items per purchase -
lots of items per purchase and the array building cost will dominate.
That's really just a guess, though.

I'm sure there's a properly smart way to do this that I just can't
figure out, but this is the best I've come up with so far.

--
Craig Ringer



Re: [SQL] Removing redundant itemsets

2008-03-31 Thread Craig Ringer
> -- Find any `a' for which `item_from_a_is_in_b' is
> -- true for all items in `a'
> SELECT a_tid AS is_redundant, b_tid AS contained_by
> FROM (
>   -- For every item in every pair of purchases,
>   -- determine whether the item in purchase `a'
>   -- was also in purchase `b'.
>   SELECT
> a.tid AS a_tid,
> b.tid AS b_tid,
> a.item AS item,
> EXISTS(
>   -- Was this item from `a' also in the `b' purchase?
>   SELECT 1 FROM togo x WHERE x.tid = b.tid AND x.item = a.item
> ) AS item_from_a_is_in_b
>   FROM togo a INNER JOIN togo b ON (a.tid <> b.tid)
>   GROUP BY a.tid, b.tid, a.item) AS item_containment
> GROUP BY a_tid, b_tid
> HAVING every(item_from_a_is_in_b);

That really should've been written as:

SELECT
  a.tid AS is_redundant,
  b.tid AS contained_by
FROM togo a INNER JOIN togo b ON (a.tid <> b.tid)
GROUP BY a.tid, b.tid
HAVING
  EVERY(EXISTS(
SELECT 1 FROM togo x WHERE x.tid = b.tid AND x.item = a.item
  ));

... but I'm a bit of an idiot, and couldn't figure out why the
EVERY(EXISTS(subq)) wasn't working when testing it before.

Sorry for all the noise.

--
Craig Ringer



Re: [SQL] Sequential non unique IDs

2008-04-01 Thread Craig Ringer
John Reeve wrote:

> I've already considered:
> 1. I can't lock the table, because there are too many inserts happening
> and it will slow down the app.

In a locking approach you may not need to lock the whole table. You
should only need to lock the entry for the customer being altered, eg:

BEGIN;

SELECT 1 FROM task WHERE customerid = 92 FOR UPDATE;

INSERT INTO task (id, customerid, localid) VALUES (nextval('task_id'),
92, (SELECT MAX(localid) + 1 FROM task WHERE customerid = 92));

If I'm not mistaken, that should ensure that for any given customer ID
there's only one transaction holding the locks on that customer.

It won't prevent SELECTs from reading the customer's records, but you
don't mind that so long as they're not using the customer's records to
determine the least free localid. That'll help reduce the hit on your
app's performance, too. If you do mind that, use SELECT ... FOR SHARE
and you'll wait on the FOR UPDATE lock if one is active (however, it
might prove hard to obtain a FOR UPDATE lock if there are lots of FOR
SHARE operations active).

I *think* that'll work, but you should of course test and investigate
before doing anything as crazy as taking my word for it.

> Anyone know a solution for this? This can't be the first time anyone has
> ever tried to do this. Thanks!

If I'm not mistaken about the similarity, you might want to search the
archives for the thread "Primary Key with serial". It might be informative.

--
Craig Ringer



Re: [SQL] Problem commit in function

2008-04-07 Thread Craig Ringer

[EMAIL PROTECTED] wrote:

Hi,
I need to execute COMMIT in a function pgsql,
there is a way?
  

A function runs in a transaction. It cannot, as far as I know, commit one.

Why do you need to do that, anyway? What're you trying to achieve?

--
Craig Ringer



Re: [SQL] Commit every processed record

2008-04-07 Thread Craig Ringer

[EMAIL PROTECTED] wrote:

Hi,
I have to execute commit for evey record that i processed during a cursor fetch 
in a function.
There is a way to do it?

  
Sure. Do it outside PL/PgSQL, using a database client API like those 
available for Python, Java, Perl, etc.


If you really need to do it in PL/PgSQL, is there any reason you can't 
do all the work in one transaction and commit when the function finishes?


--
Craig Ringer



Re: [SQL] count question

2008-04-08 Thread Craig Ringer
novice wrote:

> Is it possible to write a query to produce:
> 
>  meter_id | no_of_bays |   bay_id
> ----------+------------+---------
>  5397 |  2 |   5397-01
>  5397 |  2 |   5397-02
>  5409 |  3 |   5409-01
>  5409 |  3 |   5409-02
>  5409 |  3 |   5409-03

Sure. One way, not necessarily a particularly clever or efficient way,
is to do a join with:

generate_series(1, (select max(no_of_bays) from meter)) as i

and use a WHERE clause to select for `i <= no_of_bays'


eg:

SELECT
meter_id, no_of_bays, meter_id::text||'-'||i::text AS bay_id
FROM meter,
 generate_series(1, (select max(no_of_bays) from meter)) as i
WHERE i <= no_of_bays;


--
Craig Ringer



Re: [SQL] Testing for null record in plpgsql

2008-04-11 Thread Craig Ringer

Erik Jones wrote:
Now, let's say I want to call this from another function and test the 
result to see if I have a null record (null, null). I've got the 
following working but it feels like there should be something much 
simpler but I just can't seem to hit on it.  Is this it?
I'm assuming that returns_null_maybe() is a dummy to show general 
behavior. I can't imagine why you'd ever want to do what it's doing.


In general I'm suspicious of code that's testing for a real, known value 
and returning NULL in its place. It seems like an odd thing to do. 
Still, I'm sure you have your reasons and they probably make sense in 
the real application rather than the simplified example.


You can tidy test_null_rec a little by just using:

RETURN row(res.*) IS DISTINCT FROM row(null_rec.*);

but otherwise, without incorporating it into the containing query as a 
subquery I don't see much to be done. I'm still curious about the 
purpose of using null values like this is, though.
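
For reference, the null-safe behaviour of IS DISTINCT FROM on row
values looks like this (a quick illustration, not from the original
thread):

```sql
SELECT row(1, NULL) IS DISTINCT FROM row(1, NULL);  -- false: NULLs treated as equal
SELECT row(1, NULL) IS DISTINCT FROM row(1, 2);     -- true
SELECT row(1, NULL) =                row(1, NULL);  -- null: plain = can't decide
```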


--
Craig Ringer



Re: [SQL] How to find double entries

2008-04-15 Thread Craig Ringer
Andreas wrote:
> Hi,
> 
> how can I find double entries in varchar columns where the content is
> not 100% identical because of a spelling error or the person considered
> it "looked nicer" that way?

When doing some near-duplicate elimination as part of converting a
legacy data set to PostgreSQL I found the `fuzzystrmatch' contrib module
immensely helpful.

http://www.postgresql.org/docs/current/static/fuzzystrmatch.html
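
A rough sketch of using the module to surface candidate duplicates (the
`people` table and the distance threshold are invented; on 8.x you load
the module with its contrib SQL script, on 9.1+ with CREATE EXTENSION):

```sql
-- Pairs of rows whose names are within edit distance 2 of each other.
SELECT a.id, b.id, a.name, b.name
  FROM people a
  JOIN people b ON a.id < b.id
 WHERE levenshtein(lower(a.name), lower(b.name)) <= 2;

-- Or group sound-alike names together for manual review:
SELECT soundex(name), count(*)
  FROM people
 GROUP BY soundex(name)
HAVING count(*) > 1;
```

Either way, treat the results as candidates for a human to review, not
as rows to delete automatically.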

--
Craig Ringer



Re: [SQL] How to find double entries

2008-04-16 Thread Craig Ringer
Vivek Khera wrote:
> 
> On Apr 15, 2008, at 11:23 PM, Tom Lane wrote:
>> What's really a duplicate sounds like a judgment call here, so you
>> probably shouldn't even think of automating it completely.
> 
> I did a consulting gig about 10 years ago for a company that made
> software to normalize street addresses and names.  Literally dozens of
> people worked there, and that was their primary software product.  It is
> definitely not a trivial task, as the rules can be extremely complex.

From what little I've personally seen of others' addressing handling,
some (many/most?) people who blindly advocate full normalisation of
addresses either:

(a) only care about a rather restricted set of address types ("ordinary
residential addresses in ", though that can be bad enough);
or
(b) don't know how horrible addressing is yet ... and are going to
find out soon when their highly normalized addressing schema proves
incapable of representing some address they've just been presented with.

with most probably falling into the second category.

Overly strict addressing, without the associated fairly extreme
development effort to get it even vaguely right, seems to lead to users
working around the broken addressing schema by entering bogus data.


Personally I'm content to provide lots of space for user-formatted
addresses, only breaking out separate fields for the post code
(Australian only), the city/suburb, the state, and the country - all
stored as strings. The only DB level validation is a rule preventing the
entry of invalid & undefined postcodes for Australian addresses, and
preventing the entry of invalid Australian states. The app is used
almost entirely with Australian addresses, and there's a definitive, up
to date list of Australian post codes available from the postal
services, so it's worth a little more checking to protect against basic
typos and misunderstandings.
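
As a sketch of what that policy can look like in a schema (illustrative
only — the original post doesn't show its table definitions):

```sql
CREATE TABLE contact_address (
    id        SERIAL PRIMARY KEY,
    addr_text TEXT NOT NULL,  -- free-form, user-formatted address lines
    suburb    TEXT,
    state     TEXT,
    postcode  TEXT,
    country   TEXT NOT NULL DEFAULT 'Australia'
    -- DB-level checks stay limited to Australian rows, e.g. a trigger
    -- that consults a postcode lookup table when country = 'Australia'.
);
```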

The app provides some more help at the UI level for users, such as
automatically filling in the state and suburb if an Australian post code
is entered. It'll warn you if you enter an unknown Australian
suburb/city for an entry in Australia. For everything else I leave it to
the user and to possible later validation and reporting.

I've had good results with this policy when working with other apps that
need to handle addressing information, and I've had some truly horrible
experiences with apps that try to be too strict in their address checking.

--
Craig Ringer



Re: [SQL] using string functions on plpgsql

2008-04-16 Thread Craig Ringer

Marcelo Edgardo Paniagua Lizarraga wrote:


declare
i integer;
begin
 ...

   i =  length(texto)

  ^^^

i := length(texto)

--
Craig Ringer



Re: [SQL] using string functions on plpgsql

2008-04-16 Thread Craig Ringer

Craig Ringer wrote:

Marcelo Edgardo Paniagua Lizarraga wrote:


declare
i integer;
begin
 ...

   i =  length(texto)

  ^^^

i := length(texto)


Whoops, I spoke too soon - it seems both are valid for assignment. Has 
that always been true?


The one time I don't write a small test before posting...

--
Craig Ringer



Re: [SQL] What does mod - in "mod statement" - stand for?

2008-04-16 Thread Craig Ringer
Daniel CAUNE wrote:
> Hi,
> 
> Could you please tell me what does mod - in "mod statement" - stand for?

`mod' is shorthand for "modify" or "modifying", ie statements that
modify the data.

--
Craig Ringer



Re: [SQL] error codes

2008-04-17 Thread Craig Ringer
Marcin Krawczyk wrote:
> Hi guys. Does anyone know the error code for '/currval of sequence * is
> not yet defined in this session/' error ? Is there one at all?

A quick JDBC test program shows:

ERROR: currval of sequence "customer_id_seq" is not yet defined in this
session (SQLState: 55000)

which, looking up the error code in the docs:

http://www.postgresql.org/docs/current/static/errcodes-appendix.html

turns out to be:

55000   OBJECT NOT IN PREREQUISITE STATE

... which makes sense, but I wouldn't call it blindingly and immediately
obvious.



Here's a trivial little utility for running a statement, catching an
error, and reporting the error along with the Pg error code. Sorry for
the horrible formatting (short lines for email).

--- Put in file `JdbcTest.java' ---

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class JdbcTest {

    public static void main(String[] args)
            throws ClassNotFoundException, SQLException {
        // Load the PostgreSQL JDBC driver.
        Class.forName("org.postgresql.Driver");
        Connection c = DriverManager.getConnection(
            "jdbc:postgresql://YOUR_PARAMS_HERE");
        try {
            CallableStatement st =
                c.prepareCall("SELECT currval('customer_id_seq')");
            st.execute();
        } catch (SQLException ex) {
            // Report both the human-readable message and the SQLState code.
            System.out.println(
                "DB error string: " + ex.getMessage()
                + " (SQLState: " + ex.getSQLState() + ")");
        }
    }
}

--- end ---

You'll need the PostgreSQL JDBC driver.

To compile (assuming you don't use a Java IDE of some sort, but have the
JDK installed) run:

   javac -d /out/dir JdbcTest.java

and to run (all on one line):

java -classpath /path/to/postgresql-8.3-603.jdbc4.jar:/out/dir JdbcTest

where /out/dir is wherever you want the generated .class file to be put.

--
Craig Ringer



Re: [SQL] error codes

2008-04-17 Thread Craig Ringer
Craig Ringer wrote:
> Marcin Krawczyk wrote:
>> Hi guys. Does anyone know the error code for '/currval of sequence * is
>> not yet defined in this session/' error ? Is there one at all?
> 
> A quick JDBC test program shows:
> 
> ERROR: currval of sequence "customer_id_seq" is not yet defined in this
> session (SQLState: 55000)

I've just realised I've been doing this a stupid way without thinking
about it. Unsurprisingly it turns out that no Java/JDBC snippets are
required. The much, much easier way to find the error code:

$ psql

... blah blah blah ...

test=# \set VERBOSITY verbose
test=# select currval('test_seq');
ERROR:  55000: currval of sequence "test_seq" is not yet defined in this
session
LOCATION:  currval_oid, sequence.c:644

In retrospect it seems kind of obvious - "surely psql must have a way to
show this information; maybe I should look at the manual".

--
Craig Ringer



Re: [SQL] Problems with function while migrating from Windows to Linux

2008-04-22 Thread Craig Ringer

Jan Peters wrote:
> Hello list,
> after developing my application in a Windows environment I am now
> migrating to the production Linux webserver.

[snip]

> when I do e.g. a "select getgv_neu(5,'company_5',1,0)" I get the error message:
>
> ERROR:  syntax error at or near "$4" at character 81
> QUERY:  SELECT  'SELECT getaktiva(' ||  $1 || ',' ||  $2  || ',' ||  $3  || ')
> ;' INTO  $4
> CONTEXT:  PL/pgSQL function "getgv_neu" line 7 at execute statement
>
> What might be the problem here? Any ideas? I am stuck...

At a guess, you were using PostgreSQL 8.3 under Windows and the
production Linux system has PostgreSQL 8.2 or earlier, which does not
support EXECUTE ... INTO .
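On 8.3 the dynamic call would look something like this (function signature and argument names are my invention, untested; quote_literal() guards the text argument):

```sql
CREATE OR REPLACE FUNCTION getgv_demo(a integer, b text, c integer)
RETURNS numeric AS $$
DECLARE
  result numeric;
BEGIN
  -- EXECUTE ... INTO is only available in PostgreSQL 8.3 and later.
  EXECUTE 'SELECT getaktiva(' || a || ', ' || quote_literal(b)
          || ', ' || c || ')'
     INTO result;
  RETURN result;
END;
$$ LANGUAGE plpgsql;
```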


--
Craig Ringer




Re: [SQL] After updating dataset the record goes to the end of the dataset

2008-04-22 Thread Craig Ringer

Nacef LABIDI wrote:
> Yes I don't issue any sort statement, and I indeed want the data to be
> shown as it is stored in the database.

That's what you're getting. PostgreSQL has an MVCC design. How it works
in general terms (simplified, but I lack the expertise to give a
complete explanation even if one was required) is: when you update a
row, the old row is marked as dead and a new row is inserted. If there
is no spare space in the table near the old row (assuming there's any
reason for the DB to even try to put the new row near the old one), the
new row will be placed elsewhere, such as at the end of the table.

In other words, after an UPDATE the row really is often at the end of
the table.

In any case, as others have explained, you should never rely on the
database's ordering of records; you should always use an ORDER BY if you
care about order. The database makes no guarantees about the order of
returned rows.

The database may optimise row retrieval in ways you do not expect. For
example, in recent versions of PostgreSQL, if there is a sequential scan
in progress on a table and you start another query that also runs a
sequential scan on the same table, PostgreSQL may synchronize the two
scans. That'll cause your query to start part way through the table. If
the table contains alphabetically ordered data you might get something
like:

J
K
L
M
... etc ...
A
B
C

So ... in any database, always use ORDER BY if you care about order.
Just because it usually works in some databases doesn't mean it won't
break just rarely enough to drive you insane while debugging...
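To see why ORDER BY matters, a quick sketch (table and column names invented; the physical position of the updated row depends on free space, so the unordered result is not guaranteed):

```sql
CREATE TABLE demo (id serial PRIMARY KEY, name text);
INSERT INTO demo (name) VALUES ('A'), ('B'), ('C');

-- The old version of the row is now dead; the new version may land
-- at the end of the table.
UPDATE demo SET name = 'A2' WHERE id = 1;

SELECT * FROM demo;             -- order is unspecified
SELECT * FROM demo ORDER BY id; -- order is guaranteed
```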

--
Craig Ringer



Re: [SQL] LEFT OUTER JOIN question

2008-05-04 Thread Craig Ringer
seiliki wrote:
> Hi!
> 
> I expect the SELECT to return two rows. Would some kind 
> soul explain for me why it gives only one row?

Without having read the post in detail I'll make a guess: Because NULL =
NULL results in NULL, not true, and the outer (or any other) join
condition only accepts rows where the join condition is true.

This is a FAQ. It probably needs to go in the PostgreSQL FAQ.

The usual response is: rethink your use of NULL values. NULL really
means "unknown", and if you're comparing for equality you probably don't
really want NULLs to be present. If you absolutely must perform
comparisons where NULL should be considered equal to NULL, use `IS
NOT DISTINCT FROM` ... but as everybody here says, needing that often
suggests design problems in your queries and schema.
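A quick sketch of the null-safe comparison (table and column names invented):

```sql
-- Plain "=" yields NULL for NULL = NULL, so such rows never satisfy
-- the join condition. IS NOT DISTINCT FROM treats two NULLs as equal:
SELECT *
  FROM t1
  LEFT OUTER JOIN t2 ON t1.x IS NOT DISTINCT FROM t2.x;
```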

--
Craig Ringer



