Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-28 Thread Bruce Momjian

> On Thu, 27 Apr 2000, Bruce Momjian wrote:
> 
> > Are you asking for sub-second timeout values?  If so, we could add that.
> 
> I already have millisecond resolution for deadlock timeouts implemented.
> Is that okay?

Sure, that should work.

-- 
  Bruce Momjian                     |  http://www.op.net/~candle
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Ed Loehr

Joachim Achtzehnter wrote:
> 
> ...It is NOT required that the outcome be
> equivalent to the result that would be observed by running the
> transactions in a particular order, such as in the order they were
> actually started. The outcome is only required to be equivalent to some
> (arbitrary) order.

That was another fundamental misunderstanding of mine about SERIALIZABLE. 
I appreciate the explanation, and have a deeper appreciation for Vadim's
MVCC work!!

Regards,
Ed Loehr



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Ed Loehr

[EMAIL PROTECTED] wrote:
> 
> ...the snapshot is taken from the first DML statement...

That explains it.  I missed that in the docs, and was misled by the
SERIALIZABLE doc sections.

Regards,
Ed Loehr



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Bruce Momjian

> OK, so I'm biased toward how MySQL does it (it's simple and has a good
> chance of working well). Yes, it shifts a lot to the application. But if
> people have to do things like issue their multiple SELECT ... FOR UPDATEs
> in the right order (to prevent deadlocks), they might as well start using
> something like this instead (or fix their architecture if possible ;) ).
> 
> And it's likely to be faster! Anyone else here like this arbitrary lock
> thingy? 
> 
> I'm very interested to know of other ways to achieve good serialisation,
> especially database-centric methods.
> 
> Cheerio,
> 
> Link.
> 
> p.s. Would anyone actually need timeouts of a day (86,400 seconds) or greater?

Are you asking for sub-second timeout values?  If so, we could add that.

-- 
  Bruce Momjian                     |  http://www.op.net/~candle
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026



RE: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Hiroshi Inoue

> -Original Message-
> From: Peter Eisentraut [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, April 27, 2000 4:46 PM
> To: Hiroshi Inoue
>
> On Thu, 27 Apr 2000, Hiroshi Inoue wrote:
>
> > PostgreSQL's SERIALIZABLE isolation level would allow both inserts.
> > READ COMMITTED isolation level wouldn't allow A's insert.
> >
> > As I mentioned in another posting, PostgreSQL's SERIALIZABLE
> > isolation level isn't completely serializable, and it's the same as Oracle's.
> > Probably Vadim refers to this incomplete serializability somewhere
> > in the documentation.
> > It seems almost impossible to implement complete serializability
> > under MVCC without using table-level locking. I love MVCC much
> > more than theoretically beautiful complete serializability.
>
> Given that Postgres correctly recognizes concurrent updates and aborts one
> of the transactions,

Do you mean the following?  When a transaction under the SERIALIZABLE
isolation level is about to update a row which has already been updated by
another transaction, the update is rejected with the message
ERROR: Can't serialize access due to concurrent update.
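
For example (a sketch, assuming a table t that already holds a row with
id = 1):

-- session 1
BEGIN;
UPDATE t SET x = x + 1 WHERE id = 1;

-- session 2
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM t;                       -- snapshot is taken here
UPDATE t SET x = x + 1 WHERE id = 1;   -- blocks on session 1's row lock

-- session 1
COMMIT;

-- session 2's UPDATE now fails with
-- ERROR:  Can't serialize access due to concurrent update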

> how difficult would it be to do the same for inserts?

Should INSERT/UPDATE/SELECT ... FOR UPDATE statements wait until the
rows being inserted are committed or aborted?  That would mean INSERT
operations block all update operations on the same table.

Regards.

Hiroshi Inoue
[EMAIL PROTECTED]




RE: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Peter Eisentraut

On Thu, 27 Apr 2000, Hiroshi Inoue wrote:

> PostgreSQL's SERIALIZABLE isolation level would allow both inserts.
> READ COMMITTED isolation level wouldn't allow A's insert.
>
> As I mentioned in another posting, PostgreSQL's SERIALIZABLE
> isolation level isn't completely serializable, and it's the same as Oracle's.
> Probably Vadim refers to this incomplete serializability somewhere
> in the documentation.
> It seems almost impossible to implement complete serializability
> under MVCC without using table-level locking. I love MVCC much
> more than theoretically beautiful complete serializability.

Given that Postgres correctly recognizes concurrent updates and aborts one
of the transactions, how difficult would it be to do the same for inserts?

-- 
Peter Eisentraut        Sernanders väg 10:115
[EMAIL PROTECTED]       75262 Uppsala
http://yi.org/peter-e/  Sweden




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-27 Thread Joachim Achtzehnter

In a message to Hiroshi Inoue and pgsql-general, Ed Loehr wrote:
> 
> -- Within transaction A --
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> 
> -- Within transaction B --
> BEGIN;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
> INSERT INTO foo (id, msg) 
> SELECT 1, 'From B' 
> WHERE NOT EXISTS (SELECT * FROM foo WHERE id = 1);
> COMMIT;
> 
> -- Within transaction A --
> SELECT * FROM foo;
> ...

In this case, it is actually OK for A to see the committed results of B
because the overall outcome is then equivalent to B occurring entirely
before A.

In general, it is important to remember what SERIALIZABLE means: A
particular concurrent execution of several transactions must have an
observable outcome that is equivalent to running the same transactions one
after the other (serialized). It is NOT required that the outcome be
equivalent to the result that would be observed by running the
transactions in a particular order, such as in the order they were
actually started. The outcome is only required to be equivalent to some
(arbitrary) order.

A concurrency mechanism supports the SERIALIZABLE isolation level if
it guarantees that every concurrent execution of transactions is
serializable.

Joachim

-- 
private:  [EMAIL PROTECTED]  (http://www.kraut.bc.ca)
work:     [EMAIL PROTECTED]  (http://www.mercury.bc.ca)




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-26 Thread Ed Loehr

Hiroshi Inoue wrote:
> 
> You should call SET TRANSACTION immediately after BEGIN.
> Note that SET TRANSACTION ... is a per-transaction command.
> 
> PostgreSQL's SERIALIZABLE isolation level would allow both inserts.
> READ COMMITTED isolation level wouldn't allow A's insert.

Even if I call SET after BEGIN, it is not consistent with the docs or the
standard (see the pghackers discussion under the same subject), as the two
scripts below seem to demonstrate.

[Rex:  There is a subtle difference between your sequence and mine.  Insert
from B first, and don't do any select from A before you attempt the insert,
as below.]

> As I mentioned in another posting, PostgreSQL's SERIALIZABLE
> isolation level isn't completely serializable, and it's the same as Oracle's.
> Probably Vadim refers to this incomplete serializability somewhere
> in the documentation.
> It seems almost impossible to implement complete serializability
> under MVCC without using table-level locking. I love MVCC much
> more than theoretically beautiful complete serializability.

Interesting, thanks.  I certainly don't have any big gripes about PG
concurrency, as it has worked flawlessly for me so far with READ COMMITTED
level.  All this has been strictly in the interest of clarifying a
discrepancy between the docs/std and 7.0.  I could see it mattering to
some, but not for my app.

Regards,
Ed Loehr
===

This sequence, AFAICT, appears to fail the SERIALIZABLE standard, allowing
A to see effects from B that violate serializability...

- START -
-- Within transaction A --
DROP TABLE foo;
CREATE TABLE foo (id INTEGER, msg VARCHAR);
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Within transaction B --
BEGIN;
--SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO foo (id, msg) 
SELECT 1, 'From B' 
WHERE NOT EXISTS (SELECT * FROM foo WHERE id = 1);
COMMIT;
SELECT * FROM foo;

-- Within transaction A --
SELECT * FROM foo;
INSERT INTO foo (id, msg) 
SELECT 1, 'From A' 
WHERE NOT EXISTS (SELECT * FROM foo WHERE id = 1);
SELECT * FROM foo;
COMMIT;
- END -


...while this sequence, which I would've thought to be functionally
identical, is compliant...


- START -
-- Within transaction A 
DROP TABLE foo;
CREATE TABLE foo (id INTEGER, msg VARCHAR);
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Within transaction B 
BEGIN;
INSERT INTO foo (id, msg) 
SELECT 1, 'From B' 
WHERE NOT EXISTS (SELECT * FROM foo WHERE id = 1);

-- Within transaction A 
SELECT * FROM foo;  -- The magic query.

-- Within transaction B 
COMMIT;
SELECT * FROM foo;

-- Within transaction A 
SELECT * FROM foo;
INSERT INTO foo (id, msg) 
SELECT 1, 'From A' 
WHERE NOT EXISTS (SELECT * FROM foo WHERE id = 1);
SELECT * FROM foo;
COMMIT;



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-26 Thread Lincoln Yeoh

At 11:59 AM 26-04-2000 -0500, Ed Loehr wrote:
>Joachim Achtzehnter wrote:
>> 
>> In a message to Ed Loehr and pgsql-general, Lincoln Yeoh wrote:
>> > Then you do a commit on both, and you end up with two rows.
>> 
>> This is disappointing indeed! What this means is that PostgreSQL
>> transactions are, in fact, not truly serializable. The concurrent

Well, it's not that disappointing, as it's not easily avoidable given that
transactions usually consist of multiple consecutive, non-zero-time,
order-sensitive statements. :)

So the database has to figure out how much information to share amongst
partially completed transactions in order to achieve maximum
serialisation/atomicity. 

There will also be transactions which, to be serialised, must actually occur
serially. That is, a whole transaction has to go through before the other
one (or every other one) does, or at least a group of statements within the
transaction does.

Stuff like SELECT ... FOR UPDATE gives the database engine some hints, but
in the case I mentioned it is not enough (unless modified).

Basically the database needs to get a need-to-know summary of the
transaction in a single atomic statement, then it can better decide what to
block. The application is in the best position to give this summary.

So in the end for maximum serialisation it may be a good idea to have an
arbitrary lock mechanism to cover the troublesome cases.

e.g.
getlock('arbitrary string'[,timeout]);
getlock('all temporary product tables, created or about to be')
getlock('all rows potential or real on all tables where key1=X or key2=Y')
If another transaction tries to get the same lock, it will wait for the
timeout and then abort with an error (possible timeout values: -1 = wait
forever, 0 or greater = time to wait in seconds).
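
For instance, the insert-or-update case might then look like this (a
sketch; getlock is the hypothetical call proposed above):

BEGIN;
SELECT getlock('mytable,id=7', 10);   -- wait up to 10 seconds for the lock
SELECT * FROM mytable WHERE id = 7;
-- 0 rows: INSERT INTO mytable ...;  otherwise: UPDATE mytable ...
COMMIT;                               -- lock released with the transaction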

OK, so I'm biased toward how MySQL does it (it's simple and has a good
chance of working well). Yes, it shifts a lot to the application. But if
people have to do things like issue their multiple SELECT ... FOR UPDATEs
in the right order (to prevent deadlocks), they might as well start using
something like this instead (or fix their architecture if possible ;) ).

And it's likely to be faster! Anyone else here like this arbitrary lock
thingy? 

I'm very interested to know of other ways to achieve good serialisation,
especially database-centric methods.

Cheerio,

Link.

p.s. Would anyone actually need timeouts of a day (86,400 seconds) or greater?





Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-25 Thread Joachim Achtzehnter

In a message to Ed Loehr and pgsql-general, Lincoln Yeoh wrote:
>
> Transaction A
> begin;
> insert into test (a,b) select 4,'four' from test
> where not exists (select * from test where a=4);
> 
> Transaction B
> begin;
> insert into test (a,b) select 4,'four' from test
> where not exists (select * from test where a=4);
> 
> Then you do a commit on both, and you end up with two rows.

This is disappointing indeed! What this means is that PostgreSQL
transactions are, in fact, not truly serializable. The concurrent
execution of these two transactions is not equivalent to some serial
execution.

I'll have to read up on the documentation, which I thought was claiming
serializability.

Joachim

-- 
private:  [EMAIL PROTECTED]  (http://www.kraut.bc.ca)
work:     [EMAIL PROTECTED]  (http://www.mercury.bc.ca)




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-25 Thread Joachim Achtzehnter

Today, in a message to pgsql-general, Ross J. Reedstrom wrote:
>
> On Mon, Apr 24, 2000 at 09:23:48PM -0700, Joachim Achtzehnter wrote:
> > 
> > I suspect, though, that to support statement-level aborts
> > more is needed than an overhaul of the error system. Statement-level
> > aborts are essentially a form of nested transaction,...
>
> Hmm, sounds like you're moving the goal posts: now you want checkpoints,
> no?

I didn't say that. I was merely pointing out that supporting statement-level
aborts may be more difficult than it seems. This is not just an error
reporting issue. What it amounts to is a special case of nested
transactions, i.e. one where every statement is an implicit nested
transaction.

Keep in mind that a statement can be very complex, and it can update,
directly or indirectly via triggers, many rows in many tables. To
precisely undo the effect of such a statement may not be trivial, in
general.

[quote from Gray and Reuter omitted]

> Doesn't this quote merely bemoan the fact that nesting of transactions
> is _not_ a general requirement of "even today's SQL systems"? I can't
> follow the example,

The quote essentially said this: Most existing SQL systems don't support
user-controlled nested transactions. The authors find this surprising
because these same systems DO support a special form of nested
transactions, namely statement-level abort. What they are complaining
about is the fact that vendors choose ad hoc approaches to implement the
special case, instead of implementing the general case. The general
mechanism could then be used to achieve statement-level aborts as well as
supporting user-controlled nested transactions.
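
To illustrate, user-controlled nesting in savepoint-style syntax (a sketch
of the general mechanism; PostgreSQL does not support this as of this
thread):

BEGIN;
INSERT INTO t VALUES (1);     -- succeeds
SAVEPOINT s1;                 -- start a user-controlled subtransaction
INSERT INTO t VALUES (1);     -- suppose this violates a unique constraint
ROLLBACK TO s1;               -- undo only the failed subtransaction
COMMIT;                       -- the first insert still commits

A statement-level abort is the same mechanism with an implicit savepoint
wrapped around every individual statement.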

Joachim

-- 
private:  [EMAIL PROTECTED]  (http://www.kraut.bc.ca)
work:     [EMAIL PROTECTED]  (http://www.mercury.bc.ca)





Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-25 Thread Lincoln Yeoh

At 10:58 AM 24-04-2000 -0500, Ed Loehr wrote:
>Good point.  And you can combine the check David suggests with the insert
>statement, e.g.,
>
>   INSERT INTO mytable (id, ...)
>   SELECT 7, ...
>   FROM mytable
>   WHERE NOT EXISTS (SELECT * FROM mytable WHERE id = 7)
>
>And then check the return result for the number of rows inserted.  '0'
>means an update is needed.  I don't remember if there is a cleaner, more
>efficient manner of doing that, but probably so...

At first it looked like it might work, but when I checked, it didn't ensure
uniqueness. And after that I realised the situation was even more
difficult.

e.g.
create table test (a int, b text);
Transaction A
begin;
insert into test (a,b) select 4,'four' from test
where not exists (select * from test where a=4);

Transaction B
begin;
insert into test (a,b) select 4,'four' from test
where not exists (select * from test where a=4);

Then you do a commit on both, and you end up with two rows.

The reason this happens is that rows inserted in one transaction are not
visible to other transactions until a commit. So even David's suggestion
won't work, nor will a SELECT ... FOR UPDATE, since with the current
implementation there is nothing to lock on until the commit.

This is correct behaviour, but it means we have to try something else to
ensure unique rows. If we resort to using UNIQUE then we'll have to redo
the whole transaction up to the error point. If we use the proposed
arbitrary lock, it's a lot more work for the developers, plus it shifts much
more responsibility to the application compared to stuff like SELECT ...
FOR UPDATE. I kind of like the arbitrary lock thingy, but I'm not in a
"large" environment.
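
A minimal sketch of the UNIQUE fallback, using the test table from the
example above:

CREATE UNIQUE INDEX test_a_key ON test (a);

BEGIN;
INSERT INTO test (a, b) VALUES (4, 'four');
-- a duplicate key here raises an error that aborts the whole
-- transaction; the application must ROLLBACK and re-run everything,
-- this time issuing an UPDATE for a = 4 instead
COMMIT;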

For large environments maybe there should be a SELECT ... FOR INSERT? Or
should the behaviour of SELECT ... FOR UPDATE change?

That is to say, the statement would block if there were any prior SELECT
... FOR INSERT with a potentially interfering WHERE clause (ouch!), even if
there are NO relevant rows to return (ouch!). It would not even begin to
fetch rows until the other transactions had completed.

This doesn't look easy to do. But it's one of the ways to maintain
PostgreSQL's "no error transaction" behaviour.

To maintain this behaviour there would need to be a few more ways to check
for things (and lock the state) before attempting to change or create them
-- things like CREATE TABLE and so on.

Aside: would an arbitrary lock mechanism be better for performance than
using a UNIQUE index on a table? In most cases you're inserting into an
indexed table anyway, so having a UNIQUE index isn't going to cost much
more, right?

Cheerio,
Link.




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-25 Thread Ross J. Reedstrom

On Mon, Apr 24, 2000 at 09:23:48PM -0700, Joachim Achtzehnter wrote:
> 
> This is good to hear. I suspect, though, that to support statement-level
> aborts more is needed than an overhaul of the error system. Statement-level
> aborts are essentially a form of nested transaction, i.e. the
> ability to roll back a precisely defined subset of the top-level
> transaction.

Hmm, sounds like you're moving the goal posts: now you want checkpoints,
no? The problem under discussion is allowing the transaction to proceed
when an error occurs that both the programmer and the DBMS know leaves
the system in a consistent state: a simple SQL parse error, for example,
or an attempt to violate a 'unique' constraint on an INSERT. Peter
Eisentraut already offered a very high-level, potentially dangerous one-line
patch to deal with the 'table not found' case. Digging into the
code a little, I think this will also handle the SQL parse error case
correctly, but may have a problem with the constraint case: I'm recompiling
code right now to test it. ;-) (Later) Hmm, not good: the bad tuple ends
up going live, not immediately, but after the next select. I think I'd
better find that old message and follow up with this info, so it's in the
mailing list archives, at least.

> 
> > being able to play the 'technically SQL92 compliant' card, without
> > having to cross your fingers behind your back, is very important.
> 
> But you are essentially doing that, crossing your fingers I mean, by
> relying on a shaky interpretation solely to be able to claim compliance,
> while you know full well that the alternative interpretation is the better
> one.

Right, but often one is in a situation where technical requirements are
being enforced by bureaucracies for non-technical reasons: if I _know_
that the product is good enough for the task at hand, and there exists no
mechanism to get an exception to the requirement, 'technical compliance'
serves my purposes. Basically, if management is going to play games, I'm
going to outplay them, if I have to.

> 
> > Heck, I'd be wrestling with Oracle right now,
> 
> Well, since you mention Oracle: A few years ago I had a similar argument
> with them about their peculiar interpretation of what SERIALIZABLE in
> SQL92 means. The early versions of Oracle's multi-version concurrency
> mechanism didn't provide truly serializable transactions, yet they
> claimed it did by resorting to rather twisted interpretations of the
> standard.
> 
> I didn't think this was acceptable then, and I certainly don't condone
> similar window dressing today. PostgreSQL doesn't need this kind of PR.
> 

The difference between the Oracle case and the PostgreSQL case is
that the corporation tried to give you that twisting of the standard
as the _only_ answer. What I'm saying is: well, technically, this _is_
compliant, but yes, it's not the right way to do it, and we're working
on it, and by the way, how do you want it to work, and here's the code;
if we can't get to it fast enough, maybe you can?

I'm just asking you not to make PostgreSQL jump a higher _political_
bar than its competitors. Hmm, I find it odd to be on the 'practical,
business' side of one of these arguments: I'm usually the one pushing
the 'theoretical, academic' side.

One thing I should mention: PostgreSQL's overall transaction semantics
_are not_ SQL92 standard compliant, in that the client must explicitly
start the first transaction. The 'autocommit' mode is a non-SQL92
extension, which is also commonly available in other systems. Having it
be the default is non-standard, however.
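
For instance (a sketch):

-- PostgreSQL default (autocommit): each statement is its own transaction
UPDATE t SET x = 1;   -- committed immediately

-- SQL92 semantics: a transaction is implicitly in progress; nothing is
-- durable until an explicit COMMIT
UPDATE t SET x = 1;
COMMIT;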

> 
> Let me get back to my contention that statement-level aborts, or atomicity
> on the level of SQL statements, are considered an obvious requirement in
> the database community. Because everybody assumes this to be the case, it
> is hard to find it explicitly mentioned. Let me quote from Gray and
> Reuter's "Transaction Processing", considered the bible on the subject by
> some:
> 
>   Even current database systems ... do not rely on nested transactions for
>   their own implementation. This is quite surprising, because nesting the
>   scope of commitment and backout is commonplace even in today's SQL
>   systems (although the user cannot influence it). This is particularly
>   true for SQL update statements that are executed as the subtransactions.
>   Think of this transaction as the root transaction and the SQL statements
>   as the subtransactions. Obviously, an SQL update statement commits its
>   modifications to the top-level transaction. If it fails (assume an
>   INSERT being rejected because a unique value condition is violated), it
>   is implicitly undone and so appears to be atomic even in the case of a
>   failure. In other words, update SQL statements have all the properties
>   of subtransactions, but since the implementation techniques are
>   typically ad hoc rather than guided by the idea of transaction nesting,
>   the general mechanism is not available for the app

Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-25 Thread Lincoln Yeoh

At 10:17 PM 24-04-2000 -0400, Mike Mascari wrote:
>Lincoln Yeoh wrote:
>PostgreSQL implements SELECT...FOR UPDATE to allow for the
>sequence you'be described:
>
>Session 1:
>
>BEGIN;
>SELECT webuser FROM webusers WHERE webuser = 'webuser1';
>
>Session 2:
>
>BEGIN;
>UPDATE webusers SET webuser = 'webuser2' WHERE webuser =
>'webuser1';*

The 6.5.3 running on my site does not block for the case I described, which
has an INSERT after the SELECT.

begin;
select * from mytable where field=value for update;
if rows=0 {
    insert into mytable (field,field2) values (value,value2);
} else {
    update mytable set field=value, field2=value2 where field=value;
}
commit;

Tested on 6.5.3.
Did a SELECT ... FOR UPDATE in two concurrent transactions; they did not
block, and both inserts went through (if there were a UNIQUE index, one of
the transactions would have had to be rolled back and redone from scratch).

If the behaviour is different for 7.0, I'm interested!

Cheerio,

Link.




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-24 Thread Joachim Achtzehnter

Today, in a message to pgsql-general, David Boerwinkle wrote:
> 
> it seems like this is something that ought to be handled
> programmatically.  That is, query the table to see if the row exists,
> then decide what you are going to do (insert or update) based on the
> results of your query.

It certainly 'can' be handled the way you describe, but to say that it
'ought' to be handled this way is going too far. It is common practice in
database programming to simply try the most likely case and fall back to
alternatives when an error is encountered. For example, if one expects 99%
of inserts to be unique, one may simply try the insert and, when this fails
because of a duplicate-key error, update instead. This is slightly
more efficient than doing the extra query in 100% of cases.
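
In pseudo-SQL (a sketch; how the duplicate-key error is trapped depends on
the client interface):

INSERT INTO mytable (id, val) VALUES (7, 'x');
-- on a duplicate-key error from the unique index, fall back to:
UPDATE mytable SET val = 'x' WHERE id = 7;

(Under PostgreSQL's current error handling, of course, the failed INSERT
aborts the whole transaction, which is the problem this thread is
wrestling with.)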

In any case, if I'm not mistaken the SQL standard permits an automatic
rollback only for deadlock errors and equivalent types of errors where the
rollback may be required to resolve a lockup situation.

Joachim

-- 
private:  [EMAIL PROTECTED]  (http://www.kraut.bc.ca)
work:     [EMAIL PROTECTED]  (http://www.mercury.bc.ca)




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-24 Thread Haroldo Stenger

Ed Loehr wrote:
> 
> Lincoln Yeoh wrote:
> >
> > Hi,
> >
> > Previously I wanted to ensure that I am inserting something unique into a
> > table; the answer was to create a unique index on the relevant columns.
> >
> > But what if I don't want to get an error which would force a rollback? Say
> > I want to insert something if it doesn't already exist, but update it if it
> > does.

I think you could SELECT from the table using the key value before trying
to insert anything; if it returns 0 rows, then you insert, else you update.
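
Something like this, as a sketch (table and column names illustrative):

SELECT id FROM mytable WHERE id = 7;
-- 0 rows returned: INSERT INTO mytable ...;  otherwise: UPDATE mytable ...
-- note that another transaction can still insert id = 7 between the
-- SELECT and the INSERT, so by itself this doesn't guarantee uniqueness
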
> 
> I think the best answer to this is to correct the non-standard error
> handling to abort only the current statement and not the entire
> transaction.  IIRC, Peter Eisentraut recently posted a one-line patch to
> facilitate this, though I don't know how well it's working for those who
> tried it.  I have not seen anything that indicated that the core
> developers were ready to adopt this, though recent discussions appeared
> to be heading that way.
> 

I tested the mentioned patch. It worked fine as far as I could tell. I
agree with you that this is the way to go, including what Bruce
suggested of using a SET statement to select the behaviour ...

> Regards,
> Ed Loehr
> 
> >
> > Do I have to lock the whole table?
> >
> > Would it be a good idea to be able to request a lock on an arbitrary string
> > like in MySQL? Then I could perhaps do something like
> >
> > LOCK HANDLE('max255charstring',TimeoutInSeconds)
> > e.g.
> > LOCK HANDLE('mytable,field1=x,field2=y',10)
> >
> > Then I could control access to a row that may not even exist, or do other
> > snazzy transaction stuff.
> >
> > Cheerio,
> > Link.



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-24 Thread Ed Loehr

[EMAIL PROTECTED] wrote:
> 
> Hi Lincoln,
> 
> I'm not sure I'm understanding your question, but it seems like this is
> something that ought to be handled programmatically.  That is, query the
> table to see if the row exists, then decide what you are going to do
> (insert or update) based on the results of your query.

Good point.  And you can combine the check David suggests with the insert
statement, e.g.,

INSERT INTO mytable (id, ...)
SELECT 7, ...
FROM mytable
WHERE NOT EXISTS (SELECT * FROM mytable WHERE id = 7)

And then check the return result for the number of rows inserted.  '0'
means an update is needed.  I don't remember if there is a cleaner, more
efficient manner of doing that, but probably so...
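
For instance (a sketch; dropping the FROM clause makes the statement
insert at most one row no matter how many rows mytable already holds):

INSERT INTO mytable (id, msg)
SELECT 7, 'hello'
WHERE NOT EXISTS (SELECT * FROM mytable WHERE id = 7);
-- a command status of "INSERT 0 0" means the row already existed, so:
UPDATE mytable SET msg = 'hello' WHERE id = 7;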

Regards,
Ed Loehr

> David Boerwinkle
> 
> -Original Message-
> From: Lincoln Yeoh <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
> Date: Monday, April 24, 2000 1:13 AM
> Subject: [GENERAL] Revisited: Transactions, insert unique.
> 
> >Hi,
> >
> >Previously I wanted to ensure that I am inserting something unique into a
> >table; the answer was to create a unique index on the relevant columns.
> >
> >But what if I don't want to get an error which would force a rollback? Say
> >I want to insert something if it doesn't already exist, but update it if it
> >does.
> >
> >Do I have to lock the whole table?
> >
> >Would it be a good idea to be able to request a lock on an arbitrary string
> >like in MySQL? Then I could perhaps do something like
> >
> >LOCK HANDLE('max255charstring',TimeoutInSeconds)
> >e.g.
> >LOCK HANDLE('mytable,field1=x,field2=y',10)
> >
> >Then I could control access to a row that may not even exist, or do other
> >snazzy transaction stuff.
> >
> >Cheerio,
> >Link.
> >



Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-24 Thread davidb

Hi Lincoln,

I'm not sure I'm understanding your question, but it seems like this is
something that ought to be handled programmatically.  That is, query the
table to see if the row exists, then decide what you are going to do
(insert or update) based on the results of your query.

Am I completely missing the point?

David Boerwinkle

-Original Message-
From: Lincoln Yeoh <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Monday, April 24, 2000 1:13 AM
Subject: [GENERAL] Revisited: Transactions, insert unique.


>Hi,
>
>Previously I wanted to ensure that I am inserting something unique into a
>table; the answer was to create a unique index on the relevant columns.
>
>But what if I don't want to get an error which would force a rollback? Say
>I want to insert something if it doesn't already exist, but update it if it
>does.
>
>Do I have to lock the whole table?
>
>Would it be a good idea to be able to request a lock on an arbitrary string
>like in MySQL? Then I could perhaps do something like
>
>LOCK HANDLE('max255charstring',TimeoutInSeconds)
>e.g.
>LOCK HANDLE('mytable,field1=x,field2=y',10)
>
>Then I could control access to a row that may not even exist, or do other
>snazzy transaction stuff.
>
>Cheerio,
>Link.
>




Re: [GENERAL] Revisited: Transactions, insert unique.

2000-04-24 Thread Ed Loehr

Lincoln Yeoh wrote:
> 
> Hi,
> 
> Previously I wanted to ensure that I am inserting something unique into a
> table; the answer was to create a unique index on the relevant columns.
> 
> But what if I don't want to get an error which would force a rollback? Say
> I want to insert something if it doesn't already exist, but update it if it
> does.

I think the best answer to this is to correct the non-standard error
handling to abort only the current statement and not the entire
transaction.  IIRC, Peter Eisentraut recently posted a one-line patch to
facilitate this, though I don't know how well it's working for those who
tried it.  I have not seen anything that indicated that the core
developers were ready to adopt this, though recent discussions appeared
to be heading that way.

Regards,
Ed Loehr

> 
> Do I have to lock the whole table?
> 
> Would it be a good idea to be able to request a lock on an arbitrary string
> like in MySQL? Then I could perhaps do something like
> 
> LOCK HANDLE('max255charstring',TimeoutInSeconds)
> e.g.
> LOCK HANDLE('mytable,field1=x,field2=y',10)
> 
> Then I could control access to a row that may not even exist, or do other
> snazzy transaction stuff.
> 
> Cheerio,
> Link.