Re: [HACKERS] Hard problem with concurrency

2003-02-24 Thread Manfred Koizar
On Mon, 17 Feb 2003 09:51:54 +0800, "Christopher Kings-Lynne" <[EMAIL PROTECTED]> wrote:
>Strategy three:
>
>begin;
>lock table in exclusive mode;
>update row;
>if (no rows affected) insert row;
>commit;
>
>Problem - Works, but this table needs high concurrency.

Chris, distributing congestion mig
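
Spelled out against a hypothetical settings table with a unique key on name (the names here are illustrative, not from the original post), the locked variant would look roughly like this; the "no rows affected" branch is checked by the client from the UPDATE's row count:

    BEGIN;
    LOCK TABLE settings IN EXCLUSIVE MODE;
    UPDATE settings SET value = 'bar' WHERE name = 'foo';
    -- only if the UPDATE reported 0 rows affected does the client also run:
    INSERT INTO settings (name, value) VALUES ('foo', 'bar');
    COMMIT;

The EXCLUSIVE lock serializes all writers on the whole table, which is exactly the concurrency cost being objected to here.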

Re: [HACKERS] Hard problem with concurrency

2003-02-20 Thread CoL
Hi,

Vincent van Leeuwen wrote, On 2/19/2003 10:08 PM:
On 2003-02-18 20:02:29 +0100, Peter Eisentraut wrote:
Christopher Kings-Lynne writes:
> REPLACE INTO anyone? ;)
The upcoming SQL 200x standard includes a MERGE command that appears to fulfill that purpose.
MySQL features a poor-man's apro

Re: [HACKERS] Hard problem with concurrency

2003-02-19 Thread Vincent van Leeuwen
On 2003-02-18 20:02:29 +0100, Peter Eisentraut wrote:
> Christopher Kings-Lynne writes:
>
> > REPLACE INTO anyone? ;)
>
> The upcoming SQL 200x standard includes a MERGE command that appears to
> fulfill that purpose.
>
MySQL features a poor-man's approach to this problem, their REPLACE command:
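
The quoted message is cut off above; as an illustration (not the original's text), MySQL's REPLACE mirrors INSERT syntax, shown here against a hypothetical settings table with a unique key on name:

    REPLACE INTO settings (name, value) VALUES ('foo', 'bar');

Note that MySQL's REPLACE deletes any row that conflicts on a unique key and inserts a fresh one, rather than updating the existing row in place.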

Re: [HACKERS] Hard problem with concurrency

2003-02-19 Thread Peter Eisentraut
Christopher Kings-Lynne writes:
> *sigh* It's just like a standard to come up with a totally new syntax for a
> feature that no-one has except MySQL who use a different syntax :)

Actually that command was copied straight out of Oracle.

--
Peter Eisentraut [EMAIL PROTECTED]

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Ron Mayer
Christopher Kings-Lynne wrote:
>
>*sigh* It's just like a standard to come up with a totally new syntax for a
>feature that no-one has except MySQL who use a different syntax :)

You sure? :)

http://otn.oracle.com/products/oracle9i/daily/Aug24.html

MERGE INTO SALES_FACT D USING SALES_JUL
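
The Oracle example from that page is cut off above; as a rough illustration of the general shape of MERGE, using a hypothetical settings table rather than the SALES_FACT example:

    MERGE INTO settings s
    USING (SELECT 'foo' AS name, 'bar' AS value FROM dual) src
    ON (s.name = src.name)
    WHEN MATCHED THEN
      UPDATE SET s.value = src.value
    WHEN NOT MATCHED THEN
      INSERT (name, value) VALUES (src.name, src.value);

The WHEN MATCHED branch updates the existing row and the WHEN NOT MATCHED branch inserts it, which is the insert-or-update behaviour being discussed in this thread.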

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Bruce Momjian
URL added to developers FAQ.

---

Peter Eisentraut wrote:
> Hannu Krosing writes:
>
> > Where is this upcoming standard available on the net?
>
> Near ftp://sqlstandards.org/SC32/WG3/Progression_Documents/FCD
>
> --
> Peter

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Christopher Kings-Lynne
> > REPLACE INTO anyone? ;)
>
> The upcoming SQL 200x standard includes a MERGE command that appears to
> fulfill that purpose.

Is there somewhere that I can read that spec? Or can you just post the MERGE syntax for us?

*sigh* It's just like a standard to come up with a totally new syntax for a

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Peter Eisentraut
Hannu Krosing writes:
> Where is this upcoming standard available on the net?

Near ftp://sqlstandards.org/SC32/WG3/Progression_Documents/FCD

--
Peter Eisentraut [EMAIL PROTECTED]

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Hannu Krosing
Peter Eisentraut wrote on Tue, 18.02.2003 at 21:02:
> Christopher Kings-Lynne writes:
>
> > REPLACE INTO anyone? ;)
>
> The upcoming SQL 200x standard includes a MERGE command that appears to
> fulfill that purpose.

Where is this upcoming standard available on the net?

Hannu

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Ron Mayer
e; Hackers
Subject: Re: [HACKERS] Hard problem with concurrency

Christopher Kings-Lynne writes:
> REPLACE INTO anyone? ;)

The upcoming SQL 200x standard includes a MERGE command that appears to
fulfill that purpose.

--
Peter Eisentraut [EMAIL PROTECTED]

Re: [HACKERS] Hard problem with concurrency

2003-02-18 Thread Peter Eisentraut
Christopher Kings-Lynne writes:
> REPLACE INTO anyone? ;)

The upcoming SQL 200x standard includes a MERGE command that appears to
fulfill that purpose.

--
Peter Eisentraut [EMAIL PROTECTED]

Re: [HACKERS] Hard problem with concurrency

2003-02-17 Thread Bruce Momjian
Christopher Kings-Lynne wrote:
> > If you don't have a primary key already, create a unique index on the
> > combination you want to be unique. Then:
> >
> > . Try to insert the record
> > . If you get a duplicate key error
> >   then do update instead
> >
> > No possibilities of duplicate records

Re: [HACKERS] Hard problem with concurrency

2003-02-17 Thread Bruno Wolff III
On Sun, Feb 16, 2003 at 23:51:49 -0500, Greg Stark <[EMAIL PROTECTED]> wrote:
>
> Hm, odd, nobody mentioned this solution:
>
> If you don't have a primary key already, create a unique index on the
> combination you want to be unique. Then:
>
> . Try to insert the record
> . If you get a duplic

Re: [HACKERS] Hard problem with concurrency

2003-02-16 Thread Christopher Kings-Lynne
> If you don't have a primary key already, create a unique index on the
> combination you want to be unique. Then:
>
> . Try to insert the record
> . If you get a duplicate key error
>   then do update instead
>
> No possibilities of duplicate records due to race conditions. If two people
> try to

Re: [HACKERS] Hard problem with concurrency

2003-02-16 Thread Greg Stark
Hm, odd, nobody mentioned this solution:

If you don't have a primary key already, create a unique index on the
combination you want to be unique. Then:

. Try to insert the record
. If you get a duplicate key error
  then do update instead

No possibilities of duplicate records due to race condit
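
As a sketch with hypothetical table and column names, and bearing in mind the objection raised elsewhere in the thread that a failed INSERT aborts the whole PostgreSQL transaction, so the fallback UPDATE has to run in a fresh transaction driven by the client:

    CREATE UNIQUE INDEX settings_name_key ON settings (name);

    -- first attempt: just insert
    BEGIN;
    INSERT INTO settings (name, value) VALUES ('foo', 'bar');
    COMMIT;

    -- if the INSERT failed with a duplicate-key error, the client
    -- rolls back and updates the existing row in a new transaction
    BEGIN;
    UPDATE settings SET value = 'bar' WHERE name = 'foo';
    COMMIT;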

Re: [HACKERS] Hard problem with concurrency

2003-02-16 Thread Christopher Kings-Lynne
> Do the update, then try to insert if the update found nothing, and put
> a retry loop around the whole transaction in case you fail because of
> concurrent inserts.
>
> Realistically you will need a retry loop in all but the most trivial
> cases anyway --- certainly so if you want to use serializ
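
Sketched against a hypothetical settings table (all names illustrative, not from the thread), the update-first ordering quoted above might look like this; the row-count check and the retry loop live in the client application:

    BEGIN;
    UPDATE settings SET value = 'bar' WHERE name = 'foo';
    -- only if the UPDATE reported 0 rows affected does the client also run:
    INSERT INTO settings (name, value) VALUES ('foo', 'bar');
    COMMIT;
    -- if the INSERT (or COMMIT) fails because a concurrent transaction
    -- inserted the same key first, the client retries the whole
    -- transaction from the top; the retry's UPDATE will then succeed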

Re: [HACKERS] Hard problem with concurrency

2003-02-16 Thread Tom Lane
"Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes: > I can't try the insert and then the update because the INSERT, in Postgres, > will cause an outright transaction failure. Do the update, then try to insert if the update found nothing, and put a retry loop around the whole transaction in case

Re: [HACKERS] Hard problem with concurrency

2003-02-16 Thread greg
Solution one: use sequences for the unique value.

Solution two: use another table to effect the exclusive locking and use it to store the "unique" values:

begin;
update row;
if (no rows affected) {
  lock table foo in exclusive mode;
  find a uniq
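
The snippet is cut off above, but the idea appears to be to take the heavy lock only on a small side table, and only on the rare miss path. A speculative sketch, all names hypothetical:

    BEGIN;
    UPDATE settings SET value = 'bar' WHERE name = 'foo';
    -- only if the UPDATE reported 0 rows affected does the client continue:
    LOCK TABLE settings_lock IN EXCLUSIVE MODE;
    -- re-check under the lock before inserting, in case another
    -- session created the row in the meantime
    INSERT INTO settings (name, value)
    SELECT 'foo', 'bar'
    WHERE NOT EXISTS (SELECT 1 FROM settings WHERE name = 'foo');
    COMMIT;

This only works if every writer that might insert the row goes through the same lock, but it keeps the common update path free of table-level locking.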

[HACKERS] Hard problem with concurrency

2003-02-16 Thread Christopher Kings-Lynne
OK, this is the problem:

I want to write a bit of SQL so that if a row exists in a table, then update it, otherwise insert it. Problem is, there is a very high chance of simultaneous execution of this query on the same row (the rows have a unique index).

So, strategy one:

begin;
update row;
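
For concreteness, the kind of statement sequence being discussed, using a hypothetical settings table as a stand-in (the real schema isn't shown in the thread); without further protection, two sessions can both see the UPDATE touch zero rows and both attempt the INSERT, and the unique index turns the losing attempt into an error:

    -- hypothetical stand-in for the real table
    CREATE TABLE settings (
        name  text NOT NULL UNIQUE,
        value text
    );

    -- the naive upsert: fine single-user, races under concurrency
    BEGIN;
    UPDATE settings SET value = 'bar' WHERE name = 'foo';
    -- if 0 rows were affected, the client issues:
    INSERT INTO settings (name, value) VALUES ('foo', 'bar');
    COMMIT;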