On the rare occasions when I couldn't use an auto_increment, mainly for performance reasons, I've used an id + subid. Usually I've only done this for long-running scripts that process/add a lot of records. In that case I want the script to generate its own ids. To keep them unique, the script checks in and grabs a "worker id", which is really just an id from a "workers" table that has an auto_increment column. That id is the script's id for everything it does. It keeps track of its own subid, incrementing it when needed without having to check with anything else. Each record then has a "worker id" and a "sub id" that together make it unique. Perhaps that would work for you.
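To make that concrete, here's a minimal sketch of the check-in step (table and column names are just illustrative):

CREATE TABLE `workers` (
 `workerid` int unsigned NOT NULL auto_increment,
 `started` datetime NOT NULL,
 PRIMARY KEY (`workerid`)
);

-- The script checks in once at startup...
INSERT INTO workers (started) VALUES (NOW());
-- ...and reads back its own worker id. LAST_INSERT_ID() is
-- per-connection, so concurrently starting scripts can't collide here.
SELECT LAST_INSERT_ID();

From then on the script pairs that workerid with a subid counter it increments in memory, so it never has to touch the database to generate a key.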

Another technique I use occasionally is a compound primary key. Basically you set up a table with an auto_increment field, but your primary key is an id field plus the auto_increment field (majorid + minorid). Note that this per-group auto_increment behavior requires a MyISAM table; InnoDB doesn't support auto_increment on the second column of a composite key.

CREATE TABLE `example` (
 `majorid` int unsigned NOT NULL,
 `minorid` int unsigned NOT NULL auto_increment,
 PRIMARY KEY (`majorid`,`minorid`)
) ENGINE=MyISAM;

The minorid is then incremented separately within each majorid, so you'll actually have minorids that repeat across different majorids.
INSERT INTO example (majorid) VALUES (1),(1),(1),(2),(2);

The result of the insert will be:

majorid  minorid
1        1
1        2
1        3
2        1
2        2

So as an alternative, you could use an auto_increment field in combination with a "workerid" if you wanted MySQL to handle the "unique" id. MySQL would return only the minorid (the auto_increment field) on an insert. Just some ideas for keeping things unique when you have processes running concurrently.
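For instance, reusing the `example` table above with the workerid playing the role of majorid (the value 42 is just illustrative):

-- Each worker inserts using its own workerid; MySQL assigns the
-- next minorid within that workerid's group (MyISAM behavior).
INSERT INTO example (majorid) VALUES (42);
-- Returns only the generated minorid, not the (majorid, minorid) pair.
SELECT LAST_INSERT_ID();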


----- Original Message ----- From: "Burke, Dan" <[EMAIL PROTECTED]>
To: "Dan Buettner" <[EMAIL PROTECTED]>
Cc: <mysql@lists.mysql.com>
Sent: Wednesday, June 28, 2006 9:35 AM
Subject: RE: concurrency problem


I have been using this set of functions to do sequences, especially for
tables where AUTO_INCREMENT can be inconvenient.  Maybe it will help you
over just incrementing the ID by 1 in your code.

I based them on the DBIx::MysqlSequence perl module

http://brazil.addictmud.org/wiki/index.php/MySQL_Sequences

I'd also welcome any criticisms on it if anyone else is bored enough to
check it out.

Dan.


-----Original Message-----
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Tuesday, June 27, 2006 8:55 AM
To: balaraju mandala
Cc: mysql@lists.mysql.com
Subject: Re: concurrency problem

Use a transaction and an InnoDB table.
http://dev.mysql.com/doc/refman/5.0/en/transactional-commands.html

Or, use the LOCK TABLES command:
http://dev.mysql.com/doc/refman/5.0/en/lock-tables.html

An ideal solution (in my mind) is to use a stored procedure to read the
table, increment the current value, update the table, and return the new
UID to the caller - all inside a transaction.  Used to do this with
Sybase all the time; never tried it with MySQL's transactions and stored
procedures, but don't know why it wouldn't work.
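For what it's worth, MySQL's manual also documents a LAST_INSERT_ID(expr) trick that gives an atomic sequence without locking the whole table. A minimal sketch (table and column names are just illustrative):

CREATE TABLE `sequence` (
 `uid` int unsigned NOT NULL
);
INSERT INTO sequence VALUES (0);

-- The UPDATE is atomic, and LAST_INSERT_ID(expr) stashes the new value
-- per connection, so concurrent callers each get a distinct uid.
UPDATE sequence SET uid = LAST_INSERT_ID(uid + 1);
SELECT LAST_INSERT_ID();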

Dan


balaraju mandala wrote:
Hi Community,

I am facing a concurrency problem. The scenario is: I have a table whose
primary key, say 'uid', is a unique number used by my application. Every
time I fire a query, I get max(uid) and read this ResultSet from my Java
application. I increment this uid by 1, and the resulting 'uid' is
inserted into the table. This table is used by more than one thread at
any given time, so if some threads fire the SQL command at the same
time, they get the same uid and the result is a duplicate-values
exception. I cannot use AUTO_INCREMENT. Does anybody have a solution for
this?


--
Dan Buettner

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]




