Re: Unique IDs

2005-01-07 Thread Joshua J. Kugler
When I saw this message a few weeks ago, I *knew* MySQL had something for 
this, but I couldn't remember where I saw it, and I couldn't find it.  Today 
I found it.

Take a look at: http://dev.mysql.com/doc/mysql/en/Miscellaneous_functions.html
and scroll down to UUID()

Returns a Universal Unique Identifier (UUID) generated according to "DCE 1.1: 
Remote Procedure Call" (Appendix A) CAE (Common Applications Environment) 
Specifications published by The Open Group in October 1997 (Document Number 
C706). A UUID is designed as a number that is globally unique in space and 
time. Two calls to UUID() are expected to generate two different values, even 
if these calls are performed on two separate computers that are not connected 
to each other.

This was added in MySQL 4.1.2.  You didn't mention the version you were using, 
but 4.1 is production now.
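
If the client generates the IDs instead of the server, the same kind of 
time-based identifier is available in most languages. A quick sketch in 
Python (uuid1() combines a timestamp with the host's node address, much like 
the DCE-style UUIDs the manual describes; this is an illustration, not 
MySQL's own implementation):

```python
import uuid

# Two time-based UUIDs generated back to back; they are expected
# to differ even on the same machine, and across machines too.
a = uuid.uuid1()
b = uuid.uuid1()

print(a)           # a 36-character id such as 6ccd780c-baba-11e2-...
print(a != b)      # distinct values on every call
```

Rows keyed this way can be merged from the offline and online databases 
without any ID collisions to fix up afterwards.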

Hope that helps!!

j- k-

On Monday 20 December 2004 05:33, Andrew Mull said something like:
 I'm working on a rather large database with many cross-linked tables
 currently using auto increment IDs.  The system is primarily a web based
 system, however, there will be times that the system will be run as a stand
 alone server...meaning no internet connection is available.

 The question arises that if someone enters information to the database on
 the website, while others are entering information on the local database,
 what is the best way to merge the data?  I would imagine that we would run
 into many duplicate auto increment IDs.

 I'm sure that for one table insert, this would not be a problem as I could
 store the SQL statement in a text file without the ID specified, and run it
 as a batch process on the live server when we get connectivity.  But I
 don't have a handle on how to update the sub tables that have a FK pointer.

 Any ideas?

 Thanks!
 -Andy

-- 
Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and under 
the earth, that Jesus Christ is LORD -- Count on it!

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unique IDs

2004-12-22 Thread Philippe Poelvoorde
With an implicit join:

INSERT INTO table1 (id, list of other columns)
  SELECT m.newID, list of other columns
  FROM table2, tmpIDMAP m
  WHERE m.id = table2.id;

UPDATE table3, tmpIDMAP m
  SET table3.parentID = m.newID
  WHERE m.id = table3.parentID;
Andrew Mull wrote:
Thank you.  I'll have to review the SQL carefully as I haven't used inner 
joins in quite some time :)






--
Philippe Poelvoorde
COS Trading Ltd.
+44.(0)20.7376.2401


Re: Unique IDs

2004-12-21 Thread Philippe Poelvoorde
Andrew Mull wrote:
I'm working on a rather large database with many cross-linked tables 
currently using auto increment IDs.  The system is primarily a web based 
system, however, there will be times that the system will be run as a stand 
alone server...meaning no internet connection is available.  

The question arises that if someone enters information to the database on the 
website, while others are entering information on the local database, what is 
the best way to merge the data?  I would imagine that we would run into many 
duplicate auto increment IDs.  

I'm sure that for one table insert, this would not be a problem as I could 
store the SQL statement in a text file without the ID specified, and run it as 
a batch process on the live server when we get connectivity.  But I don't have 
a handle on how to update the sub tables that have a FK pointer.

Any ideas?   

Thanks!
-Andy 

I understand your question as follows: you want to merge two tables coming 
from different databases into a single table. If that's not what you want, 
sorry :)

I would do this:

LOCK TABLES table1 WRITE, table2 WRITE;
SELECT @max_id := MAX(id) FROM table1;
UPDATE table2 SET id = id + @max_id;
INSERT INTO table1 (list of columns) SELECT [list of columns] FROM table2;
UNLOCK TABLES;

and then update your FKs with a similar UPDATE query.
Keep in mind that I didn't try this with InnoDB tables... (but it works fine 
for MyISAM)
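
The offset trick above can be sketched end to end in Python with SQLite 
standing in for MySQL (table and column names are made up for illustration; 
the @max_id user variable becomes an ordinary computed offset):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE table2 (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO table1 VALUES (?, ?)", [(1, "a"), (2, "b")])
cur.executemany("INSERT INTO table2 VALUES (?, ?)", [(1, "c"), (2, "d")])

# Shift every ID in table2 past the highest ID already in table1,
# then copy the rows across -- no duplicate keys are possible.
(max_id,) = cur.execute("SELECT MAX(id) FROM table1").fetchone()
cur.execute("UPDATE table2 SET id = id + ?", (max_id,))
cur.execute("INSERT INTO table1 SELECT id, val FROM table2")

print(cur.execute("SELECT id FROM table1 ORDER BY id").fetchall())
# -> [(1,), (2,), (3,), (4,)]
```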

--
Philippe Poelvoorde
COS Trading Ltd.


Re: Unique IDs

2004-12-21 Thread Andrew Mull
You are correct in that I need to merge two tables.  However, I'm not clear on 
how to update the FKs in the sub tables.  

Example, the main table (Person) has a sub table (Address).  Person has a 1:N 
relationship with Address.  So Address has the Person's ID as a FK.  How do I 
update the FK in Address with the new ID assigned to Person that was created 
with the merge?  

Thanks for the help!


Quoting Philippe Poelvoorde [EMAIL PROTECTED]:

 Andrew Mull wrote:

  I'm working on a rather large database with many cross-linked tables
  currently using auto increment IDs.  The system is primarily a web based
  system, however, there will be times that the system will be run as a
  stand alone server...meaning no internet connection is available.

  The question arises that if someone enters information to the database on
  the website, while others are entering information on the local database,
  what is the best way to merge the data?  I would imagine that we would run
  into many duplicate auto increment IDs.

  I'm sure that for one table insert, this would not be a problem as I could
  store the SQL statement in a text file without the ID specified, and run
  it as a batch process on the live server when we get connectivity.  But I
  don't have a handle on how to update the sub tables that have a FK pointer.

  Any ideas?

  Thanks!
  -Andy

 I understand your question as the following: you want to merge 2 tables
 coming from different databases in a single table. If this is not what
 you want, sorry :)
 I would do that:
 LOCK TABLES table1 WRITE, table2 WRITE;
 SELECT @max_id := MAX(id) FROM table1;
 UPDATE table2 SET id = id + @max_id;
 INSERT INTO table1 (list of columns) SELECT [list of columns] FROM table2;
 UNLOCK TABLES;

 and then updating your FK within the update query.
 Keep in mind that I didn't try with InnoDB tables... (but works fine for
 MyISAM)

 --
 Philippe Poelvoorde
 COS Trading Ltd.
 
 





Re: Unique IDs

2004-12-21 Thread SGreen
Create a temporary table that will act as a map between your old IDs and 
your new ones. That way you can re-use those values and change them from 
the old values to the new values. 

LOCK TABLES table1 WRITE, table2 WRITE, table3 WRITE;

SELECT @max_id := MAX(id) FROM table1;

CREATE TEMPORARY TABLE tmpIDMAP
SELECT id, id + @max_id AS newID
FROM table2;

INSERT INTO table1 (id, list of other columns)
SELECT m.newID, list of other columns FROM table2
INNER JOIN tmpIDMAP m
ON m.id = table2.id;

UPDATE table3
INNER JOIN tmpIDMAP m
ON m.id = table3.parentID
SET table3.parentID = m.newID;

(repeat for other child tables)

UNLOCK TABLES;

I don't have time to give a better explanation right now but if you write 
the list back, I can fill in the details later.
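
The mapping-table approach can be sketched in Python with SQLite standing in 
for MySQL (person1/person2/address2 are hypothetical names for the live 
table, the offline table, and its child table). The map keeps every old-ID / 
new-ID pair so the child table's foreign keys can be rewritten through it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE person1 (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE person2 (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE address2 (person_id INTEGER, city TEXT)")
cur.executemany("INSERT INTO person1 VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
cur.executemany("INSERT INTO person2 VALUES (?, ?)", [(1, "Cal"), (2, "Dee")])
cur.executemany("INSERT INTO address2 VALUES (?, ?)", [(1, "Oslo"), (2, "Kyiv")])

# 1. Build the old-id -> new-id map, offset past person1's highest id.
#    (max_id is our own trusted int, so interpolating it is safe here.)
(max_id,) = cur.execute("SELECT MAX(id) FROM person1").fetchone()
cur.execute(
    f"CREATE TEMPORARY TABLE idmap AS "
    f"SELECT id, id + {max_id} AS new_id FROM person2"
)
# 2. Copy the parent rows across under their new ids.
cur.execute(
    "INSERT INTO person1 SELECT m.new_id, p.name "
    "FROM person2 p JOIN idmap m ON m.id = p.id"
)
# 3. Rewrite the child table's foreign keys through the map.
cur.execute(
    "UPDATE address2 SET person_id = "
    "(SELECT new_id FROM idmap WHERE idmap.id = address2.person_id)"
)

print(cur.execute(
    "SELECT person_id, city FROM address2 ORDER BY person_id").fetchall())
# -> [(3, 'Oslo'), (4, 'Kyiv')]
```

Step 3 is repeated once per child table, exactly as the SQL above says.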

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Andrew Mull [EMAIL PROTECTED] wrote on 12/21/2004 12:20:57 
PM:

 You are correct in that I need to merge two tables.  However, I'm not clear
 on how to update the FKs in the sub tables.

 Example, the main table (Person) has a sub table (Address).  Person has a
 1:N relationship with Address.  So Address has the Person's ID as a FK.
 How do I update the FK in Address with the new ID assigned to Person that
 was created with the merge?

 Thanks for the help!
 
 
 


Re: Unique IDs

2004-12-21 Thread Andrew Mull
Thank you.  I'll have to review the SQL carefully as I haven't used inner 
joins in quite some time :)








Re: Unique IDs

2004-02-13 Thread Keith C. Ivey
On 12 Feb 2004 at 22:57, Bill Easton wrote:

 You can use + 1 instead of + interval 1 second, but it may give
 different results some day if MySQL changes the precision of
 timestamp.

Actually, that won't work in all cases.  If the current timestamp is 
20040213114859, then adding 1 (rather than INTERVAL 1 SECOND), gives 
20040213114860, which gets converted to 20040213114800 on insert.  If 
you've already used all the timestamps for that minute, then you're 
never going to get to the next minute (and thus never going to get a 
usable ID) by repeatedly adding 1.
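
The arithmetic is easy to see in Python: adding 1 to the packed 
YYYYMMDDHHMMSS number yields a "time" whose seconds field is 60, which is 
simply not a valid timestamp (so no rollover into the next minute happens):

```python
from datetime import datetime

stamp = 20040213114859           # packed YYYYMMDDHHMMSS form
bumped = str(stamp + 1)          # "20040213114860" -- seconds field is 60

try:
    datetime.strptime(bumped, "%Y%m%d%H%M%S")
except ValueError:
    # second=60 is rejected; plain +1 never carries into the minute
    print("invalid time:", bumped)
```

INTERVAL 1 SECOND, by contrast, does real date arithmetic and carries 
correctly across minute, hour, and day boundaries.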

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Eamon Daly
You definitely don't want to use timestamping for unique
IDs. You want to use an auto-incrementing column or similar.
See

http://www.mysql.com/doc/en/example-AUTO_INCREMENT.html
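
The idea, sketched in Python with SQLite in place of MySQL (in SQLite an 
INTEGER PRIMARY KEY column auto-assigns ids, analogous to AUTO_INCREMENT; 
table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# The id column is assigned by the database itself, so two inserts
# in the same second still get distinct ids -- unlike a timestamp key.
cur.execute("CREATE TABLE hits (id INTEGER PRIMARY KEY, who TEXT)")
cur.execute("INSERT INTO hits (who) VALUES ('alice')")
cur.execute("INSERT INTO hits (who) VALUES ('bob')")
print(cur.execute("SELECT id, who FROM hits").fetchall())
# -> [(1, 'alice'), (2, 'bob')]
```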


Eamon Daly
NextWave Media Group LLC
Tel: 1 773 975-1115
Fax: 1 773 913-0970



- Original Message - 
From: Craig Jackson [EMAIL PROTECTED]
To: mysql [EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 11:30 AM
Subject: Unique IDs


 Hello People,

 This is my first post to this list. I am having a bit of a problem that
 Google doesn't seem to help with, and I'm not sure what part of Mysql
 docs will help.

 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.

 I have looked at Locking and Insert Delay, but I'm not sure that will
 help.

 Does anyone have a suggestion? We are using 4.0.14 and ISAM tables and
 PHP 4.3.

 Thanks,
 Craig Jackson








Re: Unique IDs

2004-02-12 Thread Keith C. Ivey
Craig Jackson [EMAIL PROTECTED] wrote:

 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.

I don't understand.  If you're getting many users per second, and 
your timestamps have 1-second resolution, how could you possibly 
solve the problem without changing the method of assigning IDs?
Are the "many users per second" periods just short bursts, and you're 
really only getting several hundred users per day?  If so, I guess 
you could keep waiting a second and trying the insert again, but that 
could lead to indefinite delays if traffic gets high.  I think you've 
got to bite the bullet and change the unique ID to something that's 
actually unique -- even an AUTO_INCREMENT would work.

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Brent Baisley
You have a design flaw in your database, using timestamp as a unique 
ID. There really is no work around. You can't reliably keep using 
timestamps for unique IDs. As a rule, a unique ID should not be 
dependent on anything else and should not represent anything else aside 
from a unique id. For instance, social security number is a very poor 
choice for unique ids. Yes, it's unique (to a certain degree), but it 
relies on accurate data entry and it represents confidential 
information.
I always use an unsigned integer field with auto increment for my 
unique id fields. An auto increment field is really the only way to 
assure uniqueness.

One way or another you are going to have to change the way your database 
works.
Sorry, but any other solution I can think of will only buy you a little 
time; it won't solve the problem.

On Feb 12, 2004, at 12:30 PM, Craig Jackson wrote:

Hello People,

This is my first post to this list. I am having a bit of a problem that
Google doesn't seem to help with, and I'm not sure what part of Mysql
docs will help.
I have a very large web app that uses timestamp for unique IDs.
Everything was rolling fine until we started getting many users per
second, causing some of the unique IDs to not be unique -- users were
being assigned the same timestamp. Since the web app is so large we
don't want to change the method of assigning IDs as it would create a
major project.
I have looked at Locking and Insert Delay, but I'm not sure that will
help.
Does anyone have a suggestion? We are using 4.0.14 and ISAM tables and
PHP 4.3.
Thanks,
Craig Jackson
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Unique IDs

2004-02-12 Thread Craig Jackson
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:
 
  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.
 
 I don't understand.  If you're getting many users per second, and 
 your timestamps have 1-second resolution, how could you possibly 
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're 
 really only getting several hundred users per day?  If so, I guess 
 you could keep waiting a second and trying the insert again, but that 
 could lead to indefinite delays if traffic gets high.  I think you've 
 got to bite the bullet and change the unique ID to something that's 
 actually unique -- even an AUTO_INCREMENT would work.

Thanks for the speedy reply; I have already recommended auto_increment as 
the solution. We do need a quick fix until the problem is properly fixed, 
though. How would I go about making MySQL wait one second between inserts? 
We only get about 1000 hits per day, but they tend to be concentrated in 
short time intervals.


 
 -- 
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Mike Miller
Code it within your program.  If this is in fact a primary key (which you 
seem to describe it as), then it should be distinct anyway.  So do an INSERT 
IGNORE or even just a plain INSERT; it will fail on duplicates.  Check the 
affected rows or the insert_id (using whatever API you use to access MySQL), 
sleep for a second, then try the insert again.  I doubt there's a good way 
to hang up the database on the issue.
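
A minimal sketch of that retry loop in Python, with sqlite3 standing in for 
the MySQL driver (the events table and insert_with_retry helper are 
hypothetical; a real client would take a fresh timestamp instead of bumping 
the old one):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (ts INTEGER PRIMARY KEY, data TEXT)")

def insert_with_retry(ts, data, tries=3):
    """Insert using ts as the key; on a duplicate key, sleep and retry
    with a bumped timestamp.  Returns the ts actually used."""
    for _ in range(tries):
        try:
            cur.execute("INSERT INTO events VALUES (?, ?)", (ts, data))
            return ts
        except sqlite3.IntegrityError:   # duplicate key -> wait, retry
            time.sleep(0.01)             # would be ~1 second in practice
            ts += 1                      # stand-in for "read the clock again"
    raise RuntimeError("gave up after %d tries" % tries)

print(insert_with_retry(1000, "first"))   # -> 1000
print(insert_with_retry(1000, "second"))  # -> 1001 (retried once)
```

As the thread notes, this only buys time: under sustained load the retries 
can back up indefinitely.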

Cheers;
-M


From: Craig Jackson [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: Keith C. Ivey [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: Unique IDs
Date: Thu, 12 Feb 2004 11:57:24 -0600
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:

  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.

 I don't understand.  If you're getting many users per second, and
 your timestamps have 1-second resolution, how could you possibly
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're
 really only getting several hundred users per day?  If so, I guess
 you could keep waiting a second and trying the insert again, but that
 could lead to indefinite delays if traffic gets high.  I think you've
 got to bite the bullet and change the unique ID to something that's
 actually unique -- even an AUTO_INCREMENT would work.
Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.

 --
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org



RE: Unique IDs

2004-02-12 Thread nalaka_nanayakkara
Add a second field that is auto-incremented and change the primary key to 
that.  That way you can keep the timestamp for now.


Thanks,

Nalaka Nanayakkara
Overwaitea Food Group
Tel: 604-888-1213 (ext 3742)

 -Original Message-
From:   Craig Jackson [mailto:[EMAIL PROTECTED] 
Sent:   Thursday, February 12, 2004 9:57 AM
To: Keith C. Ivey
Cc: [EMAIL PROTECTED]
Subject:Re: Unique IDs

On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:
 
  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.
 
 I don't understand.  If you're getting many users per second, and 
 your timestamps have 1-second resolution, how could you possibly 
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're 
 really only getting several hundred users per day?  If so, I guess 
 you could keep waiting a second and trying the insert again, but that 
 could lead to indefinite delays if traffic gets high.  I think you've 
 got to bite the bullet and change the unique ID to something that's 
 actually unique -- even an AUTO_INCREMENT would work.

Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.


 
 -- 
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org






Re: Unique IDs

2004-02-12 Thread Keith C. Ivey
On 12 Feb 2004 at 11:57, Craig Jackson wrote:

 Thanks for the speedy reply and I have already recommended
 auto_increment for the solution. We do need that quick fix until the
 problem is fixed. How would I go about making Mysql wait one second
 between inserts. We only get about 1000 hits per day, but they tend to
 be concentrated in short time intervals.

You'd have to code that logic into your application, using Perl or 
PHP or C or whatever.  Try the insert.  If it fails, wait a second 
and try again with the new timestamp.  Repeat until you succeed or 
until you've gone through some number of tries (at which point you 
give up).  It's ugly, but that's what you asked for.

How about converting the column to a BIGINT AUTO_INCREMENT with a new 
value of, say, 30?  Then your old values would still be 
around with values like 20040212131422.  That's also ugly, but not as 
ugly as the solution with waiting.

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Jeff Mathis
as everyone has pointed out, using timestamps as a unique id was a 
design flaw. you should fix the problem using an auto-increment field.

That said, can you change the column type you are currently using as a 
timestamp to an auto-increment int field? The return type in both cases is 
technically an integer for MySQL versions < 4.1, but your API call may need 
to be changed nevertheless.

my experience has always been to fix the problem right rather than do a 
hork, even if it means down time.

jeff
Craig Jackson wrote:
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:

Craig Jackson [EMAIL PROTECTED] wrote:


I have a very large web app that uses timestamp for unique IDs.
Everything was rolling fine until we started getting many users per
second, causing some of the unique IDs to not be unique -- users were
being assigned the same timestamp. Since the web app is so large we
don't want to change the method of assigning IDs as it would create a
major project.
I don't understand.  If you're getting many users per second, and 
your timestamps have 1-second resolution, how could you possibly 
solve the problem without changing the method of assigning IDs?
Are the many users per second periods just short bursts, and you're 
really only getting several hundred users per day?  If so, I guess 
you could keep waiting a second and trying the insert again, but that 
could lead to indefinite delays if traffic gets high.  I think you've 
got to bite the bullet and change the unique ID to something that's 
actually unique -- even an AUTO_INCREMENT would work.


Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.


--
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





--
Jeff Mathis, Ph.D.                  505-955-1434
The Prediction Company              [EMAIL PROTECTED]
525 Camino de los Marquez, Ste 6    http://www.predict.com
Santa Fe, NM 87505
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]


Re: Unique IDs

2004-02-12 Thread Eric @ Zomething
I like the auto-increment primary key method for unique IDs in MySQL; however, if 
your table's design will not allow you to use that for some (unknown to me) important 
reason, you could programmatically generate a truly unique ID and INSERT that.  

Did you try a Google under GUID?  http://tinyurl.com/2l34e  I don't recall an 
algorithm offhand, but I believe one might typically use a number of variables to 
generate a GUID (like date-time, IP, a random number, etc.)  This is a well explored 
area of CS, I believe, and wouldn't require you to reinvent the wheel.
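
As a sketch of what that looks like in practice (not something from this thread): Python's standard uuid module implements both the timestamp-plus-MAC scheme (version 1) and the purely random scheme (version 4), so no hand-rolled algorithm is needed.

```python
import uuid

# uuid1() mixes the host's MAC address with a 100-ns timestamp;
# uuid4() is drawn from a random number generator.
time_based = uuid.uuid1()
random_based = uuid.uuid4()

# Either serves as a programmatically generated unique ID; the
# canonical text form is a fixed 36-character string, so it could
# be stored in e.g. a CHAR(36) column.
assert len(str(random_based)) == 36
assert uuid.uuid4() != uuid.uuid4()  # collisions astronomically unlikely
```

The same idea is what MySQL 4.1.2's own UUID() function provides server-side.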

Eric


 Original Message
 From: Jeff Mathis [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Cc: Keith C. Ivey [EMAIL PROTECTED], [EMAIL PROTECTED]
 Date: Thu, Feb-12-2004 11:13 AM
 Subject: Re: Unique IDs
 
 [quoted message elided; it appears in full earlier in the thread]




Re: Unique IDs

2004-02-12 Thread Bill Easton
Craig,

Instead of delaying a second, why not just add a second to the timestamp?
The following should work just fine:

create table tst (ts timestamp primary key, other_stuff varchar(127) not null);
...
insert into tst
  select greatest(now(), max(T.ts) + interval 1 second),
         'value of other_stuff'
  from tst T;

I checked that this works in 4.0.16.  It even appears to work for inserting
the first row into an empty table.  (I think it probably didn't work in
3.x.)

Of course, during a burst, the stored timestamps could be a few seconds
later than the actual time, but, at least the timestamps are unique
and ascending.

You could also keep the timestamp in a separate table and increment it,
using a variable:

update timestamp_table set ts = (@T := greatest(now(), ts + interval 1
second));

Then you can use @T as your generated unique ID in a subsequent
statement, within the same connection.  Of course the timestamps
could be out of order, unless you lock your other table or do
everything within a transaction.  Maybe this matters, maybe not.

auto_increment is probably still better.  With innodb, there is
less locking required.

You can use + 1 instead of + interval 1 second, but it may give
different results some day if MySQL changes the precision of timestamp.
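
The greatest(now(), ts + interval 1 second) allocation above can be sketched as plain code to see why the generated values stay unique and ascending (the function name and dates here are illustrative, not from the thread):

```python
import datetime

def next_unique_ts(last_ts, now):
    """Unique, ascending 1-second-resolution timestamp:
    max(now, last_ts + 1 second), mirroring the SQL expression
    greatest(now(), ts + interval 1 second)."""
    return max(now, last_ts + datetime.timedelta(seconds=1))

# A burst of three inserts within the same wall-clock second:
now = datetime.datetime(2004, 2, 12, 12, 0, 0)
t1 = next_unique_ts(now, now)   # drifts one second ahead of real time
t2 = next_unique_ts(t1, now)
t3 = next_unique_ts(t2, now)
assert t1 < t2 < t3             # unique and ascending

# Once traffic pauses, the allocated value snaps back to the real clock:
later = now + datetime.timedelta(minutes=5)
assert next_unique_ts(t3, later) == later
```

During a burst the values run ahead of the clock, which is exactly the "a few seconds later than the actual time" behavior described above.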

HTH

Bill Easton

 Subject: Re: Unique IDs
 From: Craig Jackson [EMAIL PROTECTED]
 To: Keith C. Ivey [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Date: Thu, 12 Feb 2004 11:57:24 -0600

 On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
  Craig Jackson [EMAIL PROTECTED] wrote:
 
   I have a very large web app that uses timestamp for unique IDs.
   Everything was rolling fine until we started getting many users per
   second, causing some of the unique IDs to not be unique -- users were
   being assigned the same timestamp. Since the web app is so large we
   don't want to change the method of assigning IDs as it would create a
   major project.
 
  I don't understand.  If you're getting many users per second, and
  your timestamps have 1-second resolution, how could you possibly
  solve the problem without changing the method of assigning IDs?
  Are the many users per second periods just short bursts, and you're
  really only getting several hundred users per day?  If so, I guess
  you could keep waiting a second and trying the insert again, but that
  could lead to indefinite delays if traffic gets high.  I think you've
  got to bite the bullet and change the unique ID to something that's
  actually unique -- even an AUTO_INCREMENT would work.

 Thanks for the speedy reply and I have already recommended
 auto_increment for the solution. We do need that quick fix until the
 problem is fixed. How would I go about making Mysql wait one second
 between inserts. We only get about 1000 hits per day, but they tend to
 be concentrated in short time intervals.


 
  -- 
  Keith C. Ivey [EMAIL PROTECTED]
  Tobacco Documents Online
  http://tobaccodocuments.org





RE: Unique IDs

2004-02-12 Thread Tom Horstmann
Hi Craig,

 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.

since you have a problem with concurrent inserts, the best way of getting
an ID is the fastest one. I'm quite sure using an autoincrement column
is the fastest.
I had a discussion about UIDs with dbase. There is no autoincrement
with dbase, and we considered several ways to get a UID without it. None
of them is as good as using an autoincrement column.
Please look at:
http://www.talkaboutprogramming.com/group/comp.lang.clipper.visual-objects/messages/137428.html
or visit comp.lang.clipper.visual-objects .

Even if you have a really huge app, I can't imagine that changing the way
IDs are assigned is a lot of work. If ID assignment is done by a
central function, you should only need a few changes. If not, you'll need
some more ctrl-c/ctrl-v.
As the result is the cleanest and best solution for a unique ID,
it should be worth the work.

hth,

TomH

-- 
PROSOFT EDV-Lösungen GmbH & Co. KG     phone: +49 941 / 78 88 7 - 121
Ladehofstraße 28, D-93049 Regensburg   cellphone: +49 174 / 41 94 97 0
Geschäftsführer: Axel-Wilhelm Wegmann  [EMAIL PROTECTED]
AG Regensburg HRA 6608 USt.183/68311   http://www.proSoft-Edv.de
-- 


