RE: Unique index - opinions sought

2012-07-16 Thread Rick James
How many rows?  If 1K, it does not matter.   If 1 billion, we need to discuss 
in more detail.  Let's assume 1M...

Dates should be stored in DATE datatype, which is 3 bytes.
Your GUID is non-standard, but should probably be stored in CHAR(6) CHARACTER 
SET ascii, unless it is expected to have non-ascii characters.  Is case folding 
important?
Given those, the PRIMARY KEY is 6+3+3=12 bytes long.  This is not bad for a 
million-row table.  If bigger, then the AUTO_INCREMENT should be considered.

Is this InnoDB or MyISAM?  (Please provide SHOW CREATE TABLE, not English, for 
describing tables.)

Let's see the SELECTs that will be hitting the tables.  Then we can discuss in 
more detail.
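A quick sketch of what that might look like (table and column names are assumed from the description below, not confirmed):

```sql
-- Hypothetical schema sketch based on the advice above, not a final design.
CREATE TABLE item_spine (
  guid       CHAR(6) CHARACTER SET ascii NOT NULL,  -- 6 bytes
  start_date DATE NOT NULL,                         -- 3 bytes
  end_date   DATE NOT NULL,                         -- 3 bytes
  PRIMARY KEY (guid, start_date, end_date)          -- 6+3+3 = 12 bytes
) ENGINE=InnoDB;
```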

 -Original Message-
 From: Mark Goodge [mailto:m...@good-stuff.co.uk]
 Sent: Monday, July 16, 2012 8:09 AM
 To: mysql
 Subject: Unique index - opinions sought
 
 I have a MySQL table (call it, say, item_spine) which contains three
 fields which, together, form a unique key. These three fields are a
 guid, a start date and an end date. The guid is alphanumeric (a fixed-
 length six characters) and the dates are ISO format dates (-MM-DD).
 
 I also have another table (item_detail) containing data which is keyed
 to the first in that the unique key here is the unique key from
 item_spine plus a line id (which is numeric).
 
 At the moment, I simply have the three fields in the item_spine set as
 a unique key, and replicate those three columns in item_detail and have
 those plus line_id as the unique key, thus making a four-column key.
 
 But, for performance reasons, I was wondering if it might make more
 sense to create a single column in item_spine containing data which is
 generated from the original three and use that as a unique key instead.
 I then only need a single column in item_detail to link to item_spine,
 and thus my unique key there can be only two columns.
 
 Another option is to have an autoincrement column as a primary key for
 item_spine, and then use that as the link key for item_detail. But I'd
 prefer to avoid that, because the content of item_spine has to be
 updated on a regular basis from external data and using autoincrement
 means I can't do that using REPLACE INTO while still maintaining a key
 association with item_detail.
 
 Any thoughts? How would you do it?
 
 Mark
 --
   Sent from my Turing-Flowers Colossus
   http://mark.goodge.co.uk
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql





Re: Unique index - opinions sought

2012-07-16 Thread Mark Goodge

On 16/07/2012 17:39, Rick James wrote:

How many rows?  If 1K, it does not matter.   If 1 billion, we need to
discuss in more detail.  Let's assume 1M...


Around 1M in the item_spine table and 10M in item_detail.


Dates should be stored in DATE datatype, which is 3 bytes. Your GUID
is non-standard, but should probably be stored in CHAR(6) CHARACTER
SET ascii, unless it is expected to have non-ascii characters.  Is
case folding important? Given those, the PRIMARY KEY is 6+3+3=12
bytes long.  This is not bad for a million-row table.  If bigger,
then the AUTO_INCREMENT should be considered.


The guid is case-insensitive. ISO dates map directly to MySQL's internal 
DATE type, so that's already taken care of. All data is ascii, and all 
alphabetic data is case-insensitive.


I should maybe have mentioned earlier that this is external data over 
which I have no control (but do have a specification which I expect to 
be honoured).  My task is to store it and make it searchable for display.



Is this InnoDB or MyISAM?  (Please provide SHOW CREATE TABLE, not
English, for describing tables.)


It's MyISAM. I don't actually have a CREATE yet, as this is still just 
hypothetical :-)



Let's see the SELECTs that will be hitting the tables.  Then we can
discuss in more detail.


A typical select would be something like this:

SELECT guid
FROM item_spine
WHERE start_date <= NOW()
AND end_date >= NOW()
AND location_code = '$query'

followed by

SELECT *
FROM item_detail
WHERE guid = '$guid'
AND start_date <= NOW()
AND end_date >= NOW()
ORDER BY sequence

where $query is the initial query from the user and $guid is the result 
of the first query. location_code is VARCHAR(10) and is an alphanumeric 
string.


(I'm avoiding joins because doing multiple selects in the code is 
usually much faster)


Mark




RE: Unique index - opinions sought

2012-07-16 Thread Rick James
Did you really mean to have start/end_date in both tables?  Are the values 
identical?  If they are, that is another reason to use an INT UNSIGNED 
AUTO_INCREMENT.

Done correctly, JOIN can usually run faster than two separate queries.

SELECT  d.*
FROM  item_detail AS d
JOIN  item_spine AS s ON d.guid = s.guid
WHERE  s.guid = '$guid'
  AND  s.start_date <= NOW()
  AND  s.end_date >= NOW()
  AND  s.location_code = '$query'
  AND  d.start_date <= NOW()
  AND  d.end_date >= NOW()
ORDER BY  d.sequence 

Also, the value of NOW() could be different between the two queries; it will be 
the same in the JOIN version.

Does it need to say
  s.start_date = d.start_date AND s.end_date = d.end_date ?

I would recommend InnoDB, at least for recovery after a crash.  PRIMARY KEYs 
are handled differently between the engines, but I don't have enough details of 
your app to explain the relevant differences.

Do you expect to get more than one guid from the first query?  Multiple rows 
from the second query?

item_spine needs an index beginning with location_code.
item_detail _might_ benefit from INDEX(guid, sequence).
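Sketched as DDL (index names are arbitrary; verify against EXPLAIN before relying on them):

```sql
-- Sketch of the suggested secondary indexes.
ALTER TABLE item_spine ADD INDEX idx_location (location_code);
ALTER TABLE item_detail ADD INDEX idx_guid_seq (guid, sequence);
```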






Re: Unique on non null entries but allowing multiple nulls

2011-10-17 Thread Jigal van Hemert

Hi,

On 17-10-2011 15:39, Peng Yu wrote:

If I use NULL UNIQUE when I create a table, it seems that only one
NULL entry is allowed. Since NULL could mean unknown, in this case,
two unknowns are not the same and I want to allow multiple nulls but I
still want non null entries be unique. Is there a construct in mysql
that can create a table column like this?


From the documentation:
A UNIQUE index creates a constraint such that all values in the index 
must be distinct. An error occurs if you try to add a new row with a key 
value that matches an existing row. For all engines, a UNIQUE index 
permits multiple NULL values for columns that can contain NULL.


Only for 5.0 there is the exception that the BDB storage engine does not 
allow multiple NULL values in a column with a UNIQUE index.
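A minimal illustration of that behaviour (arbitrary table and column names):

```sql
CREATE TABLE t (
  val INT NULL,
  UNIQUE KEY (val)
) ENGINE=InnoDB;

INSERT INTO t VALUES (1);
INSERT INTO t VALUES (NULL);
INSERT INTO t VALUES (NULL);  -- allowed: NULLs don't collide in a UNIQUE index
INSERT INTO t VALUES (1);     -- rejected with a duplicate-key error
```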


--
Kind regards / met vriendelijke groet,

Jigal van Hemert.




Re: Unique ID's across multiple databases

2010-09-14 Thread Johnny Withers
I may have missed what you are trying to do here. NoSQL is really a bad name
and should be renamed to NoREL instead. NoSQL implementations are not
used just because of the limitations of traditional RDBMSs when it comes to sheer
traffic volume; they are also used because they scale horizontally very
well. When I was reading through all these emails, it seemed to me you would
have 200+ nodes here with the SAME data. Maybe that's not what you are
trying to do?

I also did not know the software for maintaining the data was already in
place.


JW

On Tue, Sep 14, 2010 at 12:57 AM, Johan De Meersman vegiv...@tuxera.be wrote:



 On Mon, Sep 13, 2010 at 8:59 PM, Johnny Withers joh...@pixelated.net wrote:


 This sounds like a good job for a 'NoSQL' system. Maybe?


 I can't help but blink at that. How exactly is NoSQL going to fix issues
 that are related to topology, not inherent SQL limitations ? Which
 particular incarnation of NoSQL are you thinking of ?

 Also, I suspect rewriting all 200 clients' services is not going to be an
 option :-)

 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
On Sun, Sep 12, 2010 at 9:45 PM, Kiss Dániel n...@dinagon.com wrote:

 offset + increment thingy is good if you know in advance that you'll have a
 limited number of servers. But if you have no idea that you will have 2,
 20,
 or 200 servers in your array in the future, you just can't pick an optimal


What benefit do you think you will reap from that many masters ? Don't
forget that every write still has to be done on every server, so you're not
actually distributing that load; while for reads you only need simple
slaves.


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
This is actually more for failover scenarios where databases are spread in
multiple locations with unreliable internet connections. But you want to
keep every single location working even when they are cut off from the other
databases. The primary purpose is not load distribution.

On Mon, Sep 13, 2010 at 12:03 PM, Johan De Meersman vegiv...@tuxera.bewrote:



 On Sun, Sep 12, 2010 at 9:45 PM, Kiss Dániel n...@dinagon.com wrote:

 offset + increment thingy is good if you know in advance that you'll have
 a
 limited number of servers. But if you have no idea that you will have 2,
 20,
 or 200 servers in your array in the future, you just can't pick an optimal


 What benefit do you think you will reap from that many masters ? Don't
 forget that every write still has to be done on every server, so you're not
 actually distributing that load; while for reads you only need simple
 slaves.


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
Hmm, that's a very interesting scenario, indeed.

One bad connection will break the chain, though, so in effect you'll be
multiplying the disconnection rate...

I think you'd be better off with a star topology, but MySQL unfortunately
only allows ring types. This is gonna require some good thinking on your
part :-)



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: Unique ID's across multiple databases

2010-09-13 Thread Fish Kungfu
I could be way off here, but how about letting your unique id be a
calculated column of the server's MAC address concatenated with an
auto-increment id column?

I hope this helps...
~~Fish~~




Re: Unique ID's across multiple databases

2010-09-13 Thread Fish Kungfu
I had some coffee and realized that actually, using a UUID might be
something to look at.  There have been quite a few discussions about using a
UUID as a unique id and it does have some gotchas.  Just Google: mysql uuid
Have a great day
~~Fish~~








Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
Hell, yeah. :)

Actually, the ID system I described below works quite well according to my
tests. I feel very comfortable with it from both the primary key size and
the dynamically increasable database count points of view.
What I don't like about it is the concatenated unique ID (ID + SID)
pairs. Using two fields for primary and foreign keys is not the most
convenient, to say the least. :)
I am just wondering if anyone has a better idea that fulfils the
requirements (small index size, a dynamically increasable number of databases
in the array, incremental-like IDs that suit the MySQL indexing
engine) while avoiding this silly drawback. :)



Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
Well, thanks, but I'm afraid using UUIDs (even with hex compression) is
kind of suicide when it comes to performance.
This is a good summary about the issues:
http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/

So, some kind of auto_increment or sequencing must be the optimal solution
here.



RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
-Original Message-
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Sunday, September 12, 2010 1:47 PM
To: mysql@lists.mysql.com; replicat...@lists.mysql.com
Subject: Unique ID's across multiple databases

Hi,

I'm designing a master-to-master replication architecture.
I wonder what the best way is to make sure both databases generate unique
row IDs, so there won't be ID conflicts when replicating in both directions.

I read on forums about the pros and cons of using UUIDs, and also about setting
the auto-increment-increment and auto-increment-offset system variables.
I'm not a fan of UUIDs for this purpose. They are too big, slow to
generate, and they don't 100% guarantee global uniqueness anyway.
On the other hand, the auto-increment-offset trick limits the number of
possible databases in the system in advance, which is not so nice.

[JS] UUIDs are likely enough to be unique that you don't have to worry about 
it.

I can't speak to the speed.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




So, I'm thinking about a composite unique ID system, where the value of the
*server-id *system variable is always a part of the ID. The server-id has to
be set uniquely anyway for any replication architecture in MySQL.

A sample would look like this:

CREATE TABLE SampleParents (
ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
SID SMALLINT UNSIGNED NOT NULL,

SampleData VARCHAR(50) NOT NULL,

PRIMARY KEY (ID, SID)) ENGINE=InnoDB;

CREATE TABLE SampleChildren (
ID INT UNSIGNED NOT NULL AUTO_INCREMENT,
SID SMALLINT UNSIGNED NOT NULL,

SampleParentID INT UNSIGNED NOT NULL,
SampleParentSID SMALLINT UNSIGNED NOT NULL,

SampleData VARCHAR(50) NOT NULL,

PRIMARY KEY (ID, SID),
KEY (SampleParentID, SampleParentSID),

CONSTRAINT FOREIGN KEY (SampleParentID, SampleParentSID) REFERENCES
SampleParents (ID, SID)) ENGINE=InnoDB;

Where SID is always filled with the value of the @@server_id global variable
according to the server where the record is created.
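One way to stamp SID automatically would be a BEFORE INSERT trigger, so application code can't forget it (a sketch; the trigger name is arbitrary):

```sql
DELIMITER //
CREATE TRIGGER SampleParents_before_insert
BEFORE INSERT ON SampleParents
FOR EACH ROW
BEGIN
  -- Fill SID from the local server's replication ID.
  SET NEW.SID = @@server_id;
END//
DELIMITER ;
```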

How I see it is that it has the following advantages.

   - The ID+SID pair guarantees primary key uniqueness within the
   replication array.
   - Auto_increment is a magnitude faster than generating UUID's.
   - SID adds only 2 bytes in this case to the size of the primary key item.
   It can be even 1 byte if I'm sure I'll never exceed maximum 255 servers. 
 But
   anyhow, it is still way smaller than the 16 byte of a UUID field, even if
   using BIGINT's.
   - Keeps the efficiency of the primary key indexing, because the key is
   still very close to a strictly incremental value.

Well, you can see what the obvious disadvantage is:

   - Primary and foreign keys are always double fields. This is not so
   convenient when you are joining tables and adding WHERE clauses to your
   queries. It might even negatively affect the evaluation speed of join
   conditions, although I have no idea yet how much. (My gut feeling is that
   it's not a big issue, thanks to MySQL's good query optimizer.)

My question is. Does anyone have any better idea, how to approach this
problem?

Thanks,







RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
-Original Message-
From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De
Meersman
Sent: Monday, September 13, 2010 7:27 AM
To: Kiss Dániel
Cc: Max Schubert; mysql@lists.mysql.com; replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases

Hmm, that's a very interesting scenario, indeed.

One bad connection will break the chain, though, so in effect you'll be
multiplying the disconnecting rate...

I think you'd be better off with a star topology, but MySQL unfortunately
only allows ring-types. This is gonna require some good thinking on your
part :-)

[JS] It sounds like you are trying to protect against a regional disaster.

This is precisely the type of scenario for which NAS or FibreChannel is used. 
You let the storage medium take care of replication. Typically you'd only need 
two units, perhaps on opposite sides of the country, using FibreChannel over 
IP.

I've been out of this market (sales/support side) for many years, so I don't 
know what the current technology costs, but if you can afford it that is the 
way to go. It will make your life much simpler.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com










RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
-Original Message-
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Monday, September 13, 2010 11:49 AM
To: Jerry Schwartz
Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com;
replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases

Well, not exactly.

I do not own all the databases. Some of them are located at customers' sites,
some of them are at my data warehouse. So neither NAS nor Fibre Channel is a
solution in this case.

[JS] Then you have a mess on your hands.

Are you going to be mirroring these databases separately for each customer?

I wish you well.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Unique ID's across multiple databases

2010-09-13 Thread Johnny Withers
This sounds like a good job for a 'NoSQL' system. Maybe?

JW



-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Unique ID's across multiple databases

2010-09-13 Thread Kiss Dániel
Well, that would be the plan, yes. :-)
Anyway, I'll get over the problem sooner or later. :-)


RE: Unique ID's across multiple databases

2010-09-13 Thread Jerry Schwartz
From: Kiss Dániel [mailto:n...@dinagon.com]
Sent: Monday, September 13, 2010 3:17 PM
To: Jerry Schwartz
Cc: Johan De Meersman; Max Schubert; mysql@lists.mysql.com; 
replicat...@lists.mysql.com
Subject: Re: Unique ID's across multiple databases

Well, that would be the plan, yes. :-)
Anyway, I'll get over the problem sooner or later. :-)

[JS] I guess I don't grasp your topology at all. If you are going to be 
replicating each database separately, who cares whether or not the IDs are 
unique across databases? Were you thinking of commingling the data from all of 
these individual databases into one big database? That doesn't make sense to 
me from a fail-over standpoint.

Depending upon the industry and their level of paranoia (by inclination or by 
regulation), you might have to have a separate remote SYSTEM (not database) 
per customer.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com







RE: Unique ID's across multiple databases

2010-09-13 Thread Daevid Vincent
 -Original Message-
 From: Kiss Dániel [mailto:n...@dinagon.com] 
 Sent: Monday, September 13, 2010 5:59 AM

 Well, thanks, but I'm afraid using UUID's (even with hex 
 compression) is
 kind of a suicide, when it comes to performance.
 This is a good summary about the issues:
 http://www.mysqlperformanceblog.com/2007/03/13/to-uuid-or-not-to-uuid/

Is this UUID issue unique to MySQL, or are there other RDBMSs that handle
it better (Postgres, Oracle, SQL Server, etc.)?

I too have a need for a unique identifier that will mesh with other
databases periodically. So that a user in one local DB/server will get
migrated to a master DB which in turn will sync up with remote sites so
that all sites will have all users in it each night (for example).

Having a mapping of UUID to local ID seems one way, but I feel there is a
lot of room for collisions and integrity issues that way, no?

There are some solutions at the bottom of that blog post. Are those not
good then? They seem interesting to me.
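One way to address the collision worry in the UUID-to-local-ID mapping idea is to let the database enforce uniqueness on both sides. This is a hypothetical sketch, not from the thread; table and column names are invented:

```sql
CREATE TABLE id_map (
  uuid_bin BINARY(16)   NOT NULL PRIMARY KEY,  -- global identifier
  local_id INT UNSIGNED NOT NULL,
  UNIQUE KEY uk_local (local_id)               -- enforces a one-to-one mapping
);

-- A duplicate on either column now fails at insert time with a key
-- error, instead of becoming a silent integrity problem.
INSERT INTO id_map (uuid_bin, local_id)
VALUES (UNHEX(REPLACE(UUID(), '-', '')), 42);
```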





RE: Unique ID's across multiple databases

2010-09-13 Thread Wm Mussatto


On Mon, September 13, 2010 15:37, Daevid Vincent wrote:

 Is this UUID issue unique to MySQL or are there other RDBMSs that
 handle it better (Postgres, Oracle, SQL Server, etc.)?

Why does it have to be one field? Use two fields: ServerID and SequenceID.
Across servers the pair would be unique, and within a given server the
SequenceID is the equivalent of a manual auto-increment field. Set it via
MAX(SequenceID)+1 where ServerID is the local server ID. Have an index on
the combined fields, and perhaps on the SequenceID field as well.

--
William R. Mussatto
Systems Engineer
http://www.csz.com
909-920-9154
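The ServerID + SequenceID scheme can be sketched in SQL. Table and column names here are invented for illustration, and the MAX()+1 allocation must run inside a transaction (or with SELECT ... FOR UPDATE) so that two concurrent inserts on the same server cannot pick the same value:

```sql
CREATE TABLE messages (
  server_id SMALLINT UNSIGNED NOT NULL,  -- fixed, distinct per server
  seq_id    BIGINT UNSIGNED   NOT NULL,  -- manual auto-increment per server
  subject   TEXT,
  PRIMARY KEY (server_id, seq_id),       -- the pair is globally unique
  KEY idx_seq (seq_id)
);

-- Allocate the next SequenceID for this server (here, server 7):
INSERT INTO messages (server_id, seq_id, subject)
SELECT 7, COALESCE(MAX(seq_id), 0) + 1, 'hello'
  FROM messages
 WHERE server_id = 7;
```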


Re: Unique ID's across multiple databases

2010-09-13 Thread Johan De Meersman
On Mon, Sep 13, 2010 at 8:59 PM, Johnny Withers joh...@pixelated.net wrote:


 This sounds like a good job for a 'NoSQL' system. Maybe?


I can't help but blink at that. How exactly is NoSQL going to fix issues
that are related to topology, not to inherent SQL limitations? Which
particular incarnation of NoSQL are you thinking of?

Also, I suspect rewriting all 200 clients' services is not going to be an
option :-)

-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: Unique ID's across multiple databases

2010-09-12 Thread Marcus Bointon
On 12 Sep 2010, at 19:47, Kiss Dániel wrote:

 - SID adds only 2 bytes in this case to the size of the primary key item.
   It can be even 1 byte if I'm sure I'll never exceed a maximum of 255 servers.
   But anyhow, it is still way smaller than the 16 bytes of a UUID field, even
   if using BIGINTs.

You can maintain your own sequence tables, à la Postgres, if you use transactions
to ensure atomicity, though that doesn't help across databases (I suspect the
same is true in Postgres).

FWIW my auto_increment_offset value is usually the same as my server ID. 
auto_increment_increment also reduces the number of usable values for integer 
keys; e.g. if it's 10, you've reduced your key space by a factor of 10, 
meaning you may need to use BIGINTs if you potentially have more than 400M 
records in any given table.
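For reference, the offset/increment scheme described above is configured with two server variables; the values below are examples only:

```sql
-- On server 1 (also settable in my.cnf):
SET GLOBAL auto_increment_increment = 10;  -- leaves room for 10 servers
SET GLOBAL auto_increment_offset    = 1;   -- matches this server's ID

-- Server 1 now generates 1, 11, 21, ...; a second server with offset = 2
-- generates 2, 12, 22, ... so the two sequences can never collide.
```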

One small observation that I've seen work is to use binary fields for UUIDs and 
always wrap access to them in hex/unhex functions. You halve your storage 
requirements for the same amount of data that way.
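A minimal sketch of the BINARY(16)-with-hex-wrapping approach (table name invented); MySQL's UUID() returns a 36-character string, so the dashes are stripped before UNHEX() packs the 32 hex digits into 16 bytes:

```sql
CREATE TABLE items (
  id   BINARY(16) NOT NULL PRIMARY KEY,  -- 16 bytes instead of CHAR(36)
  name VARCHAR(100)
);

-- Write: strip dashes, then pack to binary.
INSERT INTO items (id, name)
VALUES (UNHEX(REPLACE(UUID(), '-', '')), 'example');

-- Read: unpack back to hex at the edge.
SELECT HEX(id) AS id_hex, name FROM items;
```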

Marcus



Re: Unique ID's across multiple databases

2010-09-12 Thread Max Schubert
Server offset + increment works really well, is simple, and well
documented and reliable - not sure why you would want to re-invent
something that works so well :).




Re: Unique ID's across multiple databases

2010-09-12 Thread Kiss Dániel
You may be right. I'm not disputing that offset + increment works.

I'm just wondering if that's the optimal solution when you do not know how
many servers you will have in your array in the future. In my view, the
offset + increment thingy is good if you know in advance that you'll have a
limited number of servers. But if you have no idea that you will have 2, 20,
or 200 servers in your array in the future, you just can't pick an optimal
increment value. It just doesn't scale well enough for me.
If you go with BIGINT ID's, you may have a big enough interval to be
generous and pick a big increment value and allow 200 or even 2000 to make
sure you cover worst case scenarios. I'm just not sure if it's worth it to
use up 8 bytes for a primary key, when in general 4/5/6 is more than enough.

Any thoughts on this?

On Sun, Sep 12, 2010 at 9:32 PM, Max Schubert m...@webwizarddesign.com wrote:

 Server offset + increment works really well, is simple, and well
 documented and reliable - not sure why you would want to re-invent
 something that works so well :).

 --
 MySQL Replication Mailing List
 For list archives: http://lists.mysql.com/replication
 To unsubscribe:
 http://lists.mysql.com/replication?unsub=n...@dinagon.com




Re: UNIQUE KEY vs UNIQUE INDEX

2009-07-30 Thread mos

At 09:13 AM 7/30/2009, b wrote:
Are UNIQUE KEY and UNIQUE INDEX two ways of specifying the same thing? If 
not, what are the differences?


Feel free to tell me to RTFM, but please point me at the manual chapters. I've 
been looking but haven't been able to find anything.


They are the same thing. If you meant PRIMARY KEY and UNIQUE INDEX, 
then they too are basically the same, except a PRIMARY KEY is NOT NULL and the 
optimizer will infer the primary key when doing RI (referential integrity) 
joins if an index is not specified.


There may be other subtle differences that others can comment on.

Mike
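As a quick check of Mike's point, both spellings produce an identical index, and SHOW CREATE TABLE normalizes them to the same output:

```sql
-- UNIQUE KEY and UNIQUE INDEX are interchangeable syntax for the
-- same constraint-plus-index.
CREATE TABLE a (x INT, UNIQUE KEY (x));
CREATE TABLE b (x INT, UNIQUE INDEX (x));

-- Both SHOW CREATE TABLE outputs contain the same index clause:
--   UNIQUE KEY `x` (`x`)
SHOW CREATE TABLE a;
SHOW CREATE TABLE b;
```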






Re: UNIQUE KEY vs UNIQUE INDEX

2009-07-30 Thread Joerg Bruehe
Hi !

mos wrote:
 At 09:13 AM 7/30/2009, b wrote:
 Are UNIQUE KEY and UNIQUE INDEX two ways of specifying the same thing?
 If not, what are the differences?

 Feel free to tell me to RTFM but please post manual chapters. I've
 been looking but haven't been able to find anything.
 
 They are the same thing. If  you meant Primary Key and Unique Index
 then they too are basically the same except Primary Key is NOT NULL and
 the optimizer will infer the primary key when doing RI joins if an index
 is not specified.
 
 There may be other subtle differences that others can comment on.

For all practical purposes, I agree.

From the database (or SQL) theory point of view, a unique constraint
is a logical concept (there cannot be any two cars having the same
registration number), and any index is a physical means of
implementation (to speed up access).

I have never heard of any DBMS implementing a unique constraint in any
way but using a unique index, but it could be done: at a huge
performance cost (sequential search through the table).


For primary key, there is also the school that holds a primary key
cannot change, it is a permanent attribute. If you share that idea, then
there is a difference to the uniqueness concept.
Take the example of a car: It has a manufacturer number which is
permanent, but the registration will change when it is moved (or sold)
to another area or country (depending on local law).
So the manufacturer number could be used as the primary key (will never
change), whereas the registration is guaranteed to be unique (at any
time) but may vary over time.
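Joerg's car example maps directly onto a table definition; this is an illustrative sketch (column types invented):

```sql
CREATE TABLE car (
  vin          CHAR(17)    NOT NULL PRIMARY KEY,  -- manufacturer number: never changes
  registration VARCHAR(12) NOT NULL UNIQUE,       -- unique at any time, but mutable
  owner        VARCHAR(100)
);

-- Re-registering a car updates the UNIQUE column; the PRIMARY KEY stays put.
UPDATE car SET registration = 'B-XY 1234' WHERE vin = 'WVWZZZ1JZXW000001';
```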


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
Sun Microsystems GmbH,   Komturstraße 18a,   D-12099 Berlin
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Vorsitzender des Aufsichtsrates: Martin Haering Muenchen: HRB161028





Re: UNIQUE KEY vs UNIQUE INDEX

2009-07-30 Thread b

On 07/30/2009 02:23 PM, Joerg Bruehe wrote:

Hi !

mos wrote:

At 09:13 AM 7/30/2009, b wrote:

Are UNIQUE KEY and UNIQUE INDEX two ways of specifying the same thing?
If not, what are the differences?

Feel free to tell me to RTFM but please post manual chapters. I've
been looking but haven't been able to find anything.

They are the same thing. If  you meant Primary Key and Unique Index
then they too are basically the same except Primary Key is NOT NULL and
the optimizer will infer the primary key when doing RI joins if an index
is not specified.



No, PK I understand.


There may be other subtle differences that others can comment on.


For all practical purposes, I agree.

 From the database (or SQL) theory point of view, a unique constraint
is a logical concept (there cannot be any two cars having the same
registration number), and any index is a physical means of
implementation (to speed up access).

I have never heard of any DBMS implementing a unique constraint in any
way but using a unique index, but it could be done: at a huge
performance cost (sequential search through the table).



This is what I was thinking, also.



For primary key, there is also the school that holds a primary key
cannot change, it is a permanent attribute. If you share that idea, then
there is a difference to the uniqueness concept.


Yes, I treat a PK as being something that's over and above simply being 
unique. But this touches upon the reason for my question: I guess I was 
wondering if they might be subtly different, though merging the two 
would result in a PK :-)



Take the example of a car: It has a manufacturer number which is
permanent, but the registration will change when it is moved (or sold)
to another area or country (depending on local law).
So the manufacturer number could be used as the primary key (will never
change), whereas the registration is guaranteed to be unique (at any
time) but may vary over time.



Thanks, both, for your replies.





Re: RE: Unique Id generation

2008-08-13 Thread R . Nils


 I try to generate a unique id for each row in a MySQL InnoDB table.
 Because of many deletes I can't use an auto_increment column.
 After a MySQL restart, the next value for an auto_increment column is
 max(auto_increment column)+1, and I need a really unique id.
 
 [JS] See if the UUID() function will suit you.

Unfortunately, I forgot to note that I use statement-based replication, so I
can't use the UUID() function.

 
 My first solution looks like this:
 I use a second table with an auto-increment-column, and add an insert
 trigger to the first table.
 The insert trigger adds a row in the second table and uses  the
 last_insert_id() to get the unique value.
 The (last) row in the second table will never be deleted.
 Does anybody concur with this solution?
 
 ###
 drop table unique_id_messages_1;
 create table unique_id_messages_1 (id bigint not null unique, subject
 text);
 
 drop table id_sequences_1;
 create table id_sequences_1 (id bigint not null primary key
 auto_increment);
 
 drop trigger trg_unique_id_messages_1;
 
 DELIMITER |
 
 create trigger trg_unique_id_messages_1 BEFORE INSERT ON
 unique_id_messages_1
 FOR EACH ROW BEGIN
   insert into id_sequences_1 values ();
   set NEW.id = (select last_insert_id());
 END;
 |
 DELIMITER ;
 
  insert into unique_id_messages_1 (subject) values ('x1');
  insert into unique_id_messages_1 (subject) values ('x2');
  insert into unique_id_messages_1 (subject) values ('x3');
  insert into unique_id_messages_1 (subject) values ('x4');
  select * from unique_id_messages_1;
 ###
 
 
-- 
Is your browser Vista-compatible? Download the latest
browser versions now: http://www.gmx.net/de/go/browser




Re: Unique Id generation

2008-08-13 Thread R . Nils

 Why would the auto_increment not work for you? The only case where you
 would have a problem is if the last record was deleted before mysql
 shutdown. If you are really concerned about this unique scenario,
 insert a dummy record before shutdown to guard against it and delete
 the dummy record after you start back up.
 
Hmm, and what about a server crash?

 
 




RE: Unique Id generation

2008-08-12 Thread Jerry Schwartz
I try to generate a unique id for each row in a MySQL InnoDB table.
Because of many deletes I can't use an auto_increment column.
After a MySQL restart, the next value for an auto_increment column is
max(auto_increment column)+1, and I need a really unique id.

[JS] See if the UUID() function will suit you.




Re: Unique Id generation

2008-08-12 Thread Brent Baisley
Why would the auto_increment not work for you? The only case where you
would have a problem is if the last record was deleted before mysql
shutdown. If you are really concerned about this unique scenario,
insert a dummy record before shutdown to guard against it and delete
the dummy record after you start back up.
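[Ed. note: Brent's guard might look roughly like this; the `messages` table and `subject` column are placeholders, not from the original posts:]

###
-- Before a planned shutdown: park a dummy row so the highest id survives.
insert into messages (subject) values ('__shutdown_guard__');
-- ... restart mysqld; InnoDB recomputes the counter as max(id)+1 ...
-- After startup the guard row has done its job and can be removed.
delete from messages where subject = '__shutdown_guard__';
###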

--
Brent Baisley

On Tue, Aug 12, 2008 at 8:29 AM,  [EMAIL PROTECTED] wrote:
 Hi all,

 I try to generate a unique id for each row in a Mysql-InnoDB Table. Because 
 of many deletes I  can't use an auto_increment column.
 After a Mysql restart, the next value for an auto_increment-column  is 
 max(auto_increment-column)+1, and I need a really unique id.


 My first solution looks like this:
 I use a second table with an auto-increment column, and add an insert trigger
 to the first table.
 The insert trigger adds a row in the second table and uses
 last_insert_id() to get the unique value.
 The (last) row in the second table will never be deleted.
 Can anybody confirm this solution?

 ###
 drop table unique_id_messages_1;
 create table unique_id_messages_1 (id bigint not null unique, subject text);

 drop table id_sequences_1;
 create table id_sequences_1 (id bigint not null primary key auto_increment);

 drop trigger trg_unique_id_messages_1;

 DELIMITER |

 create trigger trg_unique_id_messages_1 BEFORE INSERT ON unique_id_messages_1
 FOR EACH ROW BEGIN
  insert into id_sequences_1 values ();
  set NEW.id = (select last_insert_id());
 END;
 |
 DELIMITER ;

 insert into unique_id_messages_1 (subject) values ('x1');
 insert into unique_id_messages_1 (subject) values ('x2');
 insert into unique_id_messages_1 (subject) values ('x3');
 insert into unique_id_messages_1 (subject) values ('x4');
 select * from unique_id_messages_1;
 ###


 Thanks in advance

 Rudi










RE: Unique Id generation

2008-08-12 Thread Fish Kungfu
I agree with Jerry.  Take a look at the UUID() function.
http://dev.mysql.com/doc/refman/5.1/en/miscellaneous-functions.html#function_uuid
Second Life (http://secondlife.com) uses UUIDs to track millions and millions 
of unique items every day.
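[Ed. note: a minimal sketch of using UUID() for row identity; the table name is a placeholder:]

###
create table messages_uuid (
  id char(36) not null primary key,  -- UUID() returns a 36-char string
  subject text
);
insert into messages_uuid (id, subject) values (UUID(), 'x1');
###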

Cheers,
Fish


-Original Message-
From: Jerry Schwartz [EMAIL PROTECTED]
Sent: Aug 12, 2008 9:46 AM
To: [EMAIL PROTECTED], mysql@lists.mysql.com
Subject: RE: Unique Id generation

I try to generate a unique id for each row in a Mysql-InnoDB Table.
Because of many deletes I  can't use an auto_increment column.
After a Mysql restart, the next value for an auto_increment-column  is
max(auto_increment-column)+1, and I need a really unique id.

[JS] See if the UUID() function will suit you.

My first solution looks like this:
I use a second table with an auto-increment column, and add an insert
trigger to the first table.
The insert trigger adds a row in the second table and uses
last_insert_id() to get the unique value.
The (last) row in the second table will never be deleted.
Can anybody confirm this solution?

###
drop table unique_id_messages_1;
create table unique_id_messages_1 (id bigint not null unique, subject
text);

drop table id_sequences_1;
create table id_sequences_1 (id bigint not null primary key
auto_increment);

drop trigger trg_unique_id_messages_1;

DELIMITER |

create trigger trg_unique_id_messages_1 BEFORE INSERT ON
unique_id_messages_1
FOR EACH ROW BEGIN
  insert into id_sequences_1 values ();
  set NEW.id = (select last_insert_id());
END;
|
DELIMITER ;

insert into unique_id_messages_1 (subject) values ('x1');
insert into unique_id_messages_1 (subject) values ('x2');
insert into unique_id_messages_1 (subject) values ('x3');
insert into unique_id_messages_1 (subject) values ('x4');
select * from unique_id_messages_1;
###


Thanks in advance

Rudi















Re: Unique Rowid

2007-07-19 Thread Olaf Stein
There is a general concern if the user can manipulate variables so that they
pose a threat to your app...


On 7/19/07 12:56 AM, John Comerford [EMAIL PROTECTED] wrote:

 Hi Folks,
 
 I am in the process of putting together a web application.  I have
 decided to add a 'RowId' field to all my tables and assign it a unique
 number so that I can use it to retrieve data.  In concept this number might
 be passed back to the server as part of an ajax request etc.  My
 question is what is the best way to assign this number.  My strongest
 feeling is to use an MD5 string:
 
 Advantages:
 1) It is not 'guessable', a sequence number might open up some security
 issues by allowing a user to replace the RowId if it is held in hidden
 field or something like that.
 2) It is 'sort of' unique, with the chances of it being a duplicate fairly
 slim (I really only need it to be unique per table).
 
 Dis-advantages:
 1) At 32 characters long it's pretty difficult to work with from an
 admin point of view
 2) Performance, I am still fairly green with MySQL but I would have to
 assume that having an extra 32 char field in a table is going to have a
 performance impact.
 
 Do you guys reckon the performance hit will be significant (I know this
 is a bit of a 'how long is a piece of string' question).
 Does anybody know of a better way to do this ?  I was also considering
 doing something along the lines of   ProcessID + Year + Month + Day +
 Hour + Second + MilliSecond + 4-digit random; would this be a better idea?
 
 TIA,
   JC





-
Olaf Stein
DBA
Center for Quantitative and Computational Biology
Columbus Children's Research Institute
700 Children's Drive
phone: 1-614-355-5685
cell: 1-614-843-0432
email: [EMAIL PROTECTED]





Re: Unique Rowid

2007-07-19 Thread Perrin Harkins

On 7/19/07, John Comerford [EMAIL PROTECTED] wrote:

I am in the process of putting together a web application.  I have
decided to add a 'RowId' field to all my tables and assign it a unique
number so that I can use it to retrieve data.  In concept this number might
be passed back to the server as part of an ajax request etc.  My
question is what is the best way to assign this number.


An auto_increment integer.


1) It is not 'guessable', a sequence number might open up some security
issues by allowing a user to replace the RowId if it is held in hidden
field or something like that.


You could use an MD5 (or preferably SHA1) digest to prevent
that.  Here's an example:
http://www.oreilly.com/catalog/cgi2/chapter/ch08.html
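[Ed. note: the idea from the linked chapter, sketched in SQL; `item` and 's3cret' are placeholders, the latter standing in for a server-side secret that never reaches the browser:]

###
-- Expose a non-guessable token instead of the raw auto_increment id.
select id, sha1(concat(id, 's3cret')) as public_token from item;
-- On each request, recompute the digest for the submitted id and
-- compare it with the submitted token before trusting the id.
###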


2) Performance, I am still fairly green with MySQL but I would have to
assume that having an extra 32 char field in a table is going to have a
performance impact.


http://www.mysqlperformanceblog.com/2007/06/18/using-char-keys-for-joins-how-much-is-the-overhead/

- Perrin




RE: UNIQUE KEY vs NULLs

2006-12-11 Thread emierzwa
It is expected behavior, you can make the unique key a primary key
instead. This should prevent this situation.

Ed 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 11, 2006 7:42 AM
To: mysql@lists.mysql.com
Subject: UNIQUE KEY vs NULLs

Hi,

I have an InnoDB table similar to this:

CREATE TABLE Target
(IMSI VARCHAR(15) ASCII,
 IMEI VARCHAR(15) ASCII,
 UNIQUE KEY (IMSI, IMEI));

After playing a bit with it, I managed to add duplicate records, if one
of
the fields was a NULL:

+------+----------------+
| IMSI | IMEI           |
+------+----------------+
| NULL | 35195600126418 |
| NULL | 35195600126418 |
+------+----------------+


Is this a bug, or a feature? :-)

If it is a feature, then how can I assure uniqueness for a table in a
sense that won't allow such duplicates?

Thx

ImRe








Re: UNIQUE KEY vs NULLs

2006-12-11 Thread Dan Buettner

This is a feature - a NULL value is an undefined value, therefore two
NULL values are not the same.  It can be a little confusing, but it makes
sense when you think about it.

A UNIQUE index does ensure that non-NULL values are unique; you could
specify that your column not accept NULL values.
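[Ed. note: applying Dan's suggestion to the table from the original post, with '' standing in for "unknown"; whether that sentinel is acceptable depends on the application:]

###
CREATE TABLE Target
(IMSI VARCHAR(15) ASCII NOT NULL DEFAULT '',
 IMEI VARCHAR(15) ASCII NOT NULL DEFAULT '',
 UNIQUE KEY (IMSI, IMEI));
-- With no NULLs possible, the UNIQUE key now rejects the duplicate pair.
###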

Dan


On 12/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Hi,

I have an InnoDB table similar to this:

CREATE TABLE Target
(IMSI VARCHAR(15) ASCII,
 IMEI VARCHAR(15) ASCII,
 UNIQUE KEY (IMSI, IMEI));

After playing a bit with it, I managed to add duplicate records, if one of
the fields was a NULL:

+------+----------------+
| IMSI | IMEI           |
+------+----------------+
| NULL | 35195600126418 |
| NULL | 35195600126418 |
+------+----------------+


Is this a bug, or a feature? :-)

If it is a feature, then how can I assure uniqueness for a table in a sense
that won't allow such duplicates?

Thx

ImRe










Re: UNIQUE KEY vs NULLs

2006-12-11 Thread Martijn Tonies

 I have an InnoDB table similar to this:

 CREATE TABLE Target
 (IMSI VARCHAR(15) ASCII,
  IMEI VARCHAR(15) ASCII,
  UNIQUE KEY (IMSI, IMEI));

 After playing a bit with it, I managed to add duplicate records, if one of
 the fields was a NULL:

 +------+----------------+
 | IMSI | IMEI           |
 +------+----------------+
 | NULL | 35195600126418 |
 | NULL | 35195600126418 |
 +------+----------------+


 Is this a bug, or a feature? :-)

A feature. NULL isn't equal to NULL.

 If it is a feature, then how can I assure uniqueness for a table in a
 sense that won't allow such duplicates?

If you don't want to have NULL, use a primary key instead
of a unique key.


Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts:
http://blog.upscene.com/martijn/
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





RE: UNIQUE KEY vs NULLs

2006-12-11 Thread imre

 From: Dan Buettner [mailto:[EMAIL PROTECTED] 
 This is a feature - a NULL value is an undefined value, 
 therefore two NULL values are not the same.  Can be a little 
 confusing but makes sense when you think about it.
 
 A UNIQUE index does ensure that non-NULL values are unique; 
 you could specify that your column not accept NULL values.

I was afraid to hear something like this.  I'd rather not use some invalid
value to mark unknown fields.  So I went a bit further, and tried to ensure
the uniqueness of the null values with a trigger.

CREATE TRIGGER Target_Before_Insert BEFORE INSERT ON Target
FOR EACH ROW IF NEW.IMSI IS NULL OR NEW.IMEI IS NULL THEN
  BEGIN
DECLARE c_ INT UNSIGNED;
SELECT COUNT(*) INTO c_ FROM Target 
 WHERE IMSI = NEW.IMSI AND IMEI = NEW.IMEI;
IF c_ THEN SET NEW.Id = NULL;
END IF;
  END;
END IF;

Here Id is a NOT NULL field, so setting it to NULL should trigger an error.
But when I run an insert where the trigger body would run, I get the
following error:
Table 'Target' was not locked with LOCK TABLES

I am pretty sure I don't use LOCK TABLES at all. What is going on here?
How can I get rid of this error?
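[Ed. note: on MySQL 5.5 and later a trigger can raise an error directly with SIGNAL instead of the NEW.Id = NULL trick. Note also that the original trigger's `IMSI = NEW.IMSI` comparison never matches a NULL; the NULL-safe `<=>` operator avoids that. Whether a SELECT against the table being inserted into is permitted inside its own trigger depends on the server version, so treat this only as a sketch of the shape:]

###
DELIMITER |
CREATE TRIGGER Target_Before_Insert BEFORE INSERT ON Target
FOR EACH ROW BEGIN
  DECLARE c_ INT UNSIGNED;
  IF NEW.IMSI IS NULL OR NEW.IMEI IS NULL THEN
    SELECT COUNT(*) INTO c_ FROM Target
     WHERE IMSI <=> NEW.IMSI AND IMEI <=> NEW.IMEI;  -- <=> is NULL-safe
    IF c_ THEN
      SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'duplicate (IMSI, IMEI)';
    END IF;
  END IF;
END;
|
DELIMITER ;
###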

Thx

ImRe


 
 Dan
 
 
 On 12/11/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Hi,
 
  I have an InnoDB table similar to this:
 
  CREATE TABLE Target
  (IMSI VARCHAR(15) ASCII,
   IMEI VARCHAR(15) ASCII,
   UNIQUE KEY (IMSI, IMEI));
 
  After playing a bit with it, I managed to add duplicate records, if 
  one of the fields was a NULL:
 
  +------+----------------+
  | IMSI | IMEI           |
  +------+----------------+
  | NULL | 35195600126418 |
  | NULL | 35195600126418 |
  +------+----------------+
 
 
  Is this a bug, or a feature? :-)
 
  If it is a feature, then how can I assure uniqueness for a table in a
  sense that won't allow such duplicates?
 
  Thx
 
  ImRe
 
 
 
 
 
 
 
 






Re: UNIQUE constraint, proper use

2005-11-22 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Ferindo Middleton Jr [EMAIL PROTECTED] writes:

 I have this SQL statement:
 CREATE TABLE rooms (
  idSERIAL,
  room_name   TEXT UNIQUE,
  location  TEXT,
  last_updated  TIMESTAMP DEFAULT
 CURRENT_TIMESTAMP NOT NULL,
  last_user_updatedTEXT,
  PRIMARY KEY(id)
 );

 When I run this I get the following error:
 ERROR 1170 (42000): BLOB/TEXT column 'room_name' used in key
 specification without a key length

MySQL can index only the first N chars of a BLOB or TEXT column; you
must supply the value of N.
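[Ed. note: a sketch of the two fixes. The 100-character prefix is an arbitrary choice, and note that a UNIQUE key on a prefix enforces uniqueness of the prefix only:]

###
-- Either supply a key length for the TEXT column...
CREATE TABLE rooms (
  id        SERIAL,
  room_name TEXT,
  UNIQUE KEY (room_name(100)),   -- index only the first 100 chars
  PRIMARY KEY (id)
);
-- ...or use a bounded VARCHAR, which can be indexed in full:
--   room_name VARCHAR(255) UNIQUE
###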

 I'm more used to PostgreSQL and when I run this command there, I don't
 get this error message.

Even in PostgreSQL it's not always beneficial to put a long bytea
column into an index.  Can't you use a surrogate key?





Re: UNIQUE constraint, proper use

2005-11-22 Thread Martijn Tonies


 I have this SQL statement:

 CREATE TABLE rooms (
  idSERIAL,
  room_name   TEXT UNIQUE,
  location  TEXT,
  last_updated  TIMESTAMP DEFAULT
 CURRENT_TIMESTAMP NOT NULL,
  last_user_updatedTEXT,
  PRIMARY KEY(id)
 );

 When I run this I get the following error:
 ERROR 1170 (42000): BLOB/TEXT column 'room_name' used in key
 specification without a key length

 I'm more used to PostgreSQL and when I run this command there, I don't
 get this error message. I recognized that by changing room_name to a
 varchar type, I won't get the error message.

TEXT in MySQL, is a specific BLOB-type column. It isn't able to guarantee
uniqueness on the full blob. I wonder if that is what you want.

VARCHAR is not a blob and can be indexed (a unique constraint creates
an index automatically).

With regards,

Martijn Tonies
Database Workbench - tool for InterBase, Firebird, MySQL, Oracle & MS SQL
Server
Upscene Productions
http://www.upscene.com
Database development questions? Check the forum!
http://www.databasedevelopmentforum.com





Re: UNIQUE constraint, proper use

2005-11-21 Thread Paul DuBois

At 22:49 -0500 11/21/05, Ferindo Middleton Jr wrote:

I have this SQL statement:

CREATE TABLE rooms (
idSERIAL,
room_name   TEXT UNIQUE,
location  TEXT,
last_updated  TIMESTAMP DEFAULT 
CURRENT_TIMESTAMP NOT NULL,

last_user_updatedTEXT,
PRIMARY KEY(id)
);

When I run this I get the following error:
ERROR 1170 (42000): BLOB/TEXT column 'room_name' used in key 
specification without a key length


I'm more used to PostgreSQL and when I run this command there, I 
don't get this error message. I recognized that by changing 
room_name to a varchar type, I won't get the error message.


Also, the PostgreSQL mailing lists had a special group from 
SQL-related issue, but I didn't see one of these types of lists in 
the lists of groups for MySQL community so I'm sorry if I irritate 
anyone by posting to the wrong group. Thanks.


What is your question?

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




Re: UNIQUE and INDEX using same field.

2005-10-28 Thread SGreen
Michael J. Pawlowsky [EMAIL PROTECTED] wrote on 10/28/2005 11:28:42 
AM:

 
 When creating an index in phpMySQL I get a warning message about having
 2 indexes using the same field.
 
 The table is simply a product comment table.
 
 The first index is simply an INDEX on the product_id to speed up
 displaying them when someone wants to lookup the comments for that 
product.
 The second is a UNIQUE constraint on the product_id and the user_id to
 make sure a user does not insert more than one comment per product.
 
 phpMySQL shows me the following warning:
 *UNIQUE and INDEX keys should not both be set for column `product_id`
 
 *If that really is bad to do...  besides programatically how would I do
 this.
 
 
 Thanks,
 Mike
 

A UNIQUE *is* an INDEX with an added twist. It's a special kind of INDEX 
that only allows single combinations of the values in the columns that 
compose the INDEX.  A regular INDEX does not have this restriction.

Defining 

UNIQUE(product_id, user_id)

creates an INDEX. Any index that uses more than one column also acts as an
index for each left-most set of columns, including just the first one.
Your UNIQUE and your INDEX overlap, so the INDEX is not necessary.
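[Ed. note: in concrete terms, for Mike's table; the table and index names are placeholders:]

###
-- The composite UNIQUE already covers lookups on product_id alone,
-- so the single-column index is redundant and can be dropped:
ALTER TABLE product_comments DROP INDEX product_id_idx;
-- This lookup can still use the left-most column of the UNIQUE key:
EXPLAIN SELECT * FROM product_comments WHERE product_id = 42;
###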

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: UNIQUE and INDEX using same field.

2005-10-28 Thread Michael J. Pawlowsky

[EMAIL PROTECTED] wrote:

Michael J. Pawlowsky [EMAIL PROTECTED] wrote on 10/28/2005 11:28:42 AM:

When creating an index in phpMySQL I get a warning message about having
2 indexes using the same field.

The table is simply a product comment table.

The first index is simply an INDEX on the product_id to speed up
displaying them when someone wants to look up the comments for that product.
The second is a UNIQUE constraint on the product_id and the user_id to
make sure a user does not insert more than one comment per product.

phpMySQL shows me the following warning:
*UNIQUE and INDEX keys should not both be set for column `product_id`

*If that really is bad to do...  besides programmatically how would I do
this.

Thanks,
Mike


A UNIQUE *is* an INDEX with an added twist. It's a special kind of INDEX
that only allows single combinations of the values in the columns that
compose the INDEX.  A regular INDEX does not have this restriction.

Defining

UNIQUE(product_id, user_id)

creates an INDEX. Any index that uses more than one column also acts as an
index for each left-most set of columns, including just the first one.
Your UNIQUE and your INDEX overlap, so the INDEX is not necessary.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine




Thanks for the answer.

I know it is an index...  I did not know that an index using 2 columns
will be used when running a statement that uses only the first of those
columns.

Just for the fun of it I will trace a query to see if it uses it.

Thanks again,
Mike






Re: UNIQUE issue at mysql 4.* and mysql 5.*

2005-08-02 Thread Gleb Paharenko
Hello.

It works for me on MySQL 5.0.10:

mysql> CREATE TABLE `manager_new` (
    ->   `id` bigint(20) NOT NULL auto_increment,
    ->   `id_from_hits` bigint(20) default NULL,
    ->   `from_ip` varchar(15) default NULL,
    ->   `pool` char(1) NOT NULL default 'n',
    ->   `to_ip` varchar(15) default '',
    ->   `lstupdate` date default NULL,
    ->   `comment` varchar(50) default NULL,
    ->   `added_by` enum('yariv','eddie','lisman','sweiss','interface') default 'interface',
    ->   `black_white` char(1) NOT NULL default 'B',
    ->   PRIMARY KEY  (`id`),
    ->   UNIQUE KEY `from_ip` (`from_ip`,`to_ip`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.22 sec)

mysql> insert into manager_new (id_from_hits,from_ip,to_ip,lstupdate,comment)
    -> values (39,'192.118.68.5','192.118.68.5',now(),'added automatically');
Query OK, 1 row affected (0.04 sec)

mysql> select version();
+-----------------------+
| version()             |
+-----------------------+
| 5.0.10-beta-debug-log |
+-----------------------+





Yariv Graf [EMAIL PROTECTED] wrote:

 Hi all

 I've created a table in mysql 4.1.12 and 5.0.9-beta-standard as follows:

 | manager_new | CREATE TABLE `manager_new` (
  `id` bigint(20) NOT NULL auto_increment,
  `id_from_hits` bigint(20) default NULL,
  `from_ip` varchar(15) default NULL,
  `pool` char(1) NOT NULL default 'n',
  `to_ip` varchar(15) default '',
  `lstupdate` date default NULL,
  `comment` varchar(50) default NULL,
  `added_by` enum('yariv','eddie','lisman','sweiss','interface') default 'interface',
  `black_white` char(1) NOT NULL default 'B',
  PRIMARY KEY  (`id`),
  UNIQUE KEY `from_ip` (`from_ip`,`to_ip`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1

 The two tables are EMPTY. When I try this insert in each version I get:

 mysql 5

 mysql> insert into manager_new
 (id_from_hits,from_ip,to_ip,lstupdate,comment) values
 (39,'192.118.68.5','192.118.68.5',now(),'added automatically');
 ERROR 1062 (23000): Duplicate entry '192.118.68.5-192.118.68.5' for key 2

 mysql 4

 mysql> insert into manager_new
 (id_from_hits,from_ip,to_ip,lstupdate,comment) values
 (39,'192.118.68.5','192.118.68.5',now(),'added automatically');
 Query OK, 1 row affected (0.00 sec)

 Is there any change in the way that mysql 5 treats UNIQUE indexes?
 Or is it a bug?

 Best regards

 Yariv


-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







Re: unique by field issue

2005-07-01 Thread SGreen
Seth Leonard [EMAIL PROTECTED] wrote on 07/01/2005 04:43:05 PM:

 
 I constantly bump up against this issue and am looking for a good
 solution.  I want a query that will return multiple rows, but only one
 row per unique identifier in one field.
 
 
 
 For instance, if I have the following table:
 
 
 
 Name | Food | Eat_Date
 
 Bob | Hot Dog | 2005-06-25
 
 Jan | Pizza | 2005-06-27
 
 Jim | Salad | 2005-05-25
 
 Bob | Soup | 2005-06-03
 
 Ann | Toast | 2005-06-13
 
 Jim | Pizza | 2005-06-28
 
 
 
 I want to be able to pull the most recent food by each name without
 running multiple queries.  I need a query that would give the results:
 
 
 
 Bob | Hot Dog | 2005-06-25
 
 Jan | Pizza | 2005-06-27
 
 Ann | Toast | 2005-06-13
 
 Jim | Pizza | 2005-06-28
 
 
 
 Anyone have ideas of how to do this in one query?
 
 
 
 Thanks,
 
 Seth
 

Sorry, even the single-statement version of the solution to this kind of 
query uses a subquery. So, it is technically two queries in one statement. 
There is no single-query solution to this problem, yet (you might count 
the max-concat hackbut I won't becaue it is so inefficient)

http://dev.mysql.com/doc/mysql/en/example-maximum-column-group-row.html
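[Ed. note: a sketch of the subquery approach from that manual page, applied to Seth's data; `meals` is a placeholder table name:]

###
SELECT m.Name, m.Food, m.Eat_Date
FROM meals m
JOIN (SELECT Name, MAX(Eat_Date) AS Eat_Date
      FROM meals
      GROUP BY Name) latest
  ON m.Name = latest.Name AND m.Eat_Date = latest.Eat_Date;
-- Note: ties on Eat_Date return more than one row per name.
###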

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Unique Index on multiple columns that can contain NULL in MySQL

2005-05-05 Thread Jigal van Hemert
From: Dennis Fogg
 I'm getting lots of duplicate rows even though I have a
 unique index defined over multiple columns.
 The issue is that multiple NULL values are allowed,
 even when some values are not null.
 This could be as specified by the SQL standard,
 but it's certainly confusing for the developer.

It is confusing. You have to 'grok' NULL values to understand the problems
that may arise.
In Boolean logic you're used to two distinct values TRUE and FALSE. The
introduction of NULL actually introduced a second outcome of comparison
operators and functions. Since NULL represents unknown, comparing NULL to
NULL will result in MAYBE.
If you look at NULL as being a yet unknown variable it starts to make sense:
(x = 2) : maybe true, maybe false, depending on the value of 'x'.
(x != 2) : maybe true, maybe false, depending on the value of 'x'.
In MySQL the outcome MAYBE is represented by NULL, so (NULL = NULL) -> NULL;
and (NULL != 2) -> NULL; (NULL * 1) -> NULL; etcetera.

Regarding indexes, there doesn't seem to be any logic involved, other than
what the standard says. UNIQUE indexes may have multiple NULL values
(excepting BDB tables). A PRIMARY key is defined as a combination of UNIQUE
and NOT NULL.

 Here's the test case:

 Goal: prevent duplicate rows on the (c1, c2) pair:
This depends on your definition of 'duplicate' in the light of the
NULL-logic I explained earlier.
If there is already a 1-1 pair in the db, inserting another 1-1 pair will
require the index to compare the entry to be inserted with the current
entries in the index. Comparing 1-1 to 1-1 will result in TRUE, so the new
entry is rejected.
If you try the same with a NULL-NULL pair in the db and you try to insert
another NULL-NULL pair, the comparison does not result in TRUE (it would be
MAYBE, thus NULL) and the entry is accepted.

 +----+------+------+
 | pk | c1   | c2   |
 +----+------+------+
 |  1 | 1    | NULL |
 |  2 | 1    | NULL |
 |  3 | 1    |      |
 |  4 | NULL | NULL |
 |  5 | NULL | NULL |
 +----+------+------+
 5 rows in set (0.00 sec)


 Note: this works even with the bdb engine in MySQL:

 mysql> alter table test_multi_column_null engine = bdb;
 Query OK, 5 rows affected (0.03 sec)
 Records: 5  Duplicates: 0  Warnings: 0

Does it? Try SHOW CREATE TABLE tablename; and check whether the engine is
really BDB.
If you do an ALTER TABLE tablename ENGINE=current_engine_type; the table
will be rebuilt anyway.

I tried your example and the engine type remained MyISAM...

 Conclusion: if you want to enforce uniqueness,
 don't use columns that allow NULL.

That entirely depends on your definition of 'uniqueness' as I explained
before. If you consider NULL to be equal to NULL (which it is not) then you
should only use NOT NULL columns.

Regards, Jigal.





Re: Unique Index on multiple columns that can contain NULL in MySQL

2005-05-05 Thread Martijn Tonies


 From: Dennis Fogg
  I'm getting lots of duplicate rows even though I have a
  unique index defined over multiple columns.
  The issue is that multiple NULL values are allowed,
  even when some values are not null.
  This could be as specified by the SQL standard,
  but it's certainly confusing for the developer.

 It is confusing. You have to 'grok' NULL values to understand the
problems
 that may arise.
 In Boolean logic you're used to two distinct values TRUE and FALSE. The
 introduction of NULL actually introduced a second outcome of comparison
 operators and functions. Since NULL represents unknown, comparing NULL
to
 NULL will result in MAYBE.
 If you look at NULL as being a yet unknown variable it starts to make
sense:
 (x = 2) : maybe true, maybe false, depending on the value of 'x'.
 (x != 2) : maybe true, maybe false, depending on the value of 'x'.
 In MySQL the outcome MAYBE is represented by NULL, so (NULL = NULL) -> NULL;
 and (NULL != 2) -> NULL; (NULL * 1) -> NULL; etcetera.

 Regarding indexes, there doesn't seem to be any logic involved, other than
 what the standard says. UNIQUE indexes may have multiple NULL values
 (excepting BDB tables). A PRIMARY key is defined as a combination of
UNIQUE
 and NOT NULL.

Since when does the standard handle Indices?

It's about time that MySQL gets Unique Constraints - these are defined by
the SQL standard :-)

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: Unique Index on multiple columns that can contain NULL in MySQL

2005-05-04 Thread Hank
I think you should review the very recent thread why NOT NULL in
PRIMARY key?? which might shed some light on your particular issue.

In a nutshell, NULL!=NULL, so the database engine can not detect the
duplicate rows, as is expected.

-Hank

On 5/4/05, Dennis Fogg [EMAIL PROTECTED] wrote:
 I'm getting lots of duplicate rows even though I have a
 unique index defined over multiple columns.




Re: Unique items from all columns, very slow

2005-04-21 Thread SGreen
Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 05:46:25 
PM:

 On 4/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 01:56:03 PM:
   On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
    In the last episode (Apr 20), Willie Gnarlson said:
     I have been attempting to uncover a fast method for retrieving unique
     items from every column in a table. The table has approximately 60
     thousand rows. Each row has an index associated with it, and running
     'explain select distinct class from dlist' shows that it is using the
     index.

     However, since there are 24 columns, running that select query for
     each column takes about 4 long seconds on a P4 2.4 Ghz.

 snip - see previous posts on this thread for details
 Okay, they aren't creating new values using that form. The form I've
 been talking about is a form to limit results from the database to
 only specific values for specific fields without the user having to
 know from memory what the possible values are (plus, they'll change
 every so often).

 snip

 --
 Willie Gnarlson

If I read that last part correctly, you have already identified the need 
for a mechanism for managing the value lists separately from what the user 
enters. In this case you should have everything in place (or almost in 
place) to put each list into its own table (Someone else suggested this 
design, too).

This kind of arrangement is very common within databases that support 
robust user interfaces. It takes WAY too long (as you already found out) 
to dynamically regenerate the full list of allowable values every time. 
What happens to your select boxes if you have an empty data table? Does 
that mean that the users can pick from NO values in any column?  By 
putting each pick list in its own table, you separate UI support data from 
your application's real data. 

Run your queries once to build your lists then use another form (or even 
use a separate application) to manage the lists. Because it's a one-time 
event, getting the first set of unique values can take all night if it 
needs to. What you need to be fast is the building of the select boxes. By 
having those lists ready-to-go on their own tables, it will be as fast as 
a sequential read from the disk (very fast) or a read from the disk cache 
(faster) or a read from the query cache (fastest). In any case, running 
the queries will no longer slow you down.
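[Ed. note: a sketch of the pick-list rebuild for one column; the column type is assumed:]

###
CREATE TABLE class_values (class VARCHAR(64) NOT NULL PRIMARY KEY);
-- One-time (or scheduled) rebuild; slow is fine here:
INSERT IGNORE INTO class_values
  SELECT DISTINCT class FROM dlist WHERE class IS NOT NULL;
-- Building the select box is then a fast read of a tiny table:
SELECT class FROM class_values ORDER BY class;
###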

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Unique items from all columns, very slow

2005-04-21 Thread Willie Gnarlson
On 4/21/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 05:46:25 PM:
  On 4/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 01:56:03 PM:
    On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
     In the last episode (Apr 20), Willie Gnarlson said:
      I have been attempting to uncover a fast method for retrieving unique
      items from every column in a table. The table has approximately 60
      thousand rows. Each row has an index associated with it, and running
      'explain select distinct class from dlist' shows that it is using the
      index.

      However, since there are 24 columns, running that select query for
      each column takes about 4 long seconds on a P4 2.4 Ghz.

   snip - see previous posts on this thread for details
    Okay, they aren't creating new values using that form. The form I've
    been talking about is a form to limit results from the database to
    only specific values for specific fields without the user having to
    know from memory what the possible values are (plus, they'll change
    every so often).

(...)

  
 If I read that last part correctly, you have already identified the need for
 a mechanism for managing the value lists separately from what the user
 enters. In this case you should have everything in place (or almost in
 place) to put each list into its own table (Someone else suggested this
 design, too). 

Right, yes.
  
(...)
 Run your queries once to build your lists then use another form (or even use
 a separate application) to manage the lists. Because it's a one-time event,
 getting the first set of unique values can take all night if it needs to.
 What you need to be fast is the building of the select boxes. By having
 those lists ready-to-go on their own tables, it will be as fast as a
 sequential read from the disk (very fast) or a read from the disk cache
 (faster) or a read from the query cache (fastest). In any case, running the
 queries will no longer slow you down. 

It actually seems slower. The separate tables from a previous try look
like this:

CREATE TABLE `ET` (
  `ET` double NOT NULL default '0',
  PRIMARY KEY  (`ET`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

That table has 4781 records, but some have more (one has ~18
thousand). There are 42 column tables.

Querying all of them took 5.58 seconds.

-- 
Willie Gnarlson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unique items from all columns, very slow

2005-04-21 Thread Willie Gnarlson
On 4/21/05, Willie Gnarlson [EMAIL PROTECTED] wrote:

(...)

 It actually seems slower. The separate tables from a previous try look
 like this:
 
 CREATE TABLE `ET` (
   `ET` double NOT NULL default '0',
   PRIMARY KEY  (`ET`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 That table has 4781 records, but some have more (one has ~18
 thousand). There are 42 column tables.
 
 Querying all of them took 5.58 seconds.

I forgot to mention that those tables *do* have unique items, too. 

-- 
Willie Gnarlson




Re: Unique items from all columns, very slow

2005-04-21 Thread SGreen
Willie Gnarlson [EMAIL PROTECTED] wrote on 04/21/2005 01:39:15 
PM:

 On 4/21/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 
 05:46:25  PM:
On 4/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005
  01:56:03 PM:
   On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
In the last episode (Apr 20), Willie Gnarlson said:
 I have been attempting to uncover a fast method for retrieving unique
 items from every column in a table. The table has approximately 60
 thousand rows. Each row has an index associated with it, and running
 'explain select distinct class from dlist' shows that it is using the
 index.

 However, since there are 24 columns, running that select query for
 each column takes about 4 long seconds on a P4 2.4 GHz.

   snip - see previous posts on this thread for details
Okay, they aren't creating new values using that form. The form 
I've
been talking about is a form to limit results from the database to
only specific values for  specific fields without the user having 
to
know from memory what the possible values are (plus, they'll change
every so often).

 
 (...)
 
  
  If I read that last part correctly, you have already identified the 
need for
  a mechanism for managing the value lists separately from what the user
  enters. In this case you should have everything in place (or almost in
  place) to put each list into its own table (Someone else suggested 
this
  design, too). 
 
 Right, yes.
 
 (...)
  Run your queries once to build your lists then use another form (or 
even use
  a separate application) to manage the lists. Because it's a one-time 
event,
  getting the first set of unique values can take all night if it needs 
to.
  What you need to be fast is the building of the select boxes. By 
having
  those lists ready-to-go on their own tables, it will be as fast as a
  sequential read from the disk (very fast) or a read from the disk 
cache
  (faster) or a read from the query cache (fastest). In any case, 
running the
  queries will no longer slow you down. 
 
 It actually seems slower. The separate tables from a previous try look
 like this:
 
 CREATE TABLE `ET` (
   `ET` double NOT NULL default '0',
   PRIMARY KEY  (`ET`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
 That table has 4781 records, but some have more (one has ~18
 thousand). There are 42 column tables.
 
 Querying all of them took 5.58 seconds.
 
 -- 
 Willie Gnarlson

I am SO confused!!! (and that's hard to do) You have a lookup list with 
eighteen thousand entries in it? Your users must pick 1 of 4781 floating 
point numbers from a select box? What kind of information are you trying 
to work with? I cannot imagine an application that uses data like you are 
describing. Can you help a poor fellow out and let us in on what you are 
working on?

Here's a question: it may have taken 5.58 seconds for the first pass 
through all 42 tables, but how fast was the second pass? Was that 5.58 
measuring just the queries, or the time it took to build the select boxes, 
too? As far as performance goes, it may be faster to check the user's 
entries during the form validation/processing phase than it will be to 
force them to enter the information the correct way the first time. 

Thanks!

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Re: Unique items from all columns, very slow

2005-04-21 Thread Willie Gnarlson
On 4/21/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Willie Gnarlson [EMAIL PROTECTED] wrote on 04/21/2005 01:39:15
 PM:
  On 4/21/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
   Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005
  05:46:25  PM:
 On 4/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
  Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005
   01:56:03 PM:
On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Apr 20), Willie Gnarlson said:
  I have been attempting to uncover a fast method for retrieving unique
  items from every column in a table. The table has approximately 60
  thousand rows. Each row has an index associated with it, and running
  'explain select distinct class from dlist' shows that it is using the
  index.

  However, since there are 24 columns, running that select query for
  each column takes about 4 long seconds on a P4 2.4 GHz.

snip - see previous posts on this thread for details
 Okay, they aren't creating new values using that form. The form
 I've
 been talking about is a form to limit results from the database to
 only specific values for  specific fields without the user having
 to
 know from memory what the possible values are (plus, they'll change
 every so often).

 
  (...)
 
  
   If I read that last part correctly, you have already identified the
 need for
   a mechanism for managing the value lists separately from what the user
   enters. In this case you should have everything in place (or almost in
   place) to put each list into its own table (Someone else suggested
 this
   design, too).
 
  Right, yes.
 
  (...)
   Run your queries once to build your lists then use another form (or
 even use
   a separate application) to manage the lists. Because it's a one-time
 event,
   getting the first set of unique values can take all night if it needs
 to.
   What you need to be fast is the building of the select boxes. By
 having
   those lists ready-to-go on their own tables, it will be as fast as a
   sequential read from the disk (very fast) or a read from the disk
 cache
   (faster) or a read from the query cache (fastest). In any case,
 running the
   queries will no longer slow you down.
 
  It actually seems slower. The separate tables from a previous try look
  like this:
 
  CREATE TABLE `ET` (
`ET` double NOT NULL default '0',
  PRIMARY KEY  (`ET`)
  ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
 
  That table has 4781 records, but some have more (one has ~18
  thousand). There are 42 column tables.
 
  Querying all of them took 5.58 seconds.
 
  --
  Willie Gnarlson
 
 I am SO confused!!! (and that's hard to do) You have a lookup list with
 eighteen thousand entries in it? Your users must pick 1 of 4781 floating
 point numbers from a select box? What kind of information are you trying
 to work with? I cannot imagine an application that uses data like you are
 describing. Can you help a poor fellow out and let us in on what you are
 working on?

LOL! 

It sounds a little insane, I'll agree. Actually that table example I
provided holds Elapsed Times. Many, many entries can contain the same
ET.

 Here's a question, it may have taken 5.58 seconds for the first pass
 through all 42 tables but how fast was the second pass? Was that 5.58
 measuring just the queries or the time it took to build the select boxes,

No, I singled out the code up to the end of the queries, and that alone
takes 5.58 seconds. Yipes, the CPU isn't under any real load now;
previously it was. Sorry. It still feels too slow to me.

I thought the caching might have helped, but apparently:

1 pass:  3.35 seconds
2 pass:  3.64 seconds (!?)
3 pass:  3.36 seconds

 too? As far as performance goes, It may be faster to check the user's
 entries during the form validation/processing phase than it will be to
 force them to enter the information the correct way the first time.

I was trying to set up a series of select boxes for limiting a
search. I may have to re-think this if I can't get the data from the
tables fast enough.

-- 
Willie Gnarlson




Re: Unique items from all columns, very slow

2005-04-20 Thread Willie Gnarlson
On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Apr 20), Willie Gnarlson said:
  I have been attempting to uncover a fast method for retrieving unique
  items from every column in a table. The table has approximately 60
  thousand rows. Each row has an index associated with it, and running
  'explain select distinct class from dlist' shows that it is using the
  index.
 
  However, since there are 24 columns, running that select query for
  each column takes about 4 long seconds on a P4 2.4 GHz.
 
 If there are a significant number of identical values (ages, sizes,
 etc), then setting PACK_KEYS=1 on the table may help by making smaller

There are, but PACK_KEYS=1 seemed to shed about a second (just
guessing, but it did seem only slightly faster) off the total query
time.

 indexes.  You might also be able to convert some columns to ENUMs,
 which will make their indexes even smaller (and let your SELECT be
 almost instantaneous if mysql knows how to optimize it right :)

One column was already ENUM, but it wouldn't be possible to make any
others that type.
 
 Another alternative would be to build a table for each column, holding
 the output of SELECT DISTINCT(column) FROM dlist.  Rebuild it as
 frequently as needed.

Okay, I decided to try that option, as it sounded like a good one. I
created the tables, and since the main table would be rarely updated I
was going to run DELETE FROM on the column tables and run the queries
`INSERT INTO table (col) SELECT DISTINCT col FROM dlist' each time a
new record was updated or added.

It turns out that is almost as slow as the original method. So then I
added an index to every column table, and it was still no better.

I thought maybe generating a select box for each column's contents was
the problem, since unique items in some tables are as much as 4
thousand, but singling out the code which performs the queries,
eliminating the select box generation, confirmed that the queries were
indeed the slow part.

I may have to abandon the retrieving of unique items from every column
in a table unless you or anyone else have any other ideas.
 
Thanks for your help, Dan.
 
-- 
Willie Gnarlson




Re: Unique items from all columns, very slow

2005-04-20 Thread SGreen
Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 01:56:03 
PM:

 On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
  In the last episode (Apr 20), Willie Gnarlson said:
   I have been attempting to uncover a fast method for retrieving unique
   items from every column in a table. The table has approximately 60
   thousand rows. Each row has an index associated with it, and running
   'explain select distinct class from dlist' shows that it is using the
   index.

   However, since there are 24 columns, running that select query for
   each column takes about 4 long seconds on a P4 2.4 GHz.
  
  If there are a significant number of identical values (ages, sizes,
  etc), then setting PACK_KEYS=1 on the table may help by making smaller
 
 There are, but PACK_KEYS=1 seemed to shed about a second (just
 guessing, but it did seem only slightly faster) off the total query
 time.
 
  indexes.  You might also be able to convert some columns to ENUMs,
  which will make their indexes even smaller (and let your SELECT be
  almost instantaneous if mysql knows how to optimize it right :)
 
 One column was already ENUM, but it wouldn't be possible to make any
 others that type.
 
  Another alternative would be to build a table for each column, holding
  the output of SELECT DISTINCT(column) FROM dlist.  Rebuild it as
  frequently as needed.
 
 Okay, I decided to try that option, as it sounded like a good one. I
 created the tables, and since the main table would be rarely updated I
 was going to run DELETE FROM on the column tables and run the queries
 `INSERT INTO table (col) SELECT DISTINCT col FROM dlist' each time a
 new record was updated or added.
 
 It turns out that is almost as slow as the original method. So then I
 added an index to every column table, and it was still no better.
 
 I thought maybe generating a select box for each column's contents was
 the problem, since unique items in some tables are as much as 4
 thousand, but singling out the code which performs the queries,
 eliminating the select box generation, confirmed that the queries were
 indeed the slow part.
 
 I may have to abandon the retrieving of unique items from every column
 in a table unless you or anyone else have any other ideas.
 
 Thanks for your help, Dan.
 
 -- 
 Willie Gnarlson
 

If you are using select boxes for every column, how can new or changed 
values be a worry? The users must pick from a fixed list of values, don't 
they?

You don't need to rebuild _all_ of your umpteen pick-list tables from 
scratch every time; detect just the new or changed values and add/update 
them. (Which goes back to my original question: how are the users creating 
new values if you are really using a select box and not something else, 
like a combo box (part list, part text field)?)

Maybe if I understood your front-end application a little bit better, I 
could help you work around the issue. The way you described your user 
interface design made me wince; something just doesn't feel right about 
it.



Re: Unique items from all columns, very slow

2005-04-20 Thread Willie Gnarlson
On 4/20/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Willie Gnarlson [EMAIL PROTECTED] wrote on 04/20/2005 01:56:03 PM:
   On 4/20/05, Dan Nelson [EMAIL PROTECTED] wrote:
In the last episode (Apr 20), Willie Gnarlson said:
 I have been attempting to uncover a fast method for retrieving unique
 items from every column in a table. The table has approximately 60
 thousand rows. Each row has an index associated with it, and running
 'explain select distinct class from dlist' shows that it is using the
 index.

 However, since there are 24 columns, running that select query for
  each column takes about 4 long seconds on a P4 2.4 GHz.

If there are a significant number of identical values (ages, sizes,
etc), then setting PACK_KEYS=1 on the table may help by making smaller
   
   There are, but PACK_KEYS=1 seemed to shed about a second (just
   guessing, but it did seem only slightly faster) off the total query
   time.
   
indexes.  You might also be able to convert some columns to ENUMs,
which will make their indexes even smaller (and let your SELECT be
almost instantaneous if mysql knows how to optimize it right :)
   
   One column was already ENUM, but it wouldn't be possible to make any
   others that type.

Another alternative would be to build a table for each column, holding
the output of SELECT DISTINCT(column) FROM dlist.  Rebuild it as
frequently as needed.
   
   Okay, I decided to try that option, as it sounded like a good one. I
   created the tables, and since the main table would be rarely updated I
   was going to run DELETE FROM on the column tables and run the queries
   `INSERT INTO table (col) SELECT DISTINCT col FROM dlist' each time a
   new record was updated or added.
   
   It turns out that is almost as slow as the original method. So then I
   added an index to every column table, and it was still no better.
   
   I thought maybe generating a select box for each column's contents was
   the problem, since unique items in some tables are as much as 4
   thousand, but singling out the code which performs the queries,
   eliminating the select box generation, confirmed that the queries were
   indeed the slow part.
   
   I may have to abandon the retrieving of unique items from every column
   in a table unless you or anyone else have any other ideas.

(...)

 If you are using select boxes for every column, how can you possibly worry
 about new or changed values as the users must pick from a fixed list of
 values, don't they? 

I am building (or attempting to build) the select boxes on the fly
based on unique items from columns in the table.
  
 You don't rebuild _all_ of your umpteen pick-list tables from scratch every
 time, detect just the new or changed values and add/update them (which goes

Yes, that would be the smarter thing to do, you're right. But alas
that can be set aside for my testing so far. At this point I'm simply
trying to get the values from the columns in a reasonable amount of
time.

 back to my original question of how are the users creating new values if you
 are really using a select box and not something else like a combo box (part
 list, part text field)) 

Okay, they aren't creating new values using that form. The form I've
been talking about is a form to limit results from the database to
only specific values for specific fields without the user having to
know from memory what the possible values are (plus, they'll change
every so often).
  
 Maybe if I understood your front-end application a little bit better, I
 could help you work around the issue. The way you described your user
 interface design made me wince; something just doesn't feel right about it. 

No, it's not as bad as it sounds. :-) (I hope)

-- 
Willie Gnarlson




Re: Unique items from all columns, very slow

2005-04-19 Thread Dan Nelson
In the last episode (Apr 20), Willie Gnarlson said:
 I have been attempting to uncover a fast method for retrieving unique
 items from every column in a table. The table has approximately 60
 thousand rows. Each row has an index associated with it, and running
 'explain select distinct class from dlist' shows that it is using the
 index.

 However, since there are 24 columns, running that select query for
 each column takes about 4 long seconds on a P4 2.4 GHz.

If there are a significant number of identical values (ages, sizes,
etc), then setting PACK_KEYS=1 on the table may help by making smaller
indexes.  You might also be able to convert some columns to ENUMs,
which will make their indexes even smaller (and let your SELECT be
almost instantaneous if mysql knows how to optimize it right :)

Another alternative would be to build a table for each column, holding
the output of SELECT DISTINCT(column) FROM dlist.  Rebuild it as
frequently as needed.

-- 
Dan Nelson
[EMAIL PROTECTED]




RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Is there any flag I can set on the column or key to not allow duplicate
nulls?

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:13 PM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
Hi there!

I have a table, defined as follows:

CREATE TABLE `WebSiteDomainNames` (
   `ID` int(10) unsigned NOT NULL auto_increment,
   `WebSite` int(10) unsigned NOT NULL default '0',
   `DomainName` int(10) unsigned NOT NULL default '0',
   `Alias` char(16) default NULL,
   PRIMARY KEY  (`ID`),
   UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The way I read this definition, it should be impossible for someone to
put in two rows with the same DomainName and Alias; however, right now I
have the following rows in the table:

+-----+---------+------------+-------+
| ID  | WebSite | DomainName | Alias |
+-----+---------+------------+-------+
| 543 |    1086 |       1334 | NULL  |
| 545 |    1086 |       1334 | NULL  |
| 509 |    1086 |       1334 | *     |
+-----+---------+------------+-------+

And I can insert even more NULL rows if I want to.  Shouldn't the UNIQUE
key prevent this from happening?

Not for NULL values, no.  See the description for UNIQUE indexes here:

http://dev.mysql.com/doc/mysql/en/create-table.html

-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
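The behavior is easy to reproduce. Here is a sketch using Python's sqlite3
as a stand-in (SQLite's UNIQUE indexes treat NULL the same way; the
`WebSite` column is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE WebSiteDomainNames (
    ID INTEGER PRIMARY KEY,
    DomainName INTEGER NOT NULL,
    Alias TEXT,
    UNIQUE (DomainName, Alias))""")

# Two rows with the same DomainName and a NULL Alias both go in,
# because one NULL never compares equal to another in a UNIQUE index.
cur.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, NULL)")
cur.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, NULL)")

# A duplicate non-NULL Alias is rejected as expected.
cur.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, '*')")
try:
    cur.execute("INSERT INTO WebSiteDomainNames (DomainName, Alias) VALUES (1334, '*')")
    dup_rejected = False
except sqlite3.IntegrityError:
    dup_rejected = True

null_rows = cur.execute(
    "SELECT COUNT(*) FROM WebSiteDomainNames WHERE Alias IS NULL").fetchone()[0]
print(null_rows, dup_rejected)  # 2 True
```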




RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Paul DuBois
At 8:10 -0500 2/23/05, Gustafson, Tim wrote:
Is there any flag I can set on the column or key to not allow duplicate
nulls?
Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/
No, you'd have to use a BDB table, because only BDB allows a single
NULL per UNIQUE index.


-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:13 PM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values
At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
Hi there!
I have a table, defined as follows:
CREATE TABLE `WebSiteDomainNames` (
   `ID` int(10) unsigned NOT NULL auto_increment,
   `WebSite` int(10) unsigned NOT NULL default '0',
   `DomainName` int(10) unsigned NOT NULL default '0',
   `Alias` char(16) default NULL,
   PRIMARY KEY  (`ID`),
   UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
The way I read this definition, it should be impossible for someone to
put in two rows with the same DomainName and Alias; however, right now I
have the following rows in the table:
+-----+---------+------------+-------+
| ID  | WebSite | DomainName | Alias |
+-----+---------+------------+-------+
| 543 |    1086 |       1334 | NULL  |
| 545 |    1086 |       1334 | NULL  |
| 509 |    1086 |       1334 | *     |
+-----+---------+------------+-------+
And I can insert even more NULL rows if I want to.  Shouldn't the UNIQUE
key prevent this from happening?
Not for NULL values, no.  See the description for UNIQUE indexes here:
http://dev.mysql.com/doc/mysql/en/create-table.html

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Oh well, thanks anyhow.

If I could put in a feature suggestion, it would be a flag that I could
set to disallow duplicate nulls.  :)

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 23, 2005 8:14 AM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: RE: UNIQUE Key Allowing Duplicate NULL Values


At 8:10 -0500 2/23/05, Gustafson, Tim wrote:
Is there any flag I can set on the column or key to not allow duplicate
nulls?

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/

No, you'd have to use a BDB table, because only BDB allows a single
NULL per UNIQUE index.





-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]
Sent: Tuesday, February 22, 2005 3:13 PM
To: Gustafson, Tim; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
Hi there!

I have a table, defined as follows:

CREATE TABLE `WebSiteDomainNames` (
`ID` int(10) unsigned NOT NULL auto_increment,
`WebSite` int(10) unsigned NOT NULL default '0',
`DomainName` int(10) unsigned NOT NULL default '0',
`Alias` char(16) default NULL,
PRIMARY KEY  (`ID`),
UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

The way I read this definition, it should be impossible for someone to
put in two rows with the same DomainName and Alias; however, right now I
have the following rows in the table:

+-----+---------+------------+-------+
| ID  | WebSite | DomainName | Alias |
+-----+---------+------------+-------+
| 543 |    1086 |       1334 | NULL  |
| 545 |    1086 |       1334 | NULL  |
| 509 |    1086 |       1334 | *     |
+-----+---------+------------+-------+

And I can insert even more NULL rows if I want to.  Shouldn't the UNIQUE
key prevent this from happening?

Not for NULL values, no.  See the description for UNIQUE indexes here:

http://dev.mysql.com/doc/mysql/en/create-table.html


-- 
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com




Re: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Martijn Tonies
Tim,

 Oh well, thanks anyhow.

 If I could put in a feature suggestion, it would be a flag that I could
 set to disallow duplicate nulls.  :)

What for?

NULL is not equal to NULL. Period.

If you don't want NULLs, make the column not null.

The specification is correct.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





RE: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Gustafson, Tim
Martijn,

The problem is that I don't want more than one row in the table that has
a null value in the column.  As you've pointed out in your e-mail,
there's a difference between NULL and BLANK.  It's not that I don't want
NULL values, it's that I don't want MORE THAN ONE.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 480-1870 Mobile/Emergencies
(516) 908-4185 Fax
http://www.meitech.com/ 



-Original Message-
From: Martijn Tonies [mailto:[EMAIL PROTECTED]
Sent: Wednesday, February 23, 2005 8:37 AM
To: Gustafson, Tim; Paul DuBois; mysql@lists.mysql.com
Subject: Re: UNIQUE Key Allowing Duplicate NULL Values


Tim,

 Oh well, thanks anyhow.

 If I could put in a feature suggestion, it would be a flag that I could
 set to disallow duplicate nulls.  :)

What for?

NULL is not equal to NULL. Period.

If you don't want NULLs, make the column not null.

The specification is correct.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Martijn Tonies

 The problem is that I don't want more than one row in the table that has
 a null value in the column.  As you've pointed out in your e-mail,
 there's a difference between NULL and BLANK.  It's not that I don't want
 NULL values, it's that I don't want MORE THAN ONE.

I can easily continue arguing about this ... :-)

NULL is not a value. There's no such thing as a null value. NULL
is a state. NOT NULL is a state. Those are the two possible states
of a column.

Why do you allow NULL, in this case? And what does it mean?

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Martijn Tonies
Tim,

 The table in question is used to store aliases to web sites.  That is,
 meitech.com is the main web site, and www.meitech.com is the alias.
 So, the first column is the domain name in question, and the second on
 is the host name part of the alias, in this example www.

 Now, I also have mrpc.com, and I want the web site to respond on both
 www.mrpc.com and mrpc.com, so I add two more aliases - one with the
 alias set to NULL and one with the alias set to www.  This causes my
 Apache configuration script to add the proper lines to my httpd.conf
 file.

Storing it this way doesn't make sense then ... If you need to be able to
store multiple aliases, use a parent-child relationship.

 Now, suppose I try to add an alias to another web site - mrpcusa.net.
 Because you can have duplicate null values right now, I can add another
 mrpc.com alias and the database will allow it, but it confuses Apache
 and sometimes you'll get the correct web site, and sometimes you do not.

 One could argue that I could just use a blank value instead of null, but
 I have a function right now that I use to escape all user-inputted
 strings before I add them to the database, and that function is set to
 return NULL if the user-inputted string is empty.  I use this function
 all over the place, and I can't just rewrite the function to return a
 blank string in the case that the user-inputted string is empty, because
 I don't know what it might break elsewhere.

In general, think of not storing NULLs at all if you don't have to. There's
no point in storing NULLs, as this is the _absence_ of data, not data in
itself. What point is there in storing nothing or unknown?

NULL means unknown. Not empty. If the input is empty, store empty.

All IMO, of course, but avoiding NULL usually helps avoiding many other
problems as well :-)

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: UNIQUE Key Allowing Duplicate NULL Values

2005-02-23 Thread Harald Fuchs
In article [EMAIL PROTECTED],
Gustafson, Tim [EMAIL PROTECTED] writes:

 Martijn,
 The problem is that I don't want more than one row in the table that has
 a null value in the column.  As you've pointed out in your e-mail,
 there's a difference between NULL and BLANK.  It's not that I don't want
 NULL values, it's that I don't want MORE THAN ONE.

You really should change your requirements.  Since NULL != NULL, any
database that enforced a single NULL row via a unique index would not be
standard SQL.  You seem to want some special value to occur only once, but
NULL is no value at all.  Can't you make 0 or '' (the empty string) that
special value?
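A sketch of that workaround (table and column names are illustrative): with
the column declared NOT NULL and '' standing in for "no alias", the unique
index does reject a second bare entry:

```sql
-- Hypothetical demo: '' as the special "no alias" value is checked
-- by the unique index, unlike NULL.
CREATE TABLE domain_alias (
  domain_name INT UNSIGNED NOT NULL,
  alias CHAR(16) NOT NULL DEFAULT '',
  UNIQUE KEY (domain_name, alias)
);

INSERT INTO domain_alias VALUES (1334, '');
INSERT INTO domain_alias VALUES (1334, '');  -- fails: duplicate key
```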





Re: UNIQUE Key Allowing Duplicate NULL Values

2005-02-22 Thread Paul DuBois
At 15:00 -0500 2/22/05, Gustafson, Tim wrote:
Hi there!
I have a table, defined as follows:
CREATE TABLE `WebSiteDomainNames` (
  `ID` int(10) unsigned NOT NULL auto_increment,
  `WebSite` int(10) unsigned NOT NULL default '0',
  `DomainName` int(10) unsigned NOT NULL default '0',
  `Alias` char(16) default NULL,
  PRIMARY KEY  (`ID`),
  UNIQUE KEY `DomainName` (`DomainName`,`Alias`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
The way I read this definition, it should be impossible for someone to
put in two rows with the same DomainName and Alias, however, right now I
have the following rows in the table:
+-----+---------+------------+-------+
| ID  | WebSite | DomainName | Alias |
+-----+---------+------------+-------+
| 543 |    1086 |       1334 | NULL  |
| 545 |    1086 |       1334 | NULL  |
| 509 |    1086 |       1334 | *     |
+-----+---------+------------+-------+
And I can insert even more NULL rows if I want to.  Shouldn't the UNIQUE
key prevent this from happening?
Not for NULL values, no.  See the description for UNIQUE indexes here:
http://dev.mysql.com/doc/mysql/en/create-table.html
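A quick demonstration against the table above (starting from an empty table;
the exact error message depends on the server version). The NULL inserts all
succeed because NULL never compares equal to NULL in the unique index, while
a repeated non-NULL alias is rejected:

```sql
INSERT INTO WebSiteDomainNames (WebSite, DomainName, Alias)
  VALUES (1086, 1334, NULL);   -- ok
INSERT INTO WebSiteDomainNames (WebSite, DomainName, Alias)
  VALUES (1086, 1334, NULL);   -- also ok: NULLs don't collide
INSERT INTO WebSiteDomainNames (WebSite, DomainName, Alias)
  VALUES (1086, 1334, '*');    -- ok the first time
INSERT INTO WebSiteDomainNames (WebSite, DomainName, Alias)
  VALUES (1086, 1334, '*');    -- rejected: duplicate entry for key DomainName
```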
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Re: Unique IDs

2005-01-07 Thread Joshua J. Kugler
When I saw this message a few weeks ago, I *knew* MySQL had something for 
this, but I couldn't remember where I saw it, and I couldn't find it.  Today 
I found it.

Take a look at: http://dev.mysql.com/doc/mysql/en/Miscellaneous_functions.html
and scroll down to UUID()

Returns a Universal Unique Identifier (UUID) generated according to ``DCE 1.1: 
Remote Procedure Call'' (Appendix A) CAE (Common Applications Environment) 
Specifications published by The Open Group in October 1997 (Document Number 
C706). A UUID is designed as a number that is globally unique in space and 
time. Two calls to UUID() are expected to generate two different values, even 
if these calls are performed on two separate computers that are not connected 
to each other.

This was added in MySQL 4.1.2.  You didn't mention the version you were using, 
but 4.1 is production now.
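A sketch of how that could look for the merge problem (the table layout is
illustrative; requires MySQL 4.1.2 or later):

```sql
-- Hypothetical: use UUID() instead of AUTO_INCREMENT so rows created
-- on the offline server cannot collide with the live one on merge.
CREATE TABLE person (
  id CHAR(36) NOT NULL,        -- UUID() returns a 36-character string
  name VARCHAR(100) NOT NULL,
  PRIMARY KEY (id)
);

INSERT INTO person (id, name) VALUES (UUID(), 'Andy');
```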

Hope that helps!!

j- k-

On Monday 20 December 2004 05:33, Andrew Mull said something like:
 I'm working on a rather large database with many cross-linked tables
 currently using auto increment IDs.  The system is primarily a web based
 system, however, there will be times that the system will be run as a stand
 alone server...meaning no internet connection is available.

 The question arises that if someone enters information to the database on
 the website, while others are entering information on the local database,
 what is the best way to merge the data?  I would imagine that we would run
 into many duplicate auto increment IDs.

 I'm sure that for one table insert, this would not be a problem as I could
 store the SQL statement in a text file without the ID specified, and run it
 as a batch process on the live server when we get connectivity.  But I
 don't have a handle on how to update the sub tables that have a FK pointer.

 Any ideas?

 Thanks!
 -Andy

-- 
Joshua J. Kugler -- Fairbanks, Alaska -- ICQ#:13706295
Every knee shall bow, and every tongue confess, in heaven, on earth, and under 
the earth, that Jesus Christ is LORD -- Count on it!




Re: unique key - primary key

2004-12-29 Thread Martijn Tonies

   I have two tables, seemingly very similar setup; the primary key is the
   combination of two columns. With mysqldump, however, the table
   definition of the two tables looks different.
 
   Mysqldump on table 1 says
   ...
   UNIQUE KEY HONstid (HONstid,HONname)
   whereas on table 2 it says
   ...
   PRIMARY KEY  (IMstid,IMfnum)
 
   What is the difference? Is there any?
 
 A table can have only 1 primary key, but multiple unique constraints.

 And columns in primary keys must be NOT NULL.  Columns in unique
 keys can be NULL (if they are NOT NULL, then the unique key is
 functionally the same as a primary key).

Ah yes Paul, thanks for adding that.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: unique key - primary key

2004-12-29 Thread Martijn Tonies
Hello,

 RE:
  And columns in primary keys must be NOT NULL.  Columns in unique
  keys can be NULL (if they are NOT NULL, then the unique key is
  functionally the same as a primary key).

 OK, thanks guys for the explanation.

 Then the result of mysqldump table definition part:

 UNIQUE KEY HONstid (HONstid,HONname)

 means that I have two unique keys:
 HONstid
 and also the
 (HONstid,HONname) combination?

No, as far as I can tell, this means you have a single unique constraint,
named HONstid, on the column pair (HONstid, HONname).

 This does not make too much sense; if I can have only one HONstid in my
 table, then of course I can have only one kind of (HONstid,HONname)
 combination. In the actual table, I have multiple occurrences of the same
 value in the HONstid column, i.e. it does not seem like HONstid in itself
 is a unique key. Only the (HONstid,HONname) combination is unique.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: unique key - primary key

2004-12-28 Thread Martijn Tonies
Hello,

 I have two tables, seemingly very similar setup; the primary key is the
 combination of two columns. With mysqldump, however, the table
 definition of the two tables looks different.

 Mysqldump on table 1 says
 ...
 UNIQUE KEY HONstid (HONstid,HONname)
 whereas on table 2 it says
 ...
 PRIMARY KEY  (IMstid,IMfnum)

 What is the difference? Is there any?

A table can have only 1 primary key, but multiple unique constraints.

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server
Upscene Productions
http://www.upscene.com





Re: unique key - primary key

2004-12-28 Thread Paul DuBois
At 22:27 +0100 12/28/04, Martijn Tonies wrote:
Hello,
 I have two tables, seemingly very similar setup; the primary key is the
 combination of two columns. With mysqldump, however, the table
 definition of the two tables looks different.
 Mysqldump on table 1 says
 ...
 UNIQUE KEY HONstid (HONstid,HONname)
 whereas on table 2 it says
 ...
 PRIMARY KEY  (IMstid,IMfnum)
 What is the difference? Is there any?
A table can have only 1 primary key, but multiple unique constraints.
And columns in primary keys must be NOT NULL.  Columns in unique
keys can be NULL (if they are NOT NULL, then the unique key is
functionally the same as a primary key).
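A small illustration of both points (table and column names are made up):

```sql
CREATE TABLE t (
  pk INT NOT NULL,     -- primary key columns must be NOT NULL
  uq INT NULL,         -- unique key columns may be nullable
  PRIMARY KEY (pk),
  UNIQUE KEY (uq)
);

INSERT INTO t VALUES (1, NULL);
INSERT INTO t VALUES (2, NULL);  -- allowed: NULLs never collide
INSERT INTO t VALUES (3, 7);
INSERT INTO t VALUES (4, 7);     -- rejected: duplicate entry '7'
```

Declare uq as NOT NULL and the unique key behaves like a second primary key,
as described above.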
--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com


Re: unique key - primary key

2004-12-28 Thread Gaspar Bakos
Hi,

RE:
 And columns in primary keys must be NOT NULL.  Columns in unique
 keys can be NULL (if they are NOT NULL, then the unique key is
 functionally the same as a primary key).

OK, thanks guys for the explanation.

Then the result of mysqldump table definition part:

UNIQUE KEY HONstid (HONstid,HONname)

means that I have two unique keys:
HONstid
and also the
(HONstid,HONname) combination?

This does not make too much sense; if I can have only one HONstid in my
table, then of course I can have only one kind of (HONstid,HONname)
combination. In the actual table, I have multiple occurrences of the same
value in the HONstid column, i.e. it does not seem like HONstid in itself
is a unique key. Only the (HONstid,HONname) combination is unique.

Cheers
Gaspar

p.s.: Paul, I very much enjoy your MySQL book.




Re: Unique IDs

2004-12-22 Thread Philippe Poelvoorde
With implicit joins:

INSERT INTO table1 (id, list of other columns)
  SELECT m.newID, list of other columns
  FROM table2, tmpIDMAP m
  WHERE m.id = table2.id;

UPDATE table3, tmpIDMAP m
  SET table3.parentID = m.newID
  WHERE m.id = table3.parentID;
Andrew Mull wrote:
Thank you.  I'll have to review the SQL carefully as I haven't used inner 
joins in quite some time :)


Quoting [EMAIL PROTECTED]:

Create a temporary table that will act as a map between your old IDs and 
your new ones. That way you can re-use those values and change them from 
the old values to the new values. 

LOCK TABLE table1 WRITE, table2 WRITE, table3 WRITE;
select @max_id:=max(id) FROM table1;
CREATE TEMPORARY TABLE tmpIDMAP
SELECT id, [EMAIL PROTECTED] as newID
FROM table2
INSERT into table1(id, list of other columns) 
select m.newID, list of other columns from table2
INNER JOIN tmpIDMAP m
ON m.id = table2.id

UPDATE table3
INNER JOIN tmpIDMAP m
ON m.id = table3.parentID
SET table3.parentID = m.newID
(repeat for other child tables)
UNLOCK
I don't have time to give a better explanation right now but if you write 
the list back, I can fill in the details later.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
Andrew Mull [EMAIL PROTECTED] wrote on 12/21/2004 12:20:57 
PM:


You are correct in that I need to merge two tables.  However, I'm 
not clear on 
how to update the FKs in the sub tables. 

Example, the main table (Person) has a sub table (Address).  Person has 
a 1:N 

relationship with Address.  So Address has the Person's ID as a FK. How 
do I 

update the FK in Address with the new ID assigned to Person that was 
created 

with the merge? 

Thanks for the help!
Quoting Philippe Poelvoorde [EMAIL PROTECTED]:

Andrew Mull wrote:

I'm working on a rather large database with many cross-linked tables 

currently using auto increment IDs.  The system is primarily a web 
based 

system, however, there will be times that the system will be run as 
a 

stand
alone server...meaning no internet connection is available. 

The question arises that if someone enters information to the 
database on
the 

website, while others are entering information on the local 
database, what
is 

the best way to merge the data?  I would imagine that we would run 
into
many 

duplicate auto increment IDs. 

I'm sure that for one table insert, this would not be a problem as I 
could
store the SQL statement in a text file without the ID specified, and 
run 

it
as 

a batch process on the live server when we get connectivity.  But I 
don't
have 

a handle on how to update the sub tables that have a FK pointer.
Any ideas? 

Thanks!
-Andy 

I understand your question as the following: you want to merge 2 
tables 

comming from different database in a single table. If this is not what 

you want, sorry :)
I would do that :
LOCK TABLE table1 WRITE, table2 WRITE
select @max_id:=max(id) FROM table1;
UPDATE table2 SET [EMAIL PROTECTED];
insert into table1(list of columns) select [list of columns] from 
table2
UNLOCK
and then updating your FK within the update query.
Keep in mind that I didn't try with InnoDb tables... (but works fine 
for 

MyIsam)
--
Philippe Poelvoorde
COS Trading Ltd.





--
Philippe Poelvoorde
COS Trading Ltd.
+44.(0)20.7376.2401


Re: Unique IDs

2004-12-21 Thread Philippe Poelvoorde
Andrew Mull wrote:
I'm working on a rather large database with many cross-linked tables 
currently using auto increment IDs.  The system is primarily a web based 
system, however, there will be times that the system will be run as a stand 
alone server...meaning no internet connection is available.  

The question arises that if someone enters information to the database on the 
website, while others are entering information on the local database, what is 
the best way to merge the data?  I would imagine that we would run into many 
duplicate auto increment IDs.  

I'm sure that for one table insert, this would not be a problem as I could 
store the SQL statement in a text file without the ID specified, and run it as 
a batch process on the live server when we get connectivity.  But I don't have 
a handle on how to update the sub tables that have a FK pointer.

Any ideas?   

Thanks!
-Andy 

I understand your question as the following: you want to merge 2 tables 
coming from different databases into a single table. If this is not what 
you want, sorry :)
I would do that :
LOCK TABLE table1 WRITE, table2 WRITE
select @max_id:=max(id) FROM table1;
UPDATE table2 SET id = id + @max_id;
insert into table1(list of columns) select [list of columns] from table2
UNLOCK TABLES;

and then updating your FK within the update query.
Keep in mind that I didn't try with InnoDB tables... (but works fine for 
MyISAM)

--
Philippe Poelvoorde
COS Trading Ltd.


Re: Unique IDs

2004-12-21 Thread Andrew Mull
You are correct in that I need to merge two tables.  However, I'm not clear on 
how to update the FKs in the sub tables.  

Example, the main table (Person) has a sub table (Address).  Person has a 1:N 
relationship with Address.  So Address has the Person's ID as a FK.  How do I 
update the FK in Address with the new ID assigned to Person that was created 
with the merge?  

Thanks for the help!


Quoting Philippe Poelvoorde [EMAIL PROTECTED]:

 Andrew Mull wrote:
 
  I'm working on a rather large database with many cross-linked tables 
  currently using auto increment IDs.  The system is primarily a web based 
  system, however, there will be times that the system will be run as a 
stand
 
  alone server...meaning no internet connection is available.  
  
  The question arises that if someone enters information to the database on
 the 
  website, while others are entering information on the local database, what
 is 
  the best way to merge the data?  I would imagine that we would run into
 many 
  duplicate auto increment IDs.  
  
  I'm sure that for one table insert, this would not be a problem as I could
 
  store the SQL statement in a text file without the ID specified, and run 
it
 as 
  a batch process on the live server when we get connectivity.  But I don't
 have 
  a handle on how to update the sub tables that have a FK pointer.
  
  Any ideas?   
  
  Thanks!
  -Andy 
  
 
 I understand your question as the following: you want to merge 2 tables 
 comming from different database in a single table. If this is not what 
 you want, sorry :)
 I would do that :
 LOCK TABLE table1 WRITE, table2 WRITE
 select @max_id:=max(id) FROM table1;
 UPDATE table2 SET [EMAIL PROTECTED];
 insert into table1(list of columns) select [list of columns] from table2
 UNLOCK
 
 and then updating your FK within the update query.
 Keep in mind that I didn't try with InnoDb tables... (but works fine for 
 MyIsam)
 
 -- 
 Philippe Poelvoorde
 COS Trading Ltd.
 
 





Re: Unique IDs

2004-12-21 Thread SGreen
Create a temporary table that will act as a map between your old IDs and 
your new ones. That way you can re-use those values and change them from 
the old values to the new values. 

LOCK TABLE table1 WRITE, table2 WRITE, table3 WRITE;

select @max_id:=max(id) FROM table1;

CREATE TEMPORARY TABLE tmpIDMAP
SELECT id, id + @max_id AS newID
FROM table2

INSERT into table1(id, list of other columns) 
select m.newID, list of other columns from table2
INNER JOIN tmpIDMAP m
ON m.id = table2.id

UPDATE table3
INNER JOIN tmpIDMAP m
ON m.id = table3.parentID
SET table3.parentID = m.newID

(repeat for other child tables)

UNLOCK TABLES;

I don't have time to give a better explanation right now but if you write 
the list back, I can fill in the details later.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Andrew Mull [EMAIL PROTECTED] wrote on 12/21/2004 12:20:57 
PM:

 You are correct in that I need to merge two tables.  However, I'm 
 not clear on 
 how to update the FKs in the sub tables. 
 
 Example, the main table (Person) has a sub table (Address).  Person has 
a 1:N 
 relationship with Address.  So Address has the Person's ID as a FK. How 
do I 
 update the FK in Address with the new ID assigned to Person that was 
created 
 with the merge? 
 
 Thanks for the help!
 
 
 Quoting Philippe Poelvoorde [EMAIL PROTECTED]:
 
  Andrew Mull wrote:
  
   I'm working on a rather large database with many cross-linked tables 

   currently using auto increment IDs.  The system is primarily a web 
based 
   system, however, there will be times that the system will be run as 
a 
 stand
  
   alone server...meaning no internet connection is available. 
   
   The question arises that if someone enters information to the 
database on
  the 
   website, while others are entering information on the local 
database, what
  is 
   the best way to merge the data?  I would imagine that we would run 
into
  many 
   duplicate auto increment IDs. 
   
   I'm sure that for one table insert, this would not be a problem as I 
could
  
   store the SQL statement in a text file without the ID specified, and 
run 
 it
  as 
   a batch process on the live server when we get connectivity.  But I 
don't
  have 
   a handle on how to update the sub tables that have a FK pointer.
   
   Any ideas? 
   
   Thanks!
   -Andy 
   
  
  I understand your question as the following: you want to merge 2 
tables 
  comming from different database in a single table. If this is not what 

  you want, sorry :)
  I would do that :
  LOCK TABLE table1 WRITE, table2 WRITE
  select @max_id:=max(id) FROM table1;
  UPDATE table2 SET [EMAIL PROTECTED];
  insert into table1(list of columns) select [list of columns] from 
table2
  UNLOCK
  
  and then updating your FK within the update query.
  Keep in mind that I didn't try with InnoDb tables... (but works fine 
for 
  MyIsam)
  
  -- 
  Philippe Poelvoorde
  COS Trading Ltd.
  
  
 
 
 


Re: Unique IDs

2004-12-21 Thread Andrew Mull
Thank you.  I'll have to review the SQL carefully as I haven't used inner 
joins in quite some time :)



Quoting [EMAIL PROTECTED]:

 Create a temporary table that will act as a map between your old IDs and 
 your new ones. That way you can re-use those values and change them from 
 the old values to the new values. 
 
 LOCK TABLE table1 WRITE, table2 WRITE, table3 WRITE;
 
 select @max_id:=max(id) FROM table1;
 
 CREATE TEMPORARY TABLE tmpIDMAP
 SELECT id, [EMAIL PROTECTED] as newID
 FROM table2
 
 INSERT into table1(id, list of other columns) 
 select m.newID, list of other columns from table2
 INNER JOIN tmpIDMAP m
 ON m.id = table2.id
 
 UPDATE table3
 INNER JOIN tmpIDMAP m
 ON m.id = table3.parentID
 SET table3.parentID = m.newID
 
 (repeat for other child tables)
 
 UNLOCK
 
 I don't have time to give a better explanation right now but if you write 
 the list back, I can fill in the details later.
 
 Shawn Green
 Database Administrator
 Unimin Corporation - Spruce Pine
 
 Andrew Mull [EMAIL PROTECTED] wrote on 12/21/2004 12:20:57 
 PM:
 
  You are correct in that I need to merge two tables.  However, I'm 
  not clear on 
  how to update the FKs in the sub tables. 
  
  Example, the main table (Person) has a sub table (Address).  Person has 
 a 1:N 
  relationship with Address.  So Address has the Person's ID as a FK. How 
 do I 
  update the FK in Address with the new ID assigned to Person that was 
 created 
  with the merge? 
  
  Thanks for the help!
  
  
  Quoting Philippe Poelvoorde [EMAIL PROTECTED]:
  
   Andrew Mull wrote:
   
I'm working on a rather large database with many cross-linked tables 
 
currently using auto increment IDs.  The system is primarily a web 
 based 
system, however, there will be times that the system will be run as 
 a 
  stand
   
alone server...meaning no internet connection is available. 

The question arises that if someone enters information to the 
 database on
   the 
website, while others are entering information on the local 
 database, what
   is 
the best way to merge the data?  I would imagine that we would run 
 into
   many 
duplicate auto increment IDs. 

I'm sure that for one table insert, this would not be a problem as I 
 could
   
store the SQL statement in a text file without the ID specified, and 
 run 
  it
   as 
a batch process on the live server when we get connectivity.  But I 
 don't
   have 
a handle on how to update the sub tables that have a FK pointer.

Any ideas? 

Thanks!
-Andy 

   
   I understand your question as the following: you want to merge 2 
 tables 
   comming from different database in a single table. If this is not what 
 
   you want, sorry :)
   I would do that :
   LOCK TABLE table1 WRITE, table2 WRITE
   select @max_id:=max(id) FROM table1;
   UPDATE table2 SET [EMAIL PROTECTED];
   insert into table1(list of columns) select [list of columns] from 
 table2
   UNLOCK
   
   and then updating your FK within the update query.
   Keep in mind that I didn't try with InnoDb tables... (but works fine 
 for 
   MyIsam)
   
   -- 
   Philippe Poelvoorde
   COS Trading Ltd.
   
   
  
  
  
 





Re: Unique index on two fields

2004-11-18 Thread Andrew Kuebler
ALTER TABLE tablename ADD UNIQUE (Column1, Column2);

Best Regards,
Andrew


 Can someone explain how I can make a combination of two fields be a
 unique index.

 For example, in my table I have an OrderID field and a LineItems field.
 Individually the fields are not unique but when combined they are. I'd
 like to create a unique index of the two together but not individually.

 Thanks for the help.





Re: Unique Key question

2004-08-05 Thread Michael Stassen
BINARY is for CHAR and VARCHAR, but he's using TINYTEXT.  The binary version 
of TINYTEXT is TINYBLOB.
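A sketch of that conversion (table and column names follow the question
below; the prefix length is an assumption — MySQL requires one when indexing
BLOB/TEXT columns, and 255 covers all of a TINYBLOB):

```sql
-- Hypothetical: switch the column to its binary counterpart, then add
-- a unique index with an explicit prefix length.
ALTER TABLE myTable MODIFY DATA TINYBLOB NOT NULL;
ALTER TABLE myTable ADD UNIQUE INDEX (DATA(255));
```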

Michael
mos wrote:
Paul,
Just use the BINARY column attribute to make it case-sensitive.
 From MySQL Manual: http://dev.mysql.com/doc/mysql/en/CHAR.html
As of MySQL 4.1, values in CHAR and VARCHAR columns are sorted and 
compared according to the collation of the character set assigned to the 
column. Before MySQL 4.1, sorting and comparison are based on the 
collation of the server character set; you can declare the column with 
the BINARY attribute to cause sorting and comparison to be case 
sensitive using the underlying character code values rather than a 
lexical ordering. BINARY doesn't affect how the column is stored or 
retrieved.

Mike
At 07:20 AM 8/5/2004, you wrote:
Good morning.
I have a table with field name DATA type tinytext.  The table is already
populated.  I need DATA to be unique.  It seems that MySQL doesn't 
look at
the uniqueness of a field in a binary fashion.

Example (chosen because I think it is just plain odd)
june = Júne
If I query as -
Select * from myTable where DATA = 'june' - I am returned both.
If I query as
Select * from myTable where cast(DATA as binary) = 'june' - I am returned
only one.
How can I set this table so that the unique key is based on the binary 
value
of the field?  Do I have to alter the field type or is there a better 
way?

Thank you.
God Bless
Paul C. McNeil
Developer in Java, MS-SQL, MySQL, and web technologies.
Microneil Research
Sortmonster Anti Spam







GOD BLESS AMERICA!
To God Be The Glory!





Re: unique field problem

2004-04-09 Thread chillz
BT Hello again,

BT I have a form that users fill out and submit into a mysql database using php. 
BT The problem I have is that there is only one unique field in the main 
BT table... it's an int that's auto-incremented with each new entry. This works 
BT fine unless the user accidentally submits the data twice. When this happens, 
BT two entries are made that are identical except for the auto-incrementing int 
BT field. 

it will be easier if you just make sure the form cannot be sent twice...

<input type="submit" [...] onclick="this.disabled=true; this.form.submit();">
you may need to add a hidden field with the same name as the submit button
and some value...

or get the last unique id from the DB, store it in a hidden form field, and
fetch the id again when the form has been submitted; if the id is different,
the data has already been stored...


Regards,

Lecho Buszczynski
SWAPZILLA - swap games FOR FREE!
http://www.exerion.pl/swapzilla.php





Re: unique varchar in field

2004-04-07 Thread Michael Stassen
Brad Tilley wrote:
Hello again,

How do you make a field unique, but not have it as part of the primary key? 
For example, I have a 'computers' table that has a 'comp_id' field that's the 
primary key. It's an int  that's auto-incremented with each new entry. The 
same table has a 'comp_serial' field. I want this field to be unique, but I 
don't want it to be a part of the primary key, or any key for that matter. I 
just want it to be unique. How do I achieve this?

Thanks,
Brad
ALTER TABLE computers ADD UNIQUE INDEX (comp_serial);

The index on comp_serial will enforce uniqueness, and it's separate from the 
primary key.  I don't know of any way to enforce uniqueness without making a 
unique key on the column.

Michael



Re: Unique from this point forward

2004-03-29 Thread Martijn Tonies
Hi Scott,


 Is there any way to define a column in MySql 4 that is to be unique from
 this point forward.  Meaning, there are indeed dupes in the column now,
 those are ok, there is really no repairing that damage as the data was
 inherited from somewhere else.  I am constantly having to select the
column
 to see if there is a dupe entry in it before inserting, I would like to
stop
 having to do this and define the column as unique from this point forward.
 However, in trying this, MySql will not allow me to make it unique until I
 remove all the duplicates.

Well, that should tell you something, right?

What's the point in having data unique from this point forward anyway?

If you want it unique, repair it.
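One era-appropriate sketch of that repair (mytable, col, and the
AUTO_INCREMENT id column are placeholders): keep the lowest id per value,
delete the rest, then add the constraint:

```sql
-- Hypothetical cleanup: remove all but the first row for each value,
-- then make the column unique from here on.
DELETE t1 FROM mytable t1
  INNER JOIN mytable t2
    ON t1.col = t2.col AND t1.id > t2.id;

ALTER TABLE mytable ADD UNIQUE (col);
```

ALTER IGNORE TABLE ... ADD UNIQUE would also silently drop the duplicate
rows in MySQL 4, if losing arbitrary duplicates is acceptable.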

With regards,

Martijn Tonies
Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL
Server.
Upscene Productions
http://www.upscene.com





RE: unique values across more than one column

2004-03-01 Thread Colin Williams
Hi,

Could you just break out the email information into its own table?  Then
you could have columns like the following:

UserID (foreign key), email (varchar), type (int: 1=primary email,
2=alias email).

Then you can just have the email column defined as a unique index and
relate the email to the user table using a foreign key.
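A sketch of that layout (names taken from the description above; note that
the FOREIGN KEY clause is only enforced by InnoDB, so on the 3.23/MyISAM
setup mentioned in the question it would be parsed but ignored):

```sql
CREATE TABLE user_email (
  user_id INT UNSIGNED NOT NULL,    -- FK to the user table
  email   VARCHAR(255) NOT NULL,    -- address or alias
  type    TINYINT NOT NULL,         -- 1 = primary email, 2 = alias
  UNIQUE KEY (email),               -- no address/alias can repeat anywhere
  FOREIGN KEY (user_id) REFERENCES user (user_id)
) ENGINE=InnoDB;
```

Because both kinds of address live in one indexed column, a duplicate
insert fails regardless of whether the clash is with a primary address or
an alias.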

Regards,

Colin.



-Original Message-
From: Anand Buddhdev [mailto:[EMAIL PROTECTED] 
Sent: 27 February 2004 08:50
To: [EMAIL PROTECTED]
Subject: unique values across more than one column


Hi everyone,

I'm using mysql version 3.23.58, on Fedora core 1 (the default supplied
on the system).

I have searched the mailing list archives, and google, for my query, but
have not yet found an answer.

Does anyone know if it's possible to define 2 columns in a table, with a
constraint that will ensure that values across both columns are unique? For
example, if I have columns "a" and "b", then once I insert value "x" in
column "a", I may not insert value "x" again in EITHER column "a" or
column "b".

I'm trying to develop a structure for a table that will hold a user's
primary email address and an alias. I'd like to have a column called
"address" and a column called "alias", and of course, there must be no
address or alias duplication. This allows addition and removal of an
address and its alias in one insert, and if the insert fails, then we
know there's duplication, and we return an error message.

I have thought of other ways around this issue, but my ideal solution
would be as above. If this is not possible, then I will go back to my
other (IMHO less elegant) solutions.

-- 
Anand Buddhdev
Celtel International




Re: unique values across more than one column

2004-02-27 Thread Egor Egorov
Anand Buddhdev [EMAIL PROTECTED] wrote:
 
 I'm using mysql version 3.23.58, on Fedora core 1 (the default
 supplied on the system).
 
 I have searched the mailing list archives, and google, for my query,
 but have not yet found an answer.
 
 Does anyone know if it's possible to define 2 columns in a table, with
 a constraint that will ensure that values on both columns are unique?
 For example, if I have columns a and b, then if I insert value x
 in column a, then I may not insert value x again in EITHER column
 a or column b again.
 
 I'm trying to develop a structure for a table that will hold a user's
 primary email address, and an alias, and I'd like to have a column
 called address and a column called alias, and of course, there
 must be no address or alias duplication. This allows addition and
 removal of an address and its alias in one insert, and if the insert
 fails, then we know there's duplication, and return an error message.
 
 I have thought of other ways around this issue, but my ideal solution
 would be as above. If this is not possible, then I will go back to my
 other (IMHO less elegant) solutions.

No, you can't do it that way. You can, however, first check with a SELECT statement
whether the address or alias already exists.



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Egor Egorov
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com







Re: unique values across more than one column

2004-02-27 Thread Anand Buddhdev
On Fri, Feb 27, 2004 at 12:58:14PM +0200, Egor Egorov wrote:

  I'm using mysql version 3.23.58, on Fedora core 1 (the default
  supplied on the system).
  
  I have searched the mailing list archives, and google, for my query,
  but have not yet found an answer.
  
  Does anyone know if it's possible to define 2 columns in a table, with
  a constraint that will ensure that values on both columns are unique?
  For example, if I have columns a and b, then if I insert value x
  in column a, then I may not insert value x again in EITHER column
  a or column b again.
  
  I'm trying to develop a structure for a table that will hold a user's
  primary email address, and an alias, and I'd like to have a column
  called address and a column called alias, and of course, there
  must be no address or alias duplication. This allows addition and
  removal of an address and its alias in one insert, and if the insert
  fails, then we know there's duplication, and return an error message.
  
  I have thought of other ways around this issue, but my ideal solution
  would be as above. If this is not possible, then I will go back to my
  other (IMHO less elegant) solutions.
 
 No, you can't do in the above way. You can first check with SELECT
 statement if address or alias already exists.

Ok, thanks for the response.

The problem with first doing a select, and then an insert, is that
there exists a race condition. Between the select and insert, someone
else could insert a row which might cause a duplication.

I have another solution, in which I use a transaction to avoid this
problem. But this means I have to change to mysql 4 or postgresql.
Fedora core is still shipping mysql 3.23.58 :(
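One race-free pattern worth noting here (sketched with Python's sqlite3 module as a stand-in for MySQL; the schema and function name are invented): instead of SELECT-then-INSERT, attempt the INSERT and let the unique index reject duplicates atomically inside the engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emails (address TEXT PRIMARY KEY)")

def add_address(addr):
    """Insert addr; return True on success, False if it already exists.

    The unique index performs the duplicate check atomically inside the
    engine, so there is no SELECT/INSERT race window to exploit.
    """
    try:
        conn.execute("INSERT INTO emails VALUES (?)", (addr,))
        return True
    except sqlite3.IntegrityError:
        return False

print(add_address("anand@example.com"))  # True
print(add_address("anand@example.com"))  # False: duplicate rejected
```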

-- 
Anand Buddhdev
Celtel International




Re: unique values across more than one column

2004-02-27 Thread Egor Egorov
Anand Buddhdev [EMAIL PROTECTED] wrote:
 On Fri, Feb 27, 2004 at 12:58:14PM +0200, Egor Egorov wrote:
 
  I'm using mysql version 3.23.58, on Fedora core 1 (the default
  supplied on the system).
  
  I have searched the mailing list archives, and google, for my query,
  but have not yet found an answer.
  
  Does anyone know if it's possible to define 2 columns in a table, with
  a constraint that will ensure that values on both columns are unique?
  For example, if I have columns a and b, then if I insert value x
  in column a, then I may not insert value x again in EITHER column
  a or column b again.
  
  I'm trying to develop a structure for a table that will hold a user's
  primary email address, and an alias, and I'd like to have a column
  called address and a column called alias, and of course, there
  must be no address or alias duplication. This allows addition and
  removal of an address and its alias in one insert, and if the insert
  fails, then we know there's duplication, and return an error message.
  
  I have thought of other ways around this issue, but my ideal solution
  would be as above. If this is not possible, then I will go back to my
  other (IMHO less elegant) solutions.
 
 No, you can't do in the above way. You can first check with SELECT
 statement if address or alias already exists.
 
 Ok, thanks for the response.
 
 The problem with first doing a select, and then an insert, is that
 there exists a race condition. Between the select and insert, someone
 else could insert a row which might cause a duplication.
 
 I have another solution, in which I use a transaction to avoid this
 problem. But this means I have to change to mysql 4 or postgresql.
 Fedora core is still shipping mysql 3.23.58 :(
 

If you use the MyISAM table type, you can lock the table with the LOCK TABLES
statement. Take a look at:
http://www.mysql.com/doc/en/LOCK_TABLES.html

The transactional table types InnoDB and BDB are available in 3.23.58 if the MySQL
server was configured with the --with-innodb/--with-bdb options.










Re: unique values across more than one column

2004-02-27 Thread Alec . Cawley
Anand Buddhdev [EMAIL PROTECTED] wrote on 27/02/2004 11:26:41:


 The problem with first doing a select, and then an insert, is that
 there exists a race condition. Between the select and insert, someone
 else could insert a row which might cause a duplication.

 I have another solution, in which I use a transaction to avoid this
 problem. But this means I have to change to mysql 4 or postgresql.
 Fedora core is still shipping mysql 3.23.58 :(


You could also lock the table manually, which is supported by MySQL 3.xx. See
http://www.mysql.com/doc/en/LOCK_TABLES.html

  Alec





Re: unique values across more than one column

2004-02-27 Thread Anand Buddhdev
On Fri, Feb 27, 2004 at 01:54:13PM +0200, Egor Egorov wrote:

  Ok, thanks for the response.
  
  The problem with first doing a select, and then an insert, is that
  there exists a race condition. Between the select and insert, someone
  else could insert a row which might cause a duplication.
  
  I have another solution, in which I use a transaction to avoid this
  problem. But this means I have to change to mysql 4 or postgresql.
  Fedora core is still shipping mysql 3.23.58 :(
 
 If you use MyISAM table type you can lock table with LOCK TABLES
 statement. Take a look at:
 http://www.mysql.com/doc/en/LOCK_TABLES.html
 
 Transactional table InnoDB and BDB are available in 3.23.58 if MySQL
 server was configured with --with-innodb/--with-bdb option.

Hi Egor,

Thank you very much for the pointer to use locking, and for the
pointer to InnoDB and BDB. I didn't know about those options.

-- 
Anand Buddhdev
Celtel International




Re: unique values across more than one column

2004-02-27 Thread Michael Stassen
Anand Buddhdev wrote:

On Fri, Feb 27, 2004 at 12:58:14PM +0200, Egor Egorov wrote:


I'm using mysql version 3.23.58, on Fedora core 1 (the default
supplied on the system).
I have searched the mailing list archives, and google, for my query,
but have not yet found an answer.
Does anyone know if it's possible to define 2 columns in a table, with
a constraint that will ensure that values on both columns are unique?
For example, if I have columns a and b, then if I insert value x
in column a, then I may not insert value x again in EITHER column
a or column b again.
I'm trying to develop a structure for a table that will hold a user's
primary email address, and an alias, and I'd like to have a column
called address and a column called alias, and of course, there
must be no address or alias duplication. This allows addition and
removal of an address and its alias in one insert, and if the insert
fails, then we know there's duplication, and return an error message.
I have thought of other ways around this issue, but my ideal solution
would be as above. If this is not possible, then I will go back to my
other (IMHO less elegant) solutions.
No, you can't do in the above way. You can first check with SELECT
statement if address or alias already exists.


Ok, thanks for the response.

The problem with first doing a select, and then an insert, is that
there exists a race condition. Between the select and insert, someone
else could insert a row which might cause a duplication.
I have another solution, in which I use a transaction to avoid this
problem. But this means I have to change to mysql 4 or postgresql.
Fedora core is still shipping mysql 3.23.58 :(
A few thoughts.

As you are no doubt aware, you can ensure that your addresses are unique 
by putting a unique index on the address column, and you can ensure that 
your aliases are unique by putting a unique index on the alias column. 
Unless you are working with only one domain, I assume your address 
column will hold the complete address (e.g. [EMAIL PROTECTED]); otherwise it 
couldn't be unique.  Will you allow email addresses in the alias column? 
If not, it seems to me you can get the result you want.  If your 
application logic makes sure the address is an address (e.g., it 
contains an @) and the alias is not an address (e.g., does not contain 
an @), then no alias can match an address, so the individual 
uniqueness constraints should be sufficient.



Your requirement of no overlap between aliases and addresses implies 
that you plan to let the user enter either without specifying which. 
Presumably, you would then do something like:

  SELECT * FROM usertable
  WHERE alias = 'user_input' OR address = 'user_input'

That's fine, but you should be aware that OR searches tend to be slow. 
There's a work-around using UNION, but that requires MySQL 4.  If your 
application can tell the difference between an address and an alias (by 
having the user specify, or by checking for the presence/absence of @), 
however, it can check only the relevant column, which will be fast due 
to the index.  Something like:

if 'user_input' contains '@'   # it's an address
  SELECT * FROM usertable WHERE address = 'user_input'
else                           # it's an alias
  SELECT * FROM usertable WHERE alias = 'user_input'
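That routing logic could look like the following in application code (a sketch using Python's sqlite3 module as a stand-in for MySQL; the usertable schema and sample row are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usertable (address TEXT UNIQUE, alias TEXT UNIQUE)")
conn.execute("INSERT INTO usertable VALUES ('bob@example.com', 'bob')")

def lookup(user_input):
    # Route to the one indexed column that can match, avoiding the
    # slower OR over both columns.  Note: `column` comes from a fixed
    # two-item whitelist, never from user input, so interpolating it
    # into the SQL is safe here.
    column = "address" if "@" in user_input else "alias"
    cur = conn.execute(
        f"SELECT address, alias FROM usertable WHERE {column} = ?",
        (user_input,),
    )
    return cur.fetchone()

print(lookup("bob@example.com"))  # ('bob@example.com', 'bob')
print(lookup("bob"))              # same row, found via the alias index
```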
Michael



Re: Unique IDs

2004-02-13 Thread Keith C. Ivey
On 12 Feb 2004 at 22:57, Bill Easton wrote:

 You can use + 1 instead of + interval 1 second, but it may give
 different results some day if MySQL changes the precision of
 timestamp.

Actually, that won't work in all cases.  If the current timestamp is 
20040213114859, then adding 1 (rather than INTERVAL 1 SECOND), gives 
20040213114860, which gets converted to 20040213114800 on insert.  If 
you've already used all the timestamps for that minute, then you're 
never going to get to the next minute (and thus never going to get a 
usable ID) by repeatedly adding 1.
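Keith's point can be checked directly: naively adding 1 to a YYYYMMDDHHMMSS number can produce a "time" whose seconds field is 60, which is not a valid time at all (a quick illustration in Python; the sample value is taken from his message):

```python
from datetime import datetime, timedelta

ts = 20040213114859  # YYYYMMDDHHMMSS, like MySQL's old TIMESTAMP display

# Naive arithmetic: the seconds field becomes 60, an invalid time.
naive = str(ts + 1)  # "20040213114860"
try:
    datetime(int(naive[0:4]), int(naive[4:6]), int(naive[6:8]),
             int(naive[8:10]), int(naive[10:12]), int(naive[12:14]))
    valid = True
except ValueError:  # "second must be in 0..59"
    valid = False
print(valid)  # False

# Adding a real one-second interval rolls over into the next minute:
t = datetime.strptime(str(ts), "%Y%m%d%H%M%S") + timedelta(seconds=1)
print(t.strftime("%Y%m%d%H%M%S"))  # 20040213114900
```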

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Eamon Daly
You definitely don't want to use timestamping for unique
IDs. You want to use an auto-incrementing column or similar.
See

http://www.mysql.com/doc/en/example-AUTO_INCREMENT.html


Eamon Daly
NextWave Media Group LLC
Tel: 1 773 975-1115
Fax: 1 773 913-0970



- Original Message - 
From: Craig Jackson [EMAIL PROTECTED]
To: mysql [EMAIL PROTECTED]
Sent: Thursday, February 12, 2004 11:30 AM
Subject: Unique IDs


 Hello People,

 This is my first post to this list. I am having a bit of a problem that
 Google doesn't seem to help with, and I'm not sure what part of Mysql
 docs will help.

 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.

 I have looked at Locking and Insert Delay, but I'm not sure that will
 help.

 Does anyone have a suggestion? We are using 4.0.14 and ISAM tables and
 PHP 4.3.

 Thanks,
 Craig Jackson








Re: Unique IDs

2004-02-12 Thread Keith C. Ivey
Craig Jackson [EMAIL PROTECTED] wrote:

 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.

I don't understand.  If you're getting many users per second, and 
your timestamps have 1-second resolution, how could you possibly 
solve the problem without changing the method of assigning IDs?
Are the many users per second periods just short bursts, and you're 
really only getting several hundred users per day?  If so, I guess 
you could keep waiting a second and trying the insert again, but that 
could lead to indefinite delays if traffic gets high.  I think you've 
got to bite the bullet and change the unique ID to something that's 
actually unique -- even an AUTO_INCREMENT would work.

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Brent Baisley
You have a design flaw in your database, using timestamp as a unique 
ID. There really is no work around. You can't reliably keep using 
timestamps for unique IDs. As a rule, a unique ID should not be 
dependent on anything else and should not represent anything else aside 
from a unique id. For instance, social security number is a very poor 
choice for unique ids. Yes, it's unique (to a certain degree), but it 
relies on accurate data entry and it represents confidential 
information.
I always use an unsigned integer field with auto increment for my 
unique id fields. An auto increment field is really the only way to 
assure uniqueness.

One way or another you are going to have to change the way your database 
works. Sorry, but any other solution I can think of will only buy you a 
little time; it won't solve the problem.

On Feb 12, 2004, at 12:30 PM, Craig Jackson wrote:

Hello People,

This is my first post to this list. I am having a bit of a problem that
Google doesn't seem to help with, and I'm not sure what part of Mysql
docs will help.
I have a very large web app that uses timestamp for unique IDs.
Everything was rolling fine until we started getting many users per
second, causing some of the unique IDs to not be unique -- users were
being assigned the same timestamp. Since the web app is so large we
don't want to change the method of assigning IDs as it would create a
major project.
I have looked at Locking and Insert Delay, but I'm not sure that will
help.
Does anyone have a suggestion? We are using 4.0.14 and ISAM tables and
PHP 4.3.
Thanks,
Craig Jackson
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Unique IDs

2004-02-12 Thread Craig Jackson
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:
 
  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.
 
 I don't understand.  If you're getting many users per second, and 
 your timestamps have 1-second resolution, how could you possibly 
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're 
 really only getting several hundred users per day?  If so, I guess 
 you could keep waiting a second and trying the insert again, but that 
 could lead to indefinite delays if traffic gets high.  I think you've 
 got to bite the bullet and change the unique ID to something that's 
 actually unique -- even an AUTO_INCREMENT would work.

Thanks for the speedy reply; I have already recommended
auto_increment as the solution. We do need a quick fix until the
problem is properly fixed, though. How would I go about making MySQL wait
one second between inserts? We only get about 1000 hits per day, but they
tend to be concentrated in short time intervals.


 
 -- 
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Mike Miller
Code it within your program.  If this is in fact a primary key (which you 
seem to describe it as), then it should be distinct anyway.  So do an INSERT 
IGNORE or even just an INSERT; it will fail on duplicates.  Check the 
affected rows or the insert_id (using whatever API you use to access MySQL), 
sleep for a second, then try the insert again.  I doubt there's a good way 
to hang up the database on the issue.

Cheers;
-M


From: Craig Jackson [EMAIL PROTECTED]
Reply-To: [EMAIL PROTECTED]
To: Keith C. Ivey [EMAIL PROTECTED]
CC: [EMAIL PROTECTED]
Subject: Re: Unique IDs
Date: Thu, 12 Feb 2004 11:57:24 -0600
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:

  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.

 I don't understand.  If you're getting many users per second, and
 your timestamps have 1-second resolution, how could you possibly
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're
 really only getting several hundred users per day?  If so, I guess
 you could keep waiting a second and trying the insert again, but that
 could lead to indefinite delays if traffic gets high.  I think you've
 got to bite the bullet and change the unique ID to something that's
 actually unique -- even an AUTO_INCREMENT would work.
Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.

 --
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org
_
Add photos to your e-mail with MSN 8. Get 2 months FREE*.  
http://join.msn.com/?page=features/photospgmarket=en-caRU=http%3a%2f%2fjoin.msn.com%2f%3fpage%3dmisc%2fspecialoffers%26pgmarket%3den-ca



RE: Unique IDs

2004-02-12 Thread nalaka_nanayakkara
Add a second field that is auto-incremented and change the primary key to
that. That way, for now, you can keep the timestamp.


Thanks,

Nalaka Nanayakkara
Overwaitea Food Group
Tel: 604-888-1213 (ext 3742)

 -Original Message-
From:   Craig Jackson [mailto:[EMAIL PROTECTED] 
Sent:   Thursday, February 12, 2004 9:57 AM
To: Keith C. Ivey
Cc: [EMAIL PROTECTED]
Subject:Re: Unique IDs

On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
 Craig Jackson [EMAIL PROTECTED] wrote:
 
  I have a very large web app that uses timestamp for unique IDs.
  Everything was rolling fine until we started getting many users per
  second, causing some of the unique IDs to not be unique -- users were
  being assigned the same timestamp. Since the web app is so large we
  don't want to change the method of assigning IDs as it would create a
  major project.
 
 I don't understand.  If you're getting many users per second, and 
 your timestamps have 1-second resolution, how could you possibly 
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're 
 really only getting several hundred users per day?  If so, I guess 
 you could keep waiting a second and trying the insert again, but that 
 could lead to indefinite delays if traffic gets high.  I think you've 
 got to bite the bullet and change the unique ID to something that's 
 actually unique -- even an AUTO_INCREMENT would work.

Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.


 
 -- 
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org




-
Note:  This message is only intended for the use of the individual or entity to which 
it is addressed and may contain information that is privileged, confidential and 
exempt from disclosure.  If the reader of this message is not the intended recipient, 
or an employee or agent responsible for delivering the message to the intended 
recipient, you are hereby notified that any dissemination, distribution, or copying of 
this communication is strictly prohibited.  If you have received this communication in 
error, please notify us by telephone (604-888-2079) or electronically by return 
message, and delete or destroy all copies of this communication.  Thank you.


Re: Unique IDs

2004-02-12 Thread Keith C. Ivey
On 12 Feb 2004 at 11:57, Craig Jackson wrote:

 Thanks for the speedy reply and I have already recommended
 auto_increment for the solution. We do need that quick fix until the
 problem is fixed. How would I go about making Mysql wait one second
 between inserts. We only get about 1000 hits per day, but they tend to
 be concentrated in short time intervals.

You'd have to code that logic into your application, using Perl or 
PHP or C or whatever.  Try the insert.  If it fails, wait a second 
and try again with the new timestamp.  Repeat until you succeed or 
until you've gone through some number of tries (at which point you 
give up).  It's ugly, but that's what you asked for.
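Keith's retry scheme, sketched in application code (Python with sqlite3 as a stand-in for MySQL; the schema, the fake clock, and the function name are all invented for illustration):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (ts INTEGER PRIMARY KEY)")

def insert_with_retry(get_ts, tries=5, delay=0.0):
    """Try the insert; on a duplicate key, wait and retry with a fresh
    timestamp, giving up after `tries` attempts."""
    for _ in range(tries):
        try:
            conn.execute("INSERT INTO hits VALUES (?)", (get_ts(),))
            return True
        except sqlite3.IntegrityError:
            time.sleep(delay)  # in real use: wait a second for a new ts
    return False

clock = iter([100, 100, 101])  # fake clock: second caller collides once
ok = insert_with_retry(lambda: next(clock))
print(ok)   # True: first insert succeeds immediately
ok2 = insert_with_retry(lambda: next(clock))
print(ok2)  # True: collides on 100, then succeeds on 101 after one retry
```

As Keith says, this is ugly: under sustained load the retries can pile up indefinitely, which is why the AUTO_INCREMENT fix is the real answer.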

How about converting the column to a BIGINT AUTO_INCREMENT with a new 
value of, say, 30?  Then your old values would still be 
around with values like 20040212131422.  That's also ugly, but not as 
ugly as the solution with waiting.

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





Re: Unique IDs

2004-02-12 Thread Jeff Mathis
As everyone has pointed out, using timestamps as a unique id was a 
design flaw. You should fix the problem using an auto-increment field.

That said, can you change the column type you are currently using as a 
timestamp to an auto-increment int field? The return type in both 
cases is technically an integer for MySQL versions < 4.1, but your API 
call may need to be changed nevertheless.

My experience has always been to fix the problem right rather than do a 
quick hack, even if it means down time.

jeff
Craig Jackson wrote:
On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:

Craig Jackson [EMAIL PROTECTED] wrote:


I have a very large web app that uses timestamp for unique IDs.
Everything was rolling fine until we started getting many users per
second, causing some of the unique IDs to not be unique -- users were
being assigned the same timestamp. Since the web app is so large we
don't want to change the method of assigning IDs as it would create a
major project.
I don't understand.  If you're getting many users per second, and 
your timestamps have 1-second resolution, how could you possibly 
solve the problem without changing the method of assigning IDs?
Are the many users per second periods just short bursts, and you're 
really only getting several hundred users per day?  If so, I guess 
you could keep waiting a second and trying the insert again, but that 
could lead to indefinite delays if traffic gets high.  I think you've 
got to bite the bullet and change the unique ID to something that's 
actually unique -- even an AUTO_INCREMENT would work.


Thanks for the speedy reply and I have already recommended
auto_increment for the solution. We do need that quick fix until the
problem is fixed. How would I go about making Mysql wait one second
between inserts. We only get about 1000 hits per day, but they tend to
be concentrated in short time intervals.


--
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





--
Jeff Mathis, Ph.D.  505-955-1434
The Prediction Company  [EMAIL PROTECTED]
525 Camino de los Marquez, Ste 6http://www.predict.com
Santa Fe, NM 87505


Re: Unique IDs

2004-02-12 Thread Eric @ Zomething
I like the auto-increment primary key method for unique IDs in MySQL. However, if 
your table's design will not allow you to use that for some (unknown to me) important 
reason, you could programmatically generate a truly unique ID and INSERT that.

Did you try a Google search for GUID?  http://tinyurl.com/2l34e  I don't recall an 
algorithm offhand, but I believe one might typically use a number of variables to 
generate a GUID (like date-time, IP, a random number, etc.). This is a well-explored 
area of CS, and wouldn't require you to reinvent the wheel.

Eric


 Original Message
 From: Jeff Mathis [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Cc: Keith C. Ivey [EMAIL PROTECTED], [EMAIL PROTECTED]
 Date: Thu, Feb-12-2004 11:13 AM
 Subject: Re: Unique IDs
 
 as everyone has pointed out, using timestamps as a unique id was a 
 design flaw. you should fix the problem using an auto-increment field.
 
 that said, can you change the column type you are currently using as a 
 timestamp to be an auto-increment int field? the return type in both 
 cases is technically an integer for mysql versions  4.1, but your API 
 call may need to be changed nevertheless.
 
 my experience has always been to fix the problem right rather than do a 
 hork, even if it means down time.
 
 jeff
 Craig Jackson wrote:
  On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
  
 Craig Jackson [EMAIL PROTECTED] wrote:
 
 
 I have a very large web app that uses timestamp for unique IDs.
 Everything was rolling fine until we started getting many users per
 second, causing some of the unique IDs to not be unique -- users were
 being assigned the same timestamp. Since the web app is so large we
 don't want to change the method of assigning IDs as it would create a
 major project.
 
 I don't understand.  If you're getting many users per second, and 
 your timestamps have 1-second resolution, how could you possibly 
 solve the problem without changing the method of assigning IDs?
 Are the many users per second periods just short bursts, and you're 
 really only getting several hundred users per day?  If so, I guess 
 you could keep waiting a second and trying the insert again, but that 
 could lead to indefinite delays if traffic gets high.  I think you've 
 got to bite the bullet and change the unique ID to something that's 
 actually unique -- even an AUTO_INCREMENT would work.
  
  
  Thanks for the speedy reply and I have already recommended
  auto_increment for the solution. We do need that quick fix until the
  problem is fixed. How would I go about making Mysql wait one second
  between inserts. We only get about 1000 hits per day, but they tend to
  be concentrated in short time intervals.
  
  
  
 -- 
 Keith C. Ivey [EMAIL PROTECTED]
 Tobacco Documents Online
 http://tobaccodocuments.org
  
  
  
 
 
 -- 
 Jeff Mathis, Ph.D.505-955-1434
 The Prediction Company[EMAIL PROTECTED]
 525 Camino de los Marquez, Ste 6  http://www.predict.com
 Santa Fe, NM 87505
 
 
 
 
 
 




Re: Unique IDs

2004-02-12 Thread Bill Easton
Craig,

Instead of delaying a second, why not just add a second to the timestamp?
The following should work just fine:

create table tst (ts timestamp primary key, other_stuff varchar(127) not null);
...
insert into tst
  select greatest(now(), max(T.ts) + interval 1 second),
         value of other_stuff
  from tst T;

I checked that this works in 4.0.16.  It even appears to work for inserting
the first row into an empty table.  (I think it probably didn't work in
3.x.)

Of course, during a burst, the stored timestamps could be a few seconds
later than the actual time, but, at least the timestamps are unique
and ascending.
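The greatest(now(), max(ts) + 1 second) trick is a classic monotonic-ID allocator; the same rule in application code looks like this (a sketch; the function name and sample times are invented, and a real version would need the same locking Bill mentions to be safe across connections):

```python
from datetime import datetime, timedelta

_last = None  # last timestamp handed out

def next_unique_ts(now):
    """Return max(now, last + 1 second): timestamps stay unique and
    ascending even when several are requested within the same second,
    at the cost of drifting ahead of the clock during a burst."""
    global _last
    candidate = now if _last is None else max(now, _last + timedelta(seconds=1))
    _last = candidate
    return candidate

t0 = datetime(2004, 2, 12, 11, 48, 59)
a = next_unique_ts(t0)
b = next_unique_ts(t0)  # same wall-clock second: bumped forward
c = next_unique_ts(t0)
print(a, b, c)  # strictly ascending: :48:59, :49:00, :49:01
```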

You could also keep the timestamp in a separate table and increment it,
using a variable:

update timestamp_table set ts = (@T := greatest(now(), ts + interval 1
second));

Then you can use @T as your generated unique ID in a subsequent
statement, within the same connection.  Of course the timestamps
could be out of order, unless you lock your other table or do
everything within a transaction.  Maybe this matters, maybe not.

auto_increment is probably still better.  With innodb, there is
less locking required.

You can use + 1 instead of + interval 1 second, but it may give
different results some day if MySQL changes the precision of timestamp.

HTH

Bill Easton

 Subject: Re: Unique IDs
 From: Craig Jackson [EMAIL PROTECTED]
 To: Keith C. Ivey [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Date: Thu, 12 Feb 2004 11:57:24 -0600

 On Thu, 2004-02-12 at 11:47, Keith C. Ivey wrote:
  Craig Jackson [EMAIL PROTECTED] wrote:
 
   I have a very large web app that uses timestamp for unique IDs.
   Everything was rolling fine until we started getting many users per
   second, causing some of the unique IDs to not be unique -- users were
   being assigned the same timestamp. Since the web app is so large we
   don't want to change the method of assigning IDs as it would create a
   major project.
 
  I don't understand.  If you're getting many users per second, and
  your timestamps have 1-second resolution, how could you possibly
  solve the problem without changing the method of assigning IDs?
  Are the many users per second periods just short bursts, and you're
  really only getting several hundred users per day?  If so, I guess
  you could keep waiting a second and trying the insert again, but that
  could lead to indefinite delays if traffic gets high.  I think you've
  got to bite the bullet and change the unique ID to something that's
  actually unique -- even an AUTO_INCREMENT would work.

 Thanks for the speedy reply and I have already recommended
 auto_increment for the solution. We do need that quick fix until the
 problem is fixed. How would I go about making Mysql wait one second
 between inserts. We only get about 1000 hits per day, but they tend to
 be concentrated in short time intervals.


 
  -- 
  Keith C. Ivey [EMAIL PROTECTED]
  Tobacco Documents Online
  http://tobaccodocuments.org




