constraining unique values across columns

2006-05-01 Thread Scott Purcell
I occasionally run into issues where I would like to ensure that the values across two or three columns are not duplicated. At work we use Oracle, and I see that they add constraints when creating the columns. I am running mysql 5.0 and was hoping someone could give me the simple explanation of

RE: constraining unique values across columns

2006-05-01 Thread Tim Lucia
How about this? CREATE TABLE `test`.`test` ( `x` INTEGER UNSIGNED NOT NULL DEFAULT 0, `y` INTEGER UNSIGNED NOT NULL DEFAULT 0, `Z` INTEGER UNSIGNED NOT NULL DEFAULT 0, UNIQUE `Index_1`(`x`, `y`, `Z`) ) ENGINE = InnoDB; Tim -Original Message- From: Scott Purcell [mailto:[EMAIL
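For a table that already exists, the same composite constraint can be added after the fact. A minimal sketch, reusing the column and index names from Tim's CREATE TABLE above:

  ALTER TABLE `test`.`test`
    ADD UNIQUE INDEX `Index_1` (`x`, `y`, `Z`);
  -- once the index exists, any INSERT or UPDATE that would repeat an
  -- (x, y, Z) combination fails with a duplicate-key error

Either form gives the same behaviour; the ALTER is simply the route when the table was created without the constraint.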

Re: constraining unique values across columns

2006-05-01 Thread Shawn Green
--- Scott Purcell [EMAIL PROTECTED] wrote: I occasionally run into issues where I would like to ensure that the values across two or three columns are not duplicated. At work we use Oracle, and I see that they add constraints when creating the columns. I am running mysql 5.0 and was hoping

Re: constraining unique values across columns

2006-05-01 Thread Shawn Green
sorry :( I clicked send too early --- Shawn Green [EMAIL PROTECTED] wrote: --- Scott Purcell [EMAIL PROTECTED] wrote: I occasionally run into issues where I would like to ensure that the values across two or three columns are not duplicated. At work we use Oracle, and I see that

Optimizing DISTINCT searches

2006-05-01 Thread Stephen P. Fracek, Jr.
Several of my DISTINCT searches are frequently showing up in the slow query log. These queries use multiple table joins. Using EXPLAIN shows that the queries are using the appropriate keys, as far as I know. Are DISTINCT searches using multiple joins slow? TIA. Stephen P. Fracek, Jr. [EMAIL

Re: Timeseries

2006-05-01 Thread David Hillman
On Apr 28, 2006, at 7:12 PM, David Lindelöf wrote: Is there an easy way to get the last timestamped record not later than a given date/time? Well, yeah, that part is easy. select * from your_table where timestamp <= target_time order by timestamp desc limit 1 Your problem seems to
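For that query to stay fast on a large table, the timestamp column should be indexed. A minimal sketch, assuming the table and column names from the quoted query (the index name is illustrative):

  -- lets "WHERE timestamp <= target_time ORDER BY timestamp DESC LIMIT 1"
  -- read a single row from the end of the index range instead of scanning the table
  ALTER TABLE your_table ADD INDEX idx_timestamp (`timestamp`);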

RE: Optimizing DISTINCT searches

2006-05-01 Thread Robert DiFalco
Well, normally a DISTINCT has to do a type of sort and is slower than non-DISTINCT queries. Each field of the result set is considered in the DISTINCT logic. Can you modify the query so that it does not require the DISTINCT? Can you post the query? R. -Original Message- From: Stephen P.

FW: Optimizing DISTINCT searches

2006-05-01 Thread Stephen P. Fracek, Jr.
On 2006-05-01 11:55 AM, Robert DiFalco [EMAIL PROTECTED] wrote: Well, normally a DISTINCT has to do a type of sort and is slower than non-DISTINCT queries. Each field of the result set is considered in the DISTINCT logic. Can you modify the query so that it does not require the DISTINCT? Can

Finding duplicates, etc.

2006-05-01 Thread Dirk Bremer
I have the following table (column information for production.group15034_I): Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment ---

RE: Optimizing DISTINCT searches

2006-05-01 Thread Robert DiFalco
Would you need the DISTINCT if you change the query like so? SELECT Site.Site_ID, Site, Status, Type FROM Site JOIN Project ON Site.Site_ID = Project.Site_ID ORDER BY Site; You may also want to just try your initial query without the distinct to see if that is

Query Help

2006-05-01 Thread Robert Gehrig
I have a table like so: ID int, val1 int, val2 int. I can have multiple records where id is the same, e.g.:
ID  val1  val2
 1     1     2
 1     1     1
 2     2     2
 2     1     1
 2     2     2
I need to find which id has the most records. Thanks Robert Gehrig

Script run how???

2006-05-01 Thread Ravi Kumar
Hi, I want to run the following commands through a script: analyze table t1; optimize table t1; analyze table t2; optimize table t2; Can I do it on Linux MySQL? I also want the script to notice, during the analyse step, any error related to a table and then run repair table t1 l
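As a sketch, two common routes on Linux: put the statements in a file and feed it to the mysql command-line client (for example from cron), or let mysqlcheck handle it with its --analyze, --optimize and --auto-repair options. A minimal maintenance file, assuming tables t1 and t2 as in the post (the file name is illustrative):

  -- maint.sql: run periodically via the mysql command-line client
  ANALYZE TABLE t1;
  OPTIMIZE TABLE t1;
  ANALYZE TABLE t2;
  OPTIMIZE TABLE t2;
  -- each statement returns a row per table whose Msg_type column reports
  -- 'status', 'warning' or 'error', which a wrapper script can test before
  -- deciding to issue REPAIR TABLE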

Re: Query Help

2006-05-01 Thread Gabriel PREDA
SELECT id, count(*) AS cnt FROM `table_name` GROUP BY id ORDER BY cnt DESC [ LIMIT 1 ] -- Gabriel PREDA Senior Web Developer
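Against the sample data in the original post this returns id 2, since it has three rows to id 1's two; dropping the LIMIT 1 lists every id with its row count, highest first:

  id   cnt
   2     3
   1     2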

RE: Script run how???

2006-05-01 Thread George Law
perl? Just quickly throwing something together - this is untested. $done = 0; $count = 0; while (!$done) { $dbh->do("analyze table t1"); my $info = $dbh->{mysql_info}; if ($info =~ /some kind of error message/) { $dbh->do("repair table t1"); my $info2 =

Re: Optimizing DISTINCT searches

2006-05-01 Thread Stephen P. Fracek, Jr.
On 2006-05-01 1:14 PM, Robert DiFalco [EMAIL PROTECTED] wrote: Would you need the DISTINCT if you change the query like so? SELECT Site.Site_ID, Site, Status, Type FROM Site JOIN Project ON Site.Site_ID = Project.Site_ID ORDER BY Site; You may also want to just try your initial query

RE: HELP --- Slow SP

2006-05-01 Thread Quentin Bennett
The THEORY behind both statements is fine: 1. Have a primary, single-column integer index, auto-incrementing (probably), for every record. 2. Have a primary index that uniquely identifies your data. The advantage of 2 is that it is (usually) obvious what the unique characteristics of your data
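A concrete sketch of the two approaches (table and column names here are illustrative, not from the original thread):

  -- 1. surrogate key: a single auto-incrementing integer identifies each row
  CREATE TABLE orders_surrogate (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    customer_id INT UNSIGNED NOT NULL,
    order_date  DATE NOT NULL,
    UNIQUE KEY (customer_id, order_date)  -- natural uniqueness still enforced separately
  );

  -- 2. natural key: the columns that uniquely identify the data form the primary key itself
  CREATE TABLE orders_natural (
    customer_id INT UNSIGNED NOT NULL,
    order_date  DATE NOT NULL,
    PRIMARY KEY (customer_id, order_date)
  );

Option 2 makes the unique characteristics of the data explicit in the key, which is the advantage Quentin describes.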

Fixing Databases When Replication Is Enabled?

2006-05-01 Thread Robinson, Eric
I ran mysqlcheck against a replication master database and it reported a problem with a table, which it corrected. Is the slave now out of sync? If so, how do I correct the problem without copying the whole database over a slow WAN link? --Eric -- MySQL General Mailing List For list archives:

Re: Fixing Databases When Replication Is Enabled?

2006-05-01 Thread Kishore Jalleda
On 5/1/06, Robinson, Eric [EMAIL PROTECTED] wrote: I ran mysqlcheck against a replication master database and it reported a problem with a table, which it corrected. Is the slave now out of sync? If so, how do I correct the problem without copying the whole database over a slow WAN link? --Eric

Re: Finding duplicates, etc.

2006-05-01 Thread Peter Brawley
Dirk, I would like to know where there are duplicate accounts that do not have duplicate addresses. I think that at this point these are included in the first query. I would like to separate these out to report on them. How about ... select account,ident,address from group15034_i
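A common pattern for that kind of report is to group on the account and count distinct addresses. A sketch against the column names quoted above (account, address in group15034_i), under the assumption that "duplicate accounts without duplicate addresses" means the same account appearing more than once with differing addresses:

  SELECT account,
         COUNT(*)                AS rows_for_account,
         COUNT(DISTINCT address) AS distinct_addresses
  FROM   group15034_i
  GROUP  BY account
  HAVING COUNT(*) > 1                  -- the account is duplicated
     AND COUNT(DISTINCT address) > 1;  -- but its addresses differ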

RE: Optimizing DISTINCT searches

2006-05-01 Thread Robert DiFalco
How about something like this? SELECT Site.Site_ID, Site, Status, Type FROM Site WHERE EXISTS( SELECT * FROM Project) ORDER BY Site; I'm assuming Site_ID is unique in the Site table? -Original Message- From: Stephen P. Fracek, Jr. [mailto:[EMAIL PROTECTED] Sent: Monday, May 01,
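As quoted, that EXISTS subquery is not tied back to the outer row, so it only tests whether Project has any rows at all; the intent is presumably a correlated form, sketched here with the same table and column names:

  SELECT Site.Site_ID, Site, Status, Type
  FROM   Site
  WHERE  EXISTS (SELECT 1
                 FROM   Project
                 WHERE  Project.Site_ID = Site.Site_ID)
  ORDER  BY Site;
  -- avoids the DISTINCT because each Site row is returned at most once,
  -- no matter how many matching Project rows exist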

MySQL 5.0.21 has been released

2006-05-01 Thread Joerg Bruehe
Hi, MySQL 5.0.21, a new version of the popular Open Source Database Management System, has been released. The Community Edition is now available in source and binary form for a number of platforms from our download pages at http://dev.mysql.com/downloads/ and mirror sites. Note that not

RE: Fixing Databases When Replication Is Enabled?

2006-05-01 Thread Robinson, Eric
I ran mysqlcheck, not myisamchk or REPAIR. Is what you said still true? -Original Message- From: Kishore Jalleda [mailto:[EMAIL PROTECTED] Sent: Monday, May 01, 2006 1:56 PM To: Robinson, Eric Cc: mysql@lists.mysql.com Subject: Re: Fixing Databases When Replication Is Enabled? On

Re: Query Help

2006-05-01 Thread Robert Gehrig
Thanks that got it. Robert Gehrig Webmaster at www.gdbarri.com e-mail: [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Fixing Databases When Replication Is Enabled?

2006-05-01 Thread Kishore Jalleda
Yes On 5/1/06, Robinson, Eric [EMAIL PROTECTED] wrote: I ran mysqlcheck, not myisamchk or REPAIR. Is what you said still true? -Original Message- From: Kishore Jalleda [mailto:[EMAIL PROTECTED] Sent: Monday, May 01, 2006 1:56 PM To: Robinson, Eric Cc: mysql@lists.mysql.com Subject: Re:
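One way to avoid copying the whole database over the slow WAN link is to deal with just the affected table on the slave. A sketch, with t1 standing in for whichever table mysqlcheck repaired on the master:

  -- on the slave, see whether its copy of the table is also damaged
  CHECK TABLE t1;
  -- if it reports corruption, repair it in place
  REPAIR TABLE t1;
  -- afterwards, compare row counts on master and slave as a rough sanity check
  SELECT COUNT(*) FROM t1;

If the contents still differ after that, re-importing only that one table (for example with mysqldump limited to t1) is still far less traffic than the full database.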

Re: innodb file per table

2006-05-01 Thread Ware Adams
On Apr 26, 2006, at 3:54 AM, Dr. Frank Ullrich wrote: Duzenbury, Rich wrote: Hi all, I've inherited an innodb database that is configured like: innodb_file_per_table innodb_data_file_path = ibdata1:3000M;ibdata2:3000M;ibdata3:3000M;ibdata4:3000M:autoextend Um, doesn't this allocate 12G that
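A minimal my.cnf sketch of the usual combination (sizes illustrative): with innodb_file_per_table, each table's data and indexes live in their own .ibd file, so the shared tablespace mostly holds the data dictionary and undo information and can be kept far smaller than the 12G quoted above:

  [mysqld]
  innodb_file_per_table
  # small, autoextending shared tablespace; per-table .ibd files hold the table data
  innodb_data_file_path = ibdata1:100M:autoextend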