constraining unique values across columns
I occasionally run into situations where I would like to ensure that the combination of values across two or three columns is not duplicated. At work we use Oracle, and I see that constraints are added when the columns are created. I am running MySQL 5.0 and was hoping someone could give me a simple explanation of how to create such constraints, and whether indexes are needed for them or not. I have tried reading the docs in this regard, but because I do not understand the concept I am having trouble translating it into the actual creation of these. Thanks, Scott
RE: constraining unique values across columns
How about this?

CREATE TABLE `test`.`test` (
  `x` INTEGER UNSIGNED NOT NULL DEFAULT 0,
  `y` INTEGER UNSIGNED NOT NULL DEFAULT 0,
  `z` INTEGER UNSIGNED NOT NULL DEFAULT 0,
  UNIQUE `Index_1` (`x`, `y`, `z`)
) ENGINE = InnoDB;

Tim

-----Original Message-----
From: Scott Purcell [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 9:07 AM
To: mysql@lists.mysql.com
Subject: constraining unique values across columns

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
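A quick way to see what the composite UNIQUE index buys you is to try inserting a duplicate (x, y, z) tuple. A minimal sketch using Python's sqlite3 module standing in for MySQL — the composite-UNIQUE behaviour is the same, though the DDL above uses MySQL's own index-naming syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE test (
           x INTEGER NOT NULL DEFAULT 0,
           y INTEGER NOT NULL DEFAULT 0,
           z INTEGER NOT NULL DEFAULT 0,
           UNIQUE (x, y, z)
       )"""
)
conn.execute("INSERT INTO test (x, y, z) VALUES (1, 2, 3)")
conn.execute("INSERT INTO test (x, y, z) VALUES (1, 2, 4)")  # allowed: z differs

def insert_dup(conn):
    """Return True if a fully duplicate (x, y, z) row is rejected."""
    try:
        conn.execute("INSERT INTO test (x, y, z) VALUES (1, 2, 3)")
        return False
    except sqlite3.IntegrityError:
        # the composite UNIQUE index rejects the duplicate tuple
        return True
```

No separate "constraint" object is needed: the UNIQUE index *is* the constraint, and duplicating any one column alone is still allowed as long as the full tuple differs.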
Re: constraining unique values across columns
Sorry :( I clicked send too early.

--- Scott Purcell [EMAIL PROTECTED] wrote:
> I occasionally run into issues where I would like to ensure that the
> values across two or three columns are not duplicated.

Hi Scott,

What you want to do is called a CHECK CONSTRAINT and is not yet part of MySQL. However, now that MySQL has STORED PROCEDURES and FUNCTIONS, it should be a matter of a revision or two before CHECK CONSTRAINTS are part of the mix. Until then, you will have to validate your data in your application.

Shawn Green
Optimizing DISTINCT searches
Several of my DISTINCT searches are frequently showing up in the slow query log. These queries use multiple table joins. Using EXPLAIN shows that the queries are using the appropriate keys, as far as I know. Are DISTINCT searches using multiple joins slow? TIA.

Stephen P. Fracek, Jr. [EMAIL PROTECTED]
Re: Timeseries
On Apr 28, 2006, at 7:12 PM, David Lindelöf wrote:
> Is there an easy way to get the last timestamped record not later than a given date/time?

Well, yeah, that part is easy.

select * from your_table
where timestamp <= target_time
order by timestamp desc
limit 1;

Your problem seems to be how to get multiple last-timestamped records, and I don't think that answer will be elegant.

-- David Hillman LiveText, Inc 1.866.LiveText x235
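For the harder per-group case ("the last record per series not later than T"), one common approach is a correlated MAX() subquery. A sketch against a hypothetical (series, ts, val) table — the names are invented, and SQLite via Python stands in for MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (series TEXT, ts INTEGER, val REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("a", 10, 1.0), ("a", 20, 2.0), ("a", 30, 3.0),
     ("b", 15, 9.0), ("b", 25, 8.0)],
)

def last_before(conn, target):
    """Latest row per series with ts <= target (at most one row per series)."""
    return conn.execute(
        """SELECT series, ts, val FROM readings r
           WHERE ts = (SELECT MAX(ts) FROM readings
                       WHERE series = r.series AND ts <= ?)
           ORDER BY series""",
        (target,),
    ).fetchall()
```

With the data above, last_before(conn, 22) picks ts=20 for series "a" and ts=15 for series "b"; a series with no row at or before the target simply drops out, since MAX() returns NULL and the equality never matches.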
RE: Optimizing DISTINCT searches
Well, normally a DISTINCT has to do a type of sort and is slower than non-DISTINCT queries. Each field of the result set is considered in the DISTINCT logic. Can you modify the query so that it does not require the DISTINCT? Can you post the query?

R.

-----Original Message-----
From: Stephen P. Fracek, Jr. [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 7:52 AM
To: mysql@lists.mysql.com
Subject: Optimizing DISTINCT searches
FW: Optimizing DISTINCT searches
On 2006-05-01 11:55 AM, Robert DiFalco [EMAIL PROTECTED] wrote:
> Can you modify the query so that it does not require the DISTINCT?
> Can you post the query?

Robert - Query:

SELECT DISTINCT Project.Site_ID, Site, Status, Type
FROM Project, Site
WHERE Site.Site_ID = Project.Site_ID
ORDER BY Site;

Site is the site name, Status and Type contain additional information about the site, and Site_ID is the unique site id. The Project table contains, among other things, a list of sites where the projects are being done. The results of this query are supposed to be a non-duplicated list of sites that are associated with at least one project. As the number of projects and sites has increased, this query is now frequently in the slow query log.

Stephen P. Fracek, Jr. [EMAIL PROTECTED]
Finding duplicates, etc.
I have the following table:

CREATE TABLE `group15034_I` (
  `ident` int(11) NOT NULL auto_increment,
  `account` int(10) unsigned zerofill NOT NULL default '00',
  `sub_account` tinyint(3) unsigned NOT NULL default '0',
  `address` varchar(132) NOT NULL default '',
  `data` text NOT NULL,
  PRIMARY KEY (`ident`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

The account and address fields will contain duplicate values under certain scenarios. I have the following 3 requirements:

1. Select the accounts that have unique accounts and addresses, i.e. not duplicated:

select ident,account,address,count(*) as N,data
from group15034_I
group by account,address
having N = 1;

This select appears to work fine.

2. Select the accounts that have both duplicate accounts and addresses:

select account,address,count(*) as N
from group15034_I
group by account,address
having N > 1;

This select appears to work fine.

3. I want the exceptions to the above two conditions. Specifically, I would like to know where there are duplicate accounts that do not have duplicate addresses. I think that at this point in time these are included in the first query. I would like to separate these out to report on them. I can't figure out a way to accomplish number 3, but I know that in the given data set this condition does occur.

Dirk Bremer - Senior Systems Engineer - ESS/AMS - NISC
Lake St. Louis MO - USA Central Time Zone
636-755-2652 fax 636-755-2503 [EMAIL PROTECTED] www.nisc.coop
RE: Optimizing DISTINCT searches
Would you need the DISTINCT if you change the query like so?

SELECT Site.Site_ID, Site, Status, Type
FROM Site
JOIN Project ON Site.Site_ID = Project.Site_ID
ORDER BY Site;

You may also want to just try your initial query without the DISTINCT to see if that is the issue. Also, do you have an index on the Site column? The issue with this query is that you are pretty much selecting everything from the Project table.

R.

-----Original Message-----
From: Stephen P. Fracek, Jr. [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 11:05 AM
To: mysql@lists.mysql.com
Subject: FW: Optimizing DISTINCT searches
Query Help
I have a table like so:

ID    int
val1  int
val2  int

I can have multiple records where ID is the same, e.g.:

ID   val1  val2
1    1     2
1    1     1
2    2     2
2    1     1
2    2     2

I need to find which ID has the most records.

Thanks
Robert Gehrig
Webmaster at www.gdbarri.com
e-mail: [EMAIL PROTECTED]
Script run how???
Hi,

I want to run the following commands through a script:

analyze table t1;
optimize table t1;
analyze table t2;
optimize table t2;

Can I do it on Linux MySQL? I would also like the script to notice, during analysis, any error related to a table, and then run:

repair table t1;
repair table t2;

Thanks,
Re: Query Help
SELECT id, count(*) AS cnt
FROM `table_name`
GROUP BY id
ORDER BY cnt DESC
[ LIMIT 1 ]

-- Gabriel PREDA, Senior Web Developer
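A quick sanity check of this query against the sample data from the question (SQLite via Python standing in for MySQL; the SQL is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val1 INTEGER, val2 INTEGER)")
# the five sample rows from the original question
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 1, 2), (1, 1, 1), (2, 2, 2), (2, 1, 1), (2, 2, 2)])

def most_frequent_id(conn):
    """Return (id, row_count) for the id with the most rows."""
    return conn.execute(
        "SELECT id, COUNT(*) AS cnt FROM t "
        "GROUP BY id ORDER BY cnt DESC LIMIT 1"
    ).fetchone()
```

With the sample rows it returns (2, 3): id 2 has three records. Note that if two ids tie for the most records, LIMIT 1 returns an arbitrary one of them.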
RE: Script run how???
Perl? Just quickly throwing something together - this is untested.

$done = 0;
$count = 0;
while (!$done) {
    $dbh->do("analyze table t1");
    my $info = $dbh->{mysql_info};
    if ($info =~ /some kind of error message/) {
        $dbh->do("repair table t1");
        my $info2 = $dbh->{mysql_info};
        if ($info2 =~ /another error message/) {
            print "Error while repairing table t1\n";
            last;
        }
    } else {
        $done = 1;
    }
    $count++;
    if ($count > 5) {
        print "unable to repair errors in 5 passes\n";
        last;
    }
}

You would have to figure out what kinds of errors may come back and put those in the if conditions.

-----Original Message-----
From: Ravi Kumar [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 2:56 PM
To: Sergei Golubchik; Robert DiFalco
Cc: [EMAIL PROTECTED]; mysql@lists.mysql.com
Subject: Script run how???
Re: Optimizing DISTINCT searches
On 2006-05-01 1:14 PM, Robert DiFalco [EMAIL PROTECTED] wrote:
> Would you need the DISTINCT if you change the query like so?
> SELECT Site.Site_ID, Site, Status, Type
> FROM Site
> JOIN Project ON Site.Site_ID = Project.Site_ID
> ORDER BY Site;

Robert - Your query doesn't work - it finds ALL the rows in the Project table and hence repeats the sites. I do have an index on the Site table; it is the Site_ID. Project.Site_ID is also indexed.

Stephen P. Fracek, Jr. [EMAIL PROTECTED]
RE: HELP --- Slow SP
The THEORY behind both statements is fine:

1. Have a primary, single-column integer index, auto-incrementing (probably) for every record.
2. Have a primary index that uniquely identifies your data.

The advantage of 2 is that it is (usually) obvious what the unique characteristics of your data are, and so a natural PK emerges. The advantages of 1 are based on two premises:

a. Indices are all about efficiency, and the efficiency of comparing 4-byte integers is greater than the efficiency of comparing 51 characters of a combined key.

b. Relationships need to be simple to allow point a. to work - if the author wishes, some time in the future, to create a relationship to this table, s/he has two choices: create the child table with a foreign key containing all three elements of the original primary key, or add a new auto-increment primary key to his_msisdn_imei_activ_hist at that stage. If, at that time, the table is involved in a 24x7x52 system with hundreds of millions of records, then adding a new column and index may not be practical.

So, if your system is a small, stable one and will remain that way, index efficiency is less of an issue, and the use of a 51-byte multi-column index is not a problem. However, if you want to design in future-proofing, get into the habit of putting a single-column integer, auto_increment primary key on every table (or at least considering doing so!). The speed of MySQL can lead to some bad habits that don't transfer well to other DBMS products, and good practice is good practice anywhere.

Have an awesome day.

Quentin

P.S. 51 bytes assumes DATETIME is 8 bytes, but it may be 6?

-----Original Message-----
From: Martijn Tonies [mailto:[EMAIL PROTECTED]
Sent: Friday, 28 April 2006 8:56 p.m.
To: mysql@lists.mysql.com
Subject: Re: HELP --- Slow SP

> CREATE TABLE `his_msisdn_imei_activ_hist` (
>   `MSISDN` varchar(23) NOT NULL,
>   `ACTIV_IMEI` varchar(20) NOT NULL,
>   `ACTIV_PHONE_TYPE` varchar(100) NOT NULL,
>   `PREV_IMEI` varchar(20) default NULL,
>   `PREV_PHONE_TYPE` varchar(100) default NULL,
>   `ACTIV_TIME` datetime NOT NULL,
>   PRIMARY KEY (`MSISDN`,`ACTIV_IMEI`,`ACTIV_TIME`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
>
> This primary key is a bad idea. A VERY VERY bad idea. For starters, a
> primary key should have ONE field, not THREE. While it is allowed, it's
> not going to help performance at all. Next is that the primary key

Care for a fight over this one? :-) A primary key should be the primary key. If this is 3 columns, or 1 varchar column, it's all fine. I agree with your point about ACTIV_TIME being a bad candidate for being part of a PK, though. Oh, and having multiple columns in a PK does not mean you cannot create additional indices as/if required. All in all, your statement about multiple columns in a PK is a very very bad statement ;-)

> should be a numeric field. You've got varchars and datetimes! Yuck! If
> you want to enforce a rule such as restricting duplicate values, then
> start by creating yourself a sane primary key (an unsigned int, for
> example), and *THEN* put an index (with your don't-allow-duplicates
> rule) across your (`MSISDN`,`ACTIV_IMEI`,`ACTIV_TIME`) fields.

Martijn Tonies
Database Workbench - development tool for MySQL, and more!
Upscene Productions
http://www.upscene.com
My thoughts: http://blog.upscene.com/martijn/
Database development questions? Check the forum! http://www.databasedevelopmentforum.com
Fixing Databases When Replication Is Enabled?
I ran mysqlcheck against a replication master database and it reported a problem with a table, which it corrected. Is the slave now out of sync? If so, how do I correct the problem without copying the whole database over a slow WAN link?

--Eric
Re: Fixing Databases When Replication Is Enabled?
On 5/1/06, Robinson, Eric [EMAIL PROTECTED] wrote:
> I ran mysqlcheck against a replication master database and it reported
> a problem with a table, which it corrected. Is the slave now out of sync?

By default, when you run REPAIR or myisamchk --recover, MySQL writes the changes made to the binlog, and those will obviously be replicated to the slave. So if everything went fine with your myisamchk on the master, then your slave is in sync and you don't have to do anything special on the slave...

Kishore Jalleda
http://kjalleda.googlepages.com/projects
Re: Finding duplicates, etc.
Dirk,

> I would like to know where there are duplicate accounts that do not
> have duplicate addresses. I think that at this point in time these are
> included in the first query. I would like to separate these out to
> report on them.

How about ...

select g1.account, g1.ident, g1.address
from group15034_I g1
inner join group15034_I g2 using (account)
where g1.address <> g2.address;

PB
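An alternative to the self-join for requirement 3 is a single GROUP BY: an account qualifies when it appears more than once but its rows do not all share one address. A sketch with made-up sample rows (SQLite via Python standing in for MySQL; the HAVING clause is the same in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE g (account INTEGER, address TEXT)")
conn.executemany("INSERT INTO g VALUES (?, ?)", [
    (100, "1 Elm St"), (100, "1 Elm St"),  # duplicate account AND address (req. 2)
    (200, "2 Oak St"), (200, "3 Ash St"),  # duplicate account, distinct addresses (req. 3)
    (300, "4 Fir St"),                     # unique account and address (req. 1)
])

def dup_accounts_distinct_addresses(conn):
    """Accounts that repeat but whose addresses differ (requirement 3)."""
    return [row[0] for row in conn.execute(
        """SELECT account FROM g
           GROUP BY account
           HAVING COUNT(*) > 1 AND COUNT(DISTINCT address) > 1
           ORDER BY account"""
    )]
```

With the sample data only account 200 is reported: account 100 repeats but with a single address, and account 300 does not repeat at all.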
RE: Optimizing DISTINCT searches
How about something like this?

SELECT Site.Site_ID, Site, Status, Type
FROM Site
WHERE EXISTS (SELECT * FROM Project
              WHERE Project.Site_ID = Site.Site_ID)
ORDER BY Site;

(The subquery needs the correlation to the outer Site row; without it, EXISTS matches every site whenever any project exists.) I'm assuming Site_ID is unique in the Site table?

-----Original Message-----
From: Stephen P. Fracek, Jr. [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 1:27 PM
To: mysql@lists.mysql.com
Subject: Re: Optimizing DISTINCT searches
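The correlated-EXISTS rewrite avoids DISTINCT entirely because the outer query only ever emits each Site row once. A sketch with hypothetical data (SQLite via Python standing in for MySQL; table and column names follow the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Site (Site_ID INTEGER PRIMARY KEY, Site TEXT)")
conn.execute("CREATE TABLE Project (Project_ID INTEGER, Site_ID INTEGER)")
conn.executemany("INSERT INTO Site VALUES (?, ?)",
                 [(1, "Alpha"), (2, "Beta"), (3, "Gamma")])
# site 1 has two projects, site 3 has one, site 2 has none
conn.executemany("INSERT INTO Project VALUES (?, ?)",
                 [(10, 1), (11, 1), (12, 3)])

def sites_with_projects(conn):
    """Sites linked to at least one project; no DISTINCT needed."""
    return [row[0] for row in conn.execute(
        """SELECT Site FROM Site s
           WHERE EXISTS (SELECT 1 FROM Project p
                         WHERE p.Site_ID = s.Site_ID)
           ORDER BY Site"""
    )]
```

Alpha appears once even though it has two projects, and Beta (no projects) is excluded: the EXISTS probe can stop at the first matching Project row per site instead of joining and then de-duplicating.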
MySQL 5.0.21 has been released
Hi,

MySQL 5.0.21, a new version of the popular Open Source Database Management System, has been released. The Community Edition is now available in source and binary form for a number of platforms from our download pages at http://dev.mysql.com/downloads/ and mirror sites. Note that not all mirror sites may be up to date at this point in time - if you can't find this version on some mirror, please try again later or choose another download site.

This is a bugfix release for the current production release family. This MySQL 5.0.21 release includes the patches for recently reported security vulnerabilities in the MySQL client-server protocol. We would like to thank Stefano Di Paola [EMAIL PROTECTED] for finding and reporting these to us.

This section documents all changes and bug fixes that have been applied since the last official MySQL release. If you would like to receive more fine-grained and personalised update alerts about fixes that are relevant to the version and features you use, please consider subscribing to MySQL Network (a commercial MySQL offering). For more details please see http://www.mysql.com/network/advisors.html. We welcome and appreciate your feedback!

Functionality added or changed:

* Security enhancement: Added the global max_prepared_stmt_count system variable to limit the total number of prepared statements in the server. This limits the potential for denial-of-service attacks based on causing the server to run out of memory by preparing huge numbers of statements. The current number of prepared statements is available through the 'prepared_stmt_count' status variable. (Bug#16365: http://bugs.mysql.com/16365)

* NDB Cluster: It is now possible to perform a partial start of a cluster. That is, it is now possible to bring up the cluster without running ndbd --initial on all configured data nodes first. (Bug#18606: http://bugs.mysql.com/18606)

* NDB Cluster: It is now possible to install MySQL with Cluster support to a non-default location and change the search path for font description files using either the --basedir or --character-sets-dir options. (Previously in MySQL 5.0, ndbd searched only the default path for character sets.)

* In result set metadata, the MYSQL_FIELD.length value for BIT columns is now reported in number of bits. For example, the value for a BIT(9) column is 9. (Formerly, the value was related to number of bytes.) (Bug#13601: http://bugs.mysql.com/13601)

* The default for the innodb_thread_concurrency system variable was changed to 8. (Bug#15868: http://bugs.mysql.com/15868)

Bugs fixed:

* Security bugfix: A malicious client, using specially crafted invalid COM_TABLE_DUMP packets, was able to trigger an exploitable buffer overflow on the server. Thanks to Stefano Di Paola [EMAIL PROTECTED] for finding and reporting this bug.

* Security bugfix: A malicious client, using specially crafted invalid login or COM_TABLE_DUMP packets, was able to read uninitialized memory, which potentially, though unlikely in MySQL, could lead to an information disclosure. Thanks to Stefano Di Paola [EMAIL PROTECTED] for finding and reporting this bug.

* NDB Cluster: A simultaneous DROP TABLE and table update operation utilising a table scan could trigger a node failure. (Bug#18597: http://bugs.mysql.com/18597)

* Conversion of a number to a CHAR UNICODE string returned an invalid result. (Bug#18691: http://bugs.mysql.com/18691)

* DELETE and UPDATE statements that used large NOT IN (value_list) clauses could use large amounts of memory. (Bug#15872: http://bugs.mysql.com/15872)

* Prevent recursive views caused by using RENAME TABLE on a view after creating it. (Bug#14308: http://bugs.mysql.com/14308)

* A LOCK TABLES statement that failed could cause MyISAM not to update table statistics properly, causing a subsequent CHECK TABLE to report table corruption. (Bug#18544: http://bugs.mysql.com/18544)

* For a reference to a non-existent stored function in a stored routine that had a CONTINUE handler, the server continued as though a useful result had been returned, possibly resulting in a server crash. (Bug#18787: http://bugs.mysql.com/18787)

* InnoDB did not use a consistent read for CREATE ... SELECT when innodb_locks_unsafe_for_binlog was set. (Bug#18350: http://bugs.mysql.com/18350)

* InnoDB could read a delete mark from its system tables incorrectly. (Bug#19217: http://bugs.mysql.com/19217)

* Corrected a syntax error in mysql-test-run.sh. (Bug#19190: http://bugs.mysql.com/19190)

* A missing DBUG_RETURN() caused the server to emit a spurious error message: missing DBUG_RETURN or DBUG_VOID_RETURN macro in function open_table. (Bug#18964: http://bugs.mysql.com/18964)

* DROP
RE: Fixing Databases When Replication Is Enabled?
I ran mysqlcheck, not myisamchk or REPAIR. Is what you said still true?

-----Original Message-----
From: Kishore Jalleda [mailto:[EMAIL PROTECTED]
Sent: Monday, May 01, 2006 1:56 PM
To: Robinson, Eric
Cc: mysql@lists.mysql.com
Subject: Re: Fixing Databases When Replication Is Enabled?
Re: Query Help
Thanks, that got it.

Robert Gehrig
Webmaster at www.gdbarri.com
e-mail: [EMAIL PROTECTED]
Re: Fixing Databases When Replication Is Enabled?
Yes.

On 5/1/06, Robinson, Eric [EMAIL PROTECTED] wrote:
> I ran mysqlcheck, not myisamchk or REPAIR. Is what you said still true?

Kishore Jalleda
http://kjalleda.googlepages.com/projects
Re: innodb file per table
On Apr 26, 2006, at 3:54 AM, Dr. Frank Ullrich wrote:
> Duzenbury, Rich wrote:
>> Hi all, I've inherited an innodb database that is configured like:
>> innodb_file_per_table
>> innodb_data_file_path = ibdata1:3000M;ibdata2:3000M;ibdata3:3000M;ibdata4:3000M:autoextend
>> Um, doesn't this allocate 12G that winds up being unused, since
>> innodb_file_per_table is set? If so, what is the correct way to
>> reclaim the 12G? Thanks! Regards, Rich Duzenbury
>
> Hi, but you don't know __when__ innodb_file_per_table was set! So it's
> possible that many innodb tables actually reside in ibdata[1-4]. Check
> your data directory to see the individual innodb files/tables (*.ibd).

This is true, and even on a fresh install that has always had innodb_file_per_table, InnoDB still needs the shared tablespace (though it probably doesn't need to be that large). Once you have an InnoDB tablespace, the only way to reduce the size of the shared tablespace is to completely dump the data and recreate the tablespace. Roughly, the sequence is:

1. mysqldump to text... be very careful to keep a consistent snapshot, handle blobs, quote names, etc. Test this.
2. Shut down mysql.
3. Rename/move the old mysql data and log directories and create new, empty ones (copy over the mysql database... it's not innodb and will keep the same users).
4. Alter my.cnf to point to the new InnoDB shared tablespace definition.
5. Start mysql and make sure InnoDB initializes correctly (check the .err file).
6. Read in the dump file you took in step 1.

Again, be careful with this. It essentially involves exporting and importing all your data, so make sure you have a valid export file.

Good luck,
Ware