Re: Recommended backup scripts for mysql databases
I finally got the script working. It seems to run smoothly on my FreeBSD 4.11 system:

    MYSQLDUMP=`which mysqldump 2>/dev/null` || MYSQLDUMP=/usr/local/bin/mysqldump
    MYSQLDUMP_ARGS="--opt --extended-insert -h localhost -umyuser -pmypassword mydb"
    ARCHDIR=/backup/mysql
    NAME=db_dump

    # Remove archives older than 64 days
    find ${ARCHDIR} -type f -mtime +64 | xargs rm -f

    # Create new archives
    cd ${ARCHDIR}
    ${MYSQLDUMP} ${MYSQLDUMP_ARGS} > ${NAME}.`date +%Y%m%d`

I just have two more questions: Should I use --extended-insert or not? I guess I can use both, but what is recommended? And what would be the correct script/way to restore the database?

Thanks,
Andreas
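The 64-day retention step (`find ${ARCHDIR} -type f -mtime +64 | xargs rm -f`) can be mirrored in a few lines of Python if the same pruning logic is ever needed outside the shell script. This is only a sketch of the idea; the function name and the "delete anything older than the cutoff" policy are my own illustration, not part of the original script:

```python
import os
import time

def prune_old_archives(archdir, max_age_days=64):
    """Remove regular files in archdir older than max_age_days,
    mirroring: find ${ARCHDIR} -type f -mtime +64 | xargs rm -f"""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(archdir):
        path = os.path.join(archdir, name)
        # Only plain files (the dumps), never subdirectories.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Like the `find` version, this keys off the file's modification time, so touching an archive resets its retention clock.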
howto set mysql to readonly
Hello,

I need to move my databases from one server to another. As lots of the data are in production, I cannot just stop mysql for half an hour and reopen it on the new server. What I expect to do, while backing up and restoring to the new server, is to set the original server to read-only mode so that nobody can write to the databases while transferring to the new one. I've seen the LOCK TABLES and FLUSH commands, but I'm not sure if this is the right method, or how to use them. LOCK TABLES just locks tables, as its name implies, and not a whole database? Is there a kind of "lock all databases"?

Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Recommended backup scripts for mysql databases
Good morning Andreas -

The --opt flag implies --extended-insert, in addition to some other options, to generate an optimized (fast) dump file. See man mysqldump. You don't need both, but having both shouldn't hurt.

To restore, pipe your dump file back into the mysql client, a la

    mysql -u user -ppassword -D database < db_dump20060330

Keep in mind that one dump file may contain data for multiple databases, depending on how you have things set up. Not really a problem until you want to restore just one database or just one table - then you have to get creative.

Dan

On 6/30/06, Andreas Widerøe Andersen [EMAIL PROTECTED] wrote:
> I finally got the script working. Seems to run smooth on my FreeBSD
> 4.11 system: [...]
REPOST: Calling sp w/ Out Parameters from ASP?
How do I call a MySQL stored procedure from an ASP application and get the value of an OUT parameter? I've got the following stored procedure defined in my database:

    DELIMITER $$

    DROP PROCEDURE IF EXISTS `bpa`.`sp_GetNextInv` $$
    CREATE PROCEDURE `sp_GetNextInv`(
        IN nChapterID Int,
        OUT cInvNo VarChar(7))
    BEGIN
        Declare cPrefix VarChar(1);
        Declare cNextInv VarChar(7);
        Set cInvNo = '';
        IF nChapterID > 0 THEN
            SELECT TempInvNo INTO cInvNo FROM Chapters WHERE ID=nChapterID;
        END IF;
        IF (cInvNo = '') or (cInvNo IS NULL) THEN
            SELECT NextInvoiceNo INTO cInvNo FROM Config;
            SET cNextInv = Right('000' + CONVERT(CONVERT(cInvNo, UNSIGNED) + 1, CHAR), 7);
            UPDATE Config SET NextInvoiceNo=cNextInv;
            IF nChapterID = -1 THEN
                Set cInvNo = CONCAT('L',Right(CONCAT('00',cInvNo),6));
            END IF;
            IF nChapterID = -2 THEN
                Set cInvNo = CONCAT('C',Right(CONCAT('00',cInvNo),6));
            END IF;
            IF nChapterID > 0 THEN
                SELECT CT.InvPrefix INTO cPrefix
                  FROM ChapterType CT, Chapters C
                 WHERE C.ID=nChapterID AND CT.ChapterType=C.ChapterType;
                Set cInvNo = CONCAT(cPrefix,Right(CONCAT('00',cInvNo),6));
                UPDATE Chapters SET TempInvNo=cInvNo WHERE ID=nChapterID;
            END IF;
        END IF;
    END $$

    DELIMITER ;

I've currently got the following ASP (VBScript) code, which worked with a Microsoft SQL database, but does not work with the MySQL database:

    function GetNextInv(nChapterID)
        Dim adocmd
        Set adocmd = Server.CreateObject("ADODB.Command")
        adocmd.CommandText = "sp_GetNextInv"
        adocmd.ActiveConnection = Conn
        adocmd.CommandType = adCmdStoredProc
        adocmd.Parameters.Append adocmd.CreateParameter("ChapterID", adInteger, adParamInput, 16, nChapterID)
        adocmd.Parameters.Append adocmd.CreateParameter("InvNo", adVarChar, adParamOutput, 7)
        adocmd.Execute
        GetNextInv = adocmd.Parameters("InvNo").Value
        set adocmd = Nothing
    end function

I know that the proper way to call this from the MySQL command line would be CALL sp_GetNextInv(1234,@cInvNo), but I don't know how to do this in ASP.
Thanks,
Jesse
Re: Client still reports table full
Wow, I'm really sorry about that. Left out a zero. I should stop answering questions before the holiday weekend. I was suggesting a minor change, to 500, to see if that would work. Everything I've read about adjusting for table-full errors always specifies both. Since only one was changed, MySQL might not have realized things were different(?). Unfortunately, there's not a whole lot of info on this topic, at least that I can find. I learned a while ago to set these parameters when I create the table if I think the table will be huge. I've also switched to using merge tables, which makes it a lot easier to archive parts of the table. You might try running the ALTER TABLE command again with both variables specified.

- Original Message -
From: Jacob, Raymond A Jr [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, June 29, 2006 5:21 PM
Subject: RE: Client still reports table full

> If I understand the results from SHOW TABLE STATUS LIKE 'data';, my
> avg_row_length = 497. Why would decreasing it to 50 have a positive
> effect? I would assume I should increase it?
>
> Thank you/Raymond

-----Original Message-----
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 29, 2006 15:53
To: Jacob, Raymond A Jr; mysql@lists.mysql.com
Subject: Re: Client still reports table full

> Oops, left out an important part. You should change the Avg_row_length
> also:
>
>     ALTER TABLE data AVG_ROW_LENGTH = 50
>
> You need to specify an average row length if you have dynamic length
> fields in the table (blob, text, etc.). Also, perhaps a silly question
> which you may have answered earlier, but does your file system allow
> files larger than 4GB? Sometimes you have to specifically enable that
> feature in a file system. If that doesn't work, or you're limited to
> 4GB files, you may need to switch to using a merge table.
- Original Message -
From: Jacob, Raymond A Jr [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, June 29, 2006 1:37 PM
Subject: Client still reports table full

Yesterday I ran the following command:

    ALTER TABLE data max_rows=1100

Today the client still reported "table is full". I rebooted the client and stopped and started the mysql server. I still get the "table is full" error on the data table. I ran the command:

    echo "SHOW TABLE STATUS LIKE 'data';" | mysql -u user -p snortdb

Below is the output:

    Name:            data
    Engine:          MyISAM
    Version:         9
    Row_format:      Dynamic
    Rows:            8721565
    Avg_row_length:  497
    Data_length:     4335220336
    Max_data_length: 1099511627775
    Index_length:    127599616
    Data_free:       0
    Auto_increment:  NULL
    Create_time:     2006-06-28 20:54:55
    Update_time:     2006-06-29 18:02:32
    Check_time:      NULL
    Collation:       latin1_swedish_ci
    Checksum:        NULL
    Create_options:  max_rows=1100
    Comment:

df shows /var, the partition with the database, has enough room:

    Filesystem     1K-blocks     Used    Avail Capacity  Mounted on
    /dev/amrd0s1f   27792614 18449326  7119880      72%  /var
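The Max_data_length of 1099511627775 in the status output above is exactly 2^40 - 1, i.e. what a 5-byte MyISAM data pointer can address; as I understand the MyISAM documentation, the pointer size is chosen from MAX_ROWS * AVG_ROW_LENGTH when the table is created or altered, which is why the advice in this thread is to set both. A back-of-the-envelope sketch (my own illustration, not MySQL code; the 2-to-7-byte range is an assumption from the docs):

```python
def myisam_max_data_length(pointer_bytes):
    """Bytes addressable by a MyISAM dynamic-row data pointer."""
    return 2 ** (8 * pointer_bytes) - 1

def pointer_bytes_needed(max_rows, avg_row_length):
    """Smallest pointer size whose range covers max_rows * avg_row_length."""
    need = max_rows * avg_row_length
    size = 2  # MyISAM data pointers range from 2 to 7 bytes
    while myisam_max_data_length(size) < need:
        size += 1
    return size

# The table in the thread: Max_data_length 1099511627775 == 2**40 - 1,
# i.e. a 5-byte pointer.
assert myisam_max_data_length(5) == 1099511627775
```

Note that with max_rows=1100 and avg_row_length=497 the product is only about 547 KB, which would justify a tiny pointer; the actual row count times row length (8721565 * 497 ≈ 4.3 GB) is just past what a 4-byte pointer can address, which fits the "table is full at ~4 GB" symptom.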
Re: howto set mysql to readonly
Instead of locking the table, why not just change the privileges for the specific accounts so they have select-only privileges? Then you still have write access through admin accounts if need be.

- Original Message -
From: Jehan PROCACCIA [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, June 30, 2006 6:28 AM
Subject: howto set mysql to readonly

> Hello, I need to move my databases from one server to another. As lots
> of data are in production I cannot just stop mysql for half an hour and
> reopen it on the new server. [...]
number of rows in EXPLAIN for unpacked vs packed tables
Hi All,

I compressed a MyISAM table successfully with myisampack, followed by myisamchk. Both tables (MyISAM + compressed) have exactly the same number of rows with COUNT(*). But when I run a SELECT query with EXPLAIN on both tables, I get a different number of rows. For example, EXPLAIN SELECT ... FROM data ... shows 1983 rows, but the same EXPLAIN SELECT ... FROM pack ... shows 2064 rows. Any idea why?

Regards, Cor
Multiple joins
What am I missing here?

    select m.*, d.discussion, d.discussion_id, u.user_id, t.topic_id
    from forums_messages
    left join forums_members m, forums_discussions d, users u, forums_topics t
    on m.topic_id = t.topic_id, t.discussion_id = d.discussion_id
    where m.topic_id = 1;

Is it legal to do multiple joins like this?

Thanks

Steffan
---------------------------------------------------------------
T E L  6 0 2 . 5 7 9 . 4 2 3 0 | F A X  6 0 2 . 9 7 1 . 1 6 9 4
Steffan A. Cline
[EMAIL PROTECTED]             Phoenix, Az
http://www.ExecuChoice.net            USA
AIM : SteffanC          ICQ : 57234309
Lasso Partner Alliance Member
---------------------------------------------------------------
Re: Multiple joins
Steffan A. Cline wrote:
> What am I missing here?
> select m.*, d.discussion, d.discussion_id, u.user_id, t.topic_id from
> forums_messages left join forums_members m, ... [...]
> Is it legal to do multiple joins like this?

forums_messages does not take part in any selection or any where clause. You have 5 tables listed, with only 4 of them appearing in 2 disjointed, improperly formed joins. From the comma-separated table list after a LEFT JOIN, I am not sure whether you intend them to be LEFT JOINs or INNER JOINs. Try:

    SELECT fields
    FROM file1
    LEFT JOIN file2 ON something
    INNER JOIN file3 ON something
    INNER JOIN file4 ON something
    INNER JOIN file5 ON something
    WHERE somecondition

--
Gerald L. Clark
Supplier Systems Corporation
Re: Multiple joins
This is what I finally settled on. It seems to work well.

    select m.*, d.discussion, d.discussion_id, u.user_id, u.icon, u.nick_name
    from forums_messages m
    left join forums_topics t on m.topic_id = t.topic_id
    left join forums_discussions d on t.discussion_id = d.discussion_id
    left join users u on m.user_id = u.user_id
    where m.topic_id = 1;

Thanks

Steffan

From: Gerald L. Clark [EMAIL PROTECTED]
Date: Fri, 30 Jun 2006 09:54:14 -0500
To: Steffan A. Cline [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Subject: Re: Multiple joins

> forums_messages does not take part in any selection or any where
> clause. You have 5 tables listed, with only 4 of them appearing in 2
> disjointed, improperly formed joins. [...]
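The settled-on query above can be smoke-tested against a toy schema. The sketch below uses SQLite via Python's sqlite3 purely because it is self-contained (the thread is about MySQL, where the same LEFT JOIN chain applies); the table columns and sample rows are minimal stand-ins I invented for the forum schema named in the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-ins for the forum tables used in the thread.
cur.executescript("""
CREATE TABLE forums_messages (message_id INTEGER, topic_id INTEGER, user_id INTEGER, body TEXT);
CREATE TABLE forums_topics (topic_id INTEGER, discussion_id INTEGER);
CREATE TABLE forums_discussions (discussion_id INTEGER, discussion TEXT);
CREATE TABLE users (user_id INTEGER, icon TEXT, nick_name TEXT);
INSERT INTO forums_messages VALUES (1, 1, 10, 'hello');
INSERT INTO forums_topics VALUES (1, 5);
INSERT INTO forums_discussions VALUES (5, 'General');
INSERT INTO users VALUES (10, 'smile.png', 'steffan');
""")
# The query shape from the thread: each join condition declared explicitly.
rows = cur.execute("""
    select m.*, d.discussion, d.discussion_id, u.user_id, u.icon, u.nick_name
    from forums_messages m
    left join forums_topics t on m.topic_id = t.topic_id
    left join forums_discussions d on t.discussion_id = d.discussion_id
    left join users u on m.user_id = u.user_id
    where m.topic_id = 1
""").fetchall()
print(rows)
```

Because every table is joined through an explicit ON clause, the single message row comes back exactly once, joined to its discussion and author.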
summing values
I'd like to sum up values from two different columns in one table that is constructed like this:

    CREATE TABLE `nominations` (
      `id` smallint(5) unsigned NOT NULL auto_increment,
      `name` varchar(255) default NULL,
      `SectionA_Nominee1` varchar(255) NOT NULL default 'n/a',
      `SectionA_Nominee2` varchar(255) NOT NULL default 'n/a',
      `SectionB_Nominee1` varchar(255) NOT NULL default 'n/a',
      `SectionB_nominee2` varchar(255) NOT NULL default 'n/a',
      `submittedwhen` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
      `submittedfrom` varchar(50) NOT NULL default 'unavailable',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

The table has a row added each time a vote is cast. For each section, one or two votes can be placed from a list of 4 nominees. Since one person can be listed in both 1 and 2 in each section (just not in the same row), I need to sum up that person's votes between the two columns. I've tried summing two subqueries, but it errors out because of returning two values. Any other ways to accomplish this?

Kathy Mazur Worden
Prairie Area Library System
Re: Multiple joins
Yes, it's legal to do multiple joins. No, the way you declared your joins is not legal. At the very least it's confusing. Do you want to left join discussions, users and topics, or just members? I'm not sure if MySQL would accept things in that order. I always specifically declare my joins (left or otherwise); it makes things more readable. You'll discover that you are missing a couple of join specifications by structuring it differently:

    select m.*, d.discussion, d.discussion_id, u.user_id, t.topic_id
    FROM forums_messages, forums_discussions d, users u
    LEFT JOIN forums_topics t on t.discussion_id = d.discussion_id
    LEFT JOIN forums_members m on m.topic_id = t.topic_id
    WHERE m.topic_id = 1;

You are not stating how you are going to join discussions and users to the messages table. Thus it's going to do a full join, which you never want. By always specifically declaring all your joins, you end up with something like this:

    select m.*, d.discussion, d.discussion_id, u.user_id, t.topic_id
    FROM forums_messages
    JOIN forums_discussions d ON ?
    JOIN users u ON ?
    LEFT JOIN forums_topics t on t.discussion_id = d.discussion_id
    LEFT JOIN forums_members m on m.topic_id = t.topic_id
    WHERE m.topic_id = 1;

The ? are missing join specifications. This is actually the desired query format going forward (i.e. MySQL 5), so you may want to start using it.

- Original Message -
From: Steffan A. Cline [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, June 30, 2006 10:30 AM
Subject: Multiple joins

> What am I missing here?
> select m.*, d.discussion, d.discussion_id, u.user_id, t.topic_id from
> forums_messages left join forums_members m, ... [...]
> Is it legal to do multiple joins like this?
Re: number of rows in EXPLAIN for unpacked vs packed tables
Cor -

Those numbers are an estimate, not a hard number, of how many rows MySQL thinks it will have to examine to find what you're looking for. They could be different because your compressed table is brand-new and all the indexes are clean and up to date, whereas your original table has possibly been inserted into, updated, and deleted from, causing the key information to be less accurate. You could try running a CHECK TABLE tablename EXTENDED on the old one, which should update all the key info, and then checking your EXPLAIN results again. Be aware it could take a while for a large table and will lock the table for the duration.

If your indexes are different on the compressed table, that would make a difference too, as MySQL might well be basing its estimate off a different index.

Dan

On 6/30/06, C.R.Vegelin [EMAIL PROTECTED] wrote:
> Hi All, I compressed a MyISAM table successfully with MYISAMPACK,
> followed by MYISAMCHK. [...]
Re: howto set mysql to readonly
I personally would be wary of a solution like what you're proposing (locking all tables, that is). The problem I see is this: you lock all tables and proceed to move your data over to another host. Meanwhile, clients could well be queueing up insert or update commands that are simply blocking, waiting for you to release the locks. At the end, when you either release the locks or shut down the database server, those clients' operations may complete, but against the outdated databases on the old host; or they may go into a deadlock waiting for the host to come back (and not come out of it); or they may fail ... or you may have users who think their computer is frozen and reboot, losing work. Seems risky, too much potential for data loss.

I would insist on finding a window in which to shut down the database server and accomplish the migration in an orderly fashion.

Dan

On 6/30/06, Brent Baisley [EMAIL PROTECTED] wrote:
> Instead of locking the table, why not just change the privileges for
> the specific accounts so they have select-only privileges? Then you
> still have write access through admin accounts if need be. [...]
'on duplicate key update' and 'last_insert_id'
I have a table `event` with two keys:

    `id` MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
    `location_id` MEDIUMINT(8) UNSIGNED NOT NULL,
    `timestamp` DATETIME NOT NULL,
    `type` ENUM('0','1','2','3','4','5','6','7','8','9') NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY (`location_id`,`timestamp`,`type`)

Sometimes a client will attempt to perform an insert into the table where the row has already been inserted, i.e. the unique key already exists. In that case I want it to appear as though it wasn't there before and has been inserted, returning the new value of id. I don't want to perform an INSERT IGNORE, as this ignores far more errors than just duplicate keys. I'd rather not use REPLACE: if the unique key matches, then the rest of the row definitely matches. That leaves me with ON DUPLICATE KEY UPDATE. It's not amazingly helpful, as you have to provide a column to update - however I can just say e.g. ON DUPLICATE KEY UPDATE id=id. The problem with this is that if I then do SELECT LAST_INSERT_ID(); I don't get the id of the 'updated' row, I get the *next* auto increment value.

Is the last bit a bug? Can I get what I want without using REPLACE? Does this post make sense?

Thanks ;-D
Rob
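For reference, the MySQL manual's workaround for exactly this situation is to write the no-op update as ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id), which makes a subsequent LAST_INSERT_ID() return the existing row's id rather than a fresh one. The insert-or-return-existing-id pattern itself can be sketched portably; the code below uses SQLite via Python's sqlite3 only because it is self-contained, and its INSERT OR IGNORE + SELECT is an analogue, not the MySQL syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE event (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        location_id INTEGER NOT NULL,
        timestamp TEXT NOT NULL,
        type TEXT NOT NULL,
        UNIQUE (location_id, timestamp, type)
    )
""")

def insert_or_get_id(location_id, timestamp, type_):
    """Insert the row if its unique key is new; either way,
    return the id of the row holding that key."""
    cur.execute(
        "INSERT OR IGNORE INTO event (location_id, timestamp, type) VALUES (?, ?, ?)",
        (location_id, timestamp, type_),
    )
    cur.execute(
        "SELECT id FROM event WHERE location_id=? AND timestamp=? AND type=?",
        (location_id, timestamp, type_),
    )
    return cur.fetchone()[0]

first = insert_or_get_id(7, "2006-06-30 10:44:00", "3")
again = insert_or_get_id(7, "2006-06-30 10:44:00", "3")
```

Repeating the call with the same unique key returns the same id and leaves a single row in place, which is the behaviour the post is asking for.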
Re: summing values
> Since one person can be listed in both 1 and 2 in each section (just
> not in the same row) I need to sum up that person's votes between the
> two columns. I've tried summing two subqueries but it errors out
> because of returning two values.

Unclear. Where is a user's 1 or 2 recorded? What is a "sum ... between two columns"?

PB

Mazur Worden, Kathy wrote:
> I'd like to sum up values from two different columns in one table that
> is constructed like this: [...]
RE: summing values
Here's a sample of some votes in the table:

    SectionA_Nominee1    SectionA_Nominee2
    Cynthia Smith        n/a
    Maggie Doe           n/a
    Maggie Doe           Cynthia Smith
    Maggie Doe           Cynthia Smith
    Maggie Doe           Cynthia Smith
    Maggie Doe           Cynthia Smith
    Maggie Doe           Cynthia Smith
    Maggie Doe           Sally Brown
    Maggie Doe           Cynthia Smith
    Maggie Doe           n/a

I want a result that looks like this:

    Section A        Number of Votes
    Maggie Doe       9
    Cynthia Smith    8
    Sally Brown      1

Number of Votes needs to sum up the number of votes for one person from both columns. Does that help?

Kathy Mazur Worden
Prairie Area Library System

From: Peter Brawley [mailto:[EMAIL PROTECTED]
Sent: Friday, June 30, 2006 10:45 AM
To: Mazur Worden, Kathy
Cc: mysql@lists.mysql.com
Subject: Re: summing values

> Unclear. Where is a user's 1 or 2 recorded? What is a "sum ... between
> two columns"? [...]
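One common way to tally a nominee's votes across both columns is to stack the two columns with UNION ALL in a derived table and GROUP BY the result, which avoids the "subquery returns two values" error. Sketched below in Python with SQLite purely so it is self-contained; the same SELECT shape should carry over to MySQL. Column names come from the CREATE TABLE earlier in the thread, but the sample rows are my own small dataset, not Kathy's:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE nominations (
        id INTEGER PRIMARY KEY,
        SectionA_Nominee1 TEXT NOT NULL DEFAULT 'n/a',
        SectionA_Nominee2 TEXT NOT NULL DEFAULT 'n/a'
    )
""")
votes = [
    ("Maggie Doe", "Cynthia Smith"),
    ("Maggie Doe", "n/a"),
    ("Cynthia Smith", "Sally Brown"),
]
cur.executemany(
    "INSERT INTO nominations (SectionA_Nominee1, SectionA_Nominee2) VALUES (?, ?)",
    votes,
)
# Stack both nominee columns into one, drop the 'n/a' placeholders,
# then count votes per nominee.
tally = cur.execute("""
    SELECT nominee, COUNT(*) AS votes
      FROM (SELECT SectionA_Nominee1 AS nominee FROM nominations
            UNION ALL
            SELECT SectionA_Nominee2 FROM nominations) AS stacked
     WHERE nominee <> 'n/a'
     GROUP BY nominee
     ORDER BY votes DESC, nominee
""").fetchall()
print(tally)
```

UNION ALL (rather than UNION) matters here: a plain UNION would de-duplicate and undercount anyone who appears in both columns.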
Re: howto set mysql to readonly
OK, so it seems to be a bad idea ... I was expecting that I had missed a magic command like "set readonly on all databases". However, still thinking of a probably bad solution: what about setting the unix file system access mode on the database files to read only (400)? Wouldn't that be a good idea? (I don't care about clients trying to write; I just want those trying to read to be able to do so.)

Dan Buettner wrote:
> I personally would be wary of a solution like what you're proposing
> (locking all tables, that is). The problem I see is this - you lock all
> tables and proceed to move your data over to another host. Meanwhile,
> clients could well be queueing up insert or update commands that are
> simply blocking, waiting for you to release the locks. [...]
> I would insist on finding a window in which to shut down the database
> server and accomplish the migration in an orderly fashion.
Re: 'on duplicate key update' and 'last_insert_id'
On Jun 30, 2006, at 10:44 AM, Rob Desbois wrote:
> That leaves me with ON DUPLICATE KEY UPDATE. It's not amazingly helpful
> as you have to provide a column to update - however I can just say e.g.
> ON DUPLICATE KEY UPDATE id=id. The problem with this is that if I then
> do SELECT LAST_INSERT_ID(); then I don't get the id of the 'updated'
> row, I get the *next* auto increment value. Is the last bit a bug?

Yes, http://bugs.mysql.com/bug.php?id=19243

--
David Hillman
LiveText, Inc
1.866.LiveText x235
Re: MySQL Denormalized
John Hicks wrote:
> I don't see a question here. But that won't stop me from giving a
> little advice :) It is generally more important to keep things simple
> (by not denormalizing) than to try to optimize performance by
> complicating things significantly. Moreover, I can't see how combining
> several columns into one will improve performance. I would think it
> will slow things down whenever you have to retrieve data, particularly
> if you query against anything in column 3. And now you say you want to
> save disk space by compressing the field separators in your combined
> column? Forget it all! Go back to a fully normalized design. If you
> have problems, post them here.

I kind of disagree with what you said regarding denormalization, but believe me when I say that I have experienced a 90% improvement in performance with that. As I said before, my table has 20+ million entries; if it was normalized, this number would be around 20 billion, since it would be a 1-N relation. Of course I don't make any selections based on column 3, but only by the table's keys. Forget that! Going back to normalization would not be viable for me. I need a response time lower than 0.01 sec (and I've been achieving less than that).

However, I would like to make better use of this column's space, since I use only two characters for separators. Here's my question: Is there any way I could minimize that? Is there any specific character that would occupy less space?

Once again, thank you very much.

==
Atenciosamente,
Jan Gomes - [EMAIL PROTECTED]

Jan Gomes wrote:
> Hi guys,
>
> I needed to denormalize my table to obtain high performance, but I
> want to make the best use of the space. I joined two columns (of the
> JOIN) into one column with two separators (# and ;). Example:
>
>     ID | column_1 | column_denormalized
>     1  | Test     | 1#20202;5#1000101;
>
> Is there some method to minimize the (disk) space required for these
> separators? Like some character that I can use to minimize the table
> size?
>
> PS: The table has 20,000,000 rows with 2 GB data length.
Re: MySQL Denormalized
A specific character that would occupy less space? You mean like using lower case instead of upper case? All characters use the same amount of space (unless you're using Chinese or something). I don't think reducing the number of columns will help. Usually you take other performance-enhancing measures first, like structuring the table to use fixed-length records. You should probably use InnoDB and index the fields you normally retrieve. Since InnoDB stores the data with the index, it doesn't need to access the actual table unless you are pulling data that isn't indexed. While denormalization is certainly applicable and desired in some cases (whoever saw a database in fifth normal form!), you are not denormalizing, you're combining data elements. There are so many other things you can try before you unstructure your data. If you're going to unstructure your data, you might as well compress it too. Heck, you might even look into separating out the data you don't search on into a separate compressed table, and have a 1-1 relation. Kind of a search table and a detail table. I don't know which response time you are trying to keep to 0.01 or lower. If it's end to end, you probably want to look at your network. Network latency can be the biggest culprit. - Original Message - From: Jan Gomes [EMAIL PROTECTED] To: mysql mysql@lists.mysql.com Cc: johnlist [EMAIL PROTECTED] Sent: Friday, June 30, 2006 3:28 PM Subject: Re: MySQL Denormalized John Hicks wrote: I don't see a question here. But that won't stop me from giving a little advice :) It is generally more important to keep things simple (by not denormalizing) than to try to optimize performance by complicating things significantly. Moreover, I can't see how combining several columns into one will improve performance. I would think it will slow things down whenever you have to retrieve data, particularly if you query against anything in column 3.
And now you say you want to save disk space by compressing the field separators in your combined column? Forget it all! Go back to a fully normalized design. If you have problems, post them here. I kind of disagree with what you said regarding denormalization, but believe me when I say that I have seen a 90% performance improvement with it. As I said before, my table has 20+ million entries; if it were normalized this number would be around 20 billion, since it would be a 1-N relation. Of course I don't make any selections based on column 3, only by the table's keys. Forget that!!! Going back to normalization would not be viable for me. I need a response time lower than 0.01 sec. (and I've been achieving less than that). However, I would like to make better use of this column's space, since I only use two characters for separators. Here's my question: is there any way I could minimize that? Is there any specific character that would occupy less space? Once again, thank you very much == Regards, Jan Gomes - [EMAIL PROTECTED] Jan Gomes wrote: Hi guys, I needed to denormalize my table to obtain high performance, but I want to make the best use of the space. I joined two columns (from the JOIN) into one column with two separators (# and ;). Example: ID | column_1 | column_denormalized 1 | Test | 1#20202;5#1000101; Is there some method to minimize the disk space required for these separators? Like some character I could use to minimize the table size? PS: The table has 20,000,000 rows with 2 GB of data.
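The search-table/detail-table split suggested above can be sketched in SQL. The table and column names below are illustrative stand-ins for the poster's schema, not his actual definitions:

```sql
-- Small, fully indexed table holding only the searchable keys.
CREATE TABLE search (
  id_table1 INT NOT NULL,
  column_1  INT NOT NULL,
  column_2  INT NOT NULL,
  PRIMARY KEY (id_table1)
) ENGINE = InnoDB;

-- 1-1 detail table carrying the bulky combined column, compressed
-- by the application or with MySQL's COMPRESS() before insert.
CREATE TABLE detail (
  id_table1 INT NOT NULL,
  column_denormalized BLOB,  -- e.g. COMPRESS('1#20202;5#1000101;')
  PRIMARY KEY (id_table1)
) ENGINE = InnoDB;

-- Key lookups touch only `search`; the text is joined in and
-- decompressed only when the row is actually fetched.
SELECT s.column_1, UNCOMPRESS(d.column_denormalized) AS payload
FROM search AS s
JOIN detail AS d USING (id_table1)
WHERE s.id_table1 = 1;
```

The point of the split is that the hot, indexed table stays small while the separator overhead in the combined column disappears into the compression.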
Show tables replacement
Is there a Select statement I can use to get table names so that I could use other Select statement syntax on the results? What I'd like to do is this, SHOW Replace(TABLES, 'tbl','') Like 'tbl%'; But this doesn't work so I need a Select statement that can do the same thing. Thanks in advance
Re: MySQL Denormalized
Pardon me, I think I didn't express myself properly. I was not merely combining data elements. My structure was: Table_1{ id_table1 (int) COLUMN 1 (int) COLUMN 2 (int) } Table_2{ id_table1 (int) COLUMN 1 (int) COLUMN 2 (int) } with a 1-N relationship between table_1 and table_2: for each entry in table_1 there would be between 100 and 1000 rows in table_2. After many tests I ended up with the following structure: Table{ id_table1 (int) COLUMN 1 (int) COLUMN 2 (int) COLUMN 3 (TEXT) (union of COLUMN 1 and 2 of table_2) } which brought me storage and speed benefits. What I imagined was a character to use as a separator that occupied less space, but that seems not to exist. Thanks for all. == Regards, Jan Gomes - [EMAIL PROTECTED] A specific character that would occupy less space? You mean like using lower case instead of upper case? All characters use the same amount of space (unless you're using Chinese or something). I don't think reducing the number of columns will help. Usually you take other performance-enhancing measures first, like structuring the table to use fixed-length records. You should probably use InnoDB and index the fields you normally retrieve. Since InnoDB stores the data with the index, it doesn't need to access the actual table unless you are pulling data that isn't indexed. While denormalization is certainly applicable and desired in some cases (whoever saw a database in fifth normal form!), you are not denormalizing, you're combining data elements. There are so many other things you can try before you unstructure your data. If you're going to unstructure your data, you might as well compress it too. Heck, you might even look into separating out the data you don't search on into a separate compressed table, and have a 1-1 relation. Kind of a search table and a detail table. I don't know which response time you are trying to keep to 0.01 or lower. If it's end to end, you probably want to look at your network. Network latency can be the biggest culprit.
- Original Message - From: Jan Gomes [EMAIL PROTECTED] To: mysql mysql@lists.mysql.com Cc: johnlist [EMAIL PROTECTED] Sent: Friday, June 30, 2006 3:28 PM Subject: Re: MySQL Denormalized John Hicks wrote: I don't see a question here. But that won't stop me from giving a little advice :) It is generally more important to keep things simple (by not denormalizing) than to try to optimize performance by complicating things significantly. Moreover, I can't see how combining several columns into one will improve performance. I would think it will slow things down whenever you have to retrieve data, particularly if you query against anything in column 3. And now you say you want to save disk space by compressing the field separators in your combined column? Forget it all! Go back to a fully normalized design. If you have problems, post them here. I kind of disagree with what you said regarding denormalization, but believe me when I say that I have seen a 90% performance improvement with it. As I said before, my table has 20+ million entries; if it were normalized this number would be around 20 billion, since it would be a 1-N relation. Of course I don't make any selections based on column 3, only by the table's keys. Forget that!!! Going back to normalization would not be viable for me. I need a response time lower than 0.01 sec. (and I've been achieving less than that). However, I would like to make better use of this column's space, since I only use two characters for separators. Here's my question: is there any way I could minimize that? Is there any specific character that would occupy less space? Once again, thank you very much == Regards, Jan Gomes - [EMAIL PROTECTED] Jan Gomes wrote: Hi guys, I needed to denormalize my table to obtain high performance, but I want to make the best use of the space.
I joined two columns (from the JOIN) into one column with two separators (# and ;). Example: ID | column_1 | column_denormalized 1 | Test | 1#20202;5#1000101; Is there some method to minimize the disk space required for these separators? Like some character I could use to minimize the table size? PS: The table has 20,000,000 rows with 2 GB of data.
Re: is there a way to optimize like '%..%' searches ?
It's basically a log that people need to be able to search with wildcards in... the log grows many thousand records per day and never gets smaller, so searches just get slower and slower. There is a sort field, the timestamp, which is used in the searches, but it only makes the searches slower instead of helping the query, since all it does is sort by timestamp desc. Basically the query works like this: someone searches for foo bar baz and I create SQL that looks like: select * from table where logline like '%foo%bar%baz%' order by timestamp desc. I have wracked my brain plenty but have not come up with any other way of doing it that gives the needed flexibility in the searches. Since what is searched for is not words as such - most loglines are actually a single word of the form somethingspecialcharsomethingspecialcharsomethingspecialcharsomethingspecialcharsomethingspecialchar and so on - the logline is variable length with a variable number of entities between the special chars (even the special chars are very varied) and of no specific format, thus the needed flexibility in the searches. If I could I would change the log format, but that is not possible since this database has loglines going all the way back to the 1980's (with more old lines being added as well as new ones) and the format has changed many times since then... Basically I am stuck with a very crappy heap of data I need to be able to search in a smart manner. Full-text searching would have been ideal if I was able to do boolean matches with a leading wildcard, but without that it is useless :/ By the way, the result doesn't need scoring for relevance at all - what is searched for is always the newest matches to the search term, regardless of relevance (relevance could become handy at a later stage though, but I dare not even think about it at the moment). Dan Buettner wrote: Bummer, Martin. What more can you tell us about your specific application? What are you storing and searching on, exactly?
Any chance you can filter on another column to reduce the number of records that need a string search? Dan On 6/28/06, Martin Jespersen [EMAIL PROTECTED] wrote: I was too fast there it seems, full-text searches don't help after all since I can't use leading wildcards on words :( Too bad, I loved the speed :/
Re: is there a way to optimize like '%..%' searches ?
Wow, that is a tough one. My question would be, how often is older data really accessed? Could you start incorporating a default date range like past 3 months or past 1 year into all searches, but allow people to override it if needed? Then if you add an index on the timestamp column it would help any searches with a date clause. Dan On 6/30/06, Martin Jespersen [EMAIL PROTECTED] wrote: It's basically a log that people need to be able to search with wildcards in... the log grows many thousand records per day and never gets smaller, so searches just get slower and slower. There is a sort field, the timestamp, which is used in the searches, but it only makes the searches slower instead of helping the query, since all it does is sort by timestamp desc. Basically the query works like this: someone searches for foo bar baz and I create SQL that looks like: select * from table where logline like '%foo%bar%baz%' order by timestamp desc. I have wracked my brain plenty but have not come up with any other way of doing it that gives the needed flexibility in the searches. Since what is searched for is not words as such - most loglines are actually a single word of the form somethingspecialcharsomethingspecialcharsomethingspecialcharsomethingspecialcharsomethingspecialchar and so on - the logline is variable length with a variable number of entities between the special chars (even the special chars are very varied) and of no specific format, thus the needed flexibility in the searches. If I could I would change the log format, but that is not possible since this database has loglines going all the way back to the 1980's (with more old lines being added as well as new ones) and the format has changed many times since then... Basically I am stuck with a very crappy heap of data I need to be able to search in a smart manner.
Full-text searching would have been ideal if I was able to do boolean matches with a leading wildcard, but without that it is useless :/ By the way, the result doesn't need scoring for relevance at all - what is searched for is always the newest matches to the search term, regardless of relevance (relevance could become handy at a later stage though, but I dare not even think about it at the moment).
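Dan's default-date-range idea can be sketched in SQL. The table and column names here are hypothetical stand-ins for the poster's log schema:

```sql
-- An index on the timestamp lets the optimizer prune by date range
-- before the unavoidable LIKE scan over the surviving rows.
ALTER TABLE log ADD INDEX idx_log_ts (ts);

-- Default window of the past 3 months, overridable by the user;
-- only rows inside the window are examined by the LIKE.
SELECT *
FROM log
WHERE ts >= NOW() - INTERVAL 3 MONTH
  AND logline LIKE '%foo%bar%baz%'
ORDER BY ts DESC;
```

The leading-wildcard LIKE still cannot use an index itself; the date predicate just shrinks the set of rows it has to scan.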
Re: Show tables replacement
In the last episode (Jun 30), Ed Reed said: Is there a Select statement I can use to get table names so that I could use other Select statement syntax on the results? What I'd like to do is this, SHOW Replace(TABLES, 'tbl','') Like 'tbl%'; But this doesn't work so I need a Select statement that can do the same thing. SELECT table_name FROM information_schema.tables; -- Dan Nelson [EMAIL PROTECTED]
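Combining Dan's information_schema query with the REPLACE and LIKE from the original question might look like this (assuming the tables live in the current database):

```sql
-- Strip the 'tbl' prefix from matching table names, as the
-- original SHOW attempt intended.
SELECT REPLACE(table_name, 'tbl', '') AS short_name
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'tbl%';
```

Because this is an ordinary SELECT, the result can be joined, filtered, or fed into further statements, which SHOW TABLES cannot do.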
Re: Show tables replacement
Oops! Thanks for the quick reply, Dan. But I forgot to mention I'm running MySQL 4.1.11 Any other suggestions? Thanks again. Dan Nelson [EMAIL PROTECTED] 6/30/06 2:48:57 PM In the last episode (Jun 30), Ed Reed said: Is there a Select statement I can use to get table names so that I could use other Select statement syntax on the results? What I'd like to do is this, SHOW Replace(TABLES, 'tbl','') Like 'tbl%'; But this doesn't work so I need a Select statement that can do the same thing. SELECT table_name FROM information_schema.tables; -- Dan Nelson [EMAIL PROTECTED]
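Since information_schema only arrived in MySQL 5.0, one workaround on 4.1 is to do the string manipulation outside the server. A sketch using the mysql command-line client, with a hypothetical database name mydb:

```shell
# -N suppresses the column header, -B gives tab-separated batch
# output; sed strips the 'tbl' prefix client-side.
mysql -N -B -e "SHOW TABLES LIKE 'tbl%'" mydb | sed 's/^tbl//'
```

This moves the REPLACE out of SQL entirely, which is often the simplest option on pre-5.0 servers.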
Solved: Client still reports table full
I forgot I have two databases, snortdb and archive_snortdb, with the same schema. Syslog did not distinguish between the two. Both were suffering from the 4GB limit. Once I increased max_rows the error stopped on the client. Brent, thanks again, Raymond -Original Message- From: Brent Baisley [mailto:[EMAIL PROTECTED] Sent: Friday, June 30, 2006 8:49 To: Jacob, Raymond A Jr; mysql@lists.mysql.com Subject: Re: Client still reports table full Wow, I'm really sorry about that. Left out a zero. I should stop answering questions before the holiday weekend. I was suggesting a minor change to 500 to see if that would work. Everything I've read about adjusting for table full errors always specifies both. Since only one was changed, MySQL might not have realized things were different(?). Unfortunately, there's not a whole lot of info on this topic, at least that I can find. I learned a while ago to set these parameters when I create the table if I think the table will be huge. I've also switched to using merge tables, which makes it a lot easier to archive parts of the table. You might try running the alter table command again with both variables specified. - Original Message - From: Jacob, Raymond A Jr [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Thursday, June 29, 2006 5:21 PM Subject: RE: Client still reports table full If I understand the results from SHOW TABLE STATUS LIKE 'data'; my avg_row_length = 497. Why would decreasing it to 50 have a positive effect? I would assume I should increase it? Thank you/Raymond -Original Message- From: Brent Baisley [mailto:[EMAIL PROTECTED] Sent: Thursday, June 29, 2006 15:53 To: Jacob, Raymond A Jr; mysql@lists.mysql.com Subject: Re: Client still reports table full Oops, left out an important part. You should change the Avg_row_length also: ALTER TABLE data AVG_ROW_LENGTH = 50. You need to specify an average row length if you have dynamic-length fields in the table (blob, text, etc.).
Also, perhaps a silly question which you may have answered earlier, but does your file system allow files larger than 4GB? Sometimes you have to specifically enable that feature in a file system. If that doesn't work, or you're limited to 4GB files, you may need to switch to using a merge table. - Original Message - From: Jacob, Raymond A Jr [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Thursday, June 29, 2006 1:37 PM Subject: Client still reports table full Yesterday: I ran the following command: ALTER TABLE data max_rows=1100 Today: The client still reported table is full. I rebooted the client and stopped and started the mysql server. I still get the table is full error on the data table. I ran the command: echo SHOW TABLE STATUS LIKE 'data'; | mysql -u user -p snortdb. Below is the output: Name: data Engine: MyISAM Version: 9 Row_format: Dynamic Rows: 8721565 Avg_row_length: 497 Data_length: 4335220336 Max_data_length: 1099511627775 Index_length: 127599616 Data_free: 0 Auto_increment: NULL Create_time: 2006-06-28 20:54:55 Update_time: 2006-06-29 18:02:32 Check_time: NULL Collation: latin1_swedish_ci Checksum: NULL Create_options: max_rows=1100 df shows the /var partition with the database has enough room: Filesystem 1K-blocks Used Avail Capacity Mounted on /dev/amrd0s1f 27792614 18449326 7119880 72% /var
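Brent's advice to run the ALTER with both variables in a single statement would look like the sketch below. The values are illustrative, sized generously for the row counts mentioned in the thread:

```sql
-- Setting MAX_ROWS and AVG_ROW_LENGTH together lets MyISAM choose a
-- larger internal row-pointer size, raising Max_data_length well
-- past the 4GB default. The rebuild can take a while on a big table.
ALTER TABLE data MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 500;

-- Confirm that Max_data_length actually grew:
SHOW TABLE STATUS LIKE 'data';
```

Specifying only one of the two options, as happened earlier in the thread, may leave the pointer size unchanged, which is consistent with the "table is full" error persisting.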
DROP TEMPORARY TABLE waiting for table???
I'm seeing something that I don't think should be happening, but I'm not sure if it's a MySQL bug. To allow some of my stored procedures to operate concurrently, I make a temporary memory copy of some shared tables that are accessed by the procedures. The temporary heap table has the same name as the table I'm copying. The scenario here is a data warehouse with a bunch of data marts and some shared dimensions that are in a separate database. I'm copying the necessary rows of the commonly used shared dimension tables, to overcome the problem of stored procedures locking all the tables they're going to use, which was preventing concurrency. The problem is that despite this, I'm seeing processes that are stuck with status Waiting for table when they are trying to drop the temporary table if it exists (DROP TEMPORARY TABLE IF EXISTS shared_dimensions.page_dim). I always drop and recreate it at the start of a series of analyses, so that they have the most recent copy. I create the temporary heap table in an independent procedure, so it can't be locking the table. There are other procedures using their own temporary heap table copies of the table I'm copying, but they're not using the real table, only the copy. So... my question is, why is there any problem dropping a table that should only be visible to the connection that's trying to drop it? What's even more bizarre is that I get this problem even when the temporary table doesn't exist, on a brand-new connection. I've had this code running for a couple of weeks and just noticed the problem, so I'm not sure if it cropped up right away or not. Haven't had a chance to bounce the server yet. Any insight appreciated. Nick -- Nick Arnett [EMAIL PROTECTED] Messages: 408-904-7198
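The temporary-copy pattern Nick describes might be sketched like this. The source database, table, and column names below are hypothetical stand-ins for his warehouse schema; only the DROP statement is quoted from his message:

```sql
-- Rebuild this connection's private MEMORY copy of a shared
-- dimension. A temporary table shadows the base table of the same
-- name, but only for the connection that created it.
DROP TEMPORARY TABLE IF EXISTS shared_dimensions.page_dim;

CREATE TEMPORARY TABLE shared_dimensions.page_dim ENGINE=MEMORY
  SELECT page_id, page_url               -- hypothetical columns
  FROM dim_master.page_dim               -- hypothetical source copy
  WHERE page_id BETWEEN 1 AND 100000;    -- only the rows this run needs
```

Since temporary tables are connection-local, the DROP should never wait on other sessions' copies, which is what makes the Waiting for table status so surprising here.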
Need way to see all relations?
Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column to the script every time. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that reference it. Then I can just dump out that user ID (via this script) and it will back up that user and all related table data across all tables. ÐÆ5ÏÐ
Re: Need way to see all relations?
That is, as far as I know, impossible. MySQL does not know which tables are related to which ones. Peter On 6/30/06, Daevid Vincent [EMAIL PROTECTED] wrote: Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column to the script every time. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that reference it. Then I can just dump out that user ID (via this script) and it will back up that user and all related table data across all tables. ÐÆ5ÏÐ -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
RE: Need way to see all relations?
That doesn't seem right. I know that MyISAM tables don't understand relations, but InnoDB tables most certainly understand foreign key constraints, and in fact they cause me a lot of grief sometimes when trying to insert a record or create a new table that violates said constraints ;-) DÆVID -Original Message- From: Peter Van Dijck [mailto:[EMAIL PROTECTED] Sent: Friday, June 30, 2006 6:32 PM To: Daevid Vincent Cc: mysql@lists.mysql.com Subject: Re: Need way to see all relations? That is, as far as I know, impossible. MySQL does not know which tables are related to which ones. Peter On 6/30/06, Daevid Vincent [EMAIL PROTECTED] wrote: Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column to the script every time. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that reference it. Then I can just dump out that user ID (via this script) and it will back up that user and all related table data across all tables. ÐÆ5ÏÐ -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
Re: Need way to see all relations?
Oh, InnoDB tables? I was assuming MyISAM tables.. :) And I'm not a total expert, but that's my understanding. Peter On 6/30/06, Daevid Vincent [EMAIL PROTECTED] wrote: That doesn't seem right. I know that MYISAM tables don't understand relations, but INNODB tables most certainly understand foreign key constraints and in fact cause me a lot of grief sometimes when trying to insert a record or create a new table that violates said constraints ;-) DÆVID -Original Message- From: Peter Van Dijck [mailto:[EMAIL PROTECTED] Sent: Friday, June 30, 2006 6:32 PM To: Daevid Vincent Cc: mysql@lists.mysql.com Subject: Re: Need way to see all relations? That is, as far as I know, impossible. Mysql does not know which tables are related to which ones. Peter On 6/30/06, Daevid Vincent [EMAIL PROTECTED] wrote: Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that somehow will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column everytime to the script. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that are references. Then I can just dump out that user ID (via this script) and it will backup that user and all related table data across all tables. 
ÐÆ5ÏÐ -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net -- find videoblogs: http://mefeedia.com my blog: http://poorbuthappy.com/ease/ my job: http://petervandijck.net
Re: Need way to see all relations?
Daevid, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that are references. Query to find tables with FK references to $db.$table: SELECT c.table_schema, u.table_name, u.column_name, u.referenced_column_name FROM information_schema.table_constraints AS c INNER JOIN information_schema.key_column_usage AS u USING (constraint_schema, constraint_name) WHERE c.constraint_type = 'FOREIGN KEY' AND u.referenced_table_schema = '$db' AND u.referenced_table_name = '$table' ORDER BY c.table_schema, u.table_name; PB Daevid Vincent wrote: Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column to the script every time. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that reference it. Then I can just dump out that user ID (via this script) and it will back up that user and all related table data across all tables. ÐÆ5ÏÐ
RE: Need way to see all relations?
EXCELLENT! Thanks! -Original Message- From: Peter Brawley [mailto:[EMAIL PROTECTED] Sent: Friday, June 30, 2006 8:31 PM To: Daevid Vincent Cc: mysql@lists.mysql.com Subject: Re: Need way to see all relations? Daevid, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that are references. Query to find tables with FK references to $db.$table: SELECT c.table_schema, u.table_name, u.column_name, u.referenced_column_name FROM information_schema.table_constraints AS c INNER JOIN information_schema.key_column_usage AS u USING (constraint_schema, constraint_name) WHERE c.constraint_type = 'FOREIGN KEY' AND u.referenced_table_schema = '$db' AND u.referenced_table_name = '$table' ORDER BY c.table_schema, u.table_name; PB Daevid Vincent wrote: Is there a way with InnoDB tables to see all related tables/columns? Basically what I want to do is make a script that will dynamically create a backup of a single user record. But I don't want to have to manually add a new table or column to the script every time. So for example, I have a 'users' table. And there are all kinds of related tables like 'logs', 'preferences', etc. that all have a FK reference back to this user ID. I want to make a query or use some PHP to build a list of all tables in my DB that reference it. Then I can just dump out that user ID (via this script) and it will back up that user and all related table data across all tables. ÐÆ5ÏÐ
what are those MySQL files for?
Hi, In the /data/database_name directory I have found a few files I don't know what they are used for. I have seen that some of them are pretty big. I don't think it is safe to delete them, but can I do something to decrease their size at least? Here are those files and their sizes in MB: 1 #sql-2a91_cdf.frm 397 #sql-2a91_cdf.MYD 253 #sql-2a91_cdf.MYI 1 #sql-6094_2.frm 397 #sql-6094_2.MYD 1 #sql-6094_2.MYI 2 #sql-6094_2.TMD Thank you. Teddy