--flush option and fsync option for the InnoDB engine
Dear All,

We are using MySQL 4.0.17 on Linux with external SCSI media for database storage. We use the InnoDB engine, and we have observed that the database process still services some transactions even when the SCSI media on which the database resides is physically disconnected.

Would anybody throw some light on the use of the --flush option or the fsync option on Linux? Are there any problems with using these options? (Sometimes our system panics when we use the --flush option.)

Thanks a lot in advance for your help and assistance.

With Regards,
Ravi

Confidentiality Notice: The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain confidential or privileged information. If you are not the intended recipient, please notify the sender at Wipro or [EMAIL PROTECTED] immediately and destroy all copies of this message and any attachments.
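[Editor's note, not part of the original post: for readers hitting the same question, the relevant knobs in the 4.0 series are innodb_flush_log_at_trx_commit (how often the InnoDB log is flushed at COMMIT) and innodb_flush_method (how the flush is performed). A hedged my.cnf sketch; exact value support varies within the 4.0 release series, so check the manual for your minor version:

```ini
[mysqld]
# 1 = write and fsync the InnoDB log at every COMMIT (most durable).
# 0 or 2 flush roughly once per second and can lose the last second or so
# of committed transactions on a crash or a disconnected disk.
innodb_flush_log_at_trx_commit = 1

# fdatasync is the Unix default; O_DSYNC (and, on Linux in later 4.0
# releases, O_DIRECT) change how data/log files are opened and flushed.
# Test on your own hardware: some combinations behave badly on some kernels.
innodb_flush_method = fdatasync
```

Note that a system panic while the SCSI media is disconnected is more likely a kernel or driver issue than an effect of any MySQL option.]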
RE: Data loss problem with mysql
Andy,

Thanks a lot for the response. We are using the Linux OS. Is there any configuration parameter that enables more frequent flushing, or a parameter that enables direct writes to disk? And any idea about the performance implications?

Thanks once again,
Ravi

-----Original Message-----
From: andy thomas [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 24, 2004 1:28 PM
To: Ravi T Ramachandra (WT01 - EMBEDDED PRODUCT ENGINEERING SOLUTIONS)
Cc: [EMAIL PROTECTED]
Subject: Re: Data loss problem with mysql

On Wed, 24 Nov 2004 [EMAIL PROTECTED] wrote:

> Dear all,
>
> We are running MySQL 4.0.17 in a Linux environment. Our database resides
> on an external disk connected via FC cables. We recently noticed a loss of
> data in the following scenario: a Java application inserted a row into a
> table in one transaction, then queried the row in a separate transaction,
> and the query succeeded. Then the FC cable connecting the external
> database disks was pulled and, after some time, put back. Now the inserted
> row is missing from the database. Our query log shows the INSERT statement
> issued prior to the FC cable disconnection. After the cable pull, we took
> a database dump, which confirms that the row inserted before the
> disconnection is missing. If somebody had accidentally deleted it, we
> would expect a DELETE statement in the query log, but there is none. Can
> anybody help?

What operating system(s) are you using for the system you are making the query from, and also for the external database server? mysqld makes as much use of database server system memory as possible, and a lot of the live database will be cached in memory. If you insert a row and then read it back, it will be in the table, but the table is in memory and hasn't necessarily been written to physical disk.
Also, UNIX and Unix-like systems normally work with disk buffers, so that when a file is written to, it is the disk buffer that is written, not the physical disk itself. The disk buffers are then flushed out to disk every 30 seconds or so. It could be that the FC cable was unplugged during a buffer flush, causing the operating system to abort the flush and not update the file on the physical disk.

Andy

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
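[Editor's note: Andy's point about buffering applies inside MySQL as well. For InnoDB, whether a committed transaction has been forced to disk is governed by innodb_flush_log_at_trx_commit. A quick check against the running server (standard SQL for this server version):

```sql
-- Check how aggressively InnoDB flushes its log at COMMIT.
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
-- 1: write + fsync at every COMMIT (durable against power loss).
-- 0: write + flush about once per second (recent commits can be lost).
-- 2: write at COMMIT, flush about once per second (an OS crash can lose ~1s).
```

Even with the durable setting, a disk array that acknowledges writes from its own volatile cache can still lose data when disconnected.]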
Data loss problem with mysql
Dear all,

We are running MySQL 4.0.17 in a Linux environment. Our database resides on an external disk connected via FC cables. We recently noticed a loss of data in the following scenario: a Java application inserted a row into a table in one transaction, then queried the row in a separate transaction, and the query succeeded. Then the FC cable connecting the external database disks was pulled and, after some time, put back. Now the inserted row is missing from the database.

Our query log shows the INSERT statement issued prior to the FC cable disconnection. After the cable pull, we took a database dump, which confirms that the row inserted before the disconnection is missing. If somebody had accidentally deleted it, we would expect a DELETE statement in the query log, but there is none. Can anybody help?

With Regards,
Ravi
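[Editor's note: the reported scenario, written out as explicit transactions with a hypothetical table, to make the key point visible — a successful read-back does not prove the row reached the physical disk:

```sql
-- 4.0-era syntax: TYPE=InnoDB selects the InnoDB engine.
CREATE TABLE t (id INT PRIMARY KEY, val VARCHAR(32)) TYPE=InnoDB;

BEGIN;
INSERT INTO t VALUES (1, 'hello');
COMMIT;                        -- transaction 1

SELECT * FROM t WHERE id = 1;  -- transaction 2: succeeds from the buffer pool
-- The SELECT only proves the row is in memory (buffer pool / OS cache);
-- it does not prove the page or log record reached the physical disk.
```
]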
replication problem
Hi Friends,

We are using MySQL 4.0.17 on Linux with a master and a single slave, both running on the same node. We have encountered a problem in replication in the following scenario: first the slave was terminated abnormally while there were some active connections to the master, and then the master was also terminated abnormally in quick succession. After both master and slave were brought back up, we noticed that the last transaction on the master just before the abnormal termination gets replicated twice on the slave.

Thanks in advance. Please pardon the confidentiality note that gets automatically attached to this mail, and do ignore it.

Regards,
Ravi
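[Editor's note: the usual cause is that the slave's relay-log position (relay-log.info) is only written to disk periodically, so after a crash the slave can re-execute the last event. One common recovery on a 4.0 slave is to skip the duplicated event — a sketch; take a backup and verify the duplicate with SHOW SLAVE STATUS before skipping anything:

```sql
-- On the slave: skip the one replayed binlog event, then resume.
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
SHOW SLAVE STATUS;  -- confirm Slave_IO_Running / Slave_SQL_Running say 'Yes'
```
]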
mysqladmin shutdown command hangs
Hello,

We are using MySQL 4.0.17 on Linux. We have installed a MySQL database on our server. The data and log files are stored on an external SCSI disk array, which is connected to the server by an FC cable attached to a SCSI port. The mysql process runs on the local machine, but the data and logs are stored on the disk array.

When there is an accidental communication breakdown between the disk array and the server on which the mysql process is running (e.g., the FC cable being pulled out), our scripts detect this and then try to shut down the mysql process using mysqladmin shutdown. However, this command hangs and does not complete. We also tried the --force option with mysqladmin shutdown, but it still hung.

Does anybody have any suggestion/solution?

Thanks,
Ravi
RE: mysqladmin shutdown command hangs
Charles,

Sometimes the connection to the disk array will not be available to this server for a long time, and in such a case we want another server to connect to the disk array and run the mysql processes. Unfortunately, by our design, this cannot happen until the existing server shuts down. I would be happy even if mysqladmin reported that the shutdown had failed instead of hanging, so that I could kill/stop the process using cruder methods.

Thanks again,
Ravi

-----Original Message-----
From: Charles Sprickman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, July 07, 2004 10:07 PM
To: Ravi T Ramachandra (WT01 - EMBEDDED PRODUCT ENGINEERING SOLUTIONS)
Cc: [EMAIL PROTECTED]
Subject: Re: mysqladmin shutdown command hangs

On Thu, 8 Jul 2004 [EMAIL PROTECTED] wrote:

> When there is an accidental communication breakdown between the disk array
> and the server on which the mysql process is running (e.g., the FC cable
> being pulled out), our scripts detect this and then try to shut down the
> mysql process using mysqladmin shutdown. However, this command hangs and
> does not complete.

I would imagine it would continue to hang until the array becomes available. It's probably in a disk-wait state, since mysql wants to do a clean shutdown, which I'm sure requires touching a number of files in your db directory. Shutting it down seems like a bad thing to do; I would imagine stopping client access to the db would be a more useful action to take if you lose the array.

Charles

> We tried the --force option also with the mysqladmin shutdown command, but
> it still hung. Does anybody have any suggestion/solution?
>
> Thanks,
> Ravi
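[Editor's note: one pragmatic way to get "report failure instead of hanging" is to wrap the shutdown in a time bound and fall back to a signal. A sketch — here a long `sleep` stands in for the hanging `mysqladmin shutdown` call, and the `timeout` utility is from GNU coreutils (if your system lacks it, the same bound can be built from a background sleep plus kill):

```shell
#!/bin/sh
# Watchdog: give the clean shutdown a bounded time, then report failure so
# the caller can fall back to cruder methods (kill -9, failover, etc.).
# 'sleep 3' stands in for the hanging 'mysqladmin shutdown' command.
if timeout 1 sleep 3; then
    echo "clean shutdown completed"
else
    echo "shutdown timed out; falling back to kill"
fi
```

In Ravi's scenario the script would take the else branch after one second instead of blocking indefinitely in disk wait.]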
MySQL: Table locking problems when non-indexed keys are used
Friends,

Sorry to post this question again; I got a message saying that the server couldn't transfer it to some groups, and I also didn't get any response.

We are using MySQL 4.0.17 with the InnoDB option. In a query, when the WHERE clause contains non-indexed columns, InnoDB locks the entire table instead of taking row locks. Is there any solution apart from building an index on each query key? Is there a solution in any of the later versions?

With Best Regards,
Ravi
MySQL: Table locking problems when non-indexed keys are used
Hi Friends,

We are using MySQL 4.0.17 with the InnoDB option. In a query, when the WHERE clause contains non-indexed columns, InnoDB locks the entire table instead of taking row locks. Is there any solution apart from building an index on each query key? Is there a solution in any of the later versions?

With Best Regards,
Ravi
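[Editor's note: the behaviour described is InnoDB next-key locking — without a usable index, a locking scan sets locks on every row it examines, which in practice locks the whole table. The standard mitigation is indeed an index on the search column; a sketch with hypothetical table and column names:

```sql
-- Without an index on 'status', this UPDATE scans and locks every row:
UPDATE orders SET flag = 1 WHERE status = 'NEW';

-- Adding an index lets InnoDB lock only the matching index range:
CREATE INDEX idx_orders_status ON orders (status);

-- EXPLAIN shows whether the index is actually used
-- (the access type should no longer be ALL, i.e. a full table scan):
EXPLAIN SELECT * FROM orders WHERE status = 'NEW';
```

Later versions do not remove the need for the index: under the default REPEATABLE READ isolation, InnoDB still locks the rows it scans, though later 5.x releases relax this somewhat under READ COMMITTED.]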