Confused About Rows and Skipping On Import with MySQL Migration Toolkit 1.1
Hi folks,

I'm trying to use MySQL Migration Toolkit 1.1 with MS SQL Server 2005 (http://dev.mysql.com/downloads/gui-tools/5.0.html). Basically all is working great, and some tables import with no problem, except... I'm trying to import an address table, and the summary reports a few problems like:

  incorrect string value for column 'street' at row 655
  0 rows transferred

The problem is that when I try to import just 650 or so and then look in phpMyAdmin, the rows are not in the original order. They start with addressID 1-2-3-4 etc. but randomly skip some, so there's no way I can match the imported row 655 to addressID 655 (they were in order - no deletions) to find the bad field to fix it...

Q: Why is it importing rows and seemingly skipping many?

Q: How can I figure out which is the REAL row 655 (without counting by hand) to fix the field text string error?

Thanks in advance - dave

--
Thanks - RevDave
Cool @ hosting4days . com
[db-lists 09]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
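[A few notes that may help here. "Incorrect string value" usually means a character-set mismatch: the source data contains bytes the target column's character set can't represent. And the toolkit's "row 655" is a position in the source result set, not an addressID. A sketch of how to locate it, assuming the table and column names from the post above (address, addressID, street) - adjust to your real schema:]

```sql
-- On the MySQL side: how many rows actually made it across?
SELECT COUNT(*) FROM address;

-- On the source side, fetch the row at position 655 by ordering
-- deterministically and skipping the first 654 rows. MySQL syntax shown;
-- on SQL Server 2005 you would use ROW_NUMBER() instead of LIMIT/OFFSET.
SELECT * FROM address ORDER BY addressID LIMIT 1 OFFSET 654;

-- Check what character set the target column actually uses:
SHOW FULL COLUMNS FROM address LIKE 'street';
```

If the target column turns out to be e.g. latin1 while the source data is Unicode, recreating the target table with a utf8 column may make the "incorrect string value" errors go away entirely.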
Re: MySQL Log and Data directories
Thanks Baron...

Also, a curious question: since you have presumably used what would be called a GOOD hw configuration with RAID 5/10, what's the typical IO (rnd rw) that you achieve/expect on high-trafficked sites?

--- On Wed, 3/4/09, Baron Schwartz wrote:

From: Baron Schwartz
Subject: Re: MySQL Log and Data directories
To: dbrb2002-...@yahoo.com
Cc: mysql@lists.mysql.com
Date: Wednesday, March 4, 2009, 11:40 AM

On Wed, Mar 4, 2009 at 1:22 PM, wrote:
> On a high read/write load.. is it good to split log (binlogs, innodb txn
> logs) and data (all tables, innodb tablespace) into different partitions?
>
> Anybody had any experience?
>
> For example: out of a 25-disk array with 142GB 1rpm... I would like to
> keep a few disks for logs and the rest for data.. is that advised, or better
> to keep everything in one pool so that all spindles can be efficiently
> managed...
>
> Thanks in advance

There are exceptions to everything I'm about to write, but:

Under high read loads, there is no benefit. Under high write loads, there might be. With this many disks, yes. With fewer disks, the relatively trivial sequential log writes will not actually degrade performance much, and the non-trivial performance impact of stealing disks away and dedicating them to the logging workload will make a lot of difference.

The real answer is always: run a benchmark and see. Does the improvement offset things like any penalty the OS imposes on you (e.g. LVM can't take a snapshot across multiple volumes)?

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html
How expensive is exception handling at end of cursor vs. UNIONing a sentinel?
Although I've been doing lots of work with SQL over the years, I've never done anything with stored procedures. While reading a book on MySQL stored programs, I'm struck by the fact that the common method of iterating through a result set using a cursor requires "exception-based branching" (a NOT FOUND handler). In conventional programming languages, exception-based branching is typically considered very bad practice. It seems, however, that the tools available in a PSM limit you to this strategy in the common cases.

It occurred to me, however, that another way to do this would be to UNION in a hardcoded row containing at least one "impossible" value (sometimes called a sentinel) that is used to check for the last row (or in this case, one past the last row of real data). This would allow detecting the end of the result set without exception-based branching.

The important question is: is it even worth it? Is exception handling in a MySQL PSM more expensive than normal processing, as it's considered to be in other languages like Java? If this works, would it make the resulting code harder to understand than the conventional strategy? Is this a deranged idea?
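[For readers following along, the conventional pattern being described looks like this. A minimal sketch - the table and column names (accounts, balance) are invented for illustration:]

```sql
DELIMITER //
CREATE PROCEDURE sum_balances(OUT total DECIMAL(10,2))
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE bal DECIMAL(10,2);
  -- Declaration order matters: variables, then cursors, then handlers.
  DECLARE cur CURSOR FOR SELECT balance FROM accounts;
  -- The "exception-based branching" in question: this fires once,
  -- when the cursor is exhausted, and merely sets a flag.
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  SET total = 0;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO bal;
    IF done = 1 THEN
      LEAVE read_loop;
    END IF;
    SET total = total + bal;
  END LOOP;
  CLOSE cur;
END//
DELIMITER ;
```

Note that the handler fires only once per cursor (at end-of-data), not per row, so unlike a Java exception in a hot loop there is no repeated unwinding cost to amortize - though benchmarking both approaches on your own server is the only reliable answer.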
Re: Possible to get better error handling for invalid enum parameter to stored program?
Thanks. I thought that was the case, but I wanted to be sure.

On Fri, Mar 6, 2009 at 12:07 PM, Perrin Harkins wrote:
> I'm afraid enums are useless for anything except compressing your
> data, since they don't reject bad values. If you want to limit a
> field to a set of values, you need to use a lookup table and a foreign
> key constraint.
>
> - Perrin
>
> On Fri, Mar 6, 2009 at 1:35 PM, David Karr wrote:
> > If I define an enum parameter for a stored program, and the calling code
> > sends an invalid value, they get the less than useful "data truncated"
> > error. Is it possible to define the stored program to produce better
> > error handling for that kind of error?
> >
> > This is probably a FAQ, but in general, it appears that error diagnostics
> > in stored programs are very primitive. Are there any plans in a roadmap to
> > improve this?
Re: Possible to get better error handling for invalid enum parameter to stored program?
I'm afraid enums are useless for anything except compressing your data, since they don't reject bad values. If you want to limit a field to a set of values, you need to use a lookup table and a foreign key constraint.

- Perrin

On Fri, Mar 6, 2009 at 1:35 PM, David Karr wrote:
> If I define an enum parameter for a stored program, and the calling code
> sends an invalid value, they get the less than useful "data truncated"
> error. Is it possible to define the stored program to produce better error
> handling for that kind of error?
>
> This is probably a FAQ, but in general, it appears that error diagnostics in
> stored programs are very primitive. Are there any plans in a roadmap to
> improve this?
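[A sketch of the lookup-table-plus-foreign-key approach Perrin describes. The table and column names are illustrative, and an engine that enforces foreign keys (InnoDB) is required:]

```sql
CREATE TABLE order_status (
  status VARCHAR(20) NOT NULL PRIMARY KEY
) ENGINE=InnoDB;

INSERT INTO order_status VALUES ('new'), ('shipped'), ('cancelled');

CREATE TABLE orders (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  status VARCHAR(20) NOT NULL,
  FOREIGN KEY (status) REFERENCES order_status (status)
) ENGINE=InnoDB;

-- An invalid value now fails with an explicit foreign key constraint
-- error instead of a silent "data truncated" adjustment:
INSERT INTO orders (status) VALUES ('bogus');
```

Adding a new allowed value then becomes a plain INSERT into the lookup table rather than an ALTER TABLE on the enum column.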
Possible to get better error handling for invalid enum parameter to stored program?
If I define an enum parameter for a stored program, and the calling code sends an invalid value, the caller gets the less than useful "data truncated" error. Is it possible to define the stored program to produce better error handling for that kind of error?

This is probably a FAQ, but in general, it appears that error diagnostics in stored programs are very primitive. Are there any plans in a roadmap to improve this?
ANNOUNCE: check_mysql_health
Hi list,

allow me to announce my Nagios plugin check_mysql_health. It was written to get as many important metrics from a database as possible. Have a look at the screenshot at http://people.consol.de/~lausser/nagios/check_mysql_health.png to get an impression of how a MySQL database can be monitored with Nagios.

The plugin can be downloaded from http://www.consol.de/opensource/nagios/check-mysql-health

Currently the documentation is in German only, but the command line parameters should be self-explanatory. For those of you who use check_mysql_perf, they will look familiar.

Greetings from Munich,
Gerhard
Re: Finding replicated database
Hi,

care to elaborate on the pitfalls of binlog-do-db?

Thanks,
Atle

On Wed, 4 Mar 2009, Baron Schwartz wrote:
> No. But I would recommend removing binlog-do-db and using
> replicate-wild-do-table=.% on the slave instead. The binlog-do-db
> approach has many pitfalls anyway, and is likely to burn you at some
> point, e.g. get the slave out of sync with the master. The bonus of my
> suggestion is that you'll be able to see from SHOW SLAVE STATUS what's
> being replicated.
>
> --
> Baron Schwartz, Director of Consulting, Percona Inc.
> Our Blog: http://www.mysqlperformanceblog.com/
> Our Services: http://www.percona.com/services.html
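[For the archives: the best-known binlog-do-db pitfall is that the master filters statements by the *default* (USE'd) database, not by the tables a statement actually touches, so cross-database updates or statements issued under a different default database can silently never reach the binlog. A sketch of the slave-side alternative Baron suggests, with "db1" as a placeholder database name:]

```ini
# On the master, remove the statement-level filter:
#   binlog-do-db = db1

# On the slave's my.cnf, filter by table pattern instead, so the
# filtering is visible in SHOW SLAVE STATUS:
replicate-wild-do-table = db1.%
```

After restarting the slave, the Replicate_Wild_Do_Table field of SHOW SLAVE STATUS shows the active filter.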
Re: Concurrent Inserts with merged table
At 08:54 PM 3/5/2009, Baron Schwartz wrote:
> Is there a way I can restore the concurrent select feature to a working
> state without having to shut down the server and rebuild the entire
> database?

Usually when concurrent insert is not permitted, it's because there are holes in the table that cause inserts to go somewhere other than the end of the table. So if Baron is right, then just run "OPTIMIZE TABLE table1, table2, ..., tablen" to get rid of the holes. See http://dev.mysql.com/doc/refman/5.1/en/optimize-table.html

Mike
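[A sketch of checking for and removing the holes Mike describes, with placeholder table names:]

```sql
-- Confirm concurrent inserts are enabled at all (0 = off, 1 = only when
-- the table has no holes, 2 = even with holes, available from 5.0.6):
SHOW VARIABLES LIKE 'concurrent_insert';

-- The Data_free column reports unreclaimed (hole) space in a MyISAM table:
SHOW TABLE STATUS LIKE 'table1';

-- Defragment, so new inserts go to the end of the data file again:
OPTIMIZE TABLE table1, table2, table3;
```

For a MERGE table, each underlying MyISAM table needs to be optimized individually; OPTIMIZE is run on the constituent tables, not on the merge table itself.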
View creator host updates
Hello,

After a system host name changed, my organization updated the MySQL user records, changing the old (and now non-existent) host name to the new one. This caused any views that were created by users accessing the MySQL server from the old host to become invalid, since a view's creating user is recorded as u...@host. Is there a better practice for updating large batches of user records that will not require views to be recreated? Our MySQL version is 5.0.67.

Thank you,
Chance

--
Center for the Application of Information Technologies
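[One way to at least inventory the damage, assuming your 5.0 build exposes the DEFINER column in INFORMATION_SCHEMA.VIEWS (present in recent 5.0 releases); 'old.example.com' is a placeholder for the retired host name:]

```sql
-- List every view whose definer still references the old host, so each
-- can be recreated (CREATE OR REPLACE ... or ALTER VIEW, executed by a
-- valid account, which resets the definer):
SELECT TABLE_SCHEMA, TABLE_NAME, DEFINER
FROM information_schema.VIEWS
WHERE DEFINER LIKE CONCAT('%@', 'old.example.com');
```

The VIEW_DEFINITION column of the same table can be used to script the re-creation, though the definitions it returns may need qualifying before they re-execute cleanly.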