retrieve ignored records from LOAD DATA INFILE IGNORE
Is there any way to get MySQL to generate a warning or other info when it ignores a row via LOAD DATA INFILE IGNORE? I'm happy having the duplicates ignored, but ideally I'd like to log which records were dupes in a place I can find them again.

-- thanks, Will

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
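[One workaround sketch, not from the thread: load into a key-less staging table first, then the "would-be ignored" rows are exactly those whose key already exists in the target. Table and column names here (`t`, `t_staging`, `id`) are hypothetical.]

    -- Assumes target table t has a unique key on id
    CREATE TABLE t_staging LIKE t;          -- then drop the unique key on the staging copy
    LOAD DATA INFILE '/tmp/new.txt' INTO TABLE t_staging;

    -- These are the rows IGNORE would have skipped; log them wherever you like:
    SELECT s.* FROM t_staging s JOIN t ON s.id = t.id;

    -- Now do the real load; duplicates are dropped as before:
    INSERT IGNORE INTO t SELECT * FROM t_staging;
    DROP TABLE t_staging;

The extra staging pass costs disk and time, but the skipped rows end up queryable instead of silently gone.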
Re: Time series
>> First: select * from table1 order by field1 asc limit 1
>> Last: select * from table1 order by field1 desc limit 1
>
> That only returns one number.. what we are really looking for is something

And worse: as far as I can tell, on 3.22.x, even if field1 is indexed, ONE of those queries is going to be very slow. The query planner doesn't seem to be smart enough to read the index in reverse for ORDER BY ... DESC clauses.

-- thanks, Will
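[For the first and last values themselves, one sketch that sidesteps the reverse-index problem, using the thread's table/column names: MIN() and MAX() on an indexed column can be resolved from the ends of the index rather than by sorting.]

    SELECT MIN(field1) AS first, MAX(field1) AS last FROM table1;

If you need the whole first/last *row* rather than just the value, you'd still need the LIMIT 1 queries above.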
4.1.1 alpha and mysql_install_db grant tables issue
The mysql_install_db script shipped with 4.1.1-alpha seems to leave the mysql/* (user, host, etc.) tables owned by root.root on my Debian system; this makes mysqld fail to start after the grant tables are installed. It looks like this is because mysql_install_db calls mysqld with --bootstrap and *without* --user=mysql. mysql_install_db already takes a --user argument but doesn't use it; perhaps it should be added to line 208?

-- thanks, Will
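[Until mysql_install_db passes its --user value through to the bootstrap mysqld, a post-install workaround sketch is to fix the ownership by hand before starting the server. The datadir path below is Debian's default and may differ on your system:]

    chown -R mysql:mysql /var/lib/mysql/mysql
    mysqld_safe --user=mysql &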
Re: 100,000,000 row limit?
> I don't believe this. I'm going to write a script to disprove this theory
> right now..

We have a lot more rows than that in a single MyISAM table at work:

mysql> select count(*) from probe_result;
+-----------+
| count(*)  |
+-----------+
| 302045414 |
+-----------+
1 row in set (0.00 sec)

-- thanks, Will
Re: Query syntax.
> Select User_Account from Users as a, Devices as b
> WHERE
> a.User_Account = (Select DISTINCT(b.Device_Account) from b.Devices
>                   WHERE b.Device_Name LIKE 'HP%' )
>
> I'm running 3.23.49 which I know is not the most current..it was installed

3.x does not support subselects ("select x from (select y from ...)"). You'll need to upgrade to 4.1.

-- thanks, Will
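[Alternatively, a scalar/IN subselect like this one can often be rewritten as a plain join, which 3.23.x does support. A sketch using the names from the quoted query, untested against the poster's schema:]

    SELECT DISTINCT a.User_Account
    FROM Users AS a, Devices AS b
    WHERE a.User_Account = b.Device_Account
      AND b.Device_Name LIKE 'HP%';

The DISTINCT replaces the DISTINCT() in the subselect, so each account shows up once even if it matches several HP devices.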
mysqlhotcopy as a replication scheme
I've got an application that uses a fairly large (~50MM rows, ~1GB of disk) table of read-only data. The table changes maybe once a month, but when it changes, (almost) EVERY row in the table changes. The app needs to be replicated into several datacenters worldwide using relatively slow backend links. For this reason and others (I need to be able to control when each datacenter picks up updates, etc.), native MySQL replication isn't attractive.

I'm considering building a scheme where I insert the data into a table once, ship around a gzipped mysqldump, and load it into each datacenter -- this is easy, uses less bandwidth, is easy to control via cron, and fits well into the rest of our infrastructure. Then I found mysqlhotcopy. Neato! I've tested, and this seems to work:

1) use mysqlhotcopy to copy the table on the "replication master"
2) gzip the table/index/data files and ship them someplace remote
3) (on the slave) unzip them
4) LOCK TABLES foo WRITE
5) FLUSH TABLE foo
6) copy the unzipped data files over the running mysql data files for the single table I'm interested in. There's clearly a problem here if the machine crashes during this step, but it can be worked out to just 3 calls to rename(2), which is atomic on a POSIX fs, so that's less an issue than it could be.
7) FLUSH TABLE foo
8) profit!

It looks like table foo now contains the new data, and it takes a LOT less time than reinserting all the data into the table. Other than "you should really use mysql native replication", does anyone have any comments on whether this is likely to be reliable, or why it's a bad idea? I'm using 3.23.49 (Debian stable); is FLUSH TABLE likely to change in future versions in a way that will break this?

-- thanks, Will
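[The rename(2) trick in step 6 could be sketched as below. Each mv on the same filesystem is a single rename(2), so a crash leaves every file either fully old or fully new, never half-copied -- though the three renames are not atomic as a *group*. Paths and the foo_new.* naming are examples, not from the thread; this demo uses throwaway files rather than a live datadir.]

```shell
# Stand-in for the MySQL datadir; a real run would use the table's
# actual .frm/.MYD/.MYI files under something like /var/lib/mysql/mydb
DATADIR=$(mktemp -d)
for ext in frm MYD MYI; do
    echo "old" > "$DATADIR/foo.$ext"      # the live table files
    echo "new" > "$DATADIR/foo_new.$ext"  # the unzipped hotcopy files
done

# The swap from step 6: three atomic renames (same filesystem required)
for ext in frm MYD MYI; do
    mv "$DATADIR/foo_new.$ext" "$DATADIR/foo.$ext"
done
```

A crash between two of the mv calls still leaves a mixed old/new table on disk, so the LOCK TABLES / FLUSH TABLE bracket around the swap is doing real work here.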