Re: 5.1.51 Database Replica Slows Down Suddenly, Lags For Days, and Recovers Without Intervention

2011-10-24 Thread Luis Motta Campos
The data size is about 200 GB. I would have noticed an increase in writes. No backup activity is running (actually I don't do conventional backups). Any theories? Thank you for your interest. Kind regards, -- Luis Motta Campos On 23 Oct 2011, at 14:06, Tyler Poland tpol...@engineyard.com wrote:
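
Not quoted in this message, but the usual first step when a replica lags with no obvious cause is to capture its state from the mysql client while the lag is actually happening, for example:

  SHOW SLAVE STATUS\G          -- Seconds_Behind_Master, Relay_Log_Space, Slave_SQL_Running
  SHOW FULL PROCESSLIST;       -- what the replication SQL thread is executing right now
  SHOW ENGINE INNODB STATUS\G  -- pending I/O, semaphore waits, history list length

Comparing a snapshot taken during the lag with one taken after recovery usually narrows the problem down to I/O, a long-running statement on the SQL thread, or purge falling behind.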

Re: 5.1.51 Database Replica Slows Down Suddenly, Lags For Days, and Recovers Without Intervention

2011-10-24 Thread Luis Motta Campos
Claudio, Thank you for your interest. I will wait for the issue to happen again and will see what kind of information I can get back with strace. This is indeed something I didn't think of trying yet. I'll keep you people posted on this. I much appreciate the new approaches and fresh

Re: 5.1.51 Database Replica Slows Down Suddenly, Lags For Days, and Recovers Without Intervention

2011-10-24 Thread Luis Motta Campos
Thank you for sharing your experience, Howard. As those are replica servers, I don't care much about losing a second's worth of data in case of power failure. I believe the data centre has dual independent power sources, and my hardware man assured me that if the power goes down at the data centre
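
The setting this trade-off usually refers to is not quoted in the thread; a sketch of the relaxed-durability configuration that accepts losing roughly the last second of committed work on a crash (MySQL 5.1 syntax, values illustrative):

  -- relaxed durability on a replica; assumes InnoDB tables
  SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write log at commit, fsync about once per second
  SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to flush the binary log

With a battery-backed write cache on the RAID controller the practical risk window shrinks further, which is presumably the point of the data-centre power discussion.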

large temp files created by mysql

2011-10-24 Thread Joey L
I have a very large table - approx 3 or 4 gig in size. When I initiate a process on my webpage, MySQL starts to create a temporary table in the /tmp directory. Is there a way I can create this file/table ahead of time so MySQL does not have to create it? thanks mjh -- MySQL General Mailing List
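
Not part of the original question, but a quick way to see how often the server is spilling implicit temporary tables to disk, before looking at individual queries:

  SHOW GLOBAL STATUS LIKE 'Created_tmp%';
  -- a high Created_tmp_disk_tables count relative to Created_tmp_tables means many
  -- temporary tables exceed the in-memory limit and are written under tmpdir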

Re: large temp files created by mysql

2011-10-24 Thread Joey L
On Mon, Oct 24, 2011 at 10:52 AM, Joey L mjh2...@gmail.com wrote: I have a very large table - approx 3 or 4 gig in size. When I initiate a process on my webpage, MySQL starts to create a temporary table in the /tmp directory. Is there a way I can create this file/table ahead of time so MySQL

Re: large temp files created by mysql

2011-10-24 Thread Antony T Curtis
Something you're doing is creating a very large temporary table as part of handling a query. The usual culprit is a full table join combined with an ORDER BY or GROUP BY, which typically forces MySQL to create a temp table. You should do EXPLAINs on your
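
As a concrete illustration of that advice (table and column names are invented, not the poster's schema), EXPLAIN flags the pattern directly:

  -- hypothetical join plus GROUP BY/ORDER BY on a derived value
  EXPLAIN
  SELECT o.customer_id, COUNT(*) AS cnt
  FROM orders o
  JOIN order_items i ON i.order_id = o.id
  GROUP BY o.customer_id
  ORDER BY cnt DESC;
  -- 'Using temporary; Using filesort' in the Extra column marks the queries
  -- that will build an on-disk temp table once they outgrow memory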

Re: large temp files created by mysql

2011-10-24 Thread Reindl Harald
For example, select id from table order by rand() limit 1; creates a temporary table the full size of the original table. On 24.10.2011 17:03, Joey L wrote: On Mon, Oct 24, 2011 at 10:52 AM, Joey L mjh2...@gmail.com wrote: I have a very large table - approx 3 or 4 gig in size. When I initiate a process

Re: large temp files created by mysql

2011-10-24 Thread mos
At 10:34 AM 10/24/2011, you wrote: select id from table order by rand() limit 1; creates, for example, a temporary table the full size of the original table. Because it has to sort the entire table before it returns the one row. This of course is extremely inefficient. :) You need to choose a random row by
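
The message is truncated here, but the usual alternative is to pick a random id via the primary key instead of sorting the whole table. A sketch, assuming an integer primary key id with few gaps (mytable is a stand-in name; gaps bias the result slightly toward the row after each gap):

  SELECT t.id
  FROM mytable t
  JOIN (SELECT FLOOR(MIN(id) + RAND() * (MAX(id) - MIN(id))) AS rid
        FROM mytable) r
    ON t.id >= r.rid
  ORDER BY t.id
  LIMIT 1;

This resolves with a handful of index reads instead of a full sort, so no large temporary table is created.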

Re: large temp files created by mysql

2011-10-24 Thread Joey L
You should do EXPLAINs on your slow-running queries to find out which ones are likely... If it is generating a 30gig file, I'd expect it must be a very slow query. I know why the files are being created - but is there anything I can do on the server to accommodate large file operations?
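
The thread does not answer this directly, but the server-side knobs that usually matter are where on-disk temporary tables are written and how large an in-memory temp table may grow before it spills. A sketch (values illustrative):

  SHOW GLOBAL VARIABLES LIKE 'tmpdir';   -- point this at a filesystem with room (set in my.cnf, needs a restart)
  SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
  SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;

A 30 GB temp table will never fit in memory, so for a query like this the realistic options are a larger or faster tmpdir, or rewriting the query so it no longer needs the temp table.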

Re: large temp files created by mysql

2011-10-24 Thread Joey L
MySQLTuner 1.2.0 - Major Hayden ma...@mhtx.net Bug reports, feature requests, and downloads at http://mysqltuner.com/ Run with '--help' for additional options and output filtering Please enter your MySQL administrative login: root Please enter your MySQL administrative password:

Re: large temp files created by mysql

2011-10-24 Thread Reindl Harald
On 24.10.2011 18:02, mos wrote: At 10:34 AM 10/24/2011, you wrote: select id from table order by rand() limit 1; creates, for example, a temporary table the full size of the original table. Because it has to sort the entire table before it returns the one row. This of course is extremely inefficient.

Re: large temp files created by mysql

2011-10-24 Thread Reindl Harald
On 24.10.2011 18:27, Joey L wrote: Performance Metrics - [--] Up for: 10s (2K q [237.300 qps], 591 conn, TX: 947K, RX: 261K) [!!] Key buffer size / total MyISAM indexes: 2.0G/12.0G [!!] Key buffer hit rate: 89.9% (5K cached / 524
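
The [!!] lines point at a key buffer (2.0G) that is small relative to the 12.0G of MyISAM indexes; note also that with only 10s of uptime the hit-rate figure means little. A sketch of how one might inspect and adjust it (the 4G value is an example only and must fit in available RAM):

  SHOW GLOBAL VARIABLES LIKE 'key_buffer_size';
  SHOW GLOBAL STATUS LIKE 'Key_read%';   -- Key_reads / Key_read_requests approximates the miss rate
  SET GLOBAL key_buffer_size = 4 * 1024 * 1024 * 1024;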

Benetl, a free ETL tool for MySQL, is out in version 3.8.

2011-10-24 Thread Benoît Carpentier
Dear all, Benetl, a free ETL tool for MySQL, is out in version 3.8. It now supports the official MySQL (5.x) driver(s), which are not provided. Benetl is freely downloadable at: http://www.benetl.net Please read the documentation to see how to use the official MySQL driver instead of the provided driver.

Re: large temp files created by mysql

2011-10-24 Thread Johan De Meersman
- Original Message - From: Joey L mjh2...@gmail.com I know why the files are being created - but is there anything I can do on the server to accommodate large file operations? *We* don't, however :-) If you'd care to enlighten us as to the particular job and why [you believe

Within-group aggregate query help please - customers and latest subscription row

2011-10-24 Thread Daevid Vincent
I know this is a common problem, and I've been struggling with it for a full day now but I can't get it. I also tried a few sites for examples: http://www.artfulsoftware.com/infotree/queries.php#101 http://forums.devarticles.com/general-sql-development-47/select-max-datetime-problem-10210.html
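
The post is truncated, but the problem described is the classic "latest row per group" (within-group aggregate) query. One standard shape, with invented table and column names rather than the poster's schema, is the self-exclusion join:

  -- one latest subscription per customer; ties on date return multiple rows
  SELECT c.customer_id, c.email, s.subscription_id, s.`date`
  FROM customers c
  JOIN subscriptions s       ON s.customer_id = c.customer_id
  LEFT JOIN subscriptions s2 ON s2.customer_id = c.customer_id
                            AND s2.`date` > s.`date`
  WHERE s2.subscription_id IS NULL;

With an index on (customer_id, date) this avoids running a correlated subquery per row, which is what makes the naive version slow.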

Re: large temp files created by mysql

2011-10-24 Thread mos
At 11:32 AM 10/24/2011, Reindl Harald wrote: On 24.10.2011 18:02, mos wrote: At 10:34 AM 10/24/2011, you wrote: select id from table order by rand() limit 1; creates, for example, a temporary table the full size of the original table. Because it has to sort the entire table before it returns the one

Re: large temp files created by mysql

2011-10-24 Thread Reindl Harald
On 24.10.2011 23:31, mos wrote: At 11:32 AM 10/24/2011, Reindl Harald wrote: On 24.10.2011 18:02, mos wrote: At 10:34 AM 10/24/2011, you wrote: select id from table order by rand() limit 1; creates, for example, a temporary table the full size of the original table. Because it has to sort

RE: Within-group aggregate query help please - customers and latest subscription row

2011-10-24 Thread Daevid Vincent
A kind (and shy) soul replied to me off list and suggested this solution; however, it takes 28 seconds (that's for a single customer_id, so this is not going to scale). Got any other suggestions? :-) SELECT c.customer_id, c.email, c.name, c.username,

RE: Within-group aggregate query help please - customers and latest subscription row

2011-10-24 Thread Daevid Vincent
Okay, it seems I am learning... slowly... So there needs to be a second WHERE in the sub-select... To get ONE customer's last subscription (0.038s): SELECT c.customer_id, c.email, c.name, c.username, s.subscription_id, s.`date` FROM
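
The query above is cut off; a sketch of the pattern being described, with assumed table and column names (the point is that the customer filter is repeated inside the sub-select so MAX() scans one customer's rows instead of the whole table):

  SELECT c.customer_id, c.email, c.name, c.username,
         s.subscription_id, s.`date`
  FROM customers c
  JOIN subscriptions s ON s.customer_id = c.customer_id
  WHERE c.customer_id = 12345                              -- outer filter, placeholder id
    AND s.`date` = (SELECT MAX(s2.`date`)
                    FROM subscriptions s2
                    WHERE s2.customer_id = c.customer_id); -- the 'second WHERE'

Without that correlated WHERE, the sub-select computes MAX() over every customer's subscriptions rather than just one customer's.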

Re: large temp files created by mysql

2011-10-24 Thread mos
At 05:31 PM 10/24/2011, Reindl Harald wrote: On 24.10.2011 23:31, mos wrote: At 11:32 AM 10/24/2011, Reindl Harald wrote: On 24.10.2011 18:02, mos wrote: At 10:34 AM 10/24/2011, you wrote: select id from table order by rand() limit 1; creates, for example, a temporary table