The simplest method may be to set them up in a multi-master configuration,
similar to what's documented here:
http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html
This way you won't have to worry about switching between two masters in a
failover scenario, since they're both already masters and either can take writes.
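For reference, the core of that setup is just a couple of my.cnf settings on each box, plus a CHANGE MASTER TO pointing each server at the other. A minimal sketch (server-ids, log names and the auto_increment settings are placeholders of mine, not from the article; the auto_increment options need 5.0 or later):

# master A, my.cnf
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# master B, my.cnf
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2

The increment/offset split keeps AUTO_INCREMENT keys from colliding no matter which master takes the write.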
SCSI/FC/SAS drives are absolutely faster, especially at 15kRPM. Your
requirement of IOPS vs. usable space may actually make it less expensive to
use FC drives (e.g. if you don't have to retain much data but need it to be
really fast _all_ the time). This can be especially true if you take
This is confirmed working on a small test set with MySQL 4.1.22
transitioning from SPARC to x64. Thanks, everyone!
-Aaron
Here's the test set we used:
CREATE TABLE `test_innodb` (
`id` int(11) unsigned NOT NULL AUTO_INCREMENT,
`bigint` bigint(11) DEFAULT NULL,
`float` float DEFAULT NULL,
All,
I've currently got a project to migrate a LARGE (3.5TB) MySQL data set from
a Sun SPARC machine to a Sun x86 machine, both running Solaris 10 (though
obviously one is x86 and the other is SPARC). Is it possible to simply copy
the data files from one host to the other, or is a full dump and reload required?
I know how you feel! I think your two best options are these:
1.) Use LVM snapshots per the MPB links you mentioned as a guide. Your
incremental backup would be the binary logs that MySQL writes. You could
copy any of this data off site by mounting the snapshots and using your
remote copy tool of choice (rough sketch below).
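Roughly, the full-backup step could look like this (volume, mount point and host names are made up; mylvmbackup automates essentially the same sequence):

# 1. In an open mysql session, get a consistent point and leave the session connected:
#      FLUSH TABLES WITH READ LOCK;
# 2. From another shell, while that lock is still held, snapshot the volume holding the datadir:
lvcreate --snapshot --size 10G --name mysql_snap /dev/vg0/mysql
# 3. Back in the mysql session: UNLOCK TABLES;
# 4. Mount the snapshot and ship it off site, then clean up:
mount /dev/vg0/mysql_snap /mnt/mysql_snap
rsync -a /mnt/mysql_snap/ backuphost:/backups/mysql/
umount /mnt/mysql_snap && lvremove -f /dev/vg0/mysql_snap

The binary logs written after the snapshot point are then your incrementals.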
This doesn't work the same way as system RAM though. You can't extend your
innodb buffer pool onto a block device or filesystem. Though this
technology would be good for segregating things like InnoDB logs and mysql
binary logs.
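For example (paths are made up), the two settings that control where those land are:

[mysqld]
innodb_log_group_home_dir = /fastdevice/innodb-logs
log-bin                   = /fastdevice/binlogs/mysql-bin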
-Aaron
On Thu, Jan 22, 2009 at 7:40 AM, mos mo...@fastmail.fm wrote:
I'm also having a similar issue with some tables I've been trying to dump
(total data set is around 3TB). I'm dumping directly from one host to
another (mysqldump -hSOURCE DATABASE | mysql -hLOCALHOST DATABASE) using
mysql 4.1.22. One system is Solaris 10 SPARC, while the other is Solaris 10
x64
I've run into the following scenario: we've got several database servers
with 2+TB of data spread across several 900GB InnoDB data files. We can
move to innodb_file_per_table in an incremental fashion (i.e. ALTER TABLE ...
ENGINE=InnoDB one table at a time); however, to complete this task, we'll
need
Hi Michael,
Overall, and most of the time, SANs are a good thing. They have several
advantages over dedicated, directly attached storage arrays:
1.) They're generally a lot smarter about how and when they write and read
to the disks. Often they understand what's going on down at the head level,
We'd need more information on what the where clauses of the queries
look like to assist with this.
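In the meantime, EXPLAIN output for a representative query is usually the quickest way to show that; for example (the query and column are invented, just to show the form):

EXPLAIN SELECT * FROM mailer_student_status WHERE student_id = 12345\G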
-Aaron
On 9/5/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:
Hi,
What would you say about the table below? What can I do to make it more
efficient?
CREATE TABLE mailer_student_status
Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4. You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the
Generally your error log will be HOSTNAME.err in your datadir.
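If it may have been redirected, these will show where the server is actually looking (log_error can be empty, or absent on older versions, in which case the HOSTNAME.err default in the datadir applies):

SHOW VARIABLES LIKE 'datadir';
SHOW VARIABLES LIKE 'log_error';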
-Aaron
On Thu, Jun 26, 2008 at 8:46 AM, [EMAIL PROTECTED] wrote:
Sorry about the long signature in the email. I forgot to remove it...
Guillermo
From: [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Date: 26.06.2008
I would suggest running something like this from your new Linux server:
mysqldump -u USERNAME -p -h WINDOWS_HOST --opt DATABASE | mysql -u root -h localhost DATABASE
This will pump the output directly from mysqldump into mysql running
on the Linux machine, so you don't need another transfer
I have a question about how InnoDB deals with fragmentation within
its data files. Let me describe my usage scenario to you:
1.) Records are inserted into a InnoDB table. We'll call this table
A. It contains several different kinds of columns including
VARCHARs.
2.) Records are then processed
Will you be using the MyISAM or InnoDB table engines?
I had heard that InnoDB uses 16k blocks internally, so that might be a good
starting point, though I'd love to have someone confirm or deny that this is
actually true.
-Aaron
On Fri, May 9, 2008 at 12:01 AM, Iñigo Medina García [EMAIL PROTECTED] wrote:
That's true in some workloads, but with InnoDB you'll usually run into data
file fragmentation before filesystem fragmentation (unless it's a shared
system). This is especially true if your application runs a lot of updates
or deletes as random chunks of space will become free at different points
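When that fragmentation does start to hurt, the usual fix is to rebuild the table; for InnoDB a null ALTER and OPTIMIZE TABLE amount to the same rewrite (table name is made up):

ALTER TABLE some_table ENGINE=InnoDB;
-- or, equivalently for InnoDB:
OPTIMIZE TABLE some_table;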
So I've got a fairly big table (20,000 records) which has an ENUM
column. I need to change the name of one of the ENUM options. For
example, if the ENUM value was 'blue' before, it needs to be 'purple'
now. What's the best way to do this?
Thanks,
-Aaron
---
Aaron Blew
Jackson