Re: Replication switch Master to slave and back

2009-07-08 Thread Aaron Blew
The simplest method may be to set them up in a multi-master configuration,
similar to what's documented here:
http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication.html

This way you won't have to worry about switching between two masters in a
failover scenario since they're both writable.
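For reference, the my.cnf pieces that make dual-master setups safe look roughly like this (server IDs and offsets here are hypothetical; adjust per host):

# Master A (Master B mirrors this with server-id = 2
# and auto_increment_offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
log-slave-updates
auto_increment_increment = 2   # step by the number of masters
auto_increment_offset    = 1   # unique per master

The auto_increment settings keep the two writable masters from generating colliding AUTO_INCREMENT keys.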

-Aaron

On Wed, Jul 8, 2009 at 1:38 PM, Cantwell, Bryan bcantw...@firescope.com wrote:

 I have successfully set up a replication master/slave scenario with my
 MySQL 5.0.51a. Now in the event of a failover to the slave (an identical
 appliance), I want the old master to become the slave for an eventual
 switch back the other way.
 Would it really take the same process to keep the old master up to date
 with the new one? Or is there a way to bring it up to date with the new
 machine without a mysqldump or copying data files?

 I have binary logging running on both machines in hopes that I could just
 tell the new slave how to catch up with the new master...
 Any assistance here?

 thanks

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=aaronb...@gmail.com




Re: Best RAID for a DB + LVM?

2009-02-24 Thread Aaron Blew
SCSI/FC/SAS drives are absolutely faster, especially at 15kRPM.  Your
requirement of IOPS vs Usable space may actually make it less expensive to
use FC drives (ex. if you don't have to retain much data but need it to be
really fast _all_ the time).  This can be especially true if you take
datacenter costs into consideration.

There's also a difference between Enterprise SATA drives and regular
Desktop SATA drives.  The Enterprise class drives tend to be optimized for
workloads that a database may throw at them.

One thing to keep in mind if your dataset isn't terribly large would be to
cram as much RAM in the host as you can.  If you've only got a portion of
your data that's heavily accessed, keeping it in RAM would be ideal.
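As a rough sketch, the relevant knob for InnoDB is the buffer pool size (the value below is hypothetical, for a dedicated 16GB host):

[mysqld]
# A common rule of thumb for a dedicated InnoDB box is to give the
# buffer pool most of physical RAM, minus headroom for the OS and
# per-connection buffers.
innodb_buffer_pool_size = 12G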

-Aaron


On Mon, Feb 23, 2009 at 9:17 AM, Brent Baisley brentt...@gmail.com wrote:

 SCSI isn't necessarily faster now. The big difference used to be
 SCSI's support for command queueing, which is why it was faster in
 multi-user environments. Command queueing is now fairly common in SATA
 drives.
 The highest end SCSI is probably still faster than the highest end
 SATA, but you will have less disk space and it will cost much more.
 I would recommend using one of the RAID in a box solution. They have
 big caches for the whole RAID and they are optimized to the
 controllers. If money isn't really an issue, you may look into
 something like NetApp. That would have everything you need.

 Brent Baisley

 On Mon, Feb 23, 2009 at 6:58 AM, Andy Smith a.sm...@ukgrid.net wrote:
  What RAID level to use, whether to use SCSI or SATA etc. are all pretty
  much "how long is a piece of string?" questions. If you have a really
  high-end hardware array, RAID 5 may be faster than RAID 1+0 is on a
  cheaper system.
 
  Basically
 
  RAID 5 = slower
  SATA = slower
  RAID 1+0 = faster
  SCSI = faster
  more physical disks = faster
  more expensive controller = faster
 
  ;)
 
  If you want to compare specific hardware you'll need to get your hands on
  it or find someone else who has already done a comparison. But the disk
  array you have hooked up will make a huge difference to performance; it
  just depends how much you want to spend.
 
  Quoting Waynn Lue waynn...@gmail.com:
 
   I currently have a RAID 5 setup for our database server.  Our space is
   running out, so I'm looking to increase the disk space.  Since I'm doing
   that anyway, I decided to re-evaluate our current disk array.  I was
   told that RAID 5 isn't a good choice for databases since it's slower to
   write.  In addition, I've also been considering setting up LVM to take
   quick db snapshots, after reading various links on the web (and posts to
   this list).

   So on to the questions!  First, if that's what I eventually want to do
   (get a new RAID server with LVM), do I need to do anything special to
   set up LVM on the new system?  Second, what is a good RAID setup for
   databases?  RAID 10?  0+1?  Third, I have the choice of using SATA or
   SCSI in conjunction with the RAID drives I choose.  How much of a
   difference is there in using SATA instead of SCSI, especially in light
   of whatever RAID I end up going with?

   Thanks for any insights,
   Waynn
 
 
 
 
 





Re: SPARC to x64 Transition

2009-02-18 Thread Aaron Blew
This is confirmed working on a small test set with MySQL 4.1.22
transitioning from SPARC to X64.  Thanks everyone!

-Aaron

Here's the test set we used:

CREATE TABLE `test_innodb` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `bigint` bigint(11) DEFAULT NULL,
  `float` float DEFAULT NULL,
  `double` double DEFAULT NULL,
  `deci` decimal(6,2) DEFAULT NULL,
  `var` varchar(255) DEFAULT NULL,
  `datetime` datetime DEFAULT NULL,
  `timestamp` timestamp NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP,

  big_unsigned bigint unsigned,
  float_unsigned float unsigned,
  int_unsigned  int unsigned,


  PRIMARY KEY (`id`),
  KEY `deci` (`deci`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=latin1;


insert into `test_innodb` values
 ('1' ,'9223372036854775807' ,'321.123' ,'123132321.213213' ,'12.20'
,'somechar' ,'2009-02-10 10:44:10' ,'2009-02-17 10:46:05' ,(400 *
400) ,444.888 ,123456) ,
 ('2' ,'-9223372036854775807' ,'-67498.7' ,'-6.84616419684968e+17' ,'-0.84'
,'somevarchar' ,'2009-02-05 10:45:12' ,'2009-02-17 10:46:12' ,(500 *
300) ,444.888 ,123456) ,
 ('3' ,'0' ,'0.0' ,'0.0' ,'0.0' ,NULL ,'0000-00-00 00:00:00' ,'2009-02-17
10:46:12' ,(0) ,0.0 ,0);



On Mon, Feb 16, 2009 at 8:26 AM, Heikki Tuuri heikki.tu...@oracle.com wrote:

 Aaron,

 
 I've currently got a project to migrate a LARGE (3.5TB) MySQL data set from
 a Sun SPARC machine to a Sun x86 machine, both running Solaris 10 (though
 obviously one is x86 and the other is SPARC).  Is it possible to simply
 copy
 the data files from one host to the other or is a full mysqldump/import
 necessary to preserve data integrity?

 If a file copy doesn't work, why specifically would it fail?

 Thanks,
 -Aaron
 

 you can simply copy the files, whether InnoDB or MyISAM. As far as I know,
 all modern processors use the same floating point format. And all integer
 and other data structures are platform independent in MyISAM and InnoDB.

 Best regards,

 Heikki
 Innobase/Oracle





SPARC to x64 Transition

2009-02-13 Thread Aaron Blew
All,
I've currently got a project to migrate a LARGE (3.5TB) MySQL data set from
a Sun SPARC machine to a Sun x86 machine, both running Solaris 10 (though
obviously one is x86 and the other is SPARC).  Is it possible to simply copy
the data files from one host to the other or is a full mysqldump/import
necessary to preserve data integrity?

If a file copy doesn't work, why specifically would it fail?

Thanks,
-Aaron


Re: How do you backup HUGE tables?

2009-01-23 Thread Aaron Blew
I know how you feel!  I think your two best options are these:
1.) Use LVM snapshots per the MPB links you mentioned as a guide.  Your
incremental backup would be the binary logs that MySQL writes.  You could
copy any of this data off site by mounting the snapshots and using your
remote copy mechanism of choice.
2.) Create a slave and create your backups from it, again using the binary
logs as the incremental backup.  You could also use the snapshot technique
to create the initial data set for this host if you're not able to take this
host down for an extended period of time.
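A sketch of the snapshot technique, assuming the data directory sits on an LVM volume (volume and mount names below are hypothetical):

# 1. In a mysql session, take a global read lock and keep the session open:
#      FLUSH TABLES WITH READ LOCK;  SHOW MASTER STATUS;
# 2. From a shell, snapshot the volume holding the data directory:
lvcreate --size 16G --snapshot --name mysql-snap /dev/vg0/mysql
# 3. Back in the mysql session, release the lock:
#      UNLOCK TABLES;
# 4. Mount the snapshot, copy it off-site, then drop it:
mount /dev/vg0/mysql-snap /mnt/mysql-snap
rsync -a /mnt/mysql-snap/ backuphost:/backups/mysql/
umount /mnt/mysql-snap && lvremove -f /dev/vg0/mysql-snap

Recording SHOW MASTER STATUS while the lock is held gives you the binary log position to pair with the snapshot for point-in-time recovery.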

Claudio has an excellent point with innodb-file-per-table as well.  Just
make sure a single table will never grow to more than the maximum file size
of your filesystem.

Good luck,
-Aaron

On Fri, Jan 23, 2009 at 1:18 PM, Daevid Vincent dae...@daevid.com wrote:

 We have some INNODB tables that are over 500,000,000 rows. These
 obviously make for some mighty big file sizes:

 -rw-rw 1 mysql mysql 73,872,179,200 2009-01-22 02:31 ibdata1

 This can take a good amount of time to copy even just the file, and a
 mysqldump can take hours to export and import took 3 days (with the
 usual intelligent optimizations, locks, inserts, keys, etc.). Yowza!

 So, it's not really feasible or ideal to do this daily. We'd like to do
 some kind of daily diff and weekly or monthly full backup. Also, as
 any good backup strategy involves off site backups, it's not very
 convenient or even feasible to transfer 80+ GB over our pipe ever day
 (which according to iperf is getting around 11 MBytes/sec from our data
 center to our office).

 Looking for ideas as to what others are doing to backup their very large
 sets?

 We do have a master/slave setup. We're thinking of adding two more
 slaves that are read-only and not accessed via the web at all. Just sits
 there being a backup effectively. One being offsite in another building
 and the logic that we'll trickle in maybe 100k per minute as the data
 is inserted into the real M/S so that should be negligible on our
 intra/internet.


 ---
 I've done some research here, but nothing stands out as the winner...
 but I'm open to any of these ideas if you can make a strong case for
 them.

 http://dev.mysql.com/doc/refman/5.1/en/mysqlhotcopy.html
 mysqlhotcopy works only for backing up MyISAM and ARCHIVE tables.

 http://dev.mysql.com/doc/refman/5.0/en/innodb-backup.html
 InnoDB Hot Backup is an online backup tool you can use to backup your
 InnoDB database while it is running. InnoDB Hot Backup does not require
 you to shut down your database and it does not set any locks or disturb
 your normal database processing. InnoDB Hot Backup is a non-free
 (commercial) add-on tool with an annual license fee per computer on
 which the MySQL server is run.
 http://www.innodb.com/hot-backup/
 [not loving that it's a commercial tool]


 http://dev.mysql.com/doc/refman/5.0/en/replication-implementation-details.html

 http://dev.mysql.com/doc/refman/5.1/en/backup.html
  read the comments: "You can also take very fast online or hot backups if
  you have linux volume management or LVM ..."  I knew there was a way to
  use LVM for backups somehow. Maybe a solution for us?
 http://www.mysqlperformanceblog.com/?s=backup

 http://www.mysqlperformanceblog.com/2006/08/21/using-lvm-for-mysql-backup-and-replication-setup/

 Some other backup solutions maybe:
 http://www.debianhelp.co.uk/mysqlscript.htm
 http://mysqlbackup.phpmagazine.net/ (seems dead, but maybe it just works
 so well, no need to improve it?)
 http://www.ozerov.de/bigdump.php




Re: How much memory can mysql 5.1 take advantage of?

2009-01-22 Thread Aaron Blew
This doesn't work the same way as system RAM, though.  You can't extend
your InnoDB buffer pool onto a block device or filesystem.  That said, this
technology would be good for segregating things like InnoDB logs and MySQL
binary logs.
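If you did use such a device for log segregation, the relevant relocation settings would be something like this (paths are hypothetical, and note the thread's caveat about battery backup for anything volatile):

[mysqld]
# Move InnoDB redo logs and binary logs onto the fast device,
# keeping the data files on ordinary disk.
innodb_log_group_home_dir = /mnt/ramdrive/innodb-logs
log-bin                   = /mnt/ramdrive/binlog/mysql-bin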

-Aaron

On Thu, Jan 22, 2009 at 7:40 AM, mos mo...@fastmail.fm wrote:

 At 01:08 PM 1/20/2009, you wrote:

 While specing out a new server, I was wondering if there is any limit to
 how much memory can be allocated to mysql 5.1.  If a server has 16GB of ram,
 can mysql take advantage of that much ram (minus a reserved amount for the
 OS obviously)?  Is there any limit such as those imposed by 32-bit
 processors?

 Thanks!


 You can get an external ram drive fairly cheap that will handle 32gb per
 drive (4gb sticks) and can use raid 0 to give you larger drives. They cost
 around $500 for the hardware not including the RAM. On page 9 of the report
 http://techreport.com/articles.x/16255/9 they show database and web server
 performance that blows away all hard drives by a factor of 10x. They have a
 small battery backup that will preserve the contents between reboots and for
 a few hours during power loss. An option allows you to back up the contents
 to flash storage. Of course you should use a UPS on it. What appeals to me
 is you can use it 24/7 and it is not going to wear out. This is definitely
 going on my wish list. :-)

 http://techreport.com/articles.x/16255/1

 Mike





Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Aaron Blew
I'm also having a similar issue with some tables I've been trying to dump
(total data set is around 3TB).  I'm dumping directly from one host to
another (mysqldump -hSOURCE DATABASE | mysql -hLOCALHOST DATABASE) using
mysql 4.1.22.  One system is Solaris 10 SPARC, while the other is Solaris 10
x64 (64bit MySQL as well).

I wrote a script that starts a mysqldump process for each table within a
database, which shouldn't be a problem since the host currently has around
12G unused memory.  Midway through the dump I seem to lose the connection as
Dan described.  After attempting to drop/re-import (using a single process),
the larger tables continue to fail (though at different points) while some
of the small-medium sized tables made it across.

Anyone else run into this before? Ideas?

Thanks,
-Aaron


Obsoleting old ibdata files

2008-10-13 Thread Aaron Blew
I've run into the following scenario: we've got several database servers
with 2+TB of data spread across several 900GB InnoDB data files.  We can
move to innodb_file_per_table in an incremental fashion (ie ALTER TABLE
ENGINE='InnoDB' one table at a time), however, to complete this task, we'll
need to phase out the existing ibdata files and delete them.  Is there a way
to safely delete ibdata files after the data has been removed from them
without doing a mysqldump and re-import?  Unfortunately we can't take the
databases down for the multiple days it would take to do a full mysqldump
based migration.
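For context, the incremental move mentioned above is a null rebuild per table once innodb_file_per_table is enabled (database and table names here are hypothetical):

-- With innodb_file_per_table set in my.cnf and the server restarted,
-- rebuilding a table moves it into its own .ibd file:
ALTER TABLE mydb.mytable ENGINE=InnoDB;

The catch is that the shared ibdata files never shrink; the freed space is only reused internally, which is why phasing out the files themselves is the hard part.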

Thanks,
-Aaron


Re: MySQL and SAN storage - a request for comments

2008-09-26 Thread Aaron Blew
Hi Michael,
Overall and most of the time, SANs are a good thing.  They have several
advantages over dedicated directly attached storage arrays:
1.) They're generally a lot smarter about how and when they write and read
to the disks.  Often they understand what's going on down at the head level,
and can leverage that for better performance.
2.) They've generally got a lot more cache than a directly attached array
(some systems can have up to 256GB of cache)
3.) They're a lot more reliable than many direct attached arrays.  There
have been many many hours put into algorithms to detect and predict disk
failures by these SAN vendors, and they're designed to keep that data online
as much as possible, as their reputation rides on their availability.  Hitachi
Data Systems (as one example) even offers configurations with a 100% data
availability guarantee (so long as the unit has power)
4.) Having all those spindles under one management/virtualization framework
makes you a lot more agile with how you can make use of your storage.  The
MySQL workloads in your environment need not all be striped across all the
spindles within the SAN, which lets you segregate workloads.  That said,
using all the available spindles can help some workloads too, since not all
databases will be hammering down to the spindle all the time.

A SAN environment isn't always a trivial thing to operate, but it will save
a lot of time over managing 100s of direct attached arrays and can offer
performance capabilities way beyond what can be practically achieved by
using direct attached storage.

-Aaron


On Thu, Sep 25, 2008 at 6:38 PM, Michael Dykman [EMAIL PROTECTED] wrote:

 Hello all,

 I recent started employment with a company which has a lot of mysql
 servers (100+ is my best estimate so far) and have all of their
 database servers, masters and slaves alike, using one of 2 SANs for
 data storage.  The servers are connected to dedicated switches with
 fibre to the SANs, and the SANs themselves seem to be well configured
 and tuned.

 However, it seems preposterous to me that all those very busy
 databases should, by design, have a common bottleneck and share a
 single point of failure. I am not deeply knowledgeable about SANs or
 their performance characteristics; my reaction thus far is pretty much
 intuition, but I can't help but picture the simple analogue of a single
 disk or a RAID 10 with synchronized spindles frantically thrashing
 back and forth to respond to tens of thousands of queries per second.

 Would anyone care to comment?  Is my concern justified or am I merely
 confused?

 --
  - michael dykman
  - [EMAIL PROTECTED]

  - All models are wrong.  Some models are useful.





Re: INDEXING ALL COLUMNS

2008-09-05 Thread Aaron Blew
We'd need more information on what the where clauses of the queries
look like to assist with this.

-Aaron

On 9/5/08, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:
 Hi,

 What would you say about the below table . What can i do to make it more
 efficient.

 CREATE TABLE mailer_student_status (
   student_id decimal(22,0) NOT NULL default '0',
   param varchar(128) NOT NULL default '',
   value varchar(128) default NULL,
   PRIMARY KEY (student_id,param),
   KEY idx_value (value)
 );

 On Sat, Sep 6, 2008 at 1:16 AM, ewen fortune [EMAIL PROTECTED] wrote:

 Hi,

 Following on from what Mike mentioned, indexing all columns does not
 really help as MySQL will at most use one index for a query, so it's
 important to pick your indexes carefully and consider constructing
 composite indexes. An index on a single column may not even be used
 due to poor cardinality.

 Ewen

 On Fri, Sep 5, 2008 at 9:27 PM, Krishna Chandra Prajapati
 [EMAIL PROTECTED] wrote:
  Hi all,
 
   I am looking for, is there any specific reason for not indexing all
   columns of a table, and what the impact on performance would be.
   Indexing is meant for getting great performance, so why is indexing all
   columns not feasible?  (I read in the docs that all columns should not
   be indexed.)
 
  --
  Krishna Chandra Prajapati
 




 --
 Krishna Chandra Prajapati


-- 
Sent from my mobile device




Re: innodb/myisam performance issues

2008-09-04 Thread Aaron Blew
Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4.  You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the key_buffer
allocation and put it towards the InnoDB buffer pool.

What are the system's specs?  What's its underlying storage?  What flags
were used when you created the filesystem(s)?  What OS/Version of MySQL are
you running?  Could you send us some iostat output?

Thanks and good luck,
-Aaron

On Thu, Sep 4, 2008 at 1:26 PM, Josh Miller [EMAIL PROTECTED] wrote:

 Good afternoon,

 I have recently converted a large table from MyISAM to InnoDB and am
 experiencing severe performance issues because of it.  HTTP response times
 have gone from avg .25 seconds to avg 2-3 seconds.  Details follow:

 PHP/MySQL website, no memcached, 3 web nodes that interact with DB, one
 that serves images, one master DB that serves all reads/writes, backup DB
 that only serves for backup/failover at this time (app being changed to
 split reads/writes, not yet).

 The one table that I converted is 130M rows, around 10GB data MyISAM to
 22GB InnoDB.  There are around 110 tables on the DB total.


 My.cnf abbreviated settings:

 [mysqld]
 port  = 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer= 3G
 sort_buffer_size  = 45M
 max_allowed_packet  = 16M
 table_cache = 2048

 tmp_table_size= 512M
 max_heap_table_size = 512M

 myisam_sort_buffer_size = 512M
 myisam_max_sort_file_size = 10G
 myisam_repair_threads   = 1
 thread_cache_size   = 300

 query_cache_type  = 1
 query_cache_limit = 1M
 query_cache_size  = 600M

 thread_concurrency  = 8
 max_connections   = 2048
 sync_binlog = 1

 innodb_buffer_pool_size = 14G
 innodb_log_file_size  = 20M
 innodb_flush_log_at_trx_commit=1
 innodb_flush_method = O_DIRECT
 skip-innodb-doublewrite
 innodb_support_xa = 1
 innodb_autoextend_increment = 16
 innodb_data_file_path   = ibdata1:40G:autoextend

 We're seeing a significantly higher percentage of IO wait on the system,
  averaging 20% now with the majority of that being user IO.  The system is
 not swapping at all.

 Any ideas for what to check or modify to increase the performance here and
 let MyISAM and InnoDB play better together?  The plan is to convert all
 tables to InnoDB which does not seem like a great idea at this point, we're
 considering moving back to MyISAM.

 Thanks!
 Josh Miller, RHCE





Re: Performance problem with more than 500 concurrent queries

2008-06-26 Thread Aaron Blew
Generally your error log will be HOSTNAME.err in your data_dir.
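If it isn't there, the server itself can tell you where it writes things:

mysql> SHOW VARIABLES LIKE 'datadir';
mysql> SHOW VARIABLES LIKE 'log_error';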

-Aaron

On Thu, Jun 26, 2008 at 8:46 AM,  [EMAIL PROTECTED] wrote:
 Sorry about the long signature in the email. I forgot to remove it...

 Guillermo






 From:
 [EMAIL PROTECTED]
 To:
 mysql@lists.mysql.com
 Date:
 26.06.2008 17:39
 Subject:
 Re: Performance problem with more than 500 concurrent queries



 Hello,

 thanks for the answer.

  Where is the error.log stored? I ran mysqladmin; it asked for the
  password and then exited immediately. But I cannot find any error.log.

 Thanks,

 Guillermo


 ---
 Guillermo Acilu
 Senior Engineer, Koiaka GmbH

 Koiaka GmbH
 Riesserkopfstr. 17
 82467 Garmisch-Partenkirchen
 Tel: +49 (0)8821 9679555
 Fax: +49 (0)8821 730 9185
 Mailto:[EMAIL PROTECTED]
 http://www.koiaka.com

 Amtsgericht München: HR B 161 041
 Geschäftsführer: Guillermo Acilu
 Sitz: Garmisch-Partenkirchen


 This e-mail message may contain confidential and/or privileged
 information. If you are not an addressee or otherwise authorized to
 receive this message, you should not use, copy, disclose or take any
 action based on this e-mail or any information contained in the message.
 If you have received this material in error, please advise the sender
 immediately by reply e-mail and delete this message completely.
 Thank you!





 From:
 Ananda Kumar [EMAIL PROTECTED]
 To:
 [EMAIL PROTECTED] [EMAIL PROTECTED]
 Cc:
 mysql@lists.mysql.com
 Date:
 26.06.2008 16:30
 Subject:
 Re: Performance problem with more than 500 concurrent queries



 do this

 mysqladmin -uroot -p debug

 and check the error.log, see if there are any locks on the tables.


 On 6/26/08, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

 Hello guys,

 I am new to this list and also kind of new to mysql too.

 I have a multi-thread application written in Ruby. The application is
 reading one table that has two columns (Father, Children). As you might
 suspect, this is a tree. The fields are foreign keys to a second table,
 but the second table is not involved in the problem. The father-children
 table has around 10 rows.

 What the code does is simple. It starts in the root and it navigates the
 tree up to the leafs. The first thread takes the root and runs a select
 to
 get all the children. Then it triggers new threads per children and it
 ends, leaving the other threads alive. Every thread does exactly the
 same
 until they reach the leafs.

 When the threads reach the leafs, they read the description from the
 other
 table using the leaf code, write the value in a global array and end.

 With a few rows, the algorithm is very fast. The problem starts when
 each
 node has many children. To give you an idea, in one point in time there
 are more than 600 threads running, but for some reason I always see no
 more than two queries running in parallel from the MySQL Administrator.

 Each thread opens a new connection, runs the select, closes the
 connection
 and ends. I have the default maximum connections, 100. So I should see
 more queries in parallel than only two or three. All the connections are
 constantly used while the algorithm runs. So the connections are open,
 but
 the database is not doing anything with them. It sounds like the table
 is
 locked, or something. I have checked the code several times but
 everything
 is correct. The code is only 25 lines long.

 The other symptom I can see is that when I start the script, there are
 up
 to 30 or 40 queries in parallel, but then the number goes down quickly
 until it reaches only 2 or 3 concurrent queries a few seconds later. And
 it stays like this.

 I've started playing around with the caches and memory values for MySQL
 server, but to be honest, I am just guessing and the performance does
 not
 change. I am Oracle DBA and I am trying to find some points in common
 with
 mysql to gain performance, but I cannot find the source of the problem.

 I am with Mac OS X Leopard in a very fast machine and MySQL 5.0.51b. The
 problem is also present in 5.1.25-rc

 Any ideas why is this happening?

 Thanks,

 Guillermo








Re: Moving Database from Windows to Linux

2008-06-26 Thread Aaron Blew
I would suggest running something like this from your new linux server:
mysqldump -u USERNAME -p -h WINDOWS_HOST --opt DATABASE | mysql -u
root -h localhost DATABASE

This will pump the output directly from mysqldump into mysql running
on the linux machine, so you don't need another transfer mechanism
(FTP, scp, etc.).

-Aaron

On Thu, Jun 26, 2008 at 5:09 PM,  [EMAIL PROTECTED] wrote:
 See Thread at: http://www.techienuggets.com/Detail?tx=42035 Posted on behalf 
 of a User

 Hi,

 I'm running MySQL 5.0.51 Community edition. I have a half a gigabyte sized 
 database that I'd like to move from a Windows 2000 based server to a Linux 
 server. What's the best way to do this? I normally take complete backup using 
 MySQL administrator which creates a file with sql to recreate the tables and 
 insert the data. Can I simply ftp this to the Linux box and restore it to a 
 database there? Is there a MySQL administrator for Linux?

 Thanks









InnoDB File Fragmentation

2008-06-20 Thread Aaron Blew
I have a question about how InnoDB deals with fragmentation within
it's data files.  Let me describe my usage scenario to you:

1.) Records are inserted into a InnoDB table.  We'll call this table
A.  It contains several different kinds of columns including
VARCHARs.
2.) Records are then processed by a process running on another server.
 The processed information is then stored in table B (this table
also has VARCHARs), and the row that was processed is DELETEd from
table A.

This happens tens of times per second.  Over time, additional InnoDB
data files have been added because of data growth.

My questions are these:
* How does InnoDB store VARCHAR information?  Is it based on the
column max length?
* How does InnoDB decide to re-use free blocks within the data files?
Are rows prone to fragment?

Thanks,
-Aaron




Re: Block size of filesystem

2008-05-09 Thread Aaron Blew
Will you be using the MyISAM or InnoDB table engines?

I had heard that InnoDB uses 16k blocks internally, so that might be a good
starting point, though I'd love to have someone confirm or deny that this is
actually true.

-Aaron


On Fri, May 9, 2008 at 12:01 AM, Iñigo Medina García 
[EMAIL PROTECTED] wrote:

 Hi friends,

 we're developing a new web application that works with a db around 1gb
 and 30 tables.

 We work with linux, and I'm evaluating the benefit of making an specific
 partition (ext3) for mysql in order to have a bigger Block Size. Block
 Size by default in / partition is 4096.

 Do you think that could be good? Any experience?

 Iñigo

 --
 
 Iñigo Medina García
 Librería Díaz de Santos
 [EMAIL PROTECTED]





Re: Block size of filesystem

2008-05-09 Thread Aaron Blew
That's true in some workloads, but with InnoDB you'll usually run into data
file fragmentation before filesystem fragmentation (unless it's a shared
system).  This is especially true if your application runs a lot of updates
or deletes as random chunks of space will become free at different points
within the InnoDB data file.  In a business intelligence application where
there's not much deleting this probably isn't a concern...

I think the best way to approach it may be to look at your average row size
and base your InnoDB and filesystem block sizes around that.
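The average row size is easy to pull per table (database name below is hypothetical):

SHOW TABLE STATUS FROM mydb;
-- The Avg_row_length column (in bytes) is the figure to compare
-- against candidate filesystem block sizes.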

-Aaron

On Fri, May 9, 2008 at 12:43 AM, Ben Clewett [EMAIL PROTECTED] wrote:



  I would use as large a block size as you dare, especially with InnoDB.
  It makes reading and writing faster, as it cuts down seek time, reduces
  disk fragmentation and avoids extra block reads.  With MyISAM you have
  lots of files, but if you only have a few, it again might work well with
  a large block size.

 Also have a look at the stripe size of your raid system, might work well
 aligning them if you can.  This URL also gives some tips for an ext3 file
 system on RAID, look for 'stride':

 http://insights.oetiker.ch/linux/raidoptimization.html

 Ben

 Iñigo Medina García wrote:

 Hi friends,

 we're developing a new web application that works with a db around 1gb
 and 30 tables.

 We work with linux, and I'm evaluating the benefit of making an specific
 partition (ext3) for mysql in order to have a bigger Block Size. Block
 Size by default in / partition is 4096.

 Do you think that could be good? Any experience?

 Iñigo








Changing ENUM Values

2003-07-17 Thread Aaron Blew
So I've got a fairly big table (20,000 records) which has an ENUM
column.  I need to change the name of one of the ENUM options.  For
example, if the ENUM value was 'blue' before, it needs to be 'purple'
now.  What's the best way to do this?
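A sketch of one candidate approach (table and column names are hypothetical): widen the ENUM to hold both values, migrate the rows, then drop the old value:

-- 1. Allow both values during the transition:
ALTER TABLE mytable MODIFY color ENUM('blue','purple','red') NOT NULL;
-- 2. Move the existing rows over:
UPDATE mytable SET color = 'purple' WHERE color = 'blue';
-- 3. Drop the old value:
ALTER TABLE mytable MODIFY color ENUM('purple','red') NOT NULL;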

Thanks,
-Aaron


---
Aaron Blew
Jackson County School District #6 Network/Systems Analyst
[EMAIL PROTECTED]
(541)494-6900

You can destroy your now by worrying about tomorrow.
-- Janis Joplin

