RE: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Mark Steele

I have no long-running transactions. I believe the deadlock was caused
by a high concurrency of transactions.

As soon as the backup started, mysqldump acquired the global lock, and
seemed to hang during the flush tables step (while hundreds of other
clients tried to execute queries).

Should be easy enough to reproduce, but unfortunately I cannot as this
is my production DB.


-----Original Message-----
From: Devananda [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 10:59 AM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Argh! I should stop writing emails this early in the morning - I always
miss something. I just noticed that, at the end of your first email, you
did say the backup process was stuck on "flush tables". The only way
that I can think of for this to happen is if another thread were holding
a table open for a long transaction. As soon as that thread were to
COMMIT or close, the table should be flushed. Every client that
connected after the backup process /should/ have to wait for the FLUSH
TABLES to complete, but /that/ has to wait for all previous connections.
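
To illustrate the chain (a minimal sketch; the session labels and the
table name are hypothetical):

-- session 1: a transaction that holds a table open and hasn't committed
BEGIN;
SELECT * FROM some_table;

-- session 2: the backup
FLUSH TABLES;              -- waits for session 1 to COMMIT or disconnect

-- session 3: any client that connects afterwards
SELECT 1 FROM some_table;  -- queues behind the pending flush
                           -- ('waiting on table' in the processlist)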

Does that seem to fit what happened?


Regards,
Devananda

RE: mysqldump with single-transaction with high-concurrency DB

2006-07-11 Thread Mark Steele
5.0.22

-----Original Message-----
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, July 11, 2006 9:55 AM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark, appears you're right.  --opt is the default now; it didn't
use to be, I don't think.  Also, my experience with MyISAM is a total
lock on all tables across all databases during a mysqldump ... but you
are using InnoDB, obviously.  I think you're right in your reading of
the docs - that you should be able to keep going during a dump.  Bug?
What version are you on?

Dan

RE: mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Mark Steele

Dan: The options I specified are correct (according to the
documentation) to get a consistent non-blocking snapshot. 
(--single-transaction disables --lock-tables, --opt is the default
behavior for mysqldump).

My question was more in the nature of: will these options work in
high-concurrency situations, or will they cause a deadlock? (Or am I
missing something here?)

The documentation states that --single-transaction will get a global
lock 'for a short period of time', which I took to mean that it would be
short enough not to disturb normal operations (which is what is implied
in the documentation).

If this isn't the case in high-concurrency situations, does anyone have
another method to get a consistent snapshot?

Cheers,

Mark

-----Original Message-----
From: Dan Buettner [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 10, 2006 3:21 PM
To: Mark Steele
Cc: mysql@lists.mysql.com
Subject: Re: mysqldump with single-transaction with high-concurrency DB

Sorry Mark - I thought your question was more of a "does this seem
right" and "how do I" than a "something's wrong here" post.

I think your problem is coming in with the use of --opt.  The article
you reference, where it says "This is an online, non-blocking backup",
makes no mention of --opt, which as you note includes --lock-tables.
From the mysqldump man page:

--lock-tables, -l

    Lock all tables before starting the dump. The tables are locked
    with READ LOCAL to allow concurrent inserts in the case of MyISAM
    tables. For transactional tables such as InnoDB and BDB,
    --single-transaction is a much better option, because it does not
    need to lock the tables at all.

    Please note that when dumping multiple databases, --lock-tables
    locks tables for each database separately. So, this option does not
    guarantee that the tables in the dump file are logically consistent
    between databases. Tables in different databases may be dumped in
    completely different states.

Try running without --opt, possibly specifying the included options
you need individually, and see if that works better for you.
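
For example, something along these lines (a sketch only; this exact flag
combination is untested and simply re-enables everything --opt covers
except --lock-tables and --add-locks):

mysqldump --skip-opt --quick --extended-insert --create-options \
  --add-drop-table --set-charset --disable-keys \
  --single-transaction --master-data=1 --all-databases > dump.txt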

I understand what you're saying about MySQL replication; hence the
need for monitoring the replication to ensure good backups.

Dan




On 7/10/06, Mark Steele <[EMAIL PROTECTED]> wrote:
> Hi Dan,
>
>
>   --single-transaction
>   Creates a consistent snapshot by dumping all tables in a
>   single transaction. Works ONLY for tables stored in
>   storage engines which support multiversioning (currently
>   only InnoDB does); the dump is NOT guaranteed to be
>   consistent for other storage engines. Option
>   automatically turns off --lock-tables.
>   --opt
>   Same as --add-drop-table, --add-locks, --create-options,
>   --quick, --extended-insert, --lock-tables, --set-charset,
>   and --disable-keys. Enabled by default, disable with
>   --skip-opt.
>
> See http://dev.mysql.com/doc/refman/5.0/en/backup-policy.html
>
> These options should produce a non-blocking consistent database
> snapshot.
>
> I can already accomplish this on a slave server, however MySQL
> replication can lead to slave drift as it is statement based (as
> opposed to row-based replication). The only safe way to guarantee a
> real backup in a MySQL replication setup is via snapshots on the
> master.
>
> -----Original Message-----
> From: Dan Buettner [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 10, 2006 2:42 PM
> To: Mark Steele
> Cc: mysql@lists.mysql.com
> Subject: Re: mysqldump with single-transaction with high-concurrency DB
>
> Mark, that's the expected behavior of mysqldump with --opt and
> --single-transaction; it locks all databases and all tables for the
> duration of the dump, ensuring a consistent snapshot.
>
> With a database this size (100 GB), it's an area where throwing
> hardware at the problem may be your best bet.  I suggest one of two
> approaches as possible solutions:
>
> 1) Buy a *really fast* disk array and set it up as striped on a
> superfast connection, like Ultra320 SCSI or fibre.  This will lower
> the amount of time required to write the mysqldump output (which will
> likely exceed 100 GB data size due to overhead within the file).  You
> might even look at 2 disk arrays on 2 channels, striping across both
> the disks in the array and across the arrays.  Pros: fairly easy to
> do, not terribly expensive.  Cons: You still lock up your main
> database server for backups, though possibly for less time than you do
> now.
>
> 2) Buy a second physical server for MySQL and set up replication.
> Then use the replication server to do your backups - provided you
> never let people

mysqldump with single-transaction with high-concurrency DB

2006-07-10 Thread Mark Steele
Hi folks,

 

I've recently tried to do a database backup on a database server that
has a fairly high concurrency rate (1000+ queries/sec) and noticed
that the backup process seemed to deadlock the machine. I had to
resort to extreme measures to get the database back up (killed the
process and had to restart it in recovery mode).

 

The command:

mysqldump --all-databases --opt --single-transaction --master-data=1 > dump.txt

 

All my tables use InnoDB, and the database is about 100 gigabytes in
size.

 

Does anyone have any suggestions for getting consistent database
snapshots? 

 

I tried the InnoDB binary backup tool in the past, but that led to a
corrupted database, and I'm not sure that it would lead to a different
outcome here, as both --single-transaction and the binary backup tool
use the same mechanism (versioning). The documentation describes
--single-transaction as taking a short global lock, which I believe is
the root cause of the deadlock I saw.
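
My rough understanding of the statement sequence these options produce
(an assumption on my part; the docs don't spell it out):

FLUSH TABLES WITH READ LOCK;  -- the short global lock; waits for all open tables
BEGIN;                        -- open the consistent read view
SHOW MASTER STATUS;           -- binlog coordinates for --master-data
UNLOCK TABLES;                -- lock released; tables are dumped inside the transaction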

 

When the server was deadlocked, all the connections were 'waiting on
table', and the backup process was apparently stuck on 'flushing
tables'.

 

Cheers,

 

Mark Steele
Information Systems Manager

Zango

E: [EMAIL PROTECTED]
P: 514.787.4681 | F: 514.787.4707

www.zango.com

Read our blog at http://blog.zango.com

 



RE: MySQL support for AMD64

2004-12-03 Thread Mark Steele
Hi there,

I have had good success with Gentoo (full source
install/compile) and MySQL's binaries on Opteron systems.

We're running two Opteron boxes (one with 16 GB of RAM,
one with 32 GB of RAM) with no problems.

We are running at about 1000-2000 queries per second
(mostly inserts).

Cheers,

Mark Steele
Implementation Director
CDT Inc.

-----Original Message-----
From: Lynn Bender [mailto:[EMAIL PROTECTED] 
Sent: November 30, 2004 2:23 PM
To: [EMAIL PROTECTED]
Subject: MySQL support for AMD64


I just received a box with the following specs:

Dual AMD64
8G ram
Two 3ware 2.4 terabyte RAID 5 arrays.

My company has been using Redhat for most of its production machines.

1. Does anyone have any success/horror stories running MySQL 4.0.x on
RHES 3/ AMD64?

2. Does anyone have alternate recommendations for running MySQL
databases in the terabyte range on AMD64?

Thanks
Lynn Bender




RE: Fast method needed to determine if a table is corrupt

2004-11-10 Thread Mark Steele
Hi All,

Suggesting InnoDB tables as the solution is incorrect.
I've been running some fairly large InnoDB databases,
and crashes using InnoDB are probably a lot worse than
with MyISAM tables.

InnoDB tables tend to corrupt very easily on such things
as power outages, with a 'corrupted page data' error, which means
that MySQL doesn't start at all; the only option is to
start InnoDB in recovery mode, dump, and hope for the best.

How big is your database? If it can fit in RAM, I'd suggest
using a ramdisk to store your database with snapshots taken
every X minutes and stored to disk (or using NVRAM to store
the database).

Other than that, there's no quick way to check for corruption
that I know of.

Cheers,

Mark Steele
Implementation Director
CDT Inc.


-----Original Message-----
From: Dan Nelson [mailto:[EMAIL PROTECTED] 
Sent: November 8, 2004 12:43 PM
To: Tim Murtaugh
Cc: '[EMAIL PROTECTED]'
Subject: Re: Fast method needed to determine if a table is corrupt


In the last episode (Nov 08), Tim Murtaugh said:
> I'm using MySQL server version 4.0.15a in an embedded environment (as
> a standalone server, I'm not using the embedded server library). I
> have 128 MB of memory and disk space is tight. I'm using MyISAM
> tables.
>
> If my system loses power, some tables are left in a corrupt state. As
> stated in the MySQL documentation, I think the data tables are OK, it's
> just that the tables were not closed properly and are considered
> corrupt by MySQL.
>
> I need a FAST way to determine if a table is corrupt. I've tried
> myisamchk --fast and --check-only-changed options, and increased the
> buffer sizes (-O key_buffer_size and -O sort_buffer_size), as
> mentioned in the documentation. The fastest time I can achieve is
> 6:55.
>
> I've also tried "CHECK TABLE tablename FAST QUICK" on a table I know
> is marked as corrupt, and the fastest time I can achieve is 6:58.
>
> I need to determine if a table is corrupt within a few SECONDS, not
> minutes. How can I do this?

Make your tables smaller? :)  You have to check each record to see that
it's okay.  If your tables are big, you have to spend time reading them.

> The documentation says there is a flag in MyISAM tables that indicates
> when a table is corrupt. Is there a way I can quickly check this flag?

If mysql tries to read a record or index and can't, it sets this flag to
keep you from accessing the table until you repair it.

You may be better off using InnoDB tables and taking the
space/performance hit.  InnoDB uses a logfile to allow it to roll back
partially-committed transactions after a crash, so you never have to
check or repair your tables.

-- 
Dan Nelson
[EMAIL PROTECTED]




memory utilization

2004-10-01 Thread Mark Steele
Hi folks,

I have to set up some high-performance servers
that will be used for MySQL databases, and I have
a couple of questions regarding MySQL running on
Linux AMD64 (Opteron).

We are looking at setting up these machines
with 16-64 GB of RAM; can MySQL running on Linux
handle this amount of RAM efficiently? Also,
most of the tables (almost all of them) will
be using the InnoDB storage engine; any pointers
on what configuration settings we should use?
(for example, on a 16 GB RAM server)

Anyone have experience with this kind of setup?

Regards,


Mark Steele
Implementation Director
CDT Inc.
Tel: (514) 842-7054
Fax: (514) 221-3395 
 




RE: Innodb assertion failure after binary backup-restore

2004-07-28 Thread Mark Steele
Hi Sp,

The best advice I can give you is to implement a replication slave
and perform hot backups using the InnoDB ibbackup tool from the slave
server. This ensures that you maintain high availability and
disaster recovery in case of catastrophic failure.

The ibbackup tool from InnoDB (http://www.innodb.com) works
very well even on large databases (currently we back up over
40 GB/day).

Cheers,

Mark Steele
Implementation Director
CDT Inc.

-----Original Message-----
From: Sp.Raja [mailto:[EMAIL PROTECTED] 
Sent: July 27, 2004 10:51 AM
To: Heikki Tuuri; [EMAIL PROTECTED]
Subject: Re: Innodb assertion failure after binary backup-restore


Thanks for your replies.

Now I have three ways to go:

1. Replication
2. InnoDB hot backup tool
3. Make sure that no one is writing into the database, and start the
backup when "Modified db pages" in "BUFFER POOL AND MEMORY" becomes zero:

#FLUSH TABLES WITH READ LOCK
still_to_flush=1
while [ "$still_to_flush" != 0 ]
do
    still_to_flush=`/usr/local/mysql/bin/mysql -e "SHOW INNODB STATUS\G" | grep "Modified db pages" | awk '{print $4}'`
    sleep 1
done
#UNLOCK TABLES

Do you think #3 will work?

Regards,
Sp.Raja

> Original Message
> From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Date: Tue, Jul-27-2004 6:24 PM
> Subject: Re: Innodb assertion failure after binary backup-restore
> 
> Hi!
> 
> sync will not help.
> 
> You can run SHOW INNODB STATUS\G to monitor when InnoDB has flushed 
> its buffer pool.
> 
> Best regards,
> 
> Heikki
> Innobase Oy
> InnoDB - transactions, row level locking, and foreign keys for MySQL 
> InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up 
> MyISAM tables
> http://www.innodb.com/order.php
> 
> Order MySQL support from http://www.mysql.com/support/index.html
> 









RE: Script to purge

2004-07-09 Thread Mark Steele
Hi Luis,

I've made a small script to do this; it's in the 'backing up mysql
databases' section on http://www.control-alt-del.org/code

Basically, you probably want to save the binary logs by archiving
them (in case of a database crash). On a binary log I get about
60-80% compression ratios, so it's worth just archiving them.

Here's the script (see the web site for explanations)

#!/usr/bin/perl
## Binary log backup utility
## Author: Mark Steele <[EMAIL PROTECTED]>
## Bug: This script will fail when you hit binlog #999 and roll over to 1
##      I'm too lazy to fix this just now, you've been warned.
use DBI;
use strict;

my $DBUSER = 'user';
my $DBPASSWORD = 'password';

## where the binlogs are eg: /usr/local/mysql/data
my $PATH_TO_DATA = '/path/to/mysql/data/files'; 

chomp(my $HOSTNAME = `hostname -s`);  ## strip the trailing newline

## Which server to copy binary logs to
my $REMOTE_SERVER = 'backupserver.yourdomain.com'; 

## Path on remote machine where you want the backups to go
my $REMOTE_PATH = '/tmp'; 

my $dbh = DBI->connect("DBI:mysql:database=mysql;host=127.0.0.1",
 $DBUSER,$DBPASSWORD) || die;

## Figure out the current binary log file
my $sth = $dbh->prepare("SHOW MASTER STATUS");
$sth->execute();

my $lastfile = $sth->fetchrow_hashref()->{File};

## Get log list from server
$sth = $dbh->prepare("SHOW MASTER LOGS");
$sth->execute();

my @files;
while (my $ref = $sth->fetchrow_hashref()) {
last if ($ref->{Log_name} eq $lastfile);
push(@files,$ref->{Log_name});
}

exit 0 unless @files;  ## nothing to rotate: only the current log exists

## Figure out first and last binlog numbers
$lastfile =~ /\.(\d+)$/;
my $lastno = $1 - 1;
$files[0] =~ /\.(\d+)$/;
my $firstno = $1;

## Make a list of the files to back up
my $filelist = join(" ",@files);
chdir($PATH_TO_DATA);

## Back up the binary logs, and remove them once they are backed up
`tar cvfj $PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.bz2 $filelist`;
$dbh->do("PURGE MASTER LOGS TO '$lastfile'");
$dbh->disconnect;

## Copy to the remote machine, comment this out if you don't want to
## back up your binary logs to a remote machine
`/usr/bin/scp -i /root/.ssh/backup-key $PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.bz2 $REMOTE_SERVER:$REMOTE_PATH`;

## Remove the local copy (comment this out if you aren't
## backing up to a remote server)
unlink("$PATH_TO_DATA/$HOSTNAME-binlogs-$firstno-$lastno.tar.bz2");
  

Cheers,

Mark Steele
Implementation Director
CDT Inc.

-----Original Message-----
From: Luis Mediero [mailto:[EMAIL PROTECTED] 
Sent: July 7, 2004 1:43 PM
To: [EMAIL PROTECTED]
Subject: Script to purge

Hi,

I would like to write a script to purge the master logs every night
with a cron process. I need to do it every night after loading a lot
of data into the master. I know that if I do 'show master status' I
can see the file_name of the last log file and then do -purge master
logs to 'file_name'-. Is it possible to put the file name into a
variable and then do -purge master logs to '$variable'- in a script?
Does someone have an example?

I can't use 'PURGE MASTER LOGS BEFORE' because I use the 4.0.20
version. :-(


TIA

Luis
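
The idea, as a minimal shell sketch (untested; it assumes the first
field of the SHOW MASTER STATUS output row is the current log file name,
and that the client's -N option is available to skip the column header):

#!/bin/sh
## Grab the current binary log name and purge everything before it.
LASTLOG=`/usr/local/mysql/bin/mysql -N -e "SHOW MASTER STATUS" | awk '{print $1}'`
/usr/local/mysql/bin/mysql -e "PURGE MASTER LOGS TO '$LASTLOG'"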







long queries with InnoDB...

2004-07-08 Thread Mark Steele
Hi folks, 

Is there a way to speed up count(*) operations on InnoDB tables?

I have a table with roughly 18 million rows in it which has the
following structure:

CREATE TABLE `tbl_test` (
  `col_serial` varchar(32) default NULL,
  UNIQUE KEY `col_serial` (`col_serial`)
) TYPE=InnoDB;

+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| col_serial | varchar(32) | YES  | MUL | NULL    |       |
+------------+-------------+------+-----+---------+-------+

Running a count(*) on the table takes about 2 hours, which
seems rather long. The only strange thing I've thought of looking
into so far is that the person who made this table used a bit of
a weird setup (a unique key instead of a primary key).

Am I missing something or is this normal for InnoDB tables?

Regards,

Mark Steele
Implementation Director
CDT inc.






RE: slave hotbackup question

2004-03-08 Thread Mark Steele
Heikki,

So if I understand this correctly, the following procedure could be used
to create a hot backup from a slave which could be used to create a new
slave server (steps 2-4 are sketched below):

1) Create a hot backup of the InnoDB tables on the slave with ibbackup
(with --suspend-at-end)
2) Once ibbackup suspends near the end of the backup, create a hot
backup of the MyISAM tables with the same kind of procedure as
mysqlhotcopy (flush tables with read lock, copy the *.frm|*.MYI|*.MYD
files)
3) Before releasing the read lock, get the master log file and log file
position from the slave and save this information for setting up the new
slave (on the slave, show slave status and store the values of
Master_Log_File and Read_Master_Log_Pos)
4) Unlock the tables, and allow ibbackup to finish its backup
5) Resume normal operation
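
Something like this from a second connection while ibbackup is suspended
(a minimal sketch; the file copy is a placeholder):

FLUSH TABLES WITH READ LOCK;
SHOW SLAVE STATUS;  -- record Master_Log_File and Read_Master_Log_Pos
-- copy the *.frm, *.MYI and *.MYD files to the backup area here
UNLOCK TABLES;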

Personally I would much rather be able to set up new slaves from another
slave than having to take snapshots of the master, as this is disruptive
in a mixed InnoDB/MyISAM environment (we are doing somewhere in the
vicinity of 500 queries/sec at peak times).

Regards,

Mark Steele
Implementation Director
CDT Inc.
[EMAIL PROTECTED]

-----Original Message-----
From: Heikki Tuuri [mailto:[EMAIL PROTECTED] 
Sent: March 5, 2004 1:28 PM
To: [EMAIL PROTECTED]
Subject: Re: slave hotbackup question

Mark,

http://www.innodb.com/manual.php#advanced.slave describes how to set up
a new slave from a hot backup of a master.

If you take a hot backup of a slave, and want to use it to set up a new
slave, then the important coordinate is the MASTER binlog position that
mysqld prints when you start mysqld on that new hot backup (do not mix
this with the binlog position of the slave's own binlog). The new slave
should start replication from that master binlog position.

Relay logs etc. are not needed in the backup. Only:

1) .frm files,
2) .MYI and .MYD files,
3) ibdata files,
4) ib_logfiles (these are generated in the ibbackup --apply-log phase),
5) .ibd files if you are using >= MySQL-4.1.1,
6) default charset files in database directories if you are using >= 4.1
and have worked with exotic charsets.
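
On the new slave, that master coordinate is then used along these lines
(a sketch only; the host and file names are placeholders):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_LOG_FILE='master-bin.001',  -- the MASTER binlog coordinates
  MASTER_LOG_POS=4;                  -- printed when the restored backup starts
START SLAVE;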

Best regards,

Heikki
Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM tables
http://www.innodb.com/order.php

Register now for the 2004 MySQL Users Conference!
http://www.mysql.com/events/uc2004/index.html

...
From: Mark Steele Date: March 3 2004 6:53pm
Subject: slave hotbackup question

Hi folks,

I have a question regarding backups taken from a running slave. I have a
slave replicating from a master server, and do a hot backup using the
following script:

...

My question is, suppose I would like to use one of these backups to
create a new slave server, what additional information would I need to
include in the backup to be able to start up a new slave properly?
(relay logs/info files, etc...)

Ideally, I would like these snapshots to be taken off the running server
while it is in use, is there any way to do this?

Regards,

Mark Steele
Implementation Director
CDT Inc.
[EMAIL PROTECTED]






slave hotbackup question

2004-03-03 Thread Mark Steele
Copying INDEX $_\n";
$dbh->do("SELECT UNIX_TIMESTAMP()");
open(FIDX,"<$_") || die "couldn't open index... ack!";
my $buff;
my $length = read(FIDX,$buff,2048);
#die "Can't read index header from $_\n" if ($length < 1024);
close(FIDX);
open(FIDX,">/storage/backups/backup-$date$_") || die "couldn't open file for copy: $_: $!";
if (syswrite(FIDX,$buff) != length($buff)) {
    die "Error when writing data to /storage/backups/backup-$date$_: $!\n";
}
close FIDX || die "Error on close of /storage/backups/backup-$date$_: $!\n";
}
}
closedir(DIR);

print "Removing /storage/backups/backup-$date/sql/ibbackup_suspended\n";
unlink("/storage/backups/backup-$date/sql/ibbackup_suspended");

print "Waiting for ibbackup to finish\n";
waitpid($ibbackup_pid,0);

print "Unlocking tables\n";
$dbh->do("UNLOCK TABLES");
$dbh->do("DROP TABLE IF EXISTS ibbackup_binlog_marker");

$dbh->disconnect;

print "Compressing the backup\n";
## Tar the backup
`tar cfz /storage/backups/sql-backup-$date.tar.gz /storage/backups/backup-$date`;

print "Removing the backup folder\n";
## Remove the folder
`rm -rf /storage/backups/backup-$date`;

print "Removing backup from 2 days ago\n";
## Remove the 2 day old backup
($d,$m,$y) = (localtime(time()-60*60*24*2))[3..5];
$date = sprintf("%d-%02d-%02d",$y+1900,$m+1,$d);
unlink("/storage/backups/sql-backup-$date.tar.gz");
## END



This script uses the ibbackup tool from InnoDB (non-free) to take a hot
backup of the InnoDB tables; near the end of the InnoDB backup, ibbackup
waits for a backup to be taken of the MyISAM tables and table definitions
(lock tables with read lock, copy the *.FRM, *.MYI, *.MYD files, unlock).

Once the MyISAM tables and table definitions are backed up, ibbackup
resumes, and theoretically I _should_ have a consistent backup of the
whole database server once ibbackup is finished. (Although I truncate
the MyISAM index files, they can be reconstructed with myisamchk -r.)

My question is, suppose I would like to use one of these backups to
create a new slave server, what additional information would I need to
include in the backup to be able to start up a new slave properly?
(relay logs/info files, etc...)

Ideally, I would like these snapshots to be taken off the running server
while it is in use, is there any way to do this?

Regards,

Mark Steele
Implementation Director
CDT Inc.
[EMAIL PROTECTED]

PS: the Perl code is sloppy and needs more error checking, etc... use at
your own risk.


