Re: Setup a replication slave without stopping master

2009-01-08 Thread Jed Reynolds

Baron Schwartz wrote:

In contrast, InnoDB actually needs to shut down to cleanly close its table
structures before you can physically copy the filesystem.



Actually, not true -- an LVM snapshot (or other snapshot) is a great
way to take a backup of InnoDB.  You just need a truly atomic
snapshot, and then you can let InnoDB run its recovery routine on the
snapshot to get back to a consistent state.
  


Fascinating. From reading the MySQL docs, I would never have assumed 
that I was doing the right thing by taking a snapshot of a live InnoDB 
instance with a flush tables. I will certainly keep this in mind for 
the future. My current snapshotting procedure takes between 45s and 90s 
depending on which instance I snapshot, and that's about 20G of data 
that I then start copying over. That's not counting the time to copy 
anything from the snapshotted volume.
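For anyone wanting to try this, the recipe might look something like the
following dry-run sketch (volume group, sizes, and the backup host are
invented; each command is echoed rather than executed):

```shell
# Dry-run sketch of snapshotting a live InnoDB instance: the snapshot is
# atomic, and InnoDB's crash recovery makes the copy consistent the first
# time a mysqld is started against it. VG/LV names, sizes, and the backup
# host are assumptions.
run() { echo "+ $*"; }   # change to: run() { "$@"; } to execute for real

lvm_innodb_backup() {
  run lvcreate --snapshot -L 2G -n mysql-snap /dev/vg0/mysql  # atomic point-in-time copy
  run mount /dev/vg0/mysql-snap /mnt/mysql-snap
  # Copy the raw files off; starting mysqld against the copy later
  # triggers InnoDB crash recovery and yields a consistent datadir.
  run rsync -a /mnt/mysql-snap/ backuphost:/backups/mysql/
  run umount /mnt/mysql-snap
  run lvremove -f /dev/vg0/mysql-snap                         # stop copy-on-write overhead
}
lvm_innodb_backup
```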



Why do I leave it firewalled? Because once you start writing to an LVM
volume that's been snapshotted, you start copying disk extents like mad,
creating a high load condition that can force queries to reach
connect_timeout.  I have my connect_timeout set pretty low in my
environment.



That will depend a lot on the workload.
  


Yes, very pertinent point. I should have qualified that I have a 
write-intensive environment.


I used to just drop the firewall after I restarted mysql, but when my 
application's reliability was criticized during snapshots, I had to 
leave the firewall up until the snapshot had been copied off before 
pooling the server back in. Luckily I have four servers, and there are 
only rare conditions when I need to switch masters. I'm grateful that I 
have a maintenance window for the site, too.


Thank you, Baron!

Jed


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: long login times?

2009-01-06 Thread Jed Reynolds

Jim Ginn wrote:

Jed:

Are you using an in-network IP to access the database ie:

127.0.0.1
192.168.98.##

or are you using a public IP or full domain name ie:

DB001.MY-DOMAIN.COM
...
  



A very reasonable question. I'm connecting using IP addresses.

$servers = array( '192.168.44.84','192.168.44.94','192.168.44.104' );
foreach( $servers as $server )
{
   $delta['connect_start'] = microtime( true );
   @$db->real_connect( $server, $user, "secret", $dbname );
   $delta['connect_end'] = microtime( true );
   $d = $delta['connect_end'] - $delta['connect_start'];
   if( $d > 0.5 ) error_log( "Login attempt took $d seconds" );
   if( mysqli_connect_error() ) error_log( "Login timeout for $server" );
}






Re: Setup a replication slave without stopping master

2009-01-06 Thread Jed Reynolds

Claudio Nanni wrote:

All, Happy New Year, and let's hope somebody tries to stop the killing in
Gaza.

I need to set up a replication slave off a master that is currently in
production and not stoppable or lockable (possibly).
  


If you are using InnoDB, there is a --single-transaction backup method 
( http://lists.mysql.com/replication/1235 ); however, you will need to 
try it in your staging environment under realistic load to see whether 
running that kind of transaction creates an unreasonable load spike or 
memory usage.
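Sketched as a command line (database and file names are made up, and this
just prints the command rather than running it; --master-data=2 records
the binlog coordinates in the dump as a comment so the new slave can be
pointed at them):

```shell
# Dry run: build and print the dump command instead of executing it.
# "mydb" and the output file name are hypothetical.
dump_cmd="mysqldump --single-transaction --master-data=2 --quick mydb"
echo "$dump_cmd > mydb-for-slave.sql"
```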



Do you have any idea on how to setup the slave with minimum or no impact on
the master?
  


If you are using LVM, you might consider snapshotting; however, a live 
snapshot without stopping the mysql server would only work if you were 
copying only MyISAM tables. mysqlhotcopy would probably be better, but 
either way you need to flush your tables, which will briefly lock them, 
so they get onto disk.


In contrast, InnoDB actually needs to shut down to cleanly close its 
table structures before you can physically copy the filesystem.


I use a method where I flush the tables, firewall off the system, shut 
down mysql, take an LVM snapshot, start the mysql server, and then copy 
the snapshot off before unfirewalling it.
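That procedure, as a dry-run shell sketch (service names, volume names,
and the firewall rule are illustrative only; each command is echoed, not
executed):

```shell
# Dry-run sketch of the flush / firewall / stop / snapshot / restart /
# copy / unfirewall cycle described above. All names are assumptions.
run() { echo "+ $*"; }   # change to: run() { "$@"; } to execute for real

snapshot_cycle() {
  run mysql -e 'FLUSH TABLES'
  run iptables -I INPUT -p tcp --dport 3306 -j DROP          # firewall off the system
  run service mysqld stop
  run lvcreate --snapshot -L 5G -n mysql-snap /dev/vg0/mysql
  run service mysqld start                                   # back up, still firewalled
  run mount /dev/vg0/mysql-snap /mnt/snap
  run rsync -a /mnt/snap/ backuphost:/backups/mysql/         # copy the snapshot off
  run umount /mnt/snap
  run lvremove -f /dev/vg0/mysql-snap
  run iptables -D INPUT -p tcp --dport 3306 -j DROP          # pool back in
}
snapshot_cycle
```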


Why do I leave it firewalled? Because once you start writing to an LVM 
volume that's been snapshotted, you start copying disk extents like mad, 
creating a high load condition that can force queries to reach 
connect_timeout.  I have my connect_timeout set pretty low in my 
environment.




The database is about 80GB.
  

Consider the transfer time with a dataset this large.

I would have a business-level meeting with stakeholders, telling them 
the possible risks, and adjust their expectations for uptime or service 
availability. Write some scripts to automate the transfer, however you 
do it, so that you don't fat-finger the process in production. Test and 
time your scripts in a staging environment. Use this data, adjusted as 
necessary for production load, to set stakeholder expectations.


Good luck!

Jed





long login times?

2009-01-05 Thread Jed Reynolds
I've started logging how long my php application takes to log in to 
mysql, and often it's well below 1 millisecond. However, I'll sometimes 
get a wave of long login attempts that take 3 to 6 seconds. I've 
enabled skip-name-resolve, and I think that helps. There are no 
long-running queries on the system, and there are only a dozen 
accounts. The system is not under stress: load is about 0.9 with four 
cores, 2GHz, 4GB RAM, and no swapping (CentOS 5, MySQL 5.0.45).


I'd hate to set the connect timeout to 1 second and fail-and-retry in 
under a second just to get a connection as fast as possible. Any 
thoughts on how I might keep login times consistently low?


Thanks

Jed




Re: recurring corrupt table?

2008-11-20 Thread Jed Reynolds

Jed Reynolds wrote:
I'm seeing errors in my application, coming from simple selects, every 
few hours:


I do a flush table activity_profiles; check table activity_profiles; 
and the table seems to have fixed itself.


Any thoughts?


Thanks for the thoughts, guys. It turns out one of my co-masters 
regressed its my.cnf file, ending up with the same 
auto_increment_offset value. I think replication was creating 
primary-key collisions that were then re-written by concurrent 
activity. Lost data resulted, but the table didn't get trashed.


Jed




Re: Using Replication in mySQL version 5

2008-11-20 Thread Jed Reynolds

Tompkins Neil wrote:

Hi
We are looking to upgrade our version of mySQL to the latest version of
mySQL 5.  One of the main features we are going to think about using is
replication for our website data.  Basically we have 2 websites located in
the UK and US which share similar information, and we are going to be using
replication as a way of keeping the data up to date and in sync.

Based on your experiences, is there anything we should be aware of before
investigating this route and putting it into practice?
  


I've had to take servers out for bad raid controllers, bad RAM, and bad 
mobos; disks have been the least of my problems. So make sure your 
architecture lets you take members of your pool out without 
load-spiking the remaining members. And if you're doing filesystem 
snapshots from a master to a replicant, you will have to have either a 
policy or extra servers available to maintain your uptime when you 
interrupt the master to flush all the tables, sync the filesystem, and 
do an LVM snapshot. InnoDB would require a shutdown. Don't forget that 
LVM snapshots are copy-on-write, so when that master comes back up and 
starts processing writes, you'll get amazing system load on a busy 
system as your filesystem starts madly copying extents into the 
snapshot volume.


Define a procedure for junior staff on how to properly down and up a 
pool member. For example, if a disk-full condition on one member borks 
replication, what's the step-by-step for a) determining whether 
replication can re-establish after you do a FLUSH LOGS, and b) deciding 
when you have to re-copy all data from one master to another because 
your replication window has expired and your logs have been flushed? 
Your replication binlogs get really big if you're regularly pushing 
large materialized views via replication, or if your servers have fast 
disks but not enough space to handle more than a weekend or a whole day 
(for example) of neglect.


Define a procedure for checking your my.cnf files for correct 
auto-increment-* and server-id settings. Junior staff, and even senior 
staff, rarely add members to the pool, so these settings are often 
gotten wrong during a midnight maintenance hour. A procedure for adding 
members and changing master replication settings is very important; 
often your DBA is not the one racking and changing the equipment.
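A rough way to automate that check, demonstrated on two throwaway sample
files rather than a real pool (on real servers you'd fetch each node's
my.cnf over ssh; all paths here are scratch files):

```shell
# Write two sample my.cnf fragments, then flag any duplicated server-id
# lines -- the mistake that caused the corrupt-table thread above.
cat > /tmp/node1.cnf <<'EOF'
server-id = 1
auto_increment_increment = 2
auto_increment_offset = 1
EOF
cat > /tmp/node2.cnf <<'EOF'
server-id = 2
auto_increment_increment = 2
auto_increment_offset = 2
EOF

# uniq -d prints only lines that appear more than once, i.e. collisions.
dupes=$(grep -h '^server-id' /tmp/node1.cnf /tmp/node2.cnf | sort | uniq -d)
if [ -z "$dupes" ]; then
  echo "OK: server-ids are distinct"
else
  echo "ERROR: duplicate server-id lines: $dupes"
fi
```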


Make sure you have a good understanding of the rate at which your 
capacity needs are growing. I started a project with two four-core 
boxes with plenty of 15krpm disk, and when they got into production, 
they regularly spiked to load 20 and 30. Not pretty. My old 
architecture had refused traffic to lighten the load; my new 
architecture didn't. My data set was growing so fast that the 
sort-buffer settings from the old servers were too small for the new 
ones. I ended up with four DL380s with 8 cores per box, and I really 
had to scramble to get the extra servers in there. The addition of two 
more read-only members really helped, and backups were handled by 
replication to an off-site replicant.


Another load-capacity warning: if your traffic is very spiky and you 
get high-load conditions, I've seen reset/dropped connections and also 
plain old connection timeouts. So even if you have RAM for 1024 
connections, you probably can't service 1024 connections when you've 
got table contention, and connections from your web nodes just start 
failing. If they fail for too long, you then have to do a FLUSH HOSTS 
to reset the connection-attempt counters.


I don't know what your application does, but I certainly monitor 
replication lag, and load spikes can certainly increase lag. I've had 
to move from a single instance of mysql to mysqld_multi and separate 
databases by replication rate. Your monitoring should also track the 
slave SQL threads, and you might need to define a procedure for 
pooling-out members that fall too far behind in replication.
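A minimal lag probe along those lines, shown against canned SHOW SLAVE
STATUS output (the threshold is an arbitrary example; on a real slave
you'd fill $status from `mysql -e 'SHOW SLAVE STATUS\G'`):

```shell
# Parse Seconds_Behind_Master out of SHOW SLAVE STATUS-style text and
# warn past a threshold. The canned $status line stands in for real
# server output so the parsing itself can be demonstrated.
status="Seconds_Behind_Master: 42"
lag=$(echo "$status" | awk -F': ' '/Seconds_Behind_Master/ { print $2 }')
threshold=30   # assumed alarm threshold, in seconds
if [ "$lag" -gt "$threshold" ]; then
  echo "WARN: replication lag ${lag}s exceeds ${threshold}s"
fi
```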


I've written an iptables script to block webnode connections but allow 
sql pool member connections. I use this to take a member out to run 
table repairs or to lighten the load while it does replication catch-up.
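Such a script might look roughly like this dry-run sketch (the peer
addresses are the example IPs from earlier in this digest; nothing is
actually executed):

```shell
# "Pool-out" firewall sketch: accept 3306 from the other SQL pool
# members, then drop everything else on 3306 (the web nodes). Addresses
# are illustrative; run() echoes instead of executing.
run() { echo "+ $*"; }

pool_out() {
  for peer in 192.168.44.84 192.168.44.94 192.168.44.104; do
    run iptables -I INPUT -p tcp --dport 3306 -s "$peer" -j ACCEPT
  done
  run iptables -A INPUT -p tcp --dport 3306 -j DROP
}
pool_out
```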


WAN connectivity for replication is interesting! I did site-to-site 
transfer using stunnel, and I had to negotiate weird Cisco 5502 VPN 
behavior. Copying gigs of MyISAM files between sites would knock over 
my VPN, so I had to rate-limit using rsync --bwlimit. Bursting 
bandwidth charges were still brutal, though. Later, we ended up 
configuring CBQ (search freshmeat.net for cbq-init) on my backup 
replicant to limit bandwidth so it wouldn't provoke bursting charges.
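For example (host, paths, and the bandwidth cap are placeholders; the
command is only printed here):

```shell
# Rate-limited site-to-site copy so the transfer doesn't knock over the
# VPN or trip bursting charges. Tune --bwlimit (KB/s) to your link.
bwlimit_kbps=5000
xfer_cmd="rsync -a --bwlimit=$bwlimit_kbps /var/lib/mysql/ remote-site:/var/lib/mysql/"
echo "$xfer_cmd"   # inspect first; run with: eval "$xfer_cmd"
```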


Jed





recurring corrupt table?

2008-11-13 Thread Jed Reynolds
The application is in PHP 5.1.6. I'm running MySQL 5.0.45 on CentOS 5 
on an HP DL380 with 8G RAM and 15krpm RAID10 disks. Tables are MyISAM.


I'm seeing errors in my application, coming from simple selects, every 
few hours:


SELECT id, host_id, uri_id, profile 
FROM activity_profiles 
WHERE 1=1  AND id IN (48823962 )


Table 'activity_profiles' is marked as crashed and should be repaired.

My application isn't reporting errors when writing, though.

I do a flush table activity_profiles; check table activity_profiles; 
and the table seems to have fixed itself.


Any thoughts?

TIA,

Jed




Re: Importing and exporting from MySQL, escape slash problem

2008-03-02 Thread Jed Reynolds

Dave M G wrote:

PHP List, MySQL List

In my PHP environment, I have Magic Quotes turned off, and I use the 
mysql_real_escape_string() function clean strings of SQL syntax before 
inserting them into my database.


So the data stored in my database does not have escape characters in 
it. Particularly, double and single quotes don't have slashes in front 
of them.


This seems to work fine so long as I'm reading data into and out of 
the database from within my scripts.


However, when I backup and import databases - I use the phpMyAdmin 
interface - they have escape slashes in front of every double and 
single quote characters. I'm not sure if it's on the export or import 
where they get added in.


It probably depends on what you mean by exporting. If you do a SELECT * 
INTO OUTFILE, it might remove them. If you're doing a mysqldump, it 
will add them, because it's constructing SQL statements and the quotes 
have to be escaped.


I would take phpMyAdmin out of the equation and just use mysqldump. You 
should have no problem doing something like creating a scratch table, 
dumping it, and re-importing from the dump.


mysql> CREATE TABLE B LIKE A;
mysql> INSERT INTO B SELECT * FROM A WHERE c LIKE "%'%" LIMIT 10;
bash$ mysqldump --opt frumble B > b.sql
bash$ mysql -u user -p frumble < b.sql

This should replace table B in database frumble without incident. If 
you're having trouble importing a mysqldump using phpMyAdmin, it might 
be simpler to skip it and use mysqldump instead.


HTH

Jed




Re: mysqldump: Got error: 1: Can't create/write to file 'dumpdir/tablename.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'

2008-02-22 Thread Jed Reynolds

Mike Spreitzer wrote:
So I am trying to use mysqldump --tab for the first time.  I am running 
into the same problem everybody does on their first try --- Errcode: 13. I 
have set the permissions on the target directory to be completely liberal 
--- anybody can do anything with it --- and I still get Errcode: 13.  I 
can even write into that directory when logged in as mysql (UID 100, GID 
100, and yes that is what the mysqld process is running as).  What's going 
wrong here?
  


SELinux enabled?

What's the syntax of the command you're using?
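A couple of quick, guarded checks along those lines (paths assume a
CentOS/RHEL-style box; both checks are guarded so the script is safe to
run anywhere):

```shell
# Check for an Errcode 13 that directory permissions alone don't
# explain: SELinux enforcement mode and recent AVC denials.
selinux_check() {
  if command -v getenforce >/dev/null 2>&1; then
    echo "SELinux mode: $(getenforce)"    # "Enforcing" can block mysqld writes
  else
    echo "SELinux tools not installed"
  fi
  if [ -r /var/log/audit/audit.log ]; then
    grep -i avc /var/log/audit/audit.log | tail -5   # recent denials, if any
  fi
}
selinux_check
```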

Jed




Re: mysqldump: Got error: 1: Can't create/write to file 'dumpdir/tablename.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'

2008-02-22 Thread Jed Reynolds
What happens when you delete the files that are already in there? It 
looks like you're dumping to a file owned by root.


Jed




Re: Move couple files to another directory/hard drive

2008-02-22 Thread Jed Reynolds

Grzegorz Paszka wrote:

Hi.

I have an extra hdd only for the mysql /var/lib/mysql directory.
I have one big database with ten different tables in it.
I had to move three of them into another directory on a different hdd
because of lack of disk space in /var/lib/mysql.
Is this possible, and how can I do it?

Regards.
  


http://dev.mysql.com/doc/refman/5.0/en/symbolic-links-to-tables.html
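The approach on that page, sketched as a dry run (table and path names
are hypothetical; only the .MYD/.MYI files are symlinked, never the
.frm, and each command is echoed rather than executed):

```shell
# Move a MyISAM table's data/index files to another disk and symlink
# them back into the datadir. The server should be stopped (or the
# table locked) while the files move.
run() { echo "+ $*"; }   # change to: run() { "$@"; } to execute for real

relocate_table() {
  run service mysqld stop
  for ext in MYD MYI; do
    run mv "/var/lib/mysql/bigdb/bigtable.$ext" /mnt/disk2/bigdb/
    run ln -s "/mnt/disk2/bigdb/bigtable.$ext" "/var/lib/mysql/bigdb/bigtable.$ext"
  done
  run service mysqld start
}
relocate_table
```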

Jed




Re: mysqldump: Got error: 1: Can't create/write to file 'dumpdir/tablename.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'

2008-02-22 Thread Jed Reynolds

Mike Spreitzer wrote:


Sorry if I have confused people by presenting evidence in the wrong 
order. Also, it seems that the .sql file is written by the client while 
the .txt file is (attempted to be) written by the server. Here is a 
single typescript with all the evidence:

[EMAIL PROTECTED] ~]# cd /
[EMAIL PROTECTED] /]# rm -f dump1/*
[EMAIL PROTECTED] /]# ls -ld dump1
drwxrwxrwx  2 mysql mysql 4096 Feb 22 20:45 dump1
[EMAIL PROTECTED] /]# ls -l dump1
total 0
[EMAIL PROTECTED] /]# mysqldump -u root -p --skip-opt --quick --tab=/dump1 wyky red1_p2
Enter password:
mysqldump: Got error: 1: Can't create/write to file '/dump1/red1_p2.txt' (Errcode: 13) when executing 'SELECT INTO OUTFILE'



I'm reading through http://dev.mysql.com/doc/refman/5.0/en/select.html 
and it says that it will not write to an already-existing file (so 
deleting the old files was right), that you need the FILE privilege, 
and that the file is written mode 777 as the user the server runs as 
(and it suggests not using root).

I would try

$ rm -f /dump1/*
$ echo "SELECT * FROM red1_p2 INTO OUTFILE '/dump1/red1_p2.txt';" | mysql -u root -p

to see whether this is a mysql permissions issue; the above will create 
a file owned by the mysql process. You might also try changing the 
directory to /tmp or /var/tmp to see if that makes some kind of magical 
difference.



You might also check your grants for [EMAIL PROTECTED]. Unlikely, but 
possible.

Also check /var/log/messages for any SELinux warnings. And you might 
have the 'secure_file_priv' option set? 
http://dev.mysql.com/doc/refman/5.0/en/server-options.html#option_mysqld_secure-file-priv




Jed
