Re: [OT] When MySQL Bites: Quirks to Watch Out For

2008-03-28 Thread Ofer Inbar
It appears to be an article for perl programmers not familiar with
MySQL, warning them of quirks they should know about.  If you write a
perl script with DBD::mysql, try one of the examples he gave, and
check your return status, does the statement appear to have succeeded?
If so, then unless you're very familiar with MySQL specifically, you
probably wouldn't think to look for warnings, and parse them, just to
make sure that the statement that "succeeded" actually did what it was
supposed to do.  That's definitely a potential pitfall that someone
writing perl scripts for MySQL needs to be aware of and know how to
work around.
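To make that concrete, here's a rough sketch of the kind of check a
script would need (table and values hypothetical; it relies on
DBD::mysql's mysql_warning_count statement attribute plus SHOW WARNINGS):

#!/usr/bin/perl
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=test", "user", "password",
                       { RaiseError => 1 });

# In MySQL's default (non-strict) mode this "succeeds" even though
# the value gets silently truncated.
$dbh->do("CREATE TEMPORARY TABLE t (name VARCHAR(5))");
my $sth = $dbh->prepare("INSERT INTO t (name) VALUES (?)");
$sth->execute("longer than five characters");

# The statement handle carries the server's warning count;
# SHOW WARNINGS retrieves the details.
if ($sth->{mysql_warning_count}) {
    for my $w (@{ $dbh->selectall_arrayref("SHOW WARNINGS") }) {
        warn join(" | ", @$w), "\n";   # Level | Code | Message
    }
}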
  -- Cos




valuation tangent (was Re: Sun and mysql)

2008-01-17 Thread Ofer Inbar
Olaf Stein [EMAIL PROTECTED] wrote:
> It will be interesting to see if facebook, youtube and alike websites will
> ever generate enough earnings to cover the costs they were bought for.
> Just because 50 million people know a website, it does not mean it makes
> money automatically.

The Google model shows that if you can get huge numbers of people to
visit your web site(s) *repeatedly*, and target advertising well, you
can make gigantic gobs of money.  Facebook in particular clearly has
the first part down: they've got millions of members who visit the
site several times a day every day (and millions more who visit
several times a week).  I think some of their ways of targeting ads
may work out, and their potential for revenue is far more than MySQL's.

YouTube... got folded into the existing Google.
Google's just trying to make sure every single one of us visits
their sites multiple times every single time we get online :)

> nuts. Maybe I am too old with 30, but I have never even been to facebook.

I think that puts you near the current median age on Facebook.  The
net moves fast!  Facebook as a site mainly for 20-somethings is so 2006 :)
  -- Cos




Re: ERROR 3 (HY000): Error writing file (Errcode: 5)

2007-11-19 Thread Ofer Inbar
Daevid Vincent [EMAIL PROTECTED] wrote:
> mysql> use mydbB;
> mysql> CREATE TABLE foo ( id int(10) unsigned NOT NULL auto_increment, name
> varchar(255) NOT NULL, PRIMARY KEY id (id) );
> ERROR 3 (HY000): Error writing file './mydbB/foo.frm' (Errcode: 5)
>
> mysql> use mydbA;
> mysql> CREATE TABLE foo ( id int(10) unsigned NOT NULL auto_increment, name
> varchar(255) NOT NULL, PRIMARY KEY id (id) );
> Query OK, 0 rows affected (0.04 sec)
>
> [EMAIL PROTECTED]:/var/lib/mysql# ll
> total 28748
> drwxrwxrwx   2 mysql mysql     8192 Nov 16 22:46 mydbA
> drwxrwxrwx   2 mysql mysql     4096 Nov 16 22:50 mydbB
> -rw-rw----   1 mysql mysql  5242880 Nov 16 22:46 ib_logfile0
> -rw-rw----   1 mysql mysql  5242880 Nov 13 22:07 ib_logfile1
> -rw-rw----   1 mysql mysql 18874368 Nov 16 22:46 ibdata1
> drwx------   2 mysql mysql     4096 Nov 13 22:07 mysql
> drwx------   2 mysql mysql     4096 Nov 13 22:06 test

Does there already exist a foo.frm file in mydbB?
If so, are its permissions unusual and/or is there a hardware disk error?
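(For what it's worth, Errcode 5 is the OS-level EIO, "Input/output
error", which does point at the disk.  MySQL ships a small utility
that decodes these numbers -- output shown approximately:)

  $ perror 5
  Error code   5:  Input/output error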
  -- Cos




Re: Reply-to is to originator rather than to list

2007-10-22 Thread Ofer Inbar
I've seen this debate on a lot of lists.  I firmly believe that having
a list munge "Reply-To" is almost universally a very bad idea (the
main exception being very small lists of people who know each other).

Most email programs allow you to tell them the names of the lists you
subscribe to, and/or can autodetect what the list name is from the
appropriate headers, and give you an easy list-reply command.  So you
have your usual individual reply command, group reply / reply to all, 
and list reply.  If you're annoyed by having to edit headers to reply
to the list, then learn how to use your email program's list-reply
command.

Yes, sometimes someone accidentally replies just to you and not to the
list.  You write them back and ask, "did you mean to send this just to
me, or to the list?"  Mildly annoying, but either of you can send the
message on to the list, and the rest of the list members lose nothing.

If you go the other way, though, the error case is that sometimes
someone sends to the list a message they intended to be private.
Not only does that increase list volume mostly with noise, it
occasionally leads to embarrassment, confusion, or breach of privacy.
Munging reply-to makes all of the above more likely to happen
accidentally, and when it does happen, there's no way to take it back.
  -- Cos




MySQL crashing on flush-logs

2007-10-03 Thread Ofer Inbar
We have MySQL 5.0.27 running on about 10 different RedHat EL4 boxes,
all from the same RPMs.  Every night we run mysqladmin flush-logs from
crontab (as well as some other things) on most of these servers.

On one server, mysqld is dying with signal 11 every single night right
during the mysqladmin flush-logs command.  None of the others ever does
that.  This is repeatable: it happens every night.

We're investigating possible causes, but in the meantime I'm also
curious if anyone else on this list has run into something similar
and has some suggestions.


Here's the backtrace portion of the error log from the most recent crash:

| Attempting backtrace. You can use the following information to find out
| where mysqld died. If you see no messages after this, something went
| terribly wrong...
| Cannot determine thread, fp=0x45394f78, backtrace may not be correct.
| Stack range sanity check OK, backtrace follows:
| 0x5f737400746f6f72
| New value of fp=0x1874230 failed sanity check, terminating stack trace!
| Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and
| follow instructions on how to resolve the stack trace. Resolved
| stack trace is much more helpful in diagnosing the problem, so please do
| resolve it
| Trying to get some variables.
| Some pointers may be invalid and cause the dump to abort...
| thd->query at (nil) is invalid pointer
| thd->thread_id=12310
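(For archive readers: the "resolve" step that log message points to
amounts to generating a symbol table from the mysqld binary and
feeding it the numeric trace.  A sketch, with hypothetical paths,
assuming an unstripped mysqld:)

  nm -n /usr/sbin/mysqld > /tmp/mysqld.sym
  # put the 0x... lines from the error log into /tmp/mysqld.stack
  resolve_stack_dump -s /tmp/mysqld.sym -n /tmp/mysqld.stack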

  -- Cos




Re: seeding a slave from a slave

2007-07-25 Thread Ofer Inbar
Ian P. Christian [EMAIL PROTECTED] wrote:
> How do I create a mysql data dump from a slave to seed another slave?
> Using --master-data with mysqldump from my existing slave sets the
> master to the slave I was dumping, not the real master.

I started a discussion of the same thing a week or two ago on this
list; if you search the archives (or just google, I bet) you'll find
more detail.

The quick answer is you can't do it if the slave is in use and needs
to keep replicating.  You need to stop replicating, copy down the
slave's master info (you can see it in show slave status), run the
dump while replication is stopped, then start it again.
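In compressed form, the sequence looks something like this (prompts
are illustrative; the two columns named are the ones in "show slave
status" that give the *master's* coordinates for what the slave has
executed):

  slave1> STOP SLAVE;
  slave1> SHOW SLAVE STATUS\G
          -- note Relay_Master_Log_File and Exec_Master_Log_Pos
  shell$  mysqldump -uuser -p dbname > dump.sql
  slave1> START SLAVE;
  (restore dump.sql on the new slave, CHANGE MASTER TO the noted
   coordinates, then START SLAVE there)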

(Read the earlier thread if you want more detail)
  -- Cos




Re: mysql dump help!

2007-07-24 Thread Ofer Inbar
On 7/24/07, Red Hope [EMAIL PROTECTED] wrote:

> mysql>
>
> mysql> \R shell
> PROMPT set to 'shell'
>
> shell>

It doesn't matter what the prompt says; it's still mysql you're
running here.  When people say "the shell prompt" they don't mean
"make your prompt say the word shell", they mean the prompt you see
when you're running the shell.  Your "shell" prompt is still the mysql
prompt, because it is still the mysql program's command line.  Make sense?

mysqldump is *not* a mysql command (like select, create table, use, etc.)

mysqldump is a separate program.

In Windows, if I remember correctly, the shell is called cmd.exe, so
you need to run cmd.exe and at its prompt (aka the shell prompt) type
the mysqldump command you want to try.
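For example, something like this at that prompt (user and database
names made up):

  C:\> mysqldump -u root -p mydatabase > mydatabase-dump.sql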
  -- Cos




Re: Replication broken - fatal error 1236: 'error reading log entry'

2007-07-19 Thread Ofer Inbar
On Wed, Jul 18, 2007 at 06:13:10PM -0400, I wrote:
> This afternoon, both slaves stopped at the same place, with the same error:
>
> 070718 17:28:00 [ERROR] Error reading packet from server: error reading log entry ( server_errno=1236)
> 070718 17:28:00 [ERROR] Got fatal error 1236: 'error reading log entry' from master when reading data from binary log
> 070718 17:28:00 [Note] Slave I/O thread exiting, read up to log 'hlgbinlog-oil.15', position 139702230

It turns out that this is due to a large query and max_allowed_packet
being too small for that query, on the *master*.  If max_allowed_packet
on the slave is too small, you'll get an error message that directly
tells you max_allowed_packet is too small; but if it's too small on
the master, you'll get this cryptic message in the slave's error log,
and nothing at all on the master's error log.
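To check it, and to raise it on the fly (the 16M value here is just an
example; put it in my.cnf as well so it survives a restart, and keep
the slaves' setting at least as large as the master's):

  mysql> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
  mysql> SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;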
  -- Cos




Replication broken - fatal error 1236: 'error reading log entry'

2007-07-18 Thread Ofer Inbar
MySQL 5.0.27 from RPM, on Redhat EL4.

One master, two slaves, one database.  Slaves have been up for 5 days.

This afternoon, both slaves stopped at the same place, with the same error:

070718 17:28:00 [Note] Slave SQL thread initialized, starting replication in log 'hlgbinlog-oil.15' at position 138655868, relay log '/data/mysql/logs/soda-relay.25' position: 138656009
070718 17:28:00 [Note] Slave I/O thread: connected to master '[EMAIL PROTECTED]:3306', replication started in log 'hlgbinlog-oil.15' at position 139702230
070718 17:28:00 [ERROR] Error reading packet from server: error reading log entry ( server_errno=1236)
070718 17:28:00 [ERROR] Got fatal error 1236: 'error reading log entry' from master when reading data from binary log
070718 17:28:00 [Note] Slave I/O thread exiting, read up to log 'hlgbinlog-oil.15', position 139702230

If I try to stop and restart the slave threads, or restart mysql, it
just logs the same error again.  The relay log ends there.  The last
query in the relay log, which matches the query in the master's binlog
at position 139702230, is a large INSERT statement.  I tried grep'ing
the query out of the master's binlog with mysqlbinlog, copying it to
one of the slaves, and running it there (mysql -ufoo -p < query.sql),
and it worked, so the query seems fine, though perhaps its large size
is related to the problem.
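(Roughly, the extraction was along these lines -- start position from
the error above, with the output trimmed down to the one statement by
hand:)

  mysqlbinlog --start-position=139702230 hlgbinlog-oil.15 > query.sql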

Any suggestions on how to re-start replication, and how to avoid this?
  -- Cos




why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
MySQL 5.0.27 running on Redhat EL4.

In /etc/my.cnf I have:
  tmp_table_size=64M

mysql> SHOW GLOBAL VARIABLES LIKE '%tmp%';
+-------------------+------------------+
| Variable_name     | Value            |
+-------------------+------------------+
| max_tmp_tables    | 32               |
| slave_load_tmpdir | /data/mysql/tmp/ |
| tmp_table_size    | 67108864         |
| tmpdir            | /data/mysql/tmp/ |
+-------------------+------------------+

... and yet, I frequently see tmp tables on disk much smaller than 64M:

# ls -alF /data/mysql/tmp/
total 1552
drwxr-xr-x  2 mysql mysql    4096 Jul 17 14:16 ./
drwxrwxr-x  4 mysql mysql    4096 Jul 17 13:12 ../
-rw-rw----  1 mysql mysql 1572864 Jul 17 14:16 #sql_3b9e_0.MYD
-rw-rw----  1 mysql mysql    1024 Jul 17 14:16 #sql_3b9e_0.MYI
#

Although I never see more than one tmp table in that directory at a
time (they go so quickly that usually I don't see any), I have a program
that tracks the rate of change of some variables from mysqladmin
extended, and my MySQL servers are consistently creating 3-10 or more
disk tmp tables per second.  When I do see a table in tmpdir, though,
it's almost always smaller than 5M (I've been running a job to check
every few seconds, and the largest one I've seen so far was ~12M).
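(For reference, the counters in question are standard status variables,
so anyone can watch the same thing:)

  mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';
  -- compare Created_tmp_disk_tables against Created_tmp_tables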

Why are so many small tmp tables being created on disk, not memory?
How can I tell MySQL to use memory for these?
  -- Cos




Re: why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
mos [EMAIL PROTECTED] wrote:
>> Why are so many small tmp tables being created on disk, not memory?
>> How can I tell MySQL to use memory for these?
>
> I'd guess these temporary files are the result of Select statements
> with an Order By clause that requires a FileSort. You can do a Show
> ProcessList and this should tell you if the active queries are using
> FileSort. As far as getting it to sort in memory? Your guess is as good
> as mine. Even though I have an index that matches the sort order, MySQL
> insists on sorting it using FileSort. I suppose you could set up a Ram disk
> and have that as your first MySQL temp directory.

I thought of putting tmpdir on a tmpfs mount, but worried that there
might occasionally be a need for a very large tmp file that would exceed
the limit (or, if I don't set a limit, use up all memory and force lots
of swapping).  When you say "first MySQL temp directory", are you
implying I can have more than one?  I don't see anything in the
documentation that suggests that...

BTW, here's another oddity I noticed - here's typical output from
iostat 60:

| avg-cpu:  %user   %nice    %sys %iowait   %idle
|            7.35    0.00    3.59    0.94   88.12
|
| Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
| sda      29.20         8.00       342.08        480      20528
| sda1      0.00         0.00         0.00          0          0
| sda2      0.00         0.00         0.00          0          0
| sda3      0.00         0.00         0.00          0          0
| sda4      0.00         0.00         0.00          0          0
| sda5     43.74         8.00       342.08        480      20528
| sdb       2.43         0.00       173.70          0      10424
| sdb1     21.71         0.00       173.70          0      10424

I've been running this for a few hours and it consistently shows lots
of writes but no reads at all on sdb1, the partition where I have my
binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
reading them?  Or, how else can I interpret this?

  -- Cos




quickly copying a database

2007-07-17 Thread Ofer Inbar
I've got a server with a database that's about 10G.  I need several
other copies of this database, with different names, on the same host
and same MySQL instance.

I could mysqldump the db and then restore it into the others...

mysql> create database one;
mysql> create database two;
 ...

mysqldump ... > dumpfile.sql
mysql -uroot -p one < dumpfile.sql
mysql -uroot -p two < dumpfile.sql
 ...
 ...

Unfortunately, each restore from a mysqldump takes about an hour (and
if I do more than one at a time, they slow down considerably due to
disk I/O contention).

If these DBs were all MyISAM, I could shut down MySQL and just copy
the directories.  But it seems that InnoDB tables are stored partly
in /var/lib/mysql/ibdata1 and this database has a mix of MyISAM and
InnoDB.

Is there a better technique to make several database copies quickly,
that works for a mix of MyISAM and InnoDB?
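One avenue I haven't fully tested: do the slow restore just once, then
make the additional copies server-side, table by table, which skips the
client round-trip and the SQL re-parsing of a dump file.  A sketch
(table name hypothetical; repeat per table, and note that INSERT ...
SELECT takes locks on the source table while it runs):

  mysql> CREATE DATABASE two;
  mysql> CREATE TABLE two.foo LIKE one.foo;
  mysql> INSERT INTO two.foo SELECT * FROM one.foo;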
  -- Cos




tangent: confusing iostat readings (was Re: why are tmp tables being created on disk?)

2007-07-17 Thread Ofer Inbar
Mathieu Bruneau [EMAIL PROTECTED] wrote:
>> BTW, here's another oddity I noticed - here's typical output from
>> iostat 60:
>>
>> | avg-cpu:  %user   %nice    %sys %iowait   %idle
>> |            7.35    0.00    3.59    0.94   88.12
>> |
>> | Device:    tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
>> | sda      29.20         8.00       342.08        480      20528
>> | sda1      0.00         0.00         0.00          0          0
>> | sda2      0.00         0.00         0.00          0          0
>> | sda3      0.00         0.00         0.00          0          0
>> | sda4      0.00         0.00         0.00          0          0
>> | sda5     43.74         8.00       342.08        480      20528
>> | sdb       2.43         0.00       173.70          0      10424
>> | sdb1     21.71         0.00       173.70          0      10424
>>
>> I've been running this for a few hours and it consistently shows lots
>> of writes but no reads at all on sdb1, the partition where I have my
>> binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
>> reading them?  Or, how else can I interpret this?
>
> The binlog are creating most of your constant write most probably. If
> you have no slave attached, you're not reading them at all...

Yes and no.  In fact, this iostat output comes from a slave, where
there's hardly any binlog activity (but a fair amount of relay log
activity).  However, I noticed the same thing on the master.  Before
tmpdir pointed to a directory on sdb1, there was a much lower, and
fairly constant, level of writes to that partition, which did not
surprise me.  After I pointed tmpdir to sdb1, the amount of write
activity grew substantially and started varying much more (from as low
as ~50 to as high as ~400), but I still see no reading recorded.
  -- Cos




slave backups master data

2007-07-16 Thread Ofer Inbar
We've got a couple of production databases using mostly MyISAM tables,
that can't be taken offline without bringing down our application.  To
reduce downtime, we run a full mysqldump once a week and back up the
binary logs every day, so we can always use them to catch up from
the most recent full dump.  Because we're mostly using MyISAM we can't
make the dump a transaction, so the database is very slow for about 10
minutes while we run the dump.  This will get longer and longer as our
database grows.

We could eliminate the slowness entirely if we ran full dumps off a
replication slave, doing it on a slave not in use by the application
at that time.  However, the master data in that dump file would refer
to the slave's binary logs, not the real master.  That means we
couldn't use that dump to start new replication slaves, nor to restore
a master and catch up (though the latter is less important, since we
could run these dumps more often).

One tactic that seems to work is to stop replication on the slave,
note the master data in "show slave status", run a dump, and keep that
master data alongside that dump.  This is clunky for several reasons:
it's harder to automate the backup, harder to automate the restore,
and error-prone.
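That said, here's a minimal sketch of what the automation might look
like (untested; assumes DBD::mysql and a maintenance user allowed to
run STOP/START SLAVE; file names made up):

#!/usr/bin/perl
use strict;
use DBI;

my $dbh = DBI->connect("DBI:mysql:database=mysql", "maint", "password",
                       { RaiseError => 1 });

$dbh->do("STOP SLAVE");

# With the slave threads stopped, these two columns give the *master's*
# binlog coordinates for everything this slave has executed.
my $st = $dbh->selectrow_hashref("SHOW SLAVE STATUS");
open my $fh, ">", "dump.master-coords" or die "can't write coords: $!\n";
print $fh "$st->{Relay_Master_Log_File} $st->{Exec_Master_Log_Pos}\n";
close $fh;

system("mysqldump -umaint -ppassword our_main_db > dump.sql") == 0
  or warn "mysqldump failed: $?\n";

$dbh->do("START SLAVE");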

Is there any way to do a mysqldump of a replication slave and have
that dumpfile contain the master data for the master server?
  -- Cos




replicating from multiple masters

2007-07-16 Thread Ofer Inbar
Say I have database1 on server1, database2 on server2, etc.

I'd like to set up one server where I can *look* at all of these
databases, without modifying them - a read-only aggregator.

What I'd like to do is have the aggregator keep local copies of
database1, database2, database3, etc., and replicate each one of
them from its corresponding master server.

Is there a way to replicate databases with different names from
different masters to one common slave, or must you run multiple MySQL
instances and have each one slave from one master?

I'm having a hard time finding this addressed directly in the
documentation.  That *may* be because it can't be done, but it
may also be because it's not a common thing to do and I'm just
missing something that would help me figure out how to do it.
  -- Cos




how to stop replication at a specific position?

2007-07-11 Thread Ofer Inbar
When you start a replication slave you can tell it where in the binary
logs to start (which log file, what position) ... but can you tell it
to automatically *stop* when it reaches a certain point (also identified
by log file name and position) ?
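(If I'm reading the 5.0 manual right, this does exist, as START SLAVE
UNTIL -- file name and position here are made up:)

  START SLAVE UNTIL MASTER_LOG_FILE='binlog.000123', MASTER_LOG_POS=4567;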
  -- Cos




starting a second slave from a first slave's dump

2007-07-11 Thread Ofer Inbar
Scenario:
  host a is the master
  host b is a replication slave
  host c is to become a second replication slave
  there's no full dump from host a

Normally, to start a new slave, I'd restore a dump from host a, and
start slaving using the master data in that dump.  In this situation,
however, running a full mysqldump on host a would cause it to be
unresponsive for a while, and the app depends on it (mostly MyISAM,
so we can't run the dump as a transaction).

I can temporarily make the front-end application not read from host b,
and while host b is not in use, run a full mysqldump there of the same
db, and restore that dump onto host c.

... but how do I find the master data to start host c slaving with?

The dump file will have master data referring to host b's binlogs,
which are mostly empty because it's a replication slave.  I need to 
know what position in host a's binlogs to start host c slaving from.

One possibility I can think of:
 - stop slave on host b
 - run the dump on host b
 - note its position in host a's binlogs using show slave status
 - restore the dump on host c
 - start c slaving using the binlog name and position from show slave status
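For concreteness, that last step on host c would come out something
like this (file and position being whatever "show slave status" on
host b reported; password elided):

  CHANGE MASTER TO
    MASTER_HOST='hosta',
    MASTER_USER='replication',
    MASTER_PASSWORD='...',
    MASTER_LOG_FILE='binlog.08',
    MASTER_LOG_POS=150484312;
  START SLAVE;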

Will that work?

Is there a way to do this *without* stopping replication on host b?
  -- Cos




Re: starting a second slave from a first slave's dump

2007-07-11 Thread Ofer Inbar
Baron Schwartz [EMAIL PROTECTED] wrote:
> Ofer Inbar wrote:
>>  host a is the master
>>  host b is a replication slave
>>  host c is to become a second replication slave
>>  there's no full dump from host a
>>
>> One possibility I can think of:
>>  - stop slave on host b
>>  - run the dump on host b
>>  - note its position in host a's binlogs using show slave status
>>  - restore the dump on host c
>>  - start c slaving using the binlog name and position from show slave status
>>
>> Will that work?
>
> Yes.

For the benefit of future readers of the archive: yes, it worked easily.

> Just be aware of which columns of SHOW SLAVE STATUS
> mean what -- there are three sets of binlog coordinates in that output.
> (There's a note in the online manual that should make it clear.)

The columns I used were:

Master_Log_File: binlog.08
Read_Master_Log_Pos: 150484312

  -- Cos




Re: MySQL upgrade from 5.0.32 to 5.0.42 broke replication

2007-06-12 Thread Ofer Inbar
Ian P. Christian [EMAIL PROTECTED] wrote:
> In theory, I should be able to find out where the slave was up to in the
> old logs, extract them manually and replay them on the slave, and then
> reset the slave to use the new logs - however I'm not sure how reliable
> that's going to be - or even how to go about doing it yet.

Assuming your slave is not usable by client programs now anyway and
you don't mind it being unusable for a while longer, you can restart
the slaving from scratch:

1. take a full mysqldump of the master, with --master-data --flush-logs
2. drop your databases on the slave, stop slaving, and restore the dump
3. restart slaving on the slave using the master data from the dump

(see the mysql docs on how to set up replication if you need more detail)
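As commands, the three steps come out something like this (names made
up; with --master-data, the dump itself contains the CHANGE MASTER TO
statement, so restoring it sets the coordinates for you):

  master$ mysqldump --master-data --flush-logs -uroot -p dbname > dump.sql
  slave>  STOP SLAVE;
  slave>  DROP DATABASE dbname;  CREATE DATABASE dbname;
  slave$  mysql -uroot -p dbname < dump.sql
  slave>  START SLAVE;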

  -- Cos




flush logs vs. mysqladmin

2007-06-12 Thread Ofer Inbar
We run a mysqladmin flush-logs from cron every night.  This causes our
server to start a new binary log.  However, the slow query log does
not get flushed - our server continues updating the same slow query
log file.

If I run mysql and then issue a "flush logs" command, it flushes the
binary logs and the slow query log as well.

 - Redhat ES 4
 - MySQL 5.0.24 from redhat-supplied RPM
 - Both the mysqladmin run from cron and my manual "flush logs" use -u root

Why does mysqladmin not flush the slow query log?
  -- Cos




Re: dbnightly maintenance backup script

2007-05-16 Thread Ofer Inbar
On Thu, May 10, 2007 at 03:23:31PM -0400,
Ofer Inbar [EMAIL PROTECTED] wrote:
>   http://thwip.sysadmin.org/dbnightly

The version I put up there had a minor bug:

176c176
<   my ($sec,$min,$hour,$mday,$mon,$year) = localtime(time); $year+=1900;
---
>   my ($sec,$min,$hour,$mday,$mon,$year) = localtime(time); $year+=1900; $mon++;

I forgot to increment month in the code that names the full dump file,
so it got named with 0-based month numbers (that is, 200704 for May).

It's fixed.  If you got the script, get it from there or add the increment.
  -- Cos




Re: InnoDB dropping records / MyISAM working as it should

2007-05-15 Thread Ofer Inbar
Kenneth Loafman [EMAIL PROTECTED] wrote:
> Sounds like InnoDB is still borked though.  You should not have to use a
> commit unless you have started a transaction, as I understand it.  The
> semantics for non-transaction access should be identical.

Are you explicitly telling Python not to use transactions in the DB
interface, or are you leaving it to a default?  If the latter, does
Python default to transactions when available?

I use perl, not python, so I don't know the answer, but that's the
question I'd ask in this situation.  You're using an API, not talking
to MySQL directly via its own client program, so examine what the API
is doing.
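For comparison, on the perl side the knob is explicit: DBI's AutoCommit
handle attribute, which DBI defaults to on (so every statement commits
immediately unless you turn it off):

  my $dbh = DBI->connect("DBI:mysql:database=test", $user, $pass,
                         { AutoCommit => 1, RaiseError => 1 });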
  -- Cos




dbnightly maintenance backup script

2007-05-10 Thread Ofer Inbar
I wrote a perl script to handle all of our regular mysql maintenance
tasks, which I thought might be useful to others.  It's meant for an
environment with binary logging turned on, but is fairly flexible.
If you're backing up multiple databases, though, you'll have to modify
it a bit, since in our case we only care about backing up one database
(if you do modify it for multiple databases, please send me your diffs).

  http://thwip.sysadmin.org/dbnightly

The syntax is: dbnightly [action [action ...]]

It will perform the actions in the order you give them on its commandline.
Actions it knows how to do are:

1. maint - Run a bunch of SQL queries for nightly maintenance
  (put the queries you want in the DBMAINT section of the script)

2. full - a full mysqldump, into the backup directory, gzip'ed and
   with the database name and datetimestamp in the filename

3. partial - a partial mysqldump of a list of tables you choose,
   into the backup directory, gzip'd

4. flush - flush binary logs

5. logs - copy new or modified binary logs to the backup directory and
   gzip them, delete any that have been deleted from the mysql directory,
   and don't copy & gzip ones that haven't changed since last backed up

The resulting backup directory is all gzip'ed and suitable for rsync'ing.

We run it from crontab, and it produces output like this:

2007-05-10 06:00 dbnightly: Database maintenance
Table                    Op        Msg_type  Msg_text
databasename.tablename   optimize  status    OK
2007-05-10 06:00 dbnightly: Database maintenance done
2007-05-10 06:00 dbnightly: Partial dump of databasename to /home/maintusr/backups
2007-05-10 06:01 dbnightly: Partial dump complete: databasename-partial.sql
2007-05-10 06:02 dbnightly: Flushing binary logs
2007-05-10 06:02 dbnightly: Copying /var/lib/mysql/binlogfile.90 to /home/maintusr/backups
2007-05-10 06:02 dbnightly: Copying /var/lib/mysql/binlogfile.91 to /home/maintusr/backups
2007-05-10 06:02 dbnightly: Done

It also syslogs, like this:

May 10 05:00:01 hostname dbnightly: Database maintenance
May 10 05:00:04 hostname dbnightly: Database maintenance done
May 10 05:00:04 hostname dbnightly: Partial dump of databasename to /home/maintusr/backups
May 10 05:01:18 hostname dbnightly: Partial dump complete: databasename-partial.sql
May 10 05:02:14 hostname dbnightly: Flushing binary logs
May 10 05:02:15 hostname dbnightly: Copying /var/lib/mysql/binlogfile.90 to /home/maintusr/backups
May 10 05:02:38 hostname dbnightly: Copying /var/lib/mysql/binlogfile.91 to /home/maintusr/backups
May 10 05:02:39 hostname dbnightly: Done

Both of these are from "dbnightly maint partial flush logs", which we
run 6 nights a week.  On the other night, we run "dbnightly maint full
logs" (no need to flush, because --flush-logs is in the $fulldump options).
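The crontab entries for that schedule look roughly like this (times and
install path made up):

  0 5 * * 1-6  /usr/local/bin/dbnightly maint partial flush logs
  0 5 * * 0    /usr/local/bin/dbnightly maint full logs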

Note: the dirsyncgz script I posted recently was a modified version
of the binlogs subroutine from this script (dbnightly was not complete yet)

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]  http://thwip.sysadmin.org/
  "cos, is perl God?"  "No, Larry Wall is God.  Perl is the Language of God."
  "But I thought you don't believe in God?"  "That's OK, I don't believe
   in Larry Wall either."  -- a conversation with Mike Sackton over lunch




tracing the source of a query

2007-05-09 Thread Ofer Inbar
A certain query happened on our server today, that we'd like to find
the source of.  I can see the query in our binary long...

mysqlbinlog today's logfile shows:

  # at 114047594
  #070509 15:29:21 server id 2  end_log_pos 114047722 Query   
thread_id=1041159   exec_time=0 error_code=0
  SET TIMESTAMP=1178738961;
  [here is the query in question]

Is there a way for us to find out:
1. what mysql username issued this query?
2. what IP/hostname that session was connected from?

  -- Cos




slave replication fails, cannot create user

2007-05-04 Thread Ofer Inbar
I'm transitioning from one mysql server to another, in production use,
by setting the new server up as a slave, so that when it's time to
switch, we'll have very little downtime.  Then we can turn off
replication and the new slave will become the new production server.

I set up a replication slave, tested it, and everything was fine.
Then I stopped it, reset slave, and used it for something else.
Now I want to make it a slave again, but it stops partway through
catching up on the binary logs, with this error:
  Operation CREATE USER failed for 'replication'@'[host]'
(where [host] is the slave's hostname)

Here is the procedure I followed to make it a slave:

1. drop database our_main_db;
   create database our_main_db;

2. Brought over the most recent mysqldump from the master,
   which was created from a command like this:

   mysqldump --flush-logs --master-data=2 -uuser -ppasswd our_main_db

   ... and fed the dump's contents into the slave-to-be server.

3. Using the log file name and position from the master data in that
   dump file, issued a change master statement:

  CHANGE MASTER TO
MASTER_HOST='masterserver.domain.name',
MASTER_USER='replication',
MASTER_PASSWORD='replicationuserpassword',
MASTER_LOG_FILE='binlog.11',
MASTER_LOG_POS=98;

  START SLAVE;

... everything was running fine for an hour or so, and the slave
caught up through several days' worth of logs from the master, but then
it stopped with this error:

  Last_Errno: 1396
  Last_Error: Error 'Operation CREATE USER failed for
  'replication'@'[host]'' on query. Default database: 'mysql'.
  Query: 'create user 'replication'@'[host]' identified by
  'replicationuserpassword''

(again, [host] is the slave's own hostname).

I checked the mysql.user table and found that the [EMAIL PROTECTED]
user *did* exist.  So I removed it from the table, then tried to
restart replication ... and got the same error.

So then I went to the binary log on the master and, using mysqlbinlog,
found the exact create user statement, and tried to run it by hand on
the slave, which looked like this:

create user 'replication'@'[host]' identified by 'replicationuserpasswd';

I tried running that by hand on the slave server (from the mysql root user)
and got the error again:

mysql> create user 'replication'@'[host]' identified by 'replicationuserpasswd';
ERROR 1396 (HY000): Operation CREATE USER failed for 'replication'@'[host]'


... I solved the problem by adding slave_skip_errors=1396 to my.cnf
and restarting the slave server.  It was able to pick up replication
and is now caught up with the master and seems to be fine.  However,

1. I don't understand what caused the problem
2. I fear that after I un-slave it (we're planning to switch masters)
   I still won't be able to create users on this new server.
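(A hedged guess, in hindsight: deleting the row changed only the table;
the server's in-memory grant tables would still have contained the user
until a FLUSH PRIVILEGES, which would explain why CREATE USER kept
failing.  That is, the removal step probably needed to be:)

  DELETE FROM mysql.user WHERE User='replication' AND Host='[host]';
  FLUSH PRIVILEGES;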

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
   "So you're one of Jehovah's Witnesses?  I'm Cthulhu's defence lawyer.
    Prepare for cross-questioning."  -- Tanuki, on alt.sysadmin.recovery





Re: slave replication fails, cannot create user

2007-05-04 Thread Ofer Inbar
Baron Schwartz [EMAIL PROTECTED] wrote:
> What version of MySQL are you running on each machine?

Sorry, I should've included this information.  Both of them are
running 5.0.24, installed from exactly the same .rpm file.  I wanted
to avoid any issues related to different MySQL versions during this
transition.

> The statement might have failed because the user already existed,

You can see that was my first guess too.  That's why I tried removing
the user from mysql.user and starting the slave threads again, as I
described.  Unless it was trying to create the same user twice, that
should've fixed it (if this was the cause), but it didn't seem to.
  -- Cos




Re: expire_logs_days

2007-05-02 Thread Ofer Inbar
Baron Schwartz [EMAIL PROTECTED] wrote:
> Actually, the manual does mention the variable, but it doesn't work
> for us.  We run a nightly cron job that just runs [purge master logs]

When you say "it doesn't work for us", do you mean that you tried it?
In what way did it not work?

Tim Lucia [EMAIL PROTECTED] wrote:
> We do the same thing, based on the rumors I read at the time I set it up.
> (Where "rumors" means that googling for expire_logs_days reveals many with
> problems and not much good news.)

Has anyone here had direct experience with expire_logs_days either
working or not working?  What happened?

(note: I'm running 5.0.24)

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
   OSI is a beautiful dream, and TCP/IP is living it!
 -- Einar Stefferud [EMAIL PROTECTED], IETF mailing list, 12 May 1992




slave status: vague documentation of Seconds_Behind_Master

2007-05-02 Thread Ofer Inbar
I'm confused by a bit of the documentation here:
  http://dev.mysql.com/doc/refman/5.0/en/show-slave-status.html

In the section on Seconds_Behind_Master, first it says:

  When the slave SQL thread is actively running (processing updates),
  this field is the number of seconds that have elapsed since the
  timestamp of the most recent event on the master executed by that thread.

... but later it says:

 If the network connection between master and slave is fast, the slave
 I/O thread is very close to the master, so this field is a good
 approximation of how late the slave SQL thread is compared to the
 master. If the network is slow, this is not a good approximation; the
 slave SQL thread may quite often be caught up with the slow-reading
 slave I/O thread, so Seconds_Behind_Master often shows a value of 0,
 even if the I/O thread is late compared to the master. In other
 words, this column is useful only for fast networks.

These two sections seem contradictory to me.  If Seconds_Behind_Master
actually works the way it is first defined, then it should reflect any
delays caused by the network as well as delays in the SQL thread.

Since each event in the binary log is timestamped by the master, we
know when the operation happened on the master.  If we compare that
timestamp to the current time, we know how long ago it happened - so
if we look at the timestamp of the most recent event executed by the
slave SQL thread we see when it happened on the master, and can tell
how much time has elapsed since then.

Two problems with this approach would be:

 1. If the local clocks on the master and slave aren't in sync,
the timestamp comparison to current time would be off.

 2. If no writes to the master database have happened in a while, this
would report the slave as behind because the most recent operation
on the master has a timestamp from some time ago.

Both of these lead me to suspect that Seconds_Behind_Master does *not*
actually work the way the first paragraph implies; if so, then the
second paragraph I quoted from the doc could very well be true.

But if so, what exactly does Seconds_Behind_Master show?
  -- Cos




Re: expire_logs_days

2007-05-02 Thread Ofer Inbar
Mark Leith [EMAIL PROTECTED] wrote:
> Do keep in mind that expire_logs_days only gets triggered at a) server
> startup b) the time a binary log has to roll over.
>
> If your binary logs do not roll over for quite a period of time (i.e. are
> lower-load systems) that still stay up for long periods - you might not
> see a log expired for some period.

That's a good point, though probably a minor one: At most you would
end up with one binary logfile that's old and not deleted.  As soon
as you create a new one, that one would be deleted (if this feature works).

In our case, we flush logs nightly.  (but hardly ever restart mysqld)
  -- Cos




a script to archive binary logs

2007-05-02 Thread Ofer Inbar
For disaster recovery, it's good to have copies of your database dumps
that you can easily & conveniently access, that are outside the data
center where the database lives.  Since we do a weekly full dump and
use binary logs for incrementals, I also wanted copies of our binary
logs in the same place.

However, binary logs for an active server can be very big.  It'd be
nice to gzip them for faster transfer, lower bandwidth charges,
less disk space used on the backup host, etc.  And they compress well:
in my experience, usually to about 1/10th of original size.

Unfortunately, if you gzip the destination, you can't easily use rsync
to make the backups, since it won't correctly identify which files
need to be copied or deleted.  So I wrote this script, which syncs one
directory to another, gzip'ing the resulting files - but only files
whose name matches a regex you set at the beginning.  It knows that
each file in the source dir corresponds to a file with the same name
with .gz appended in the destination dir, and correctly figures out
which ones to copy over and which ones to delete.

I posted the generic version at: http://thwip.sysadmin.org/dirsyncgz

Here it is, with variables set for typical mysql binary log use:

--
#!/usr/bin/perl
#
# $Id: dirsyncgz,v 1.1 2007/05/03 04:15:35 cos Exp $
#
# syncs files w/names matching a regex from srcdir to destdir, and gzips
#
# only files whose modification time is more recent than the
# corresponding gzip'ed file will be copied, and if a file has been
# deleted from the srcdir, the corresponding gzip'ed file will be
# deleted from the destdir

my $srcdir = "/var/lib/mysql";
my $destdir = "/backup/mysqllogs";
my $basename = '^binlog\.\d+$';

opendir SRCDIR, $srcdir or die "$0: can't open directory $srcdir: $!\n";

foreach $file
  ( sort grep { /$basename/ && -f "$srcdir/$_" } readdir(SRCDIR) )

{ next unless ((stat("$srcdir/$file"))[9] > (stat("$destdir/$file.gz"))[9]);
  print "Copying $srcdir/$file to $destdir\n";

  system("cp -p $srcdir/$file $destdir") == 0
    or warn "$0: cp -p $srcdir/$file $destdir failed: $?\n"
    and next;
  system("gzip -f $destdir/$file") == 0
    or warn "$0: gzip -f $destdir/$file failed: $?\n";
}

# now delete from the backup dir any logs deleted from the srcdir

opendir DESTDIR, $destdir or die "$0: can't open directory $destdir: $!\n";

foreach $savedfile
  ( sort grep { /\.gz$/ && -f "$destdir/$_" } readdir(DESTDIR) )
{ $savedfile =~ s/\.gz$//;
  next unless $savedfile =~ /$basename/;   # same name filter as above
  next if -f "$srcdir/$savedfile";

  print "Deleting $savedfile from $destdir\n";
  unlink "$destdir/${savedfile}.gz"
    or unlink "$destdir/$savedfile"
    or warn "$0: error deleting $savedfile: $!\n";
}
--

You can sync the logs to a remotely mounted filesystem and/or use its
destination directory as a source directory for your rsync.

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
  It's been said that if a sysadmin does his job perfectly, he's the
  fellow that people wonder what he does and why the company needs him,
  until he goes on vacation.  -- comp.unix.admin FAQ




Re: slave status: vague documentation of Seconds_Behind_Master

2007-05-02 Thread Ofer Inbar
Mathieu Bruneau [EMAIL PROTECTED] wrote:
>> In the section on Seconds_Behind_Master, first it says:
>>
>>   When the slave SQL thread is actively running (processing updates),
>>   this field is the number of seconds that have elapsed since the
>>   timestamp of the most recent event on the master executed by that thread.
>
> This is generally true and accepted; the important part to read here is
> "actively running".  Which leads to the other paragraph:
[...]
> So if the SQL_THREAD isn't executing anything he will report 0 in
> Seconds_behind_master. If you think about it, it makes sense because
> he doesn't have any master timestamp to base his calculation on. So
> the warning applies here: it's not because your SQL_THREAD reports 0
> seconds behind master that it's caught up with the master; it
> simply means it's caught up with the IO_THREAD. If the io_thread is
> lagging, there's no current way of knowing it.

Ahah.  So "processing updates" does *not* include monitoring the
relay log for new material, and if it sees nothing new in the relay
log it reports 0.  Thanks.
  -- Cos




expire_logs_days

2007-05-01 Thread Ofer Inbar
There's a system variable called expire_logs_days that lets you set a
number of days to keep binary logs, and automatically delete logs
older than that.  I've heard rumors that using this feature is
problematic.  I notice that in the MySQL documentation about binary
logging, it tells you to use purge master logs to delete old logs,
and does not mention the expire_logs_days variable as another option.
Is there a reason for this omission, or is it safe to use?
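For reference, the two mechanisms side by side (values are examples):

  # in my.cnf, under [mysqld]:
  expire_logs_days = 7

  -- or explicitly, e.g. from a nightly job:
  PURGE MASTER LOGS BEFORE '2007-04-24 00:00:00';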
  -- Cos




Re: problem restoring from binary log

2007-04-27 Thread Ofer Inbar
Mathieu Bruneau [EMAIL PROTECTED] wrote:
> Ofer Inbar a écrit :
>> I can repeat the problem with this procedure on the test db:
>>  - Import a full mysqldump file from the production db
>>  - flush logs
>>  - run a full mysqldump with --flush-logs --master-data=2
>>  - do a bunch of stuff that writes data
>>  - drop the database, and recreate it, flushing the log
>>  - make a copy of the binary log made since the mysqldump
>>  - import the mysqldump I made locally
>>   = now I have all data correctly restored up to that point
>>
>> Now, I attempt to restore the binary log and this happens:
>>
>>   mysqlbinlog [binlogfile] | mysql -u root -p[db root password] [db name]
>>   ERROR 1062 (23000) at line 21: Duplicate entry '16021330' for key 1
>
> I think your problem is in the way you're trying to play back the binlog.
> I guess this is happening when you try to play the first of your binlogs,
> right?

There is only one relevant binlog in the scenario I describe.
So yes, it is the first :)

> Using mysqlbinlog in the way you put it will replay all of the log, but you
> are probably somewhere in the middle of that log (or maybe that binlog
> is already inside your backup), since you dumped with --master-data=2.

Either I am misunderstanding something very basic, or you
misunderstood my scenario, so let's clarify:

If I run a mysqldump with the --flush-logs option, and I see that
mysql did indeed start a new binlog at the moment I ran mysqldump,
am I not guaranteed that the new binlog starts *after* the dump
and that I should replay from the beginning of that log?

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
   This may seem a bit weird, but that's okay, because it is weird.
-- Larry Wall in perlref(1) man page, Perl 5.001




problem restoring from binary log

2007-04-25 Thread Ofer Inbar
I'm trying to set up MySQL backup & restore using mysqldump and binary
logs, so we can run mysqldump infrequently and use binary logs for the
daily backups (because mysqldump makes the production database unusable
for too long a time to run every night).

 - I can make full dumps using mysqldump
 - Binary logging works
 - I can restore full backups from mysqldump
 - Restoring binary logs gives me Duplicate entry errors


Redhat Enterprise Linux 4, MySQL 5.0.24 installed from the Redhat RPMs.

(since our production server runs 5.0.24, I'm playing with that version
 on my test setup, so that I can be sure whatever procedure I come up
 with will definitely work on the production server.)

I can repeat the problem with this procedure on the test db:
 - Import a full mysqldump file from the production db
 - flush logs
 - run a full mysqldump with --flush-logs --master-data=2
 - do a bunch of stuff that writes data
 - drop the database, and recreate it, flushing the log
 - make a copy of the binary log made since the mysqldump
 - import the mysqldump I made locally
  = now I have all data correctly restored up to that point

Now, I attempt to restore the binary log and this happens:

  mysqlbinlog [binlogfile] | mysql -u root -p[db root password] [db name]
  ERROR 1062 (23000) at line 21: Duplicate entry '16021330' for key 1


I've googled around for similar errors, and found several conflicting
reports about bugs related to inserting 0's into auto_increment
columns.  The insert statement at line 21 in the example above is
indeed inserting into a table that has an auto_increment column, but
it is not inserting a value into that column at all, nor are any
earlier statements in the binary log.  I've checked the data in the
database at the time of the restore and there are no rows in the table
where the value of that column is 0 or NULL.

I believe what I'm trying to do is a pretty standard way to set up
backup and restore for a production mysql database, so it should work.
Any ideas?

  --  Cos (Ofer Inbar)  --  [EMAIL PROTECTED]
  It's been said that if a sysadmin does his job perfectly, he's the
  fellow that people wonder what he does and why the company needs him,
  until he goes on vacation.-- comp.unix.admin FAQ
