Re: when should I start master's binlog?

2007-07-17 Thread Jen mlists

Hello Baron,

2007/7/17, Baron Schwartz [EMAIL PROTECTED]:

Are you sure the master and slave had the same data at the time you started the
slave replicating from the master?  It looks to me like the slave had different
data and you need to re-initialize the slave.


I'm not so sure. The steps I followed were:

in master:
flush tables with read lock;
show master status;
shutdown master mysqld.
tar master db files.
start master mysqld.

in slave:
shutdown slave mysqld.
untar master db files into right directory.
start slave mysqld.
change master to ...
start slave.
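
(For illustration, the "change master to ..." step expanded; the host, user, password and log coordinates below are placeholders for the values SHOW MASTER STATUS reported:)

  CHANGE MASTER TO
      MASTER_HOST='master.example.com',     -- placeholder
      MASTER_USER='repl',                   -- placeholder
      MASTER_PASSWORD='...',
      MASTER_LOG_FILE='master-bin.000123',  -- File from SHOW MASTER STATUS
      MASTER_LOG_POS=98;                    -- Position from SHOW MASTER STATUS
  START SLAVE;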



You may be able to use mysql-table-checksum and mysql-table-sync to synchronize
the slave with the master without completely re-snapshotting it from the master,
if that is an issue.  (http://mysqltoolkit.sourceforge.net)



Thank you for the good info. I will go and try it.




replication error with system's mysql db

2007-07-17 Thread Jen mlists

Hello members,

When I set up MySQL replication, I didn't replicate the system's mysql database,
but someone updated the mysql db on the master, and then I got the errors below on the slave.
How can I resolve this? Do I always need to replicate the mysql db? Thanks.

ERROR: 1146  Table 'mysql.mobilearea' doesn't exist
070717 15:23:46  Slave: Error 'Table 'mysql.mobilearea' doesn't exist'
on query. Default database: ''. Query: 'DELETE FROM
`mysql`.`mobilearea`', Error_code: 1146
070717 15:23:46  Error running query, slave SQL thread aborted. Fix
the problem, and restart the slave SQL thread with SLAVE START. We
stopped at log 'DB3-bin.185' position 39946




consistent backup of MyISAM and InnoDB tables

2007-07-17 Thread spikerlion
Hello,

we have a MySQL DBMS with a lot of databases. Most of them are using MyISAM 
tables but three databases use InnoDB and MyISAM tables.

What is the best method to get a consistent ONLINE backup of both table types?

Thanks,
Spiker




Re: consistent backup of MyISAM and InnoDB tables

2007-07-17 Thread Olexandr Melnyk

Check out this thread:
http://www.sitepoint.com/forums/showpost.php?p=3357628&postcount=2
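
For example, one common approach is a locked mysqldump (a sketch, not a drop-in command; with MyISAM tables in the mix they are read-locked for the duration, so "online" only means the server stays up):

  # consistent dump of both table types, at the cost of a global read lock
  mysqldump --all-databases --lock-all-tables > full_backup.sql

  # for purely InnoDB schemas, --single-transaction gives a consistent
  # snapshot without blocking writers
  mysqldump --single-transaction innodb_only_db > innodb_backup.sql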

2007/7/17, [EMAIL PROTECTED] [EMAIL PROTECTED]:


Hello,

we have a MySQL DBMS with a lot of databases. Most of them are using
MyISAM tables but three databases use InnoDB and MyISAM tables.

What is the best method to get a consistent ONLINE backup of both table
types?

Thanks,
Spiker






--
Sincerely yours,
Olexandr Melnyk
http://omelnyk.net/


Re: speeding imports

2007-07-17 Thread Mogens Melander

On Tue, July 17, 2007 04:29, Baron Schwartz wrote:
 B. Keith Murphy wrote:
 Hey all. I have a bit of a problem. We are upgrading from ver 4.1.x to
 5.0.45. That's not the problem :). The problem is that MySQL recommends
 dumping and importing your data when doing this upgrade to make sure
 that
 things go as smoothly as possible. We have quite a bit of data spread
 over
 multiple servers. We have slaves off each master server and the plan is to
 upgrade the slaves and then make them the masters.

 The problem is that I am realizing that this dump/import is going to
 take
 hours and in some cases days. I am looking for any way to speed this up.
 Any
 suggestions?

 The fastest way I've found is to do SELECT INTO OUTFILE on the master,
 which
 selects into a sort of tab-delimited format by default -- don't specify
 any
 options like field terminators or whatnot.  This file can then be imported
 directly into LOAD DATA INFILE, again without options.

 I think this is faster than loading files full of SQL statements, which
 have to
 be parsed and query-planned etc.

 I thought mysqldump had an option to dump this way, but I can't see it
 now.

I think you are looking for the --single-transaction option :)


 I'd use mysqldump to just dump the structures, routines etc without data,
 and
 then load the data separately.

 Beware: replication from 4.1.x to 5.0.40 will fail.  The MySQL changelog
 didn't
 list it as an incompatible change, but there is some bug that got fixed
 around
 5.0.38 (sorry, I forget what it was and can't find it now).  When the
 slave
 checks the master version, it throws an error and stops saying I refuse
 because
 of this bug.


 Baron

-- 
Later

Mogens Melander
+45 40 85 71 38
+66 870 133 224








left join, right join failure with mysql4

2007-07-17 Thread Olav Mørkrid

hi

i am experimenting with sql for getting lists of friends.

select friend.*
from user
  left join link on user.id=link.id and user.id = $MYID
  right join user friend on link.friend_id = friend.id
where user.name is null;

on my local windows machine running mysql 5 it works fine, however
when i try to import the tables to a unix machine running mysql
4, the select fails.

- if i do user.name is null, i get an empty set.

- if i do user.name is not null, i get 10000 rows, where each of the
100 rows from the user table is repeated 100 times.

(the error might not even be related to mysql 4/5, but i am at a loss here)

does anyone know how to resolve this?




RE: left join, right join failure with mysql4

2007-07-17 Thread Edward Kay
 hi

 i am experimenting with sql for getting lists of friends.

 select friend.*
 from user
left join link on user.id=link.id and user.id = $MYID
right join user friend on link.friend_id = friend.id
 where user.name is null;

 on my local windows machine running mysql 5 it works fine, however
 when i try to import the tables to a unix machine running mysql
 4, the select fails.

 - if i do user.name is null, i get an empty set.

 - if i do user.name is not null, i get 10000 rows, where each of the
 100 rows from the user table is repeated 100 times.

 (the error might not even be related to mysql 4/5, but i am at a
 loss here)

 does anyone know how to resolve this?


It won't help that you've got two tables listed after the right join
statement. I think

  right join user friend on ... should be
  right join friend on ...

Edward





Re: speeding imports

2007-07-17 Thread Andrew Hutchings
In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Baron Schwartz) wrote:

  B. Keith Murphy wrote:

   The problem is that I am realizing that this dump/import is going
 to take hours and in some cases days. I am looking for any way to
 speed this up. Any suggestions?

  The fastest way I've found is to do SELECT INTO OUTFILE on the
 master, which selects into a sort of tab-delimited format by default
 -- don't specify any options like field terminators or whatnot.  This
 file can then be imported directly into LOAD DATA INFILE, again
 without options.
 

You may also see a speed increase by adding the indexes after the data
is inserted.
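
For MyISAM tables, a sketch of that idea (the table and file names here are made up):

  ALTER TABLE big_table DISABLE KEYS;    -- defer non-unique index maintenance (MyISAM)
  LOAD DATA INFILE '/tmp/big_table.txt' INTO TABLE big_table;
  ALTER TABLE big_table ENABLE KEYS;     -- rebuild the indexes in one pass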


-- 

Andrew Hutchings - LinuxJedi - http://www.linuxjedi.co.uk/
Windows is the path to the darkside...Windows leads to Blue Screen. Blue Screen 
leads to downtime. Downtime leads to suffering...I sense much Windows in you...





Re: speeding imports

2007-07-17 Thread Baron Schwartz

Mogens Melander wrote:

On Tue, July 17, 2007 04:29, Baron Schwartz wrote:

B. Keith Murphy wrote:

Hey all. I have a bit of a problem. We are upgrading from ver 4.1.x to
5.0.45. That's not the problem :). The problem is that MySQL recommends
dumping and importing your data when doing this upgrade to make sure
that
things go as smoothly as possible. We have quite a bit of data spread
over
multiple servers. We have slaves off each master server and the plan is to
upgrade the slaves and then make them the masters.

The problem is that I am realizing that this dump/import is going to
take
hours and in some cases days. I am looking for any way to speed this up.
Any
suggestions?

The fastest way I've found is to do SELECT INTO OUTFILE on the master,
which
selects into a sort of tab-delimited format by default -- don't specify
any
options like field terminators or whatnot.  This file can then be imported
directly into LOAD DATA INFILE, again without options.

I think this is faster than loading files full of SQL statements, which
have to
be parsed and query-planned etc.

I thought mysqldump had an option to dump this way, but I can't see it
now.


I think you are looking for the --single-transaction option :)


I found the option I meant:

  -T, --tab=name  Creates tab separated textfile for each table to given
  path. (creates .sql and .txt files). NOTE: This only
  works if mysqldump is run on the same machine as the
  mysqld daemon.
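
So the whole round trip would look roughly like this (paths and names are placeholders; note the data files are written by mysqld itself, hence the same-machine restriction):

  # on the source server: one .sql (schema) and one .txt (tab-delimited data) file per table
  mysqldump --tab=/var/tmp/dump mydb

  # on the target server: recreate each table, then bulk-load its data
  mysql mydb < /var/tmp/dump/mytable.sql
  mysqlimport mydb /var/tmp/dump/mytable.txt   # thin wrapper around LOAD DATA INFILE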

Baron




Re: SQL LINKING TABLE Command

2007-07-17 Thread Baron Schwartz

Hi,

Thufir wrote:

SELECT product_name, customer.name, date_of_sale
FROM `sales` , product, customer
WHERE product.product_id = sales.product_id
and customer.customer_id = sales.customer_id LIMIT 0, 30

The above SQL command links three tables and displays the required
result. The tables are linked by their ID fields.

http://www.plus2net.com/sql_tutorial/sql_linking_table.php


how is this different/better than a many-to-many, such as
http://www.plus2net.com/sql_tutorial/sql_inner_join.php?  Isn't that
a better way of doing the same thing?  Or, not?


I agree with the other respondent: they are the same thing.  The comma-join syntax in 
your first example is an old-style join, which used to be the only way to join 
tables.  The join using the JOIN keyword is equivalent, and I call it ANSI-style or 
new-style or just the right way.


I think whoever wrote the tutorial is using the words "linking table" a little
carelessly.  Many RDBMS products have a notion of "linked tables" that is *totally*
different -- it has to do with accessing a table on one server from another server.
While it's true a JOIN, comma-style or not, does link data in one table to another
table, it is NOT a "SQL LINKING TABLE" command as the tutorial implies.  It is a
JOIN, and there is no such command as far as I know.
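
For example, the query from the tutorial is exactly equivalent to this ANSI-style version:

  SELECT product_name, customer.name, date_of_sale
  FROM sales
    INNER JOIN product ON product.product_id = sales.product_id
    INNER JOIN customer ON customer.customer_id = sales.customer_id
  LIMIT 0, 30;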


Baron




Re: replication error with system's mysql db

2007-07-17 Thread Baron Schwartz

Hi Jen,

Jen mlists wrote:

Hello members,

When I set up MySQL replication, I didn't replicate the system's mysql database,
but someone updated the mysql db on the master, and then I got the errors below on the slave.
How can I resolve this? Do I always need to replicate the mysql db? Thanks.

ERROR: 1146  Table 'mysql.mobilearea' doesn't exist
070717 15:23:46  Slave: Error 'Table 'mysql.mobilearea' doesn't exist'
on query. Default database: ''. Query: 'DELETE FROM
`mysql`.`mobilearea`', Error_code: 1146
070717 15:23:46  Error running query, slave SQL thread aborted. Fix
the problem, and restart the slave SQL thread with SLAVE START. We
stopped at log 'DB3-bin.185' position 39946


You can avoid replicating it, but the rules are complex.  You should be careful to read 
this section of the manual:


http://dev.mysql.com/doc/refman/5.0/en/replication-rules.html

Some of the replication options don't do what you think they will from their 
names.
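
For example, a slave-side my.cnf sketch (an illustration only; because the DELETE above was issued with no default database selected, a table-level wildcard rule is the one that reliably matches under statement-based replication):

  [mysqld]
  # filter on the table name itself rather than on the default database
  replicate-wild-ignore-table = mysql.%
  # replicate-ignore-db = mysql alone would not catch that DELETE, since
  # the ignore-db rule is checked against the default database in use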

Cheers
Baron




Re: left join, right join failure with mysql4

2007-07-17 Thread Olav Mørkrid

some more info on this:

putting user AS friend improves the query, but does not eliminate the problem.
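
i.e. the same query as before, just with the explicit alias:

  select friend.*
  from user
    left join link on user.id = link.id and user.id = $MYID
    right join user as friend on link.friend_id = friend.id
  where user.name is null;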

also, here are the version numbers for each sql server:

mysql  Ver 12.22 Distrib 4.0.22, for portbld-freebsd4.9 (i386)
mysql.exe  Ver 14.12 Distrib 5.0.24, for Win32 (ia32)

the query even runs fine on a related bsd machine with mysql 5, so the
problem is almost certainly mysql 4.

hope someone can help!




why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
MySQL 5.0.27 running on Redhat EL4.

In /etc/my.cnf I have:
  tmp_table_size=64M

mysql> SHOW GLOBAL VARIABLES LIKE '%tmp%';
+-------------------+------------------+
| Variable_name     | Value            |
+-------------------+------------------+
| max_tmp_tables    | 32               |
| slave_load_tmpdir | /data/mysql/tmp/ |
| tmp_table_size    | 67108864         |
| tmpdir            | /data/mysql/tmp/ |
+-------------------+------------------+

... and yet, I frequently see tmp tables on disk much smaller than 64M:

# ls -alF /data/mysql/tmp/
total 1552
drwxr-xr-x  2 mysql mysql    4096 Jul 17 14:16 ./
drwxrwxr-x  4 mysql mysql    4096 Jul 17 13:12 ../
-rw-rw----  1 mysql mysql 1572864 Jul 17 14:16 #sql_3b9e_0.MYD
-rw-rw----  1 mysql mysql    1024 Jul 17 14:16 #sql_3b9e_0.MYI
#

Although I never see more than one tmp table in that directory at a
time (they go so quickly, usually I don't see any), I have a program
that tracks the rate of change of some variables from mysqladmin
extended, and my MySQL servers are consistently creating 3-10 or more
disk tmp tables per second.  When I do see a table in tmpdir, though,
it's almost always smaller than 5M (I've been running a job to check
every few seconds, and the largest one I've seen so far was ~12M).

Why are so many small tmp tables being created on disk, not memory?
How can I tell MySQL to use memory for these?
  -- Cos




Re: why are tmp tables being created on disk?

2007-07-17 Thread mos

At 12:29 PM 7/17/2007, you wrote:

MySQL 5.0.27 running on Redhat EL4.

In /etc/my.cnf I have:
  tmp_table_size=64M

mysql> SHOW GLOBAL VARIABLES LIKE '%tmp%';
+-------------------+------------------+
| Variable_name     | Value            |
+-------------------+------------------+
| max_tmp_tables    | 32               |
| slave_load_tmpdir | /data/mysql/tmp/ |
| tmp_table_size    | 67108864         |
| tmpdir            | /data/mysql/tmp/ |
+-------------------+------------------+

... and yet, I frequently see tmp tables on disk much smaller than 64M:

# ls -alF /data/mysql/tmp/
total 1552
drwxr-xr-x  2 mysql mysql    4096 Jul 17 14:16 ./
drwxrwxr-x  4 mysql mysql    4096 Jul 17 13:12 ../
-rw-rw----  1 mysql mysql 1572864 Jul 17 14:16 #sql_3b9e_0.MYD
-rw-rw----  1 mysql mysql    1024 Jul 17 14:16 #sql_3b9e_0.MYI
#

Although I never see more than one tmp table in that directory at a
time (they go so quickly, usually I don't see any), I have a program
that tracks the rate of change of some variables from mysqladmin
extended, and my MySQL servers are consistently creating 3-10 or more
disk tmp tables per second.  When I do see a table in tmpdir, though,
it's almost always smaller than 5M (I've been running a job to check
every few seconds, and the largest one I've seen so far was ~12M).

Why are so many small tmp tables being created on disk, not memory?
How can I tell MySQL to use memory for these?
  -- Cos


Cos,
   I'd guess these temporary files are the result of Select statements 
with an Order By clause that requires a FileSort. You can do a Show 
ProcessList and this should tell you if the active queries are using 
FileSort. As far as getting it to sort in memory? Your guess is as good 
as mine. Even though I have an index that matches the sort order, MySQL 
insists on sorting it using FileSort. I suppose you could set up a Ram disk 
and have that as your first MySQL temp directory.
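
One more thing worth checking: in 5.0 the effective in-memory limit is the smaller of tmp_table_size and max_heap_table_size (which defaults to 16M), and any temporary table containing TEXT or BLOB columns goes to disk regardless of size. For example:

  SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
  -- raise it to match tmp_table_size (64M here is just an example value)
  SET GLOBAL max_heap_table_size = 64*1024*1024;
  -- then watch the ratio of on-disk to total temporary tables
  SHOW GLOBAL STATUS LIKE 'Created_tmp%';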


Mike 





Re: why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
mos [EMAIL PROTECTED] wrote:
 Why are so many small tmp tables being created on disk, not memory?
 How can I tell MySQL to use memory for these?

I'd guess these temporary files are the result of Select statements 
 with an Order By clause that requires a FileSort. You can do a Show 
 ProcessList and this should tell you if the active queries are using 
 FileSort. As far as getting it to sort in memory? Your guess is as good 
 as mine. Even though I have an index that matches the sort order, MySQL 
 insists on sorting it using FileSort. I suppose you could set up a Ram disk 
 and have that as your first MySQL temp directory.

I thought of putting tmpdir on a tmpfs mount, worried that there might
occasionally be a need for a very large tmp file that would exceed the
limit (or, if I don't set a limit, use up all memory and force lots of
swapping).  When you say "first MySQL temp directory", are you implying
I can have more than one?  I don't see anything in the documentation
that suggests that...
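
What I had in mind for the tmpfs idea is roughly this (the size is made up):

  # bound the ram disk so a runaway temp file can't eat all memory
  mount -t tmpfs -o size=2g tmpfs /data/mysql/tmp
  chown mysql:mysql /data/mysql/tmp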

BTW, here's another oddity I noticed - here's typical output from
iostat 60:

| avg-cpu:  %user   %nice    %sys %iowait   %idle
|            7.35    0.00    3.59    0.94   88.12
|
| Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
| sda              29.20         8.00       342.08        480      20528
| sda1              0.00         0.00         0.00          0          0
| sda2              0.00         0.00         0.00          0          0
| sda3              0.00         0.00         0.00          0          0
| sda4              0.00         0.00         0.00          0          0
| sda5             43.74         8.00       342.08        480      20528
| sdb               2.43         0.00       173.70          0      10424
| sdb1             21.71         0.00       173.70          0      10424

I've been running this for a few hours and it consistently shows lots
of writes but no reads at all on sdb1, the partition where I have my
binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
reading them?  Or, how else can I interpret this?

  -- Cos




64 bit build for Windows?

2007-07-17 Thread James Eaton
Is there a 64 bit build of MySQL 5.0 available for Windows Server 2003 64 
bit edition? 






Re: 64 bit build for Windows?

2007-07-17 Thread James Eaton

Doh... found it.

Why are the 'without installer' versions significantly larger than those
with the installer?  I would have expected it to be the other way around.




- Original Message - 
From: James Eaton [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Tuesday, July 17, 2007 3:24 PM
Subject: 64 bit build for Windows?


Is there a 64 bit build of MySQL 5.0 available for Windows Server 2003 
64 bit edition? 






Re: 64 bit build for Windows?

2007-07-17 Thread Jimmy Guerrero

Hello,

http://dev.mysql.com/downloads/mysql/5.0.html#winx64

Let me know if that helps.

-- Jimmy

James Eaton wrote:
Is there a 64 bit build of MySQL 5.0 available for Windows Server 2003 
64 bit edition?







quickly copying a database

2007-07-17 Thread Ofer Inbar
I've got a server with a database that's about 10G.  I need several
other copies of this database, with different names, on the same host
and same MySQL instance.

I could mysqldump the db and then restore it into the others...

mysql> create database one;
mysql> create database two;
 ...

mysqldump ... > dumpfile.sql
mysql -uroot -p one < dumpfile.sql
mysql -uroot -p two < dumpfile.sql
 ...

Unfortunately, each restore from a mysqldump takes about an hour (and
if I do more than one at a time, they'd slow down considerably due to
disk I/O contention).

If these DBs were all MyISAM, I could shut down MySQL and just copy
the directories.  But it seems that InnoDB tables are stored partly
in /var/lib/mysql/ibdata1 and this database has a mix of MyISAM and
InnoDB.
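
(That MyISAM-only shortcut would be roughly the following; the service name, paths and db names are placeholders:)

  # works for MyISAM tables only; the InnoDB data lives in the shared ibdata1 tablespace
  /etc/init.d/mysqld stop
  cp -a /var/lib/mysql/sourcedb /var/lib/mysql/one
  /etc/init.d/mysqld start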

Is there a better technique to make several database copies quickly,
that works for a mix of MyISAM and InnoDB?
  -- Cos




Re: why are tmp tables being created on disk?

2007-07-17 Thread Mathieu Bruneau
Ofer Inbar wrote:
 mos [EMAIL PROTECTED] wrote:
 Why are so many small tmp tables being created on disk, not memory?
 How can I tell MySQL to use memory for these?
 
I'd guess these temporary files are the result of Select statements 
 with an Order By clause that requires a FileSort. You can do a Show 
 ProcessList and this should tell you if the active queries are using 
 FileSort. As far as getting it to sort in memory? Your guess is as good 
 as mine. Even though I have an index that matches the sort order, MySQL 
 insists on sorting it using FileSort. I suppose you could set up a Ram disk 
 and have that as your first MySQL temp directory.
 
 I thought of putting tmpdir on a tmpfs mount, worried that there might
 occasionally be a need for a very large tmp file that would exceed the
 limit (or, if I don't set a limit, use up all memory and force lots of
 swapping).  When you say first MySQL temp directory are you implying
 I can have more than one?  I don't see anything in the documentation
 that suggests that...
 
 BTW, here's another oddity I noticed - here's typical output from
 iostat 60:
 
 | avg-cpu:  %user   %nice    %sys %iowait   %idle
 |            7.35    0.00    3.59    0.94   88.12
 |
 | Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
 | sda              29.20         8.00       342.08        480      20528
 | sda1              0.00         0.00         0.00          0          0
 | sda2              0.00         0.00         0.00          0          0
 | sda3              0.00         0.00         0.00          0          0
 | sda4              0.00         0.00         0.00          0          0
 | sda5             43.74         8.00       342.08        480      20528
 | sdb               2.43         0.00       173.70          0      10424
 | sdb1             21.71         0.00       173.70          0      10424
 
 I've been running this for a few hours and it consistently shows lots
 of writes but no reads at all on sdb1, the partition where I have my
 binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
 reading them?  Or, how else can I interpret this?
 
   -- Cos
 

The binlogs are most probably creating most of your constant writes. If
you have no slaves attached, you're not reading them at all...

-- 
Mathieu Bruneau
aka ROunofF

===
GPG keys available @ http://rounoff.darktech.org




tangent: confusing iostat readings (was Re: why are tmp tables being created on disk?)

2007-07-17 Thread Ofer Inbar
Mathieu Bruneau [EMAIL PROTECTED] wrote:
  BTW, here's another oddity I noticed - here's typical output from
  iostat 60:
  
  | avg-cpu:  %user   %nice    %sys %iowait   %idle
  |            7.35    0.00    3.59    0.94   88.12
  |
  | Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
  | sda              29.20         8.00       342.08        480      20528
  | sda1              0.00         0.00         0.00          0          0
  | sda2              0.00         0.00         0.00          0          0
  | sda3              0.00         0.00         0.00          0          0
  | sda4              0.00         0.00         0.00          0          0
  | sda5             43.74         8.00       342.08        480      20528
  | sdb               2.43         0.00       173.70          0      10424
  | sdb1             21.71         0.00       173.70          0      10424
  
  I've been running this for a few hours and it consistently shows lots
  of writes but no reads at all on sdb1, the partition where I have my
  binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
  reading them?  Or, how else can I interpret this?
 
 The binlogs are most probably creating most of your constant writes. If
 you have no slaves attached, you're not reading them at all...

Yes and no.  In fact, this iostat output comes from a slave, where
there's hardly any binlog activity (but a fair amount of relay log
activity).  However, I noticed the same thing on the master.  Before
tmpdir pointed to a directory on sdb1, there was a much lower, and
fairly constant, level of writes to that partition, which did not
surprise me.  After I pointed tmpdir to sdb1, the amount of write
activity grew substantially and started varying much more (from as low
as ~50 to as high as ~400), but I still see no reading recorded.
  -- Cos
