Re: tmp tables

2010-01-15 Thread Victor Subervi
On Thu, Jan 14, 2010 at 1:35 AM, Chris W 4rfv...@cox.net wrote:

 I think the reason the other poster was so harsh is because others have
 suggested the right way to do it, if not in a lot of detail, and you have
 just argued with them.


I don't recall anyone doing that. I don't recall arguing. I disagree. And I
don't want anyone to create an argument over this comment!


 Since I'm not in the mood to work on my project I will give you an overview
 of my shopping cart process and data structure.


I have copied over your excellent example and will review it for ideas.
Thanks!
V


Re: tmp tables

2010-01-13 Thread Victor Subervi
On Mon, Jan 11, 2010 at 3:21 PM, mos mo...@fastmail.fm wrote:

 At 09:56 AM 1/11/2010, Johnny Withers wrote:

 Victor,

 The temporary table solution is not a good one. Use a single table and
 store
 each item put into a cart identified by the session ID of the user. A
 process should clean out this table periodically since there are
 shoppers
 that abandon carts from time to time.

 The design of this table could be quite simple:

 id - int auto_inc
 session_id - varchar
 added_dt - datetime
 item_id - int
 quantity - int

 You could get more complicated depending on your needs and item requirements
 (colors, sizes, etc.).

 -JW


 I also agree with JW, a single table is better. Don't forget shoppers may
 spend 30 minutes or more filling up their shopping cart and may lose the
 connection to the site or have to exit without completing the order. You
 need to retain the registered user's shopping cart so he can go back to it
 later. Only delete it if there is no activity on it for 7-10 days.  If you
 use temporary tables and throw out the shopping cart contents if the session
 is lost, then you will *iss off a lot of customers.


I don't see how a single table is a solution. Tables are not 3D data
structures. If they were, you'd be right! I need 2D to describe the
customer's data: cols (ID, quantity, options, etc) X products. How can I add
a third axis of custID??? No! Not possible.

I am using cookies and committing data to the temp tables every time the
customer puts something in the shopping cart.

Can someone please answer my question as to which is better, putting the
temp tables in a separate database or together with the 'workhorse' tables?
TIA,
V


Re: tmp tables

2010-01-13 Thread Johnny Withers
No one designs a shopping cart system this way.

http://www.google.com/search?q=shopping+cart+database+table+design

If you are dead set on this crazy design it doesn't matter if you put the
temp tables in the main database or another database. It'll be just as silly
either way.

JW




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: tmp tables

2010-01-13 Thread Victor Subervi
On Wed, Jan 13, 2010 at 2:33 PM, Johnny Withers joh...@pixelated.netwrote:

 No one designs a shopping cart system this way.

 http://www.google.com/search?q=shopping+cart+database+table+design

 If you are dead set on this crazy design it doesn't matter if you put the
 temp tables in the main database or another database. It'll be just as silly
 either way.


You're so complimentary! Wouldn't it have been better to simply suggest FKs?
Or maybe point out, as you did, how others have done it? Why be so harsh
when I simply didn't know how it's done? Thank you anyway since you showed
me how to do this.
V


Re: tmp tables

2010-01-13 Thread Chris W

Victor Subervi wrote:


You're so complimentary! Wouldn't it have been better to simply suggest FKs?

  



I think the reason the other poster was so harsh is because others have 
suggested the right way to do it, if not in a lot of detail, and you 
have just argued with them. 

Since I'm not in the mood to work on my project I will give you an 
overview of my shopping cart process and data structure.


I have the following tables. 
Order

Payment
Cart
User
UserAddress
StuffToBuy

In the order table I have the following fields.
OrderID -- unique auto inc key
Status -- int to indicate if the order is in progress, processed, 
shipped, etc.  (use whatever set of statuses you need)
UserID -- links to the user placing the order; can be set to 0 if 
the shopper doesn't want an account on this site.
OpenDate -- date and time the order was started (comes in handy to see 
what is going on later down the road)

CloseDate -- date and time the order is finished.
TimeStamp -- auto update timestamp every time the order is modified.
(that way you can delete orders that are older than a certain age if you 
like)
Any other date/time fields to track when various status points are 
reached, depending on your needs.


Payment Table.  This table exists separately from the Order table for 2 
reasons.  1: I like to keep it separate.  2: if you provide the option 
to pay over time for something like a service, this structure 
allows you to have multiple payment records for each order.  Fields include:

PaymentID --   auto inc key
OrderID -- link to the order.
Status -- same as above but different uses.
Amount -- Amount of the payment.
SubmitDate -- Date and time order was submitted
ProcessedDate -- Date and time payment was processed (may be the same as 
submitted if done in real time; you decide if you need it)
Other payment fields as required: CC number, person's name, address, etc.  
This way, if the user making the order doesn't have an account on the 
system, you can store all that information here.

TimeStamp --- just for good measure.

Cart table stores what people are buying or have bought depending on if 
the associated order is closed.

CartItemID -- auto inc key
OrderID -- link to order.  Before you can add an item to the cart you 
need to create an order record first.

StuffID -- link to item they are buying.
Qty -- quantity they want to buy.
Price -- current price of the item (save it here because it can change over 
time and you will want to know what they paid for it when the order was 
placed)
Discount -- if you are giving some discount save that here or just leave 
it as 0.
Cost -- What they will actually pay for the qty of StuffID they want.   
(with Price and Discount you can see why the cost is what it is.)

DateAdded -- Date time they added the item.

You can calculate Cost in a query if you like, but storing it here makes 
queries easier.  I prefer to store an ID number for the discount and look 
up the amount in a discount table.  Since the discount amount in that 
record could change over time (a bad idea to me, but it could happen), 
having the final cost stored is handy.  You may want to structure it 
differently depending on the way discounts work, especially if there are 
quantity discounts.


User Table
UserID -- auto inc key
UserName -- put a unique key on this.  (Using a separate UserID auto inc 
key lets you change the user name if you want, which will happen.  I let 
users change their UserName on all of my web sites; I don't know why 
everyone doesn't do this.)

FirstName
LastName
Email
Password
blah blah blah whatever you want to store about the user.

UserAddress -- this table stores various shipping and billing information 
about your users so you can auto-fill the payment table at the end of 
the order.

UserAddressID -- auto inc key
UserID -- link to user.
AddressType -- e.g. shipping or billing
AddressName -- so they can say if the shipping address is, say, for home, 
work, or Aunt Jane's house.

AddressLine1 --
AddressLine2 --
City
State
Zip
whatever else you need like phone number etc.

Stuff To Buy table.  This last table is very simplified and in most 
cases would require other tables.

StuffID -- auto inc key
Name
Description
Price
QtyInStock


So now in case it's not obvious we are using foreign keys to link tables 
in the following manner.

Order to Cart -- One to Many
Order to Payment   -- One to Many or One to One depending on what you need.
User to Order -- One to Many
StuffToBuy to Cart One to Many
User to UserAddress One to Many

Just in case:  Order to Cart -- One to Many means that one Order 
record can link to many Cart records; this link is created by storing 
the OrderID (foreign key) in the Cart table.  Though it's not needed in 
the system described above, a Many to Many link requires a linking 
table that contains the two keys relating to the two tables being 
linked.
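
For example, the line items of one order, together with their product 
names, come back from a single join over those foreign keys (the OrderID 
value here is hypothetical):

```sql
SELECT c.CartItemID, s.Name, c.Qty, c.Price, c.Discount, c.Cost
FROM Cart c
JOIN StuffToBuy s ON s.StuffID = c.StuffID   -- FK link: Cart -> StuffToBuy
WHERE c.OrderID = 42;                        -- FK link: Order -> Cart
```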


That's it for me tonight, everyone feel free to 

Re: tmp tables

2010-01-11 Thread Baron Schwartz
Victor,

On Sun, Jan 10, 2010 at 1:20 PM, Victor Subervi victorsube...@gmail.com wrote:
 Hi;
 I have a shopping cart that will spawn a tmp table for every shopping cart
 instance. Would it be better to create a separate database for these instead
 of having them in the same database as all the other tables for the shopping
 cart?

It will not matter at all.  But it would be better to choose a
different design.  Instead of adding a table per cart, just create a
table and add a row(s) to it for every cart.  This is what relational
databases were designed for :-)
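
To make that concrete, here is a minimal sketch (illustrative table and
column names, not from any real schema): every cart is just a set of rows
in one shared table, distinguished by a cart or session ID.

```sql
-- One shared table holds every cart; cart_id picks out one shopper's rows.
CREATE TABLE cart_items (
  cart_id  INT NOT NULL,
  item_id  INT NOT NULL,
  quantity INT NOT NULL
);

-- Two shoppers' carts coexist as ordinary rows:
INSERT INTO cart_items VALUES (1, 101, 2), (1, 205, 1), (2, 101, 5);

-- Fetch only cart 1; no per-cart table is ever created:
SELECT item_id, quantity FROM cart_items WHERE cart_id = 1;
```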

Regards
Baron

-- 
Baron Schwartz
Percona Inc: Services and Support for MySQL
http://www.percona.com/

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: tmp tables

2010-01-11 Thread Victor Subervi
On Mon, Jan 11, 2010 at 10:35 AM, Baron Schwartz ba...@xaprb.com wrote:

 Victor,

 On Sun, Jan 10, 2010 at 1:20 PM, Victor Subervi victorsube...@gmail.com
 wrote:
  Hi;
  I have a shopping cart that will spawn a tmp table for every shopping
 cart
  instance. Would it be better to create a separate database for these
 instead
  of having them in the same database as all the other tables for the
 shopping
  cart?

 It will not matter at all.  But it would be better to choose a
 different design.  Instead of adding a table per cart, just create a
 table and add a row(s) to it for every cart.  This is what relational
 databases were designed for :-)


That strikes me as messy. Each tmp table has as many rows as necessary for
the products that are to be bought. To do as you say I would have to create a
table with a zillion rows to accommodate however many products I *predict*
buyers would buy. Therefore, I guess I should probably create a new database
so as to not make a mess of the main database.
TIA,
V


Re: tmp tables

2010-01-11 Thread Baron Schwartz
Victor,

 That strikes me as messy. Each tmp table has as many rows as necessary for
 the products that are to be bought. To do as you say I would have to create a
 table with a zillion rows to accommodate however many products I *predict*
 buyers would buy. Therefore, I guess I should probably create a new database
 so as to not make a mess of the main database.

You fundamentally misunderstand relational database design.  I suggest
reading this book:
http://www.xaprb.com/blog/2009/08/22/a-review-of-beginning-database-design-by-clare-churcher/

Regards,
Baron

-- 
Baron Schwartz
Percona Inc: Services and Support for MySQL
http://www.percona.com/




Re: tmp tables

2010-01-11 Thread Victor Subervi
On Mon, Jan 11, 2010 at 10:49 AM, Baron Schwartz ba...@xaprb.com wrote:

 Victor,

  That strikes me as messy. Each tmp table has as many rows as necessary
 for
  the products that are to be bought. To do as you say I would have to
 create a
  table with a zillion rows to accommodate however many products I
 *predict*
  buyers would buy. Therefore, I guess I should probably create a new
 database
  so as to not make a mess of the main database.

 You fundamentally misunderstand relational database design.  I suggest
 reading this book:

 http://www.xaprb.com/blog/2009/08/22/a-review-of-beginning-database-design-by-clare-churcher/


LOL. Ok, I'll put it on my list. *In the meantime*, since I am reworking my
database design for the shopping cart I just finished building and need to
get this up __n_o_w__, what would your advice be?
V


 Regards,
 Baron

 --
 Baron Schwartz
 Percona Inc: Services and Support for MySQL
 http://www.percona.com/




-- 
The Logos has come to bear
http://logos.13gems.com/


Re: tmp tables

2010-01-11 Thread Keith Murphy
Victor,

Don't want to butt in, and not trying to be rude, but he gave you advice.
You don't seem inclined to take it. How else can he, or anyone else, help
you? Clearly you don't understand some fundamental issue about relational
databases. If you can't just accept his suggestion to put all carts in one
table as the way to do it then there really isn't anything else to say.

My 2 cents :)

keith

-- 
Chief Training Officer
Paragon Consulting Services
850-637-3877



Re: tmp tables

2010-01-11 Thread Victor Subervi
On Mon, Jan 11, 2010 at 11:38 AM, Keith Murphy bmur...@paragon-cs.comwrote:

 Victor,

 Don't want to butt in, and not trying to be rude, but he gave you advice.
 You don't seem inclined to take it. How else can he, or anyone else, help
 you? Clearly you don't understand some fundamental issue about relational
 databases. If you can't just accept his suggestion to put all carts in one
 table as the way to do it then there really isn't anything else to say.


I said I would read the book. I will read it. Not today, however. Today, I
will deal with this problem, because tomorrow I will be talking with the
client. First things first.
V


Re: tmp tables

2010-01-11 Thread Johnny Withers
Victor,

The temporary table solution is not a good one. Use a single table and store
each item put into a cart identified by the session ID of the user. A
process should clean out this table periodically since there are shoppers
that abandon carts from time to time.

The design of this table could be quite simple:

id - int auto_inc
session_id - varchar
added_dt - datetime
item_id - int
quantity - int

You could get more complicated depending on your needs and item requirements
(colors, sizes, etc.).
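
In MySQL DDL that might look like the following (column types and lengths
are illustrative guesses; adjust them to your data):

```sql
CREATE TABLE cart (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  session_id VARCHAR(64)  NOT NULL,   -- the user's session ID
  added_dt   DATETIME     NOT NULL,   -- when the item was added
  item_id    INT UNSIGNED NOT NULL,
  quantity   INT UNSIGNED NOT NULL DEFAULT 1,
  KEY (session_id)                    -- lookups are always by session
);

-- The periodic cleanup of abandoned carts could be as simple as:
DELETE FROM cart WHERE added_dt < NOW() - INTERVAL 7 DAY;
```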

-JW





-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: tmp tables

2010-01-11 Thread mos

At 09:56 AM 1/11/2010, Johnny Withers wrote:

Victor,

The temporary table solution is not a good one. Use a single table and store
each item put into a cart identified by the session ID of the user. A
process should clean out this table periodically since there are shoppers
that abandon carts from time to time.

The design of this table could be quite simple:

id - int auto_inc
session_id - varchar
added_dt - datetime
item_id - int
quantity - int

You could get more complicated depending on your needs and item requirements
(colors, sizes, etc.).

-JW


I also agree with JW, a single table is better. Don't forget shoppers may 
spend 30 minutes or more filling up their shopping cart and may lose the 
connection to the site or have to exit without completing the order. You 
need to retain the registered user's shopping cart so he can go back to it 
later. Only delete it if there is no activity on it for 7-10 days.  If you 
use temporary tables and throw out the shopping cart contents if the 
session is lost, then you will *iss off a lot of customers.


Mike








tmp tables

2010-01-10 Thread Victor Subervi
Hi;
I have a shopping cart that will spawn a tmp table for every shopping cart
instance. Would it be better to create a separate database for these instead
of having them in the same database as all the other tables for the shopping
cart?
TIA,
Victor

-- 
The Logos has come to bear
http://logos.13gems.com/


Re: why are tmp tables being created on disk?

2007-07-18 Thread Gerald L. Clark

Ofer Inbar wrote:

I've been running this for a few hours and it consistently shows lots
of writes but no reads at all on sdb1, the partition where I have my
binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
reading them?  Or, how else can I interpret this?

  -- Cos



Perhaps the data is still in cache, so no read is actually made from the 
disc.

--
Gerald L. Clark
Supplier Systems Corporation




why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
MySQL 5.0.27 running on Redhat EL4.

In /etc/my.cnf I have:
  tmp_table_size=64M

mysql> SHOW GLOBAL VARIABLES LIKE '%tmp%';
+-------------------+------------------+
| Variable_name     | Value            |
+-------------------+------------------+
| max_tmp_tables    | 32               |
| slave_load_tmpdir | /data/mysql/tmp/ |
| tmp_table_size    | 67108864         |
| tmpdir            | /data/mysql/tmp/ |
+-------------------+------------------+

... and yet, I frequently see tmp tables on disk much smaller than 64M:

#ls -alF /data/mysql/tmp/
total 1552
drwxr-xr-x  2 mysql mysql4096 Jul 17 14:16 ./
drwxrwxr-x  4 mysql mysql4096 Jul 17 13:12 ../
-rw-rw  1 mysql mysql 1572864 Jul 17 14:16 #sql_3b9e_0.MYD
-rw-rw  1 mysql mysql1024 Jul 17 14:16 #sql_3b9e_0.MYI
#

Although I never see more than one tmp table in that directory at a
time (they go so quickly, usually I don't see any), I have a program
that tracks the rate of change of some variables from mysqladmin
extended, and my MySQL servers are consistently creating 3-10 or more
disk tmp tables per second.  When I do see a table in tmpdir, though,
it's almost always smaller than 5M (I've been running a job to check
every few seconds, and the largest one I've seen so far was ~12M).

Why are so many small tmp tables being created on disk, not memory?
How can I tell MySQL to use memory for these?
  -- Cos




Re: why are tmp tables being created on disk?

2007-07-17 Thread mos

At 12:29 PM 7/17/2007, you wrote:

MySQL 5.0.27 running on Redhat EL4.

In /etc/my.cnf I have:
  tmp_table_size=64M

mysql> SHOW GLOBAL VARIABLES LIKE '%tmp%';
+-------------------+------------------+
| Variable_name     | Value            |
+-------------------+------------------+
| max_tmp_tables    | 32               |
| slave_load_tmpdir | /data/mysql/tmp/ |
| tmp_table_size    | 67108864         |
| tmpdir            | /data/mysql/tmp/ |
+-------------------+------------------+

... and yet, I frequently see tmp tables on disk much smaller than 64M:

#ls -alF /data/mysql/tmp/
total 1552
drwxr-xr-x  2 mysql mysql4096 Jul 17 14:16 ./
drwxrwxr-x  4 mysql mysql4096 Jul 17 13:12 ../
-rw-rw  1 mysql mysql 1572864 Jul 17 14:16 #sql_3b9e_0.MYD
-rw-rw  1 mysql mysql1024 Jul 17 14:16 #sql_3b9e_0.MYI
#

Although I never see more than one tmp table in that directory at a
time (they go so quickly, usually I don't see any), I have a program
that tracks the rate of change of some variables from mysqladmin
extended, and my MySQL servers are consistently creating 3-10 or more
disk tmp tables per second.  When I do see a table in tmpdir, though,
it's almost always smaller than 5M (I've been running a job to check
every few seconds, and the largest one I've seen so far was ~12M).

Why are so many small tmp tables being created on disk, not memory?
How can I tell MySQL to use memory for these?
  -- Cos


Cos,
   I'd guess these temporary files are the result of Select statements 
with an Order By clause that requires a FileSort. You can do a Show 
ProcessList and this should tell you if the active queries are using 
FileSort. As far as getting it to sort in memory? Your guess is as good 
as mine. Even though I have an index that matches the sort order, MySQL 
insists on sorting it using FileSort. I suppose you could set up a Ram disk 
and have that as your first MySQL temp directory.
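
It may also help to know that an implicit temporary table stays in memory
only if it fits within the *smaller* of tmp_table_size and
max_heap_table_size, and that a temporary table containing BLOB or TEXT
columns is created on disk regardless of size. You can watch the standard
status counters to see how often disk is actually used:

```sql
-- Cumulative counts since server start; compare disk tables vs. total:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- The in-memory cap is the smaller of these two variables:
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
```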


Mike 





Re: why are tmp tables being created on disk?

2007-07-17 Thread Ofer Inbar
mos [EMAIL PROTECTED] wrote:
 Why are so many small tmp tables being created on disk, not memory?
 How can I tell MySQL to use memory for these?

I'd guess these temporary files are the result of Select statements 
 with an Order By clause that requires a FileSort. You can do a Show 
 ProcessList and this should tell you if the active queries are using 
 FileSort. As far as getting it to sort in memory? Your guess is as good 
 as mine. Even though I have an index that matches the sort order, MySQL 
 insists on sorting it using FileSort. I suppose you could set up a Ram disk 
 and have that as your first MySQL temp directory.

I thought of putting tmpdir on a tmpfs mount, worried that there might
occasionally be a need for a very large tmp file that would exceed the
limit (or, if I don't set a limit, use up all memory and force lots of
swapping).  When you say "first MySQL temp directory" are you implying
I can have more than one?  I don't see anything in the documentation
that suggests that...

BTW, here's another oddity I noticed - here's typical output from
iostat 60:

| avg-cpu:  %user   %nice    %sys  %iowait   %idle
|            7.35    0.00    3.59     0.94   88.12
| 
| Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
| sda              29.20         8.00       342.08        480      20528
| sda1              0.00         0.00         0.00          0          0
| sda2              0.00         0.00         0.00          0          0
| sda3              0.00         0.00         0.00          0          0
| sda4              0.00         0.00         0.00          0          0
| sda5             43.74         8.00       342.08        480      20528
| sdb               2.43         0.00       173.70          0      10424
| sdb1             21.71         0.00       173.70          0      10424

I've been running this for a few hours and it consistently shows lots
of writes but no reads at all on sdb1, the partition where I have my
binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
reading them?  Or, how else can I interpret this?

  -- Cos




Re: why are tmp tables being created on disk?

2007-07-17 Thread Mathieu Bruneau
Ofer Inbar wrote:
 mos [EMAIL PROTECTED] wrote:
 Why are so many small tmp tables being created on disk, not memory?
 How can I tell MySQL to use memory for these?
 
I'd guess these temporary files are the result of Select statements 
 with an Order By clause that requires a FileSort. You can do a Show 
 ProcessList and this should tell you if the active queries are using 
 FileSort. As far as getting it to sort in memory? Your guess is as good 
 as mine. Even though I have an index that matches the sort order, MySQL 
 insists on sorting it using FileSort. I suppose you could set up a Ram disk 
 and have that as your first MySQL temp directory.
 
 I thought of putting tmpdir on a tmpfs mount, worried that there might
 occasionally be a need for a very large tmp file that would exceed the
 limit (or, if I don't set a limit, use up all memory and force lots of
 swapping).  When you say first MySQL temp directory are you implying
 I can have more than one?  I don't see anything in the documentation
 that suggests that...
 
 BTW, here's another oddity I noticed - here's typical output from
 iostat 60:
 
 | avg-cpu:  %user   %nice    %sys  %iowait   %idle
 |            7.35    0.00    3.59     0.94   88.12
 | 
 | Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
 | sda              29.20         8.00       342.08        480      20528
 | sda1              0.00         0.00         0.00          0          0
 | sda2              0.00         0.00         0.00          0          0
 | sda3              0.00         0.00         0.00          0          0
 | sda4              0.00         0.00         0.00          0          0
 | sda5             43.74         8.00       342.08        480      20528
 | sdb               2.43         0.00       173.70          0      10424
 | sdb1             21.71         0.00       173.70          0      10424
 
 I've been running this for a few hours and it consistently shows lots
 of writes but no reads at all on sdb1, the partition where I have my
 binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
 reading them?  Or, how else can I interpret this?
 
   -- Cos
 

The binlogs are most probably responsible for the constant writes. If
you have no slave attached, you're not reading them at all...

-- 
Mathieu Bruneau
aka ROunofF

===
GPG keys available @ http://rounoff.darktech.org




tangent: confusing iostat readings (was Re: why are tmp tables being created on disk?)

2007-07-17 Thread Ofer Inbar
Mathieu Bruneau [EMAIL PROTECTED] wrote:
  BTW, here's another oddity I noticed - here's typical output from
  iostat 60:
  
  | avg-cpu:  %user   %nice    %sys  %iowait   %idle
  |            7.35    0.00    3.59     0.94   88.12
  | 
  | Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
  | sda              29.20         8.00       342.08        480      20528
  | sda1              0.00         0.00         0.00          0          0
  | sda2              0.00         0.00         0.00          0          0
  | sda3              0.00         0.00         0.00          0          0
  | sda4              0.00         0.00         0.00          0          0
  | sda5             43.74         8.00       342.08        480      20528
  | sdb               2.43         0.00       173.70          0      10424
  | sdb1             21.71         0.00       173.70          0      10424
  
  I've been running this for a few hours and it consistently shows lots
  of writes but no reads at all on sdb1, the partition where I have my
  binary logs and tmpdir.  Is MySQL writing lots of tmp files and not
  reading them?  Or, how else can I interpret this?
 
 The binlog are creating most of your constant write most probably. If
 you have no slave attached, you're not reading them at all...

Yes and no.  In fact, this iostat output comes from a slave, where
there's hardly any binlog activity (but a fair amount of relay log
activity).  However, I noticed the same thing on the master.  Before
tmpdir pointed to a directory on sdb1, there was a much lower, and
fairly constant, level of writes to that partition, which did not
surprise me.  After I pointed tmpdir to sdb1, the amount of write
activity grew substantially and started varying much more (from as low
as ~50 to as high as ~400), but I still see no reading recorded.
  -- Cos




Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread foo bar
Hi Everyone,

I've been Googling unsuccessfully for specific issues
relating to queries run on MySQL version 4.0.16
against tmp tables.  I have witnessed several
occurrences where queries running on various platforms
hang in a "Copying to tmp table" state for hours or
days at a time.  When the same query is manually run
from the MySQL command line client, the query returns,
even on very large tables.  Could someone please give
me a hint as to possible tuning (or upgrade?) ideas to
fix this situation?  I can't find anything else on
each problem system that would give me a hint as to
why this problem randomly occurs.  I've made attempts
at adjusting the tmp_table_size limits to have the
system try to do order by queries in memory, but
this does not seem to help the situation.

Any help would be appreciated!

-Kevin

show full processlist output (notice that the query
has been running for 7.9 days!!!):
| 33 | someuser | localhost:34329 | sometable | Query | 687465 | Copying to tmp table |
select
  date_add( date_format(time, '%Y-%c-%d 00:00:00'), INTERVAL 0 HOUR) time,
  ...
group by 1
order by 1



 





Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread Mark Maunder
Please include the full query you're running, the table structure, and
the number of rows in the table. A dump of 'show variables;' would be
helpful too. 

On Wed, 2004-11-10 at 21:44, foo bar wrote:
 Hi Everyone,
 
 I've been Googling unsuccessfully for specific issues
 relating to queries run on MySQL version 4.0.16
 against tmp tables.  I have witnessed several
 occurrences where queries running on various platforms
 hang in a "Copying to tmp table" state for hours or
 days at a time.  When the same query is manually run
 from the MySQL command line client, the query returns,
 even on very large tables.  Could someone please give
 me a hint as to possible tuning (or upgrade?) ideas to
 fix this situation?  I can't find anything else on
 each problem system that would give me a hint as to
 why this problem randomly occurs.  I've made attempts
 at adjusting the tmp_table_size limits to have the
 system try to do order by queries in memory, but
 this does not seem to help the situation.
 
 Any help would be appreciated!
 
 -Kevin
 
 show full processlist output (notice that the query
 has been running for 7.9 days!!!):
 | 33 | someuser | localhost:34329 | sometable | Query 
  | 687465 
 | Copying to tmp table | select
   date_add( date_format(time, '%Y-%c-%d 00:00:00'),
 INTERVAL 0 HOUR) time,
   ...
 group by 1
 order by 1
 
 
   






Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread foo bar
read_only   OFF
read_rnd_buffer_size262144
rpl_recovery_rank   0
server_id   0
slave_net_timeout   3600
skip_external_locking   ON
skip_networking OFF
skip_show_database  OFF
slow_launch_time2
socket  /tmp/mysql.sock
sort_buffer_size2097144
sql_mode0
table_cache 64
table_type  MYISAM
thread_cache_size   0
thread_stack126976
tx_isolationREPEATABLE-READ
timezonePST
tmp_table_size  209715200
tmpdir  /tmp/
transaction_alloc_block_size8192
transaction_prealloc_size   4096
version 4.0.16-pro-log
wait_timeout28800

--- Mark Maunder [EMAIL PROTECTED] wrote:

 Please include the full query you're running, the
 table structure, and
 the number of rows in the table. A dump of 'show
 variables;' would be
 helpful too. 
 
 On Wed, 2004-11-10 at 21:44, foo bar wrote:
  [original message snipped]
 
 
 




 





Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread Mark Maunder
 max_binlog_cache_size   4294967295
 max_binlog_size 1073741824
 max_connections 100
 max_connect_errors  10
 max_delayed_threads 20
 max_heap_table_size 16777216
 max_join_size   4294967295
 max_relay_log_size  0
 max_seeks_for_key   4294967295
 max_sort_length 1024
 max_user_connections0
 max_tmp_tables  32
 max_write_lock_count4294967295
 myisam_max_extra_sort_file_size 268435456
 myisam_max_sort_file_size   2147483647
 myisam_repair_threads   1
 myisam_recover_options  OFF
 myisam_sort_buffer_size 8388608
 net_buffer_length   16384
 net_read_timeout30
 net_retry_count 10
 net_write_timeout   60
 new OFF
 open_files_limit1024
 pid_file   
 /opt/mysql-pro-4.0.16/data/testsystem.pid
 log_error
 port3306
 protocol_version10
 query_alloc_block_size  8192
 query_cache_limit   1048576
 query_cache_size0
 query_cache_typeON
 query_prealloc_size 8192
 range_alloc_block_size  2048
 read_buffer_size131072
 read_only   OFF
 read_rnd_buffer_size262144
 rpl_recovery_rank   0
 server_id   0
 slave_net_timeout   3600
 skip_external_locking   ON
 skip_networking OFF
 skip_show_database  OFF
 slow_launch_time2
 socket  /tmp/mysql.sock
 sort_buffer_size2097144
 sql_mode0
 table_cache 64
 table_type  MYISAM
 thread_cache_size   0
 thread_stack126976
 tx_isolationREPEATABLE-READ
 timezonePST
 tmp_table_size  209715200
 tmpdir  /tmp/
 transaction_alloc_block_size8192
 transaction_prealloc_size   4096
 version 4.0.16-pro-log
 wait_timeout28800
 
 --- Mark Maunder [EMAIL PROTECTED] wrote:
 
  [quoted text snipped]
-- 
Mark D. Maunder [EMAIL PROTECTED]
http://www.workzoo.com/
The Best jobs from the Best Job Sites.





Re: Specific issues with MySQL 4.0.16 on Linux or Windows 2000/3 related to tmp tables?

2004-11-10 Thread foo bar
Hi Mark,

The system in question has 1GB of RAM in it.  As far
as I can tell, the box does not get stuck swapping
when the system has a query (or several) in this
state.  If I log in via the command line client and
kill the query, the system continues on its way like
nothing was wrong in the first place.  It almost seems
to me that the server is getting into some strange
state while writing to a temporary table.  For some
reason, this fails and, instead of erroring out, the
query stays in a perpetual running state.

On MySQL 4.0.16, a key_buffer_size of 134217728
translates into 134217728/1024 bytes per key = 131072
possible cached key values.  Because this table (in
addition to several others) may have several million
rows, in order to cache say 3 million keys, I'd have
to allocate about 3GB worth of ram just to this single
buffer.
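
A small caveat on the arithmetic above: the key buffer caches 1024-byte MyISAM index *blocks*, and each block packs many key entries, so the per-key cost is much lower than 1 KiB. A back-of-the-envelope sketch, assuming roughly 16 bytes per entry for an INT key (an assumption, not a measured figure):

```python
# key_buffer_size counts bytes of cached MyISAM index blocks, not keys.
key_buffer_size = 134217728   # 128 MiB, from the config in this thread
block_size = 1024             # MyISAM key block size in 4.0
bytes_per_entry = 16          # assumed: 4-byte INT key + pointer + overhead

blocks = key_buffer_size // block_size
approx_entries = key_buffer_size // bytes_per_entry
print(blocks)          # 131072 cached index blocks
print(approx_entries)  # ~8.4 million cacheable key entries
```

Under that assumption, caching a few million keys is well within 128 MiB; nothing close to 3 GB should be needed.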

Thanks for the info though!


--- Mark Maunder [EMAIL PROTECTED] wrote:

 In my experience, inserting into a table with a
 unique key and more than
 5 million records can be very slow because (AFAIK)
 it has to scan the
 entire index to check if the new record is unique
 from the PRIMARY KEY's
 point of view. (I think)
 
 If you don't have much ram (your 128 meg key buffer
 suggests you dont)
 and your disk channel isn't particularly fast, this
 may take a very long
 time. 
 
 Your tmp_table_size of 200 Megs is large considering
 your key buffer is
 128M. If the machine doesn't have enough RAM, it may
 end up swapping
 heavily trying to keep the tmp table in memory and
 get stuck they way
 you've described - although I must admit I'm not
 sure what mysql does in
 a situation where tmp_table_size is too large.
 
 How much RAM does the machine have? Can you check
 swap activity? 
 
 On Thu, 2004-11-11 at 00:27, foo bar wrote:
  Here's the whole the query, table structure, table
  length and show variables output:
  
  mysql> desc summary;
  +------------------+------------------+------+-----+---------------------+----------------+
  | Field            | Type             | Null | Key | Default             | Extra          |
  +------------------+------------------+------+-----+---------------------+----------------+
  | id               | int(10) unsigned |      | PRI | NULL                | auto_increment |
  | host_id          | int(10) unsigned |      | MUL | 0                   |                |
  | alias_domain_id  | int(10) unsigned | YES  | MUL | NULL                |                |
  | domain_id        | int(10) unsigned | YES  | MUL | NULL                |                |
  | alias_id         | int(10) unsigned | YES  | MUL | NULL                |                |
  | sender_domain_id | int(10) unsigned | YES  | MUL | NULL                |                |
  | sender_alias_id  | int(10) unsigned | YES  | MUL | NULL                |                |
  | time             | datetime         |      | MUL | 0000-00-00 00:00:00 |                |
  | val1             | int(10) unsigned |      |     | 0                   |                |
  | val2             | int(10) unsigned |      |     | 0                   |                |
  | processed        | int(10) unsigned |      |     | 0                   |                |
  +------------------+------------------+------+-----+---------------------+----------------+
  
  select count(*) from summary;
  +--+
  | count(*) |
  +--+
  | 34759085 |
  +--+
  1 row in set (0.05 sec)
  
  Query:
  create temporary table tmp (PRIMARY
  KEY(alias_id,domain_id))
  select
 alias_id,
 domain_id,
 sum(val1) rank
  from summary
  where 1=1
  and time >= '2004-11-01 11:00:00'
  and time <= '2004-11-09 11:00:00'
  group by 1, 2
  order by rank desc
  limit 5
  
  'show variables;' output:
  Variable_name   Value
  back_log50
  basedir /opt/mysql-pro-4.0.16/
  binlog_cache_size   32768
  bulk_insert_buffer_size 8388608
  character_set   latin1
  character_sets  latin1 big5 czech euc_kr gb2312
 gbk
  latin1_de sjis tis620 ujis dec8 dos german1 hp8
  koi8_ru latin2 swe7 usa7 cp1251 danish hebrew
 win1251
  estonia hungarian koi8_ukr win1251ukr greek
 win1250
  croat cp1257 latin5
  concurrent_insert   ON
  connect_timeout 5
  convert_character_set   
  datadir /opt/mysql-pro-4.0.16/data/
  default_week_format 0
  delay_key_write ON
  delayed_insert_limit100
  delayed_insert_timeout  300
  delayed_queue_size  1000
  flush   OFF
  flush_time  0
  ft_boolean_syntax   + -><()~*:""&|
  ft_min_word_len 4
  ft_max_word_len 254
  ft_max_word_len_for_sort20
  ft_stopword_file(built-in)
  have_bdbNO
  have_crypt  YES
  have_innodb YES
  have_isam   YES
  have_raid   NO
  have_symlinkYES
  have_opensslNO
  have_query_cacheYES
  init_file
  innodb_additional_mem_pool_size 1048576
  

Replication Problem - Droping tmp tables

2003-02-07 Thread Luc Foisy

Here are a number of queries that are run from time to time on the master. They are 
always run together and in order that they appear.
Version of the server is 3.23.32 (yes, it's old, but difficult to update at this time).
Version of the slave is 3.23.49a.
Note: [param:variable] is handled by our own query parser, and the relevant data is 
placed within before going to the server.

DROP TABLE IF EXISTS ORD1;

CREATE TEMPORARY TABLE ORD1
 SELECT DISTINCT
 PAYROLLEVENT.ID_CONTRACTOR,
 ORDERHEADER.ID,
 ORDERHEADER.ID_CUSTOMER
 FROM ORDERHEADER
 LEFT JOIN ORDERLINE ON ORDERHEADER.ID = ORDERLINE.ID_ORDERHEADER
 LEFT JOIN PAYROLLEVENT ON ORDERLINE.ID = PAYROLLEVENT.ID_ORDERLINE
 WHERE (ORDERHEADER.AvailableAt BETWEEN '[param:Start Date]' AND 
DATE_FORMAT('[param:End Date]', '%Y%m%d235959'))
 AND PAYROLLEVENT.ID_CONTRACTOR IS NOT NULL
 ORDER BY ORDERHEADER.ID;

SELECT
 ID AS 'Order #',
 ID_CUSTOMER AS 'Acct #'
 FROM ORD1
 GROUP BY ID HAVING COUNT(*) > 1
 ORDER BY  ID;
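
The GROUP BY ... HAVING COUNT(*) > 1 step above is plain duplicate detection: keep only order IDs that appear under more than one contractor. The same idea in miniature (hypothetical sample rows, plain Python):

```python
from collections import Counter

# ORD1 holds one (contractor, order, customer) row per distinct pairing;
# an order ID appearing more than once was worked by several contractors.
rows = [
    (7, 100, 1),   # contractor 7, order 100, customer 1
    (8, 100, 1),   # a second contractor on the same order
    (7, 101, 2),
    (9, 102, 3),
]

order_counts = Counter(order_id for _, order_id, _ in rows)
shared_orders = sorted(oid for oid, n in order_counts.items() if n > 1)
print(shared_orders)  # [100]
```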

Here is the Slave status

SHOW SLAVE STATUS
*** 1. row ***
Master_Host: 192.168.0.251
Master_User: replicant1
Master_Port: 3306
  Connect_retry: 60
   Log_File: QBSDB251-bin.045
Pos: 4730525
  Slave_Running: No
Replicate_do_db: qbslive
Replicate_ignore_db: 
 Last_errno: 1158
 Last_error: Slave: query 'drop table qbslive.ORD1' partially completed on the 
master and was aborted. There is a chance that your master is inconsistent at this 
point. If you are sure that your master is ok, run this query manually on the slave 
and then restart the slave with SET SQL_SLAVE_SKIP_COUNTER=1; SLAVE START;
   Skip_counter: 0

There is nothing in the error logs that would indicate a problem.

What does this "partially completed" comment mean?
What would cause a drop table to abort?
Is this a version bug?
Where would I look on the master to see if my master is ok?

Since I can't find a problem (maybe I just don't know where to look), I don't want to 
skip anything and run into another problem down the line.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Replication and tmp tables

2002-08-17 Thread Bill MacAllister

Hello,

I am continuing to have what seem to be intermittent problems with 
replication.  I have attempted to exclude temporary tables from replication 
by using wild ignore specifications in my.cnf on both the master and slave. 
For some unknown reason, there are times when MySQL still tries to replicate 
a temporary table.

On the slave:

nb3% mysql --version
mysql  Ver 11.16 Distrib 3.23.49, for redhat-linux-gnu (i386)
nb3% cat /etc/my.cnf
[mysqld]
datadir=/usr/local/mysql/var
socket=/usr/local/mysql/tmp/mysql.sock
set-variable= key_buffer=256M
set-variable= max_allowed_packet=1M
set-variable= table_cache=256
set-variable= sort_buffer=1M
set-variable= record_buffer=1M
set-variable= myisam_sort_buffer_size=64M
set-variable= thread_cache=8
master-host=mysql-master.prideindustries.com
master-user=slaveuser
master-password=slavepassword
master-port=3306
server-id=200
replicate-ignore-table=base_housing.u
replicate-ignore-table=base_housing.wo
replicate-wild-ignore-table=%.tmp%
replicate-wild-ignore-table=%.%tmp
replicate-wild-ignore-table=%.temp%
replicate-wild-ignore-table=%.%temp

[mysqldump]
quick
set-variable= max_allowed_packet=16M

[client]
socket=/usr/local/mysql/tmp/mysql.sock

nb3% mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 36 to server version: 3.23.51-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> show slave status \G
*** 1. row ***
Master_Host: mysql-master.prideindustries.com
Master_User: copydb
Master_Port: 3306
  Connect_retry: 60
   Log_File: db1-bin.907
Pos: 14160409
  Slave_Running: No
Replicate_do_db:
Replicate_ignore_db:
 Last_errno: 1146
 Last_error: error 'Table 'm2mreferral.tmp_travis_househits' 
doesn't exist' on query 'insert into househits_travis select * from 
tmp_travis_househits'
   Skip_counter: 0
1 row in set (0.00 sec)

mysql

I don't think it should be trying to replicate this table.  Any suggestions 
would be greatly appreciated.
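
One likely explanation (my reading of how wild-table filtering works, not a verified diagnosis): replicate-wild-ignore-table patterns use LIKE-style wildcards matched against db.table, and the filter is applied to the table a statement *updates*. The failing statement updates househits_travis, which matches none of the patterns, so it replicates even though it reads from an ignored tmp table. A sketch of the matching:

```python
import re

# Translate a LIKE-style pattern (% = any run, _ = one char) to a regex.
def like_match(pattern: str, name: str) -> bool:
    regex = "".join(
        ".*" if c == "%" else "." if c == "_" else re.escape(c)
        for c in pattern
    )
    return re.fullmatch(regex, name) is not None

patterns = ["%.tmp%", "%.%tmp", "%.temp%", "%.%temp"]  # from the my.cnf above

def ignored(table: str) -> bool:
    return any(like_match(p, table) for p in patterns)

print(ignored("m2mreferral.tmp_travis_househits"))  # True:  the tmp table is filtered
print(ignored("m2mreferral.househits_travis"))      # False: the INSERT target is not,
# so "insert into househits_travis select * from tmp_travis_househits"
# still replicates and then fails on the slave.
```

If that is what's happening, no wild-ignore pattern on the tmp tables can help here; the statements that read from them would have to be excluded some other way, e.g. by doing the scratch work in an ignored database.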

Thanks,

Bill

+-
| Bill MacAllister, Senior Programmer
| PRIDE Industries
| Phone: 916-788-2402






Replication + Tmp tables + Flush master/slave = crash

2002-02-15 Thread Rafael Martinez


Hei

I think we have found a bug in mysql replication.

We have some programs that are working against a master DB and that use 
temporary tables. The whole system works without problems until we run a 
flush master/slave. Then replication on the slave DB stops working 
because of this.

How to produce the crash:

- We need a system with replication working master-slave.

Run f.ex:

MASTER: create temporary table testtmp(id text);
MASTER: insert into testtmp values ('testing');

Until here everything works fine. You can continue working with this tmp 
table without problems. But if we run:

MASTER: flush tables with read lock;
MASTER: flush master;
SLAVE:  flush slave;
MASTER: unlock tables;
MASTER: insert into testtmp values ('testing2');

Replication on the slave will stop working with an error message saying 
that table testtmp does not exist.

This is a huge problem for us. We have many programs working against the 
master DB; they run for weeks, and the process of stopping/starting all of 
them is not trivial, so we cannot stop all of them to run a flush 
master/slave when the binary update log file gets too big.

Is this a bug? Is there another way of doing what we want to do without 
breaking replication? Do you need more information?

Thanks for your help

Sincerely
Rafael Martinez



