Re: MySQL db size using show table status

2005-09-11 Thread Josh Chamas

Jaspreet Singh wrote:

Hi,

I am trying to compute the MySQL db size using show table status
command. It gives me the size of the .MYD and .MYI files, but not .frm, which
is typically 12k (using MySQL version 4.1.9).

Question 1: Is there any way to deterministically compute the size of the .frm
file?


Using a command-line tool to check the file size should do this.

Usually, .frm table definition files are negligible compared to the size of
the actual database.


Question 2: Is there any other way to compute the db size (other than disk
quota)?



du -s mysql_data_directory
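
For a per-database figure from inside MySQL, something like the sketch below
works on 5.0+ where information_schema exists (on 4.1 you would instead sum
the SHOW TABLE STATUS columns in the client); note it still excludes the
small .frm files:

  -- rough per-database size from table metadata (data + indexes, no .frm)
  SELECT table_schema,
         SUM(data_length + index_length) / 1024 / 1024 AS size_mb
  FROM information_schema.tables
  GROUP BY table_schema;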

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Locks acquired with get_lock() persisting, even after MySQL thread has exited

2005-09-11 Thread Josh Chamas

Adam Newby wrote:

Description:


We have distributed applications which make extensive use of the get_lock()
function to acquire a system-wide lock. Occasionally, all copies of a given
application block while attempting to acquire a lock on the same lock string.
Using is_used_lock() reports that the lock is held by a particular thread ID.



This does not seem like expected behavior.  I would either file
this as a bug at http://bugs.mysql.com/ with a nice reproducible
test case, or file a ticket with MySQL Support at https://support.mysql.com/
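
A minimal reproduction sketch for such a bug report might look like this
(lock name hypothetical):

  -- session 1: take the lock, then kill the connection without releasing it
  SELECT GET_LOCK('app_lock', 10);   -- returns 1 when acquired

  -- session 2: once session 1's thread has exited, the lock should be free
  SELECT IS_FREE_LOCK('app_lock');   -- expected: 1
  SELECT IS_USED_LOCK('app_lock');   -- expected: NULL, not a dead thread ID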

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL Users

2005-09-11 Thread Josh Chamas

Jaspreet Singh wrote:


The problem is that MySQL users can only be 16 chars long (bad!!) and
cannot have '@' or '#' in the name.



If you think 16 characters is not enough, you can file this as a bug/feature
request at http://bugs.mysql.com


Is there a way out? Is there a way that I can directly use system
users and not maintain this users table inside MySQL?



We don't have external authentication hooks today, but understand that
this is a desirable feature.  Currently there is no commitment on
developing such a feature on our product roadmap, but that could change
in the future.

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: innodb thread concurrency size [mysql performance tuning]

2005-09-11 Thread Josh Chamas

Clyde Lewis wrote:

Hey guys,

I'm looking for any best practices or a formula that is commonly used in 
setting the value for thread concurrency in the configuration file.

I have 24 instances running on a Sun 2900 server with 32GB of RAM.
Here is a sample of my configuration file.



The best practice for setting innodb_thread_concurrency is
(# of resources per instance) * 2, so if you had 2 CPUs and 2 hard drives
you wanted to use for a MySQL instance, you would set it to 8,
which is the default.  Sometimes the *2 factor could be *1
to get the best system performance.  If you want to throttle
all of your instances so no one instance hogs the system,
you might set innodb_thread_concurrency down to 2.  If you
wanted one instance to use all of your system resources, then set it
much higher.
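
As a concrete sketch of that rule of thumb, in my.cnf form (numbers are
illustrative):

  # (2 CPUs + 2 disks) * 2 = 8, the default, for a dedicated instance
  innodb_thread_concurrency = 8
  # throttled instance sharing the box with many others:
  # innodb_thread_concurrency = 2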

If you ever need more involved assistance from the MySQL
Professional Services team, you might look at these packages:

  http://www.mysql.com/consulting/packaged/performance.html
  http://www.mysql.com/consulting/packaged/rapidresponse.html

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting



# use this line for MySQL 4.1
old-passwords
server-id = 2216
net_buffer_length=65536
net_read_timeout=120
net_write_timeout=180
key_buffer=64M
max_allowed_packet=1M
table_cache=2048
sort_buffer=1M
record_buffer=1M
myisam_sort_buffer_size=16M
max_connections=2500
thread_cache=8
# Try number of CPU's*2
thread_concurrency=4
query_cache_size=256M
query_cache_limit=128K
# only available in 4.1
innodb_file_per_table
innodb_buffer_pool_size=500M
innodb_additional_mem_pool_size=25M
innodb_log_archive=0
innodb_log_files_in_group=3
innodb_log_file_size=100M
innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=2
innodb_file_io_threads=4
innodb_lock_wait_timeout=30
#innodb_flush_method=fdatasync
#innodb_fast_shutdown=1
innodb_thread_concurrency=5

transaction-isolation = READ-UNCOMMITTED
[mysqld140]
bind-address=xxx.xxx.xxx.xxx
old-passwords
mysqld=/usr/local/mysql/bin/mysqld_safe
pid-file=/p01/abq/mysqladmin/abq_pid
basedir=/usr/local/mysql
datadir=/p01/abq/mysqldata
socket=/p01/abq/mysqladmin/mysql.sock
port=3306
local-infile=1
user=mysql
tmpdir = /tmp/abq/
log = /p01/abq/mysqllogs
log-bin = /p01/abq/mysqllogs/abq-bin
log-err = /p01/abq/mysqllogs/abq.err
log-slow-queries = /p01/abq/mysqllogs/abq_slow_query.log
innodb_file_per_table
set-variable = innodb_buffer_pool_size=500M
set-variable = innodb_additional_mem_pool_size=25M
innodb_data_home_dir = /p01/abq/mysqldata/innodb/
innodb_data_file_path = ibdata1_abq:100M:autoextend:max:4096M
# .._log_arch_dir must be the same as .._log_group_home_dir
innodb_log_group_home_dir = /p01/abq/mysqladmin/iblogs
innodb_log_arch_dir   = /p01/abq/mysqladmin/iblogs



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Too Many Queries Have Writing To Net Status [mysql performance tuning]

2005-09-11 Thread Josh Chamas

Kishore Jalleda wrote:

Hi All,
  We have a production database, running MySQL 4.0.17, on a
PowerEdge 2650 with 3 GB RAM and dual 2.4 GHz Xeons. The server
averages between 100 and 200 qps, the CPU/memory load is pretty
low, and it is extremely fast, except that once in a while a query
takes more than 3 seconds to execute (slow_query_time), and the
status of the query is always "writing to net".

Initially the NIC was an Intel 10/100, and the slow queries were a
lot more frequent. Then I started using the Broadcom Gigabit port that comes
with the 2650, and recently I also started load balancing the traffic
between the two Broadcom NICs using the BASP module, so that I
effectively have a 200 Mbps full-duplex link (the switch connected has
only FE ports). With this setup the slow queries have decreased a
lot, but they do come up once in a while.

There is no lock time in any of them. I have tried using both InnoDB
and MyISAM, but they don't seem to go away, so I was wondering why
there are still a few queries which have "writing to net" in the
status line and sometimes take 30-50 seconds to execute. Note that if
the queries are run in any mysql client they run in less than 0.005
seconds, so I am pretty sure it's the network that's the bottleneck, and
buying a Gigabit-capable switch is not an option now; also, sometimes
the traffic on the server goes up to 35 Mbps.

So why is MySQL not sending the results back to the client, sometimes
waiting so long? Also, while MySQL is waiting on a long
query (in the "Writing to Net" status), hundreds of other queries are
executed extremely fast.

I am not very familiar with how MySQL handles network packets, and why
it's holding back ...



If the network is a bottleneck, this could certainly result in
long "Writing to Net" status.  One could imagine network congestion
causing repeated TCP/IP retries, etc.  It might also be that your
server is heavily loaded and certain threads are not getting enough
resources to finish processing.
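
One low-effort way to watch this while it happens is to poll the thread
states and see how long a thread sits in that state (a diagnostic sketch,
not a fix):

  -- run while a slow query is in flight; watch the State and Time columns
  SHOW FULL PROCESSLIST;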

Things like this are usually far too complex to diagnose simply,
and if you want expert help, I would recommend you engage with
our professional services team with either of these packaged
consulting solutions:

  http://www.mysql.com/consulting/packaged/performance.html
  http://www.mysql.com/consulting/packaged/rapidresponse.html

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: 45 minutes to check table to complete

2005-09-11 Thread Josh Chamas

Peter J. Schoenster wrote:

Hi,

I've got a database that is close to 7GB.

I'm using the Standard 4.1.14 version.

The table was corrupt and I had to do a lot of moving of data to free up
enough space on the partition, as I discovered I needed at least twice
the database size to do a recovery. Well, using myisamchk -o worked. I
then ran CHECK TABLE in the mysql client to see what it would say. It
said all was well. I'm just surprised that it took 45 minutes to run.


Anyone run CHECK TABLE on large databases? What kind of times did you
encounter? I must say this is on an OLD box ... maybe 512MB of RAM, and right
now I don't know the disk drives used.




45 minutes makes sense.  You need more memory for things to go faster,
as caching more of the 7GB database would help a lot; otherwise
things become disk-i/o bound, and that is likely what you are seeing
here.  Once you have the extra memory, the next step is tuning MySQL
to take advantage of it, depending on the storage engine you are using.
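
For MyISAM, a hypothetical starting point after a memory upgrade (values
assume roughly 4GB of RAM mostly dedicated to MySQL; tune from there):

  key_buffer_size = 1024M          # cache index blocks in memory
  myisam_sort_buffer_size = 512M   # used when REPAIR/ALTER rebuilds indexes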

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: The Intel MySQL compiled Server

2005-09-11 Thread Josh Chamas

Dyego Souza Dantas Leal wrote:

Hello guys..

I want to use the Intel-compiled shared MySQL 4.0.25 Pro with InnoDB tables
on a production Dell 2600 server.

The Linux kernel is 2.6.12 and the hardware is a dual Xeon 2.4 HT with 4
GB of RAM.

(I'm using the raw partitions feature on InnoDB tables.)

Is this a safe option? Is the Intel-compiled version of MySQL really safe to
use on production servers?




These binaries tend to be faster, but also have some odd/rare bugs
that don't occur with other binaries.  So I would suggest that if it's
speed you want, use them, and if they seem to have a problem on your
system, revert back to the normal binaries.

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Intelligent Converters product: MSSQL-to-MySQL

2005-09-11 Thread Josh Chamas

Ryan Stille wrote:

Has anyone ever used this MSSQL-to-MySQL converter?  It's pretty
reasonable at $40, and the demo output I got looked pretty good.  But I
wanted to see if there is anything I should be wary about.
http://www.convert-in.com/mss2sql.htm



I have done a test migration before with it and it seemed fine.

I would also stay tuned for our MySQL Migration Toolkit to support
MSSQL migrations.  You can find it here:

  http://www.mysql.com/products/tools/migration-toolkit/

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQLHotCopy Error

2005-09-11 Thread Josh Chamas

Stephen P. Fracek, Jr. wrote:

We're having a problem with MySQLHotCopy.  It has worked flawlessly in the
past but now we're getting an error message similar to this:

Dumping database... DBD::mysql::db do failed: Can't find file:
'./file.frm' (errno: 24) at /usr/local/mysql/bin/mysqlhotcopy line 468.
Deleting previous 'old' hotcopy directory ('mydirectory')
Existing hotcopy directory renamed to '/mydirectory/db_name_old'
done.



$ perror 24
Error code  24:  Too many open files

Try increasing your open_files_limit?

This is documented here:
  http://dev.mysql.com/doc/mysql/en/not-enough-file-handles.html
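
If the limit is on the MySQL side, a sketch of the change (the OS
per-process file-descriptor limit may need raising as well, e.g. with
ulimit before starting mysqld_safe):

  [mysqld]
  open_files_limit = 4096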

If this does not help, and you need further support, you could work
this up with MySQL Support at:

  https://support.mysql.com

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Behaviour of like expression

2005-09-11 Thread Josh Chamas

[EMAIL PROTECTED] wrote:

Hi all,

I've got a question. We were testing something on our MySQL server
(4.0.21) with MyISAM tables.

When we executed the query select * from people where name like ''; we 
expected the same results as select * from people where name=''; but it 
didn't.


The like function returned everything instead of only the people without a 
name. Is this known (and correct) behaviour? It does not sound logical to 
me. I couldn't find anything about it on the mysql website.




Usually, pattern matching with an empty pattern matches everything.

Use LIKE for pattern matching, and = for equivalency.
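
A quick way to see the two predicates side by side (a sketch against the
poster's table):

  SELECT name,
         name = ''    AS eq_empty,   -- equivalence test
         name LIKE '' AS like_empty  -- empty-pattern match
  FROM people;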

I don't know what the ANSI SQL spec says here, and whether our behavior
is correct there, but this behavior makes sense to me at least.  I would
expect similar behavior from perl, for example:

$ perl -e 'my $a = qw(a); print $a =~ //;'
1

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: URGENT --- OSX 10.4 ODBC driver error

2005-09-11 Thread Josh Chamas

Andrew stolarz wrote:

Hello All,
I am downloading and installing the Installer package (Mac OS X v10.3)
from the mysql site.
I am installing on an OSX 10.4 machine; at the end of the install I get an
error message saying:
"The application MYODBCConfig quit unexpectedly after it was reopened. Mac
OS X and other applications are not affected," etc.

Its version is 3.51.11-1.
So will this ODBC driver not work on the 10.4 machine, as right next to the
download it states OSX v10.3?

(Installing on 10.3 works, no problem.)


Sorry, I can't help here, but noted this was an urgent request.
If you need urgent help, please contact MySQL Support at:

  https://support.mysql.com

Regards,

Josh
--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: ORACLE and MySQL: Create tables, constraints, sequences, storage...?

2005-07-26 Thread Josh Chamas

Nguyen, Phong wrote:

Good morning,

I have scripts to create constraints, sequences, storage, etc., and tables
from Oracle, and I don't know if I can create them in MySQL.



You can try our Migration Toolkit which has pretty good support
for Oracle now...

  http://www.mysql.com/products/migration-toolkit/

One of the ways I like to use it is to reverse engineer the Oracle
schema, and then have the MySQL schema definition script created for
further modification.  Note we do not have support for sequences,
and it's typical to just use AUTO_INCREMENT columns for these.
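
As a sketch of that substitution (table and column names hypothetical), an
Oracle sequence-fed key becomes:

  CREATE TABLE orders (
    order_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
    placed_at DATETIME NOT NULL,
    PRIMARY KEY (order_id)
  );
  -- each INSERT gets the next id automatically; no sequence object needed
  INSERT INTO orders (placed_at) VALUES (NOW());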

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: very slow inserts on InnoDB [InnoDB Performance Tuning]

2005-07-26 Thread Josh Chamas

Hi Catalin,

Here are some InnoDB performance tuning tips that may boost
your insert speed:

Catalin Trifu wrote:

...
innodb_buffer_pool_size = 256M


Higher is better; in fact, pushing this up to 60%-80% of RAM on a
dedicated database server would be good.  If there are other things
running, like a web server, then you will have to take their memory
requirements into account, but 256M could likely be bigger.

Maybe this could be 512M?  Then data sets up to this size
will be as fast as possible.


innodb_additional_mem_pool_size = 64M


Rarely does this need to be set over 8M.


# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 32M


Try setting this to 1/2 your buffer pool size, otherwise
you might get too much checkpointing during lots of inserts.


innodb_log_buffer_size = 8M


Looks good.


innodb_flush_log_at_trx_commit = 1


Try setting innodb_flush_log_at_trx_commit = 2 for faster insert performance;
however, you then lose full ACID durability: if you have a system
failure you could lose around 1 second's worth of committed data.
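
Pulled together, the suggestions above sketch out like this (values assume
the hardware discussed; adjust to your own box):

  innodb_buffer_pool_size = 512M
  innodb_additional_mem_pool_size = 8M
  innodb_log_file_size = 256M        # ~1/2 of the buffer pool
  innodb_log_buffer_size = 8M
  innodb_flush_log_at_trx_commit = 2 # faster, but can lose ~1s of commits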

These suggestions will not necessarily fix your problem.  If you continue
to have issues and they go unresolved on this list, you might consider
getting help via our commercial offerings:

  http://www.mysql.com/network/
- OR -
  http://www.mysql.com/consulting/packaged/performance.html
  http://www.mysql.com/consulting/packaged/rapidresponse.html

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: query on a very big table [MySQL partitioning of large tables]

2005-07-26 Thread Josh Chamas

Christos Andronis wrote:

Hi all,
we are trying to run the following query on a table that contains over 600 million rows: 


'ALTER TABLE `typed_strengths` CHANGE `entity1_id` `entity1_id` int(10) 
UNSIGNED DEFAULT NULL FIRST'

The query takes ages to run (has been running for over 10 hours now). Is this 
normal?



Yes, this is normal, but not desirable of course!

The fundamental issue is that your table is likely too large: rebuilding
its indexes cannot fit in memory, so the work goes to disk.

You need to break the table up into smaller shards or partitions using
horizontal table partitioning methodologies.

Usually what one will do is keep say 10M or 100M rows in a table, say
one month's data or some such.  If this is MyISAM, then all of those
sub-tables can be put into a MERGE table and queried normally:

  http://dev.mysql.com/doc/mysql/en/merge-storage-engine.html
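
A minimal MERGE sketch (schema is illustrative, not the poster's):

  CREATE TABLE strengths_2005_06 (
    entity1_id INT UNSIGNED,
    strength   FLOAT,
    KEY (entity1_id)
  ) ENGINE=MyISAM;

  CREATE TABLE strengths_2005_07 LIKE strengths_2005_06;

  -- the MERGE table must repeat the same column/index definitions
  CREATE TABLE typed_strengths_all (
    entity1_id INT UNSIGNED,
    strength   FLOAT,
    KEY (entity1_id)
  ) ENGINE=MERGE UNION=(strengths_2005_06, strengths_2005_07)
    INSERT_METHOD=LAST;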

If this is InnoDB, then you have to teach your application how to
query the partitions individually and aggregate the results on its own.

We will have a better partitioning implementation in MySQL 5.1, being
developed still, but you can get early information on this in the
Partitioning Forum here:

  http://forums.mysql.com/list.php?106

One nice side effect in data warehousing type applications is that
when you go to delete old data, you can just drop the old table,
without having to do large deletes and rebuild the table to defragment.


As a side issue, is MySQL suited for such big tables? I've seen a couple of 
case studies with MySQL databases over 1.4 billion rows but it is not clear to 
me whether this size corresponds to the whole database or whether it is for a 
single table.



Yes, it's fine.  You just need to partition your huge tables; this is true
on all database platforms.


The MySQL distribution we're using is 4.1.12. The database sits on a HP 
Proliant DL585 server with 2 dual-core Opterons and 12 GB of RAM, running Linux 
Fedora Core 3.



If this is InnoDB, then you might make sure that your InnoDB buffer pool
is set very large, say 10GB, and this will improve performance there.

If this is MyISAM, then you want to set myisam_sort_buffer_size and
key_buffer_size to 4GB or just under; there is currently a 4GB limit on
those settings.  myisam_sort_buffer_size is used for rebuilding an index,
and you need to make sure your index will fit in that amount of memory
when you partition your tables.

If you continue to need help, then you might want to enlist our
on-site consulting for your project needs:

  http://www.mysql.com/consulting/packaged/performance.html

Regards,

Josh

--
Josh Chamas
Director, Professional Services
MySQL Inc., www.mysql.com
Get More with MySQL!  www.mysql.com/consulting

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Concurrency Question

2004-07-05 Thread Josh Chamas
Javier Diaz wrote:
Hi
We have changed all our tables to InnoDB and now the server is not able to
handle the load, even though we are not running the SELECT statements against
these tables yet.
As I mentioned in my email, we make a lot of INSERTs and UPDATEs in these
tables (more than 3000 per second). So far, using MyISAM, everything was OK,
but now that we have moved the tables to InnoDB (to be able to make read/write
operations in these tables) performance has dropped completely and the
server cannot handle it.
Does anyone have a rough idea, when you change from MyISAM to InnoDB, how
performance is affected?
I would appreciate any ideas you may have; we really need this ASAP.

I would suggest load/capacity testing things very carefully in a test
environment before moving your system to InnoDB.  InnoDB has very
different locking / disk i/o behavior than MyISAM, as you have
discovered.  It also seems to use about 2x the disk space for my tables,
as it has something like a 19-byte overhead per record in the table.
Like others suggested, make sure you bundle as many statements as
possible into one commit.  Each commit will end up doing a disk write,
so using auto-commit mode (without BEGIN WORK) will result in
one disk write per transaction.  If you bundle many SQL statements
in one transaction, you can get a relative performance improvement:
if you can average, say, 3 inserts/updates per transaction,
then you have just increased your throughput by a factor of 3,
assuming your operations were disk bound in the first place, which is likely
at 3000 inserts/updates per second.
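A minimal sketch of that batching (table and values hypothetical):

  BEGIN WORK;
  INSERT INTO hits (id, val) VALUES (1, 'a');
  INSERT INTO hits (id, val) VALUES (2, 'b');
  UPDATE hits SET val = 'c' WHERE id = 1;
  COMMIT;   -- one disk flush covers all three statements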
That said, I also found that InnoDB can do some non-intuitive row-level
locking that can result in deadlocks, so when moving to many SQL operations
per transaction, you also have to test carefully under load to make
sure that your code does not produce any deadlocks.
Regards,
Josh

Josh Chamas, Founder    | NodeWorks - http://www.nodeworks.com
Chamas Enterprises Inc. | NodeWorks Directory - http://dir.nodeworks.com
http://www.chamas.com   | Apache::ASP - http://www.apache-asp.org
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: Error 1005 when adding a Foreign Key

2004-07-02 Thread Josh Chamas
Bartis, Robert M (Bob) wrote:
I recently added a column and index to an existing table, and wanted to also
add a foreign key. I have done this before by defining the commands outside
MySQL and sourcing the file in for new tables, but would prefer not to have
to dump the current table just for this modification. I tried to add one
based on the users guide, see the command/response below, without success.
Obviously I am missing something. Any suggestions?
 
 
mysql> alter table runload_list add foreign key (PlanName) references testplans (PlanName) on update cascade on delete restrict;
ERROR 1005: Can't create table './mydb/#sql-3ebd_430.frm' (errno: 150)

From the manual at 
http://dev.mysql.com/doc/mysql/en/InnoDB_foreign_key_constraints.html
 If MySQL reports an error number 1005 from a CREATE TABLE statement,
 and the error message string refers to errno 150, this means that the
 table creation failed because a foreign key constraint was not correctly
 formed. Similarly, if an ALTER TABLE fails and it refers to errno 150,
 that means a foreign key definition would be incorrectly formed for
 the altered table. Starting from MySQL 4.0.13, you can use SHOW INNODB STATUS
 to display a detailed explanation of the latest InnoDB foreign key error in the 
server.
So, you should check SHOW INNODB STATUS for more about this error, it seems.
Also, one of the more common errors is a lack of an index on testplans.PlanName,
so make sure you have one of those too.
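A sketch of both checks (index name hypothetical):

  -- the referenced column must be indexed for the FK to be created
  ALTER TABLE testplans ADD INDEX idx_planname (PlanName);
  -- then look under LATEST FOREIGN KEY ERROR in the output of:
  SHOW INNODB STATUS;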
Regards,
Josh

Josh Chamas, Founder    | NodeWorks - http://www.nodeworks.com
Chamas Enterprises Inc. | NodeWorks Directory - http://dir.nodeworks.com
http://www.chamas.com   | Apache::ASP - http://www.apache-asp.org
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


InnoDB tablespace fragmentation bug in MySQL 4.0.20

2004-07-01 Thread Josh Chamas
Hi, ( hopefully a MySQL developer sees this at some point! )
I am giving InnoDB a good workout before rolling it out onto
production systems, and found a bug in the way the tablespace gets fragmented
when doing basic add/drop of indexes.  Below my sig is a series of SQL
commands I used to replicate the problem.
Basically, when doing a drop index, add index, drop index, one would
expect the tablespace to look more or less how it looked after the first
drop index since the add index should just reuse what gets reclaimed
during the 1st drop.  What I am finding however is that this sequence will
perpetually grow the tablespace, both on disk, and according to InnoDB.
The really interesting thing about this issue is that the tablespace
data file grows on disk at the drop index time, not during the add index.
I could not believe it when I saw it at first, but I repeated the
procedure and confirmed this aspect of this bug a couple times.
Note that I am using the autoextend feature with a basic innodb config of:
# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/mysql4/innodb
innodb_data_file_path = ibdata1:100M:autoextend
innodb_log_group_home_dir = /usr/local/mysql4/innodb
innodb_log_arch_dir = /usr/local/mysql4/innodb
transaction-isolation = READ-COMMITTED
and the innodb files end up looking like this:
]$ ls -allg /usr/local/mysql4/innodb/
total 504352
drwxr-xr-x    2 mysql       4096 Jun 21 00:50 .
drwxr-xr-x   12 root        4096 Jun 21 00:17 ..
-rw-rw----    1 mysql      25088 Mar 22 22:00 ib_arch_log_00
-rw-rw----    1 mysql       2560 Jun 21 00:50 ib_arch_log_02
-rw-rw----    1 mysql  499122176 Jul  1 19:45 ibdata1
-rw-rw----    1 mysql    8388608 Jul  1 19:45 ib_logfile0
-rw-rw----    1 mysql    8388608 Jul  1 19:45 ib_logfile1
and just before the last drop index the ibdata1 file looked like:
]$ ls -allg /usr/local/mysql4/innodb/
-rw-rw----    1 mysql  490733568 Jul  1 19:44 ibdata1
Finally, I call this a bug because it seems that if one is doing no more
than routine maintenance on tables by adding/dropping indexes, one will
eventually run out of disk space regardless of whether one is actually
using that disk space!
Also, this is running on a Linux 2.4 kernel, with MySQL compiled
with gcc 3.2.2.
Thanks,
Josh

Josh Chamas, Founder    | NodeWorks - http://www.nodeworks.com
Chamas Enterprises Inc. | NodeWorks Directory - http://dir.nodeworks.com
http://www.chamas.com   | Apache::ASP - http://www.apache-asp.org
mysql> alter table clicks drop index idx_test;
Query OK, 891450 rows affected (57.83 sec)
Records: 891450  Duplicates: 0  Warnings: 0
mysql> show table status like 'clicks' \G
*************************** 1. row ***************************
           Name: clicks
           Type: InnoDB
     Row_format: Dynamic
           Rows: 891651
 Avg_row_length: 95
    Data_length: 85590016
Max_data_length: NULL
   Index_length: 58458112
      Data_free: 0
 Auto_increment: NULL
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
 Create_options:
        Comment: InnoDB free: 323584 kB
1 row in set (0.00 sec)
mysql> alter table clicks add index idx_test (client_id);
Query OK, 891450 rows affected (1 min 4.73 sec)
Records: 891450  Duplicates: 0  Warnings: 0
mysql> show table status like 'clicks' \G
*************************** 1. row ***************************
           Name: clicks
           Type: InnoDB
     Row_format: Dynamic
           Rows: 891651
 Avg_row_length: 95
    Data_length: 85590016
Max_data_length: NULL
   Index_length: 91602944
      Data_free: 0
 Auto_increment: NULL
    Create_time: NULL
    Update_time: NULL
     Check_time: NULL
 Create_options:
        Comment: InnoDB free: 291840 kB

InnoDB, odd insert error shared row locking behavior

2004-07-01 Thread Josh Chamas
Hi again,
I ran into some deadlocking that was unexpected; I basically think
the insert-error shared-lock behavior is problematic.  I would think
that either an insert error does not acquire a row lock (equivalent of a
basic select), or that it would acquire an exclusive row lock as if the
insert really happened.
Note that I do not need a fix for the deadlock situation; I have one, by
doing a select-for-update to create a critical section ahead of time, but
I am writing this email to suggest that the InnoDB locking behavior could
be improved.
The deadlocks seem to come from the fact that I have code like this:

  update ( separate transaction )
  ...
  begin work
  insert ignore ...   ( share lock acquired upon error )
  update
  commit
I get a deadlock like this:

  client 2: insert ignore/error - share lock acquired
  client 1: update - requests exclusive lock, waits
  client 2: update - requests exclusive lock, *deadlock*, client 2's transaction killed
What is counter-intuitive for me about this is that the insert ignore gets
escalated to a share lock when the insert error occurs.  I understand
that this behavior is as documented at
  http://dev.mysql.com/doc/mysql/en/InnoDB_Locks_set.html
with this entry:
 INSERT INTO ... VALUES (...) sets an exclusive lock on the inserted row.
 Note that this lock is not a next-key lock and does not prevent other
 users from inserting to the gap before the inserted row.
 If a duplicate-key error occurs, *a shared lock on the duplicate index record is set*.
but again, I would think that this would be handled more gracefully
with either an exclusive lock, or no lock at all.  I believe then
that client 2 would not have deadlocked, because it already had
the lock at insert time that it needed for the later update.
Put another way, for an insert command to come back holding a share lock
is counterintuitive, even if it is during an error condition, and this
error condition is no error at all with insert ignore.
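For reference, a concrete two-session sketch of the sequence (table and
values hypothetical):

  -- setup:  CREATE TABLE t (id INT PRIMARY KEY) ENGINE=InnoDB;
  --         INSERT INTO t VALUES (1);

  -- client 2:
  BEGIN WORK;
  INSERT IGNORE INTO t VALUES (1);    -- duplicate: shared lock on the row

  -- client 1 (separate transaction):
  UPDATE t SET id = id WHERE id = 1;  -- wants X lock, blocks on the S lock

  -- client 2:
  UPDATE t SET id = id WHERE id = 1;  -- needs X lock too: deadlock detected,
  COMMIT;                             -- client 2's transaction is rolled back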
BTW, I have the READ-COMMITTED mode set in my.cnf if it matters.
Also for documentation purposes, I have included below the output
from show innodb status regarding one such deadlock.
Regards,
Josh

Josh Chamas, Founder    | NodeWorks - http://www.nodeworks.com
Chamas Enterprises Inc. | NodeWorks Directory - http://dir.nodeworks.com
http://www.chamas.com   | Apache::ASP - http://www.apache-asp.org

LATEST DETECTED DEADLOCK

040701 17:57:19
*** (1) TRANSACTION:
TRANSACTION 0 26896, ACTIVE 0 sec, process no 21491, OS thread id 122896 starting 
index read
mysql tables in use 1, locked 1
LOCK WAIT 3 lock struct(s), heap size 320
MySQL thread id 10422, query id 145918 gate 192.168.0.10 dmoz Updating
-- (
update low_priority dmoz_track.clients
  set last_visit = now(), ip_address = '192.168.0.10', user_agent_id = 
'13', num_visits = num_visits + 1
where client_id = 'VIMHu+tRy/sioy+kgxQBfw'
-- )
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 1082 n bits 88 index `PRIMARY` of table 
`dmoz_track/clients` trx id 0 26896 lock_mode X locks rec but not gap waiting
Record lock, heap no 7 PHYSICAL RECORD: n_fields 8; 1-byte offs TRUE; info bits 0
 0: len 22; hex 56494d48752b7452792f73696f792b6b677851426677; asc 
VIMHu+tRy/sioy+kgxQBfw;; 1: len 6; hex 690a; asc i ;; 2: len 7; hex 
000301191e; asc;; 3: len 8; hex 8000123a16dd69ba; asc:  i ;; 4: len 4; 
hex 800d; asc ;; 5: len 8; hex 8000123a16dfe7a7; asc:;; 6: len 12; hex 
3139322e3136382e302e3130; asc 192.168.0.10;; 7: len 4; hex 80001473; ascs;;
*** (2) TRANSACTION:
TRANSACTION 0 26894, ACTIVE 0 sec, process no 20890, OS thread id 114703 starting 
index read, thread declared inside InnoDB 500
mysql tables in use 1, locked 1
3 lock struct(s), heap size 320
MySQL thread id 10421, query id 145919 gate 192.168.0.10 dmoz Updating
-- (
update low_priority dmoz_track.clients
  set last_visit = now(), ip_address = '192.168.0.10', user_agent_id = 
'13', num_visits = num_visits + 1
where client_id = 'VIMHu+tRy/sioy+kgxQBfw'
-- )
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 1082 n bits 88 index `PRIMARY` of table 
`dmoz_track/clients` trx id 0 26894 lock mode S locks rec but not gap
Record lock, heap no 7 PHYSICAL RECORD: n_fields 8; 1-byte offs TRUE; info bits 0
 0: len 22; hex 56494d48752b7452792f73696f792b6b677851426677; asc 
VIMHu+tRy/sioy+kgxQBfw;; 1: len 6; hex 690a; asc i ;; 2: len 7; hex 
000301191e; asc;; 3: len 8; hex 8000123a16dd69ba; asc:  i ;; 4: len 4; 
hex 800d; asc ;; 5: len 8; hex 8000123a16dfe7a7; asc:;; 6: len 12; hex 
3139322e3136382e302e3130; asc 192.168.0.10;; 7: len 4; hex 80001473; ascs;;
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 1082 n bits 88