JPA, InnoDB, and locking in multi-threaded app

2009-09-14 Thread Grover Blue
Hi everyone,

I have an application that spawns various threads for inserting into various
tables.  Additionally, another thread is launched to delete old records.
They all use JPA and EntityManager-managed transactions.  I think I'm hitting
locking issues, but I'm not sure, so I'd like to get your thoughts before
I add stack-trace output to the app.  My development tests pass, but errors
creep up in production after running for a few hours (about 8-9 hours in).

Each inserting thread is bound to a single table, and a table can have
multiple threads persisting objects to it.  These threads do perform some
SELECT operations, but only to check for existence, not to manipulate data.
These threads run every 2 minutes.

The purging thread issues a DELETE (which, to my understanding, JPA
translates directly to a DELETE statement appropriate for the underlying
db - in my case, MySQL InnoDB tables).  It attempts to delete old records
(say, 6 months old) once every hour.

Both types of threads start to get errors after running for a long
period.  I make sure I close my EntityManager object, and re-initialize my
EntityManagerFactory if emf.isOpen() returns false.

So, I'm trying to understand what locking is taking place here.  Since my
tables are InnoDB, I thought row-level locking was used.  Do DELETE ..
WHERE .. statements lock an entire table, thus affecting SELECTs from another
thread?

Any input would be appreciated.


Re: InnoDB and locking

2006-02-11 Thread Heikki Tuuri

Patrick,

it should work. You have only shown a fragment of the application code. 
Maybe there is a bug somewhere else. If you write a very simple test program 
to test this, do you still get the duplicate values?


Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM 
tables

http://www.innodb.com/order.php


- Original Message - 
From: Patrick Duda [EMAIL PROTECTED]

Newsgroups: mailing.database.myodbc
Sent: Friday, February 10, 2006 9:01 PM
Subject: Re: InnoDB and locking



At 12:54 PM 2/10/2006, Mark Matthews wrote:


Patrick Duda wrote:
 I guess I don't understand this locking stuff.  I have an InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery("select request_id from requestid_innodb for update");
 ...
 String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was supposed to lock the table so that
 would not happen.

 What am I not doing right?  What am I not understanding about locking?

 Thanks

 Patrick



Patrick,

Are you sure the table is using the InnoDB storage engine? What does the
output of SHOW CREATE TABLE  for the table in question say?

-Mark



mysql> show create table requestid_innodb;
+------------------+----------------------------------------------+
| Table            | Create Table                                 |
+------------------+----------------------------------------------+
| requestid_innodb | CREATE TABLE `requestid_innodb` (
  `request_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`request_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+------------------+----------------------------------------------+
1 row in set (0.00 sec)



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: 
http://lists.mysql.com/[EMAIL PROTECTED]








Re: InnoDB and locking

2006-02-11 Thread Peter Brawley

Patrick,

c.setAutoCommit(false);
...
rs = statement.executeQuery("select request_id from requestid_innodb for update");
...
String updateQuery = "update requestid_innodb set request_id=";
updateQuery = updateQuery + nextRequestId;
tempStatement = c.createStatement();
tempStatement.executeUpdate(updateQuery);
...
c.commit();
c.setAutoCommit(true);

As you can demonstrate by running your FOR UPDATE query on the same 
InnoDB table row in two instances of the mysql client, a second 
connection trying to execute the FOR UPDATE query on a row already 
locked by another connection first hangs, and then times 
out.  Given your description, that looks like the correct behaviour.  When you 
try the same thing from two instances of your Connector/J code, does the 
query return instantly in the second Connector/J client?  If so, you 
would seem to have exposed a Connector/J problem.  If not, perhaps it's 
because you need some deadlock-handling code?
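The deadlock handling Peter mentions usually amounts to catching the MySQL error codes for "lock wait timeout exceeded" (1205) and "deadlock found" (1213) and retrying the whole transaction. A minimal sketch, with class, interface, and method names of my own invention (they are not from this thread):

```java
import java.sql.SQLException;

public class LockRetry {

    // MySQL error codes for "lock wait timeout exceeded" and "deadlock found"
    static final int ER_LOCK_WAIT_TIMEOUT = 1205;
    static final int ER_LOCK_DEADLOCK = 1213;

    // A transaction is worth retrying only for these transient lock errors
    static boolean isRetryable(int mysqlErrorCode) {
        return mysqlErrorCode == ER_LOCK_WAIT_TIMEOUT
            || mysqlErrorCode == ER_LOCK_DEADLOCK;
    }

    // Functional interface for the transactional work to be retried
    interface TxWork {
        void run() throws SQLException;
    }

    // Run the work, retrying up to maxAttempts times on transient lock errors.
    // InnoDB has already rolled the failed transaction back, so the work must
    // redo the whole SELECT ... FOR UPDATE / UPDATE / commit cycle each time.
    static void runWithRetry(TxWork work, int maxAttempts) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try {
                work.run();          // the work itself commits on success
                return;
            } catch (SQLException e) {
                if (!isRetryable(e.getErrorCode()) || attempt >= maxAttempts) {
                    throw e;         // not transient, or out of attempts
                }
            }
        }
    }
}
```

In Patrick's case the body of the work would re-run his entire select-for-update/update/commit sequence.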


PB

-

Patrick Duda wrote:
I guess I don't understand this locking stuff.  I have an InnoDB table 
that has one thing in it, a counter.  All I want to do is have 
multiple instances of the code read this counter and increment it.  I 
want to make sure that each one is unique.

Here is what I am doing in java:

c.setAutoCommit(false);
...
rs = statement.executeQuery("select request_id from requestid_innodb for update");
...
String updateQuery = "update requestid_innodb set request_id=";
updateQuery = updateQuery + nextRequestId;
tempStatement = c.createStatement();
tempStatement.executeUpdate(updateQuery);
...
c.commit();
c.setAutoCommit(true);

If I have multiple instances of this code running I end up with 
duplicate keys.  I thought this was supposed to lock the table so that 
would not happen.


What am I not doing right?  What am I not understanding about locking?

Thanks

Patrick








InnoDB and locking

2006-02-10 Thread Patrick Duda
I guess I don't understand this locking stuff.  I have an InnoDB table that 
has one thing in it, a counter.  All I want to do is have multiple 
instances of the code read this counter and increment it.  I want to make 
sure that each one is unique.

Here is what I am doing in java:

c.setAutoCommit(false);
...
rs = statement.executeQuery("select request_id from requestid_innodb for update");
...
String updateQuery = "update requestid_innodb set request_id=";
updateQuery = updateQuery + nextRequestId;
tempStatement = c.createStatement();
tempStatement.executeUpdate(updateQuery);
...
c.commit();
c.setAutoCommit(true);

If I have multiple instances of this code running I end up with duplicate 
keys.  I thought this was supposed to lock the table so that would not happen.


What am I not doing right?  What am I not understanding about locking?

Thanks

Patrick
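For reference, here is the pattern Patrick is attempting, written out with the quoting restored and the read of the current value made explicit. This is a sketch under the assumptions raised later in the thread (the table really is InnoDB, and autocommit is off, otherwise the FOR UPDATE lock is released immediately); the method names are mine:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CounterDemo {

    // Build the UPDATE statement for a given next id (string concatenation,
    // as in the original code; a PreparedStatement would be more robust)
    static String buildUpdateQuery(long nextRequestId) {
        return "update requestid_innodb set request_id=" + nextRequestId;
    }

    // Atomically read and increment the counter. SELECT ... FOR UPDATE takes
    // an exclusive row lock that is held until COMMIT or ROLLBACK, so a
    // second instance blocks on the SELECT until this transaction finishes.
    static long nextId(Connection c) throws SQLException {
        c.setAutoCommit(false);            // start a transaction
        try (Statement st = c.createStatement()) {
            ResultSet rs = st.executeQuery(
                "select request_id from requestid_innodb for update");
            rs.next();
            long next = rs.getLong(1) + 1; // increment under the row lock
            st.executeUpdate(buildUpdateQuery(next));
            c.commit();                    // releases the lock
            return next;
        } catch (SQLException e) {
            c.rollback();                  // also releases the lock
            throw e;
        } finally {
            c.setAutoCommit(true);
        }
    }
}
```

If duplicates still appear with this shape, the usual culprits are the ones named in the replies: the table is not actually InnoDB, or each instance is silently running with autocommit on.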





Re: InnoDB and locking

2006-02-10 Thread Alec . Cawley
Patrick Duda [EMAIL PROTECTED] wrote on 10/02/2006 16:28:56:

 I guess I don't understand this locking stuff.  I have an InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery("select request_id from requestid_innodb for update");
 ...
 String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was supposed to lock the table so that
 would not happen.
 
 What am I not doing right?  What am I not understanding about locking?

I think this problem is explained in detail at
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html

Alec 






Re: InnoDB and locking

2006-02-10 Thread Patrick Duda

At 10:52 AM 2/10/2006, [EMAIL PROTECTED] wrote:

Patrick Duda [EMAIL PROTECTED] wrote on 10/02/2006 16:28:56:

 I guess I don't understand this locking stuff.  I have an InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery("select request_id from requestid_innodb for update");
 ...
 String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was supposed to lock the table so that
 would not happen.

 What am I not doing right?  What am I not understanding about locking?

I think this problem is explained in detail at
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html

Alec




Yes, I know that.  I have read that, and that is why I am asking what it is 
that I am not doing right.  It talks about a counter specifically:
"2) read the counter first with a lock mode FOR UPDATE, and increment after 
that."  The latter approach can be implemented as follows:


SELECT counter_field FROM child_codes FOR UPDATE;
UPDATE child_codes SET counter_field = counter_field + 1;

A SELECT … FOR UPDATE reads the latest available data, setting exclusive 
locks on each row it reads. Thus, it sets the same locks a searched SQL 
UPDATE would set on the rows. 

Isn't that what my Java code is doing?

I start a transaction by turning off autocommit, I then do a select for 
update.  Then I do the update and I commit.  From the way I read this, no 
one else should be able to read the table until I commit.


Yet, that is not what I am seeing.  When I start several instances of the 
program running I get lots and lots of:


Error inserting records into database [Caused by: Duplicate entry '152' for 
key 1]


That is what has me confused.  I thought I was doing things the way the 
manual said to.


Thanks

Patrick






Re: InnoDB and locking

2006-02-10 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Patrick Duda wrote:
 I guess I don't understand this locking stuff.  I have an InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery("select request_id from requestid_innodb for update");
 ...
 String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was supposed to lock the table so that
 would not happen.
 
 What am I not doing right?  What am I not understanding about locking?
 
 Thanks
 
 Patrick
 
 

Patrick,

Are you sure the table is using the InnoDB storage engine? What does the
output of SHOW CREATE TABLE  for the table in question say?

-Mark




Re: InnoDB and locking

2006-02-10 Thread Patrick Duda

At 12:54 PM 2/10/2006, Mark Matthews wrote:


Patrick Duda wrote:
 I guess I don't understand this locking stuff.  I have an InnoDB table
 that has one thing in it, a counter.  All I want to do is have multiple
 instances of the code read this counter and increment it.  I want to
 make sure that each one is unique.

 Here is what I am doing in java:

 c.setAutoCommit(false);
 ...
 rs = statement.executeQuery("select request_id from requestid_innodb for update");
 ...
 String updateQuery = "update requestid_innodb set request_id=";
 updateQuery = updateQuery + nextRequestId;
 tempStatement = c.createStatement();
 tempStatement.executeUpdate(updateQuery);
 ...
 c.commit();
 c.setAutoCommit(true);

 If I have multiple instances of this code running I end up with
 duplicate keys.  I thought this was supposed to lock the table so that
 would not happen.

 What am I not doing right?  What am I not understanding about locking?

 Thanks

 Patrick



Patrick,

Are you sure the table is using the InnoDB storage engine? What does the
output of SHOW CREATE TABLE  for the table in question say?

-Mark



mysql> show create table requestid_innodb;
+------------------+----------------------------------------------+
| Table            | Create Table                                 |
+------------------+----------------------------------------------+
| requestid_innodb | CREATE TABLE `requestid_innodb` (
  `request_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`request_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+------------------+----------------------------------------------+
1 row in set (0.00 sec)






innodb table locking

2005-05-22 Thread Daniel Kern
My problem is the following and I thank you in advance on any help
offered.  Show Innodb Status shows that my DB server has no free
buffers.  What is the significance of this, what causes this, and how
bad is it?
 
Here is the relevant part of the output of that command:
 
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 595053970; in additional pool allocated 975744
Buffer pool size   32768
Free buffers   0
Database pages 31770
Modified db pages  11
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 44820347, created 150031, written 9138368
0.37 reads/s, 0.00 creates/s, 4.75 writes/s
Buffer pool hit rate 1000 / 1000
 
 
I am running MySQL 4.1.4 on Linux with 1 GB of memory.  The DB has a mix of
MyISAM and InnoDB tables.  The possibly relevant parts of my.cnf look
like this:
 
max_connections=200
key_buffer = 256M
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size= 16M
 
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql/
innodb_log_arch_dir = /var/lib/mysql/
innodb_buffer_pool_size = 512M
innodb_additional_mem_pool_size = 20M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

 


Re: InnoDB, record locking question

2005-01-27 Thread Gleb Paharenko
Hello.

Use:

 set autocommit=0;

or begin a transaction before executing 'select ... lock in share mode'.

Commit the transaction to release the lock.





Mojtaba Faridzad [EMAIL PROTECTED] wrote:

 Hi,

 I am trying to learn more about InnoDB to convert MyISAM to InnoDB.
 According to the MySQL documentation, I can lock a record like this:

 SELECT * FROM mytable WHERE id = '100' LOCK IN SHARE MODE;

 I ran this query and showed a message to pause the screen (waiting), and on
 the other computer I ran the same query, and even updated the record (which
 is locked by the other computer), but I didn't get any error.  How does this
 work?  Did I miss anything here?

 My other question: if I lock a record with that command, then how can I
 release the lock?  I couldn't find any command to release the lock!

 thanks



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
Gleb Paharenko
[EMAIL PROTECTED]
MySQL AB / Ensita.NET
www.mysql.com







InnoDB, record locking question

2005-01-26 Thread Mojtaba Faridzad
Hi,
I am trying to learn more about InnoDB to convert MyISAM to InnoDB. 
According to the MySQL documentation, I can lock a record like this:

SELECT * FROM mytable WHERE id = '100' LOCK IN SHARE MODE;

I ran this query and showed a message to pause the screen (waiting), and on 
the other computer I ran the same query, and even updated the record (which 
is locked by the other computer), but I didn't get any error.  How does this 
work?  Did I miss anything here?

My other question: if I lock a record with that command, then how can I 
release the lock?  I couldn't find any command to release the lock!

thanks 
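Gleb's answer above boils down to this: LOCK IN SHARE MODE only holds its lock for the life of the surrounding transaction, so with autocommit on the lock is gone the moment the SELECT returns, which is why the second computer saw no blocking. A sketch of the statement sequence that would actually make the lock observable (the helper class and method are my own illustration; the table and id come from the question):

```java
import java.util.Arrays;
import java.util.List;

public class ShareLockDemo {

    // The statement sequence that makes a share lock visible from another
    // connection: the lock lives until COMMIT, not until the SELECT returns.
    static List<String> lockingSequence(String table, String id) {
        return Arrays.asList(
            "SET autocommit = 0",           // or START TRANSACTION
            "SELECT * FROM " + table
                + " WHERE id = '" + id + "' LOCK IN SHARE MODE",
            // ... while this transaction is open, other connections can
            // still read the row, but an UPDATE of it blocks ...
            "COMMIT");                      // releases the lock
    }
}
```

With autocommit left at its default of 1, each statement is its own transaction, so the share lock is taken and released inside the single SELECT.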



Re: InnoDB Table Locking Issue

2004-04-28 Thread Marc Slemko
On Tue, 27 Apr 2004, Scott Switzer wrote:

 Hi,

 I am having a difficult time with a query.  My environment is MySQL
 v4.0.16 (InnoDB tables) running on Linux (latest 2.4 kernel).
 Basically, I am running a query of the form:

 INSERT INTO temp_tbl
 SELECT c1,c2...
 FROM t1,t2,t3,t4
 WHERE ...

 It is a very complex query, which looks at millions of rows of data to
 produce results.  The issue is this:

 When running from the MySQL command line:
 Normally, when the query is run, it takes about 5 minutes to complete.
 When I LOCK t1,t2,t3,t4 before running, it takes 8 seconds to run.

Are you sure it is actually _working_ when it takes 8 seconds to run?
You say it is a very complex query that looks at millions of rows ...
unless those are all cached (and they could be, depending on your setup),
8 seconds would probably be too fast for it to run no matter how mysql
optimized it.

Triple check that if you start with an empty temp_tbl it actually
inserts everything it should into temp_tbl.

If you aren't locking temp_tbl, I wouldn't expect the query to actually
work.




Re: InnoDB Table Locking Issue

2004-04-28 Thread Scott Switzer
Thanks for the message.  I should rephrase - the data set is millions of 
rows, but the tables are indexed, and an EXPLAIN looks like it is using 
indexes effectively.  The query produces the exact same results both 
times (with and without LOCKing).  Is there a reason that by calling the 
query via PHP (v4.3) it would either not LOCK the tables effectively, or 
optimise the query differently than when running in the MySQL client?

If you are duly motivated, I could send you the exact query and a test 
DB dump...

Thanks,
Scott


Marc Slemko wrote:

On Tue, 27 Apr 2004, Scott Switzer wrote:

 Hi,

 I am having a difficult time with a query.  My environment is MySQL
 v4.0.16 (InnoDB tables) running on Linux (latest 2.4 kernel).
 Basically, I am running a query of the form:

 INSERT INTO temp_tbl
 SELECT c1,c2...
 FROM t1,t2,t3,t4
 WHERE ...

 It is a very complex query, which looks at millions of rows of data to
 produce results.  The issue is this:

 When running from the MySQL command line:
 Normally, when the query is run, it takes about 5 minutes to complete.
 When I LOCK t1,t2,t3,t4 before running, it takes 8 seconds to run.

Are you sure it is actually _working_ when it takes 8 seconds to run?
You say it is a very complex query that looks at millions of rows ...
unless those are all cached (and they could be, depending on your setup),
8 seconds would probably be too fast for it to run no matter how mysql
optimized it.
Triple check that if you start with an empty temp_tbl it actually
inserts everything it should into temp_tbl.
If you aren't locking temp_tbl, I wouldn't expect the query to actually
work.



InnoDB Table Locking Issue

2004-04-27 Thread Scott Switzer
Hi,

I am having a difficult time with a query.  My environment is MySQL 
v4.0.16 (InnoDB tables) running on Linux (latest 2.4 kernel).  
Basically, I am running a query of the form:

INSERT INTO temp_tbl
SELECT c1,c2...
FROM t1,t2,t3,t4
WHERE ...
It is a very complex query, which looks at millions of rows of data to 
produce results.  The issue is this:

When running from the MySQL command line:
Normally, when the query is run, it takes about 5 minutes to complete.
When I LOCK t1,t2,t3,t4 before running, it takes 8 seconds to run.
When I put this LOCKing code into my PHP script, the LOCKing code has no 
effect!  The LOCK code still takes 5 minutes.

I can replicate this issue every time.  Can anyone point me in the right 
direction?  I can provide more detail if there is someone here that can 
help.

Thanks...



Re: Innodb row locking question

2002-12-08 Thread Heikki Tuuri
Mike,

- Original Message -
From: Mike Gohlke [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Saturday, December 07, 2002 6:44 PM
Subject: Re: Innodb row locking question




 Benjamin Pflugmann wrote:

 Hello.
 
 On Fri 2002-12-06 at 10:28:23 -0600, [EMAIL PROTECTED] wrote:
 
...

 Because I'm not using transactions explicitly.  And the whole point of
 setting the thread id is because if thread 1 select's 5k records for
 update I'm not sure that thread 2 won't select the same 5k records.
 I've read the docs forwards and backwards and some things just aren't
 clear to me.
 If a table has 10k records and thread 1 selects the first 5k in update
 mode, when thread 2 executes a microsecond after thread 1 will it get
 the same 5k records?

Yes. When MySQL or any ordinary database executes a SELECT, record locks do
not affect the row set it is going to pick. Thus, it will NOT jump over
locked records, but rather will stop and wait.

...
 Although the name sounds otherwise, FOR UPDATE is not only suited for
 updates. IMHO, the write lock should work for the DELETE as for the

The name FOR UPDATE comes from Oracle. A better name would be LOCK IN
EXCLUSIVE MODE.

...
 Thanks for your reply and information.  The problem is that I'm
 conservative as hell when it comes to coding and must be absolutely sure
 things will work the way they should.  Otherwise, I'll choose the
 inefficient route just because I know it's safe.

Some notes which may help:

1. I would like to clarify that InnoDB holds locks until the current
transaction COMMITs or rolls back. Thus, in the AUTOCOMMIT=1 mode it makes no
sense to use FOR UPDATE.

When MySQL ends a connection, it rolls back the current transaction. Thus,
locks are never kept after the connection has ended.

2. In a MySQL UPDATE statement you can also use the LIMIT clause.

3. In an UPDATE or DELETE, InnoDB automatically sets exclusive locks on the
index records it encounters in the search for the rows to be updated or
deleted.
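Heikki's notes 2 and 3 together suggest a simpler way for each worker to claim a batch than Mike's select-then-update dance: issue the claiming UPDATE with a LIMIT directly (InnoDB takes the exclusive row locks itself), then read back only the rows tagged with the worker's thread id. A hedged sketch of the SQL a worker might issue, using the table and column names from Mike's messages (the helper class is my own):

```java
public class ClaimBatch {

    // UPDATE ... LIMIT claims up to batchSize unclaimed rows in one
    // statement; InnoDB sets exclusive locks on the rows it updates
    // (Heikki's note 3), so no prior SELECT ... FOR UPDATE is needed.
    static String claimSql(int jobId, int threadId, int batchSize) {
        return "update run set thread_id = " + threadId
             + " where job_id = " + jobId
             + " and thread_id = 0 limit " + batchSize;
    }

    // After the claiming UPDATE commits, each worker reads back only the
    // rows carrying its own thread id.
    static String readBackSql(int jobId, int threadId) {
        return "select main.* from run, main where run.job_id = " + jobId
             + " and run.thread_id = " + threadId
             + " and run.rec_id = main.rec_id";
    }
}
```

Two workers running claimSql concurrently cannot claim the same rows, because the second UPDATE blocks on the row locks held by the first until it commits.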

 Mike...

Best regards,

Heikki Tuuri
Innobase Oy
---
Order technical MySQL/InnoDB support at https://order.mysql.com/
See http://www.innodb.com for the online manual and latest news on InnoDB





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Innodb row locking question

2002-12-07 Thread Benjamin Pflugmann
Hello.

On Fri 2002-12-06 at 10:28:23 -0600, [EMAIL PROTECTED] wrote:
 Heikki and all,
 I've got a quick question which may be more general sql related but
 since I'm using innodb tables specifically for the row locking.
 
 The following process description is specifically designed to prevent
 duplicates.
 
 My current process:
 select * from run where job_id = 111 and thread_id = 0 limit 1 for
 update;
 
 update run set thread_id = 999 where job_id = 111 and thread_id = 0 and
 ( rec_id = x or rec_id = x1 ... );  // rec_id pulled via loop of
 previous select

Why do you repeat job_id = 111 and thread_id = 0? If you are using a
transaction isolation level of at least REPEATABLE READ (which is the
default), InnoDB assures that you always see the same rows within one
transaction. Regardless, FOR UPDATE locks the records, which assures
the values did not change meanwhile.

Btw, rec_id IN (x1, x2, x3) may provide better performance (OR clauses
are still problematic with MySQL v3.23).
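Benjamin's IN-list suggestion, written as a small helper for building the clause from the rec_ids collected in the loop (the helper name is mine, not from the thread):

```java
import java.util.List;
import java.util.StringJoiner;

public class InClause {

    // Build "rec_id IN (x1, x2, ...)" instead of a chain of ORs, which
    // old MySQL versions (v3.23) optimized poorly.
    static String recIdIn(List<Long> recIds) {
        StringJoiner joiner = new StringJoiner(", ", "rec_id IN (", ")");
        for (Long id : recIds) {
            joiner.add(id.toString());
        }
        return joiner.toString();
    }
}
```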

 select main.* from run,main where run.job_id = 111 and run.thread_id =
 999 and run.rec_id = main.rec_id; // this is saved off
 
 delete from run where job_id = 111 and thread_id = 999;

I am confused. Above you select only the first 10,000 rows and
process them, but at the end you delete all of them, not only max 10,000?

  end 
 As you can see, it's quite a few queries to prevent collisions.  Right
 now the update and delete take the longest for obvious reasons.
 
 What I'm wondering is if I can do the following:
 
 select main.* from run,main where run.job_id = 111 and run.thread_id = 0
 and run.rec_id = main.rec_id limit 1 for update;
 
 delete from run where job_id = 111 and (rec_id = x or rec_id = x2 ...);

Why not? FOR UPDATE is explained on http://www.mysql.com/doc/en/SELECT.html 
as "If you are using FOR UPDATE on a storage engine with page/row
locks, the examined rows will be write locked."

Although the name sounds otherwise, FOR UPDATE is not only suited for
updates. IMHO, the write lock should work for the DELETE as for the
UPDATE. Or in other words: it is either good enough for both of your
variants or for neither.

The only thing you should consider is the fact that the second variant
will lock records from table main, which the first did not.

HTH,

Benjamin.

-- 
[EMAIL PROTECTED]





Re: Innodb row locking question

2002-12-07 Thread Mike Gohlke


Benjamin Pflugmann wrote:

Hello.

On Fri 2002-12-06 at 10:28:23 -0600, [EMAIL PROTECTED] wrote:

 Heikki and all,
 I've got a quick question which may be more general sql related but
 since I'm using innodb tables specifically for the row locking.

 The following process description is specifically designed to prevent
 duplicates.

 My current process:
 select * from run where job_id = 111 and thread_id = 0 limit 1 for
 update;

 update run set thread_id = 999 where job_id = 111 and thread_id = 0 and
 ( rec_id = x or rec_id = x1 ... );  // rec_id pulled via loop of
 previous select

Why do you repeat job_id = 111 and thread_id = 0? If you are using a
transaction isolation level of at least REPEATABLE READ (which is the
default), InnoDB assures that you always see the same rows within one
transaction. Regardless, FOR UPDATE locks the records, which assures
the values did not change meanwhile.

Because I'm not using transactions explicitly.  And the whole point of 
setting the thread id is that if thread 1 selects 5k records for 
update, I'm not sure that thread 2 won't select the same 5k records.

I've read the docs forwards and backwards and some things just aren't 
clear to me.
If a table has 10k records and thread 1 selects the first 5k in update 
mode, when thread 2 executes a microsecond after thread 1 will it get 
the same 5k records?  Will they run concurrently and pick records within 
the whole 10k set so that at the end all 10k are selected but each 5k 
subset is not contiguous?  Or will thread 1 get the first 5k records and 
thread 2 will get the second 5k records?

Btw, rec_id IN (x1, x2, x3) may provide better performance (OR clauses
are still problematic with MySQL v3.23)

Thanks, I will give it a try.


 

select main.* from run,main where run.job_id = 111 and run.thread_id =
999 and run.rec_id = main.rec_id; // this is saved off

delete from run where job_id = 111 and thread_id = 999;
   


I am confused. Above you select only the first 10,000 rows and
process them, but at the end you delete all of them, not only max 10,000?

It's a CYA maneuver.  If a process is terminated after select, update, 
select then rerun we now have 20k rows to process.

 

 end 
As you can see, it's quite a few queries to prevent collisions.  Right
now the update and delete take the longest for obvious reasons.

What I'm wondering is if I can do the following:

select main.* from run,main where run.job_id = 111 and run.thread_id = 0
and run.rec_id = main.rec_id limit 1 for update;

delete from run where job_id = 111 and (rec_id = x or rec_id = x2 ...);
   


Why not? FOR UPDATE is explained on http://www.mysql.com/doc/en/SELECT.html 
as "If you are using FOR UPDATE on a storage engine with page/row
locks, the examined rows will be write locked."

But not read locked.  If 2 select for update's are executed against the 
same record set, will 1 succeed and 1 fail?

Although the name sounds otherwise, FOR UPDATE is not only suited for
updates. IMHO, the write lock should work for the DELETE as for the
UPDATE. Or in other words: it is either good enough for both of your
variants or for neither.

The only thing you should consider is the fact that the second variant
will lock records from table main, which the first did not.

HTH,

	Benjamin.
 

Thanks for your reply and information.  The problem is that I'm 
conservative as hell when it comes to coding and must be absolutely sure 
things will work the way they should.  Otherwise, I'll choose the 
inefficient route just because I know it's safe.

Mike...






Re: Innodb row locking question

2002-12-07 Thread Benjamin Pflugmann
Hello.

On Sat 2002-12-07 at 10:36:00 -0600, [EMAIL PROTECTED] wrote:
[...]
 Why do you repeat job_id = 111 and thread_id = 0? If you are using a
 transaction isolation level of at least REPEATABLE READ (which is the
 default), InnoDb assures that you always see the same rows within one
 transaction. Regardless, FOR UPDATE locks the records, which assures
 the values did not change meanwhile.

 Because I'm not using transactions explicitly.

Ah. Okay. Is there a special reason for this?

 And the whole point of setting the thread id is because if thread 1
 select's 5k records for update I'm not sure that thread 2 won't
 select the same 5k records.

As I said above, FOR UPDATE already guarantees exclusive use. The
second thread would block on the write lock.

 I've read the docs forwards and backwards and some things just
 aren't clear to me.  If a table has 10k records and thread 1 selects
 the first 5k in update mode, when thread 2 executes a microsecond
 after thread 1 will it get the same 5k records?

No, the second one will block when it tries to select any record which
is already selected (and therefore locked) by the first thread.

In the worst case, both threads get some rows locked.  Since both
selects want all the rows and some are already locked by the other
thread, neither can have all. In this case one of both queries will
fail and automatically be rolled back (and unlock the rows). See
http://www.mysql.com/doc/en/InnoDB_Deadlock_detection.html.
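As an illustrative sketch of that worst case (the session labels and
BEGIN/COMMIT are mine; the statement mirrors the SELECT under discussion):

```sql
-- Session A:
BEGIN;
SELECT * FROM run WHERE job_id = 111 AND thread_id = 0 FOR UPDATE;

-- Session B, started concurrently:
BEGIN;
SELECT * FROM run WHERE job_id = 111 AND thread_id = 0 FOR UPDATE;

-- If each session manages to lock some rows the other still needs,
-- neither can complete.  InnoDB's deadlock detection then rolls one
-- transaction back with an error (releasing its locks) and the
-- survivor gets all the rows.  In practice both identical queries
-- take the same execution path, so session B simply blocks on the
-- very first row until session A commits.
```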

Btw, you already have this possibility with your current assumptions
(when the second thread starts locking records before the first one
did its update).

But it is not a real problem, because both queries would be the same
in your case: they will - even though that is not guaranteed -
practically get the same execution path, and therefore the second thread
will block on the very first record and be unable to "steal" any records
from thread one.

 Will they run concurrently and pick records within the whole 10k set
 so that at the end all 10k are selected but each 5k subset is not
 contiguous?

No. The query asked for all records (ignore the LIMIT for a
moment). It would be a bug to not return all of them in each query.
MySQL knows nothing about the fact that both your threads want to do
the same thing with the records.

 Or will thread 1 get the first 5k records and thread 2
 will get the second 5k records?

Same as above. You asked for all. That some of them may be locked is no
excuse for not returning all of them.

 Why not? FOR UPDATE is explained on
 http://www.mysql.com/doc/en/SELECT.html as "If you are using FOR
 UPDATE on a storage engine with page/row locks, the examined rows
 will be write locked."

 But not read locked.  If 2 select for update's are executed against the 
 same record set, will 1 succeed and 1 fail?

Ah, that's the misconception. A write lock is stronger than a read
lock; it implies the guarantees of a read lock. In other words: a read
lock forbids changes (by others) but allows other readers, and is
therefore also called a shared lock. A write lock forbids anything
(by others), and is therefore also called an exclusive lock.

These properties are explained quite clearly for the LOCK TABLES
command:

  http://www.mysql.com/doc/en/LOCK_TABLES.html

And the fact that FOR UPDATE indeed sets exclusive locks is mentioned
here

  http://www.mysql.com/doc/en/InnoDB_locking_reads.html
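A quick sketch of the two locking-read forms side by side (illustrative WHERE
clause borrowed from this thread):

```sql
-- Shared (read) lock: other readers may take the same lock,
-- but writers and exclusive lockers are blocked.
SELECT * FROM run WHERE job_id = 111 LOCK IN SHARE MODE;

-- Exclusive (write) lock: blocks writers AND other locking reads.
-- Plain, non-locking SELECTs are unaffected either way - they read
-- a consistent snapshot without taking locks.
SELECT * FROM run WHERE job_id = 111 FOR UPDATE;
```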

 Thanks for your reply and information.

You are welcome.

 The problem is that I'm conservative as hell when it comes to coding
 and must be absolutely sure things will work the way they should.
 Otherwise, I'll choose the inefficient route just because I know
 it's safe.

Yeah, rather safe than sorry. I am with you. :-) 

HTH,

Benjamin.

-- 
[EMAIL PROTECTED]





Innodb row locking question

2002-12-06 Thread Mike Gohlke
Heikki and all,
I've got a quick question which may be more general SQL-related, but I'm
asking here since I'm using InnoDB tables specifically for the row locking.

The following process description is specifically designed to prevent
duplicates.

My current process:
select * from run where job_id = 111 and thread_id = 0 limit 1 for
update;

update run set thread_id = 999 where job_id = 111 and thread_id = 0 and
( rec_id = x or rec_id = x1 ... );  // rec_id pulled via loop of
previous select

select main.* from run,main where run.job_id = 111 and run.thread_id =
999 and run.rec_id = main.rec_id; // this is saved off

delete from run where job_id = 111 and thread_id = 999;

 end 
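For reference, the four steps above could be written as one explicit
transaction so that the FOR UPDATE locks are held across all of them (a
sketch; the BEGIN/COMMIT and the IN (...) form of the rec_id list are my
additions, not part of the original process):

```sql
BEGIN;
SELECT * FROM run
 WHERE job_id = 111 AND thread_id = 0
 LIMIT 1 FOR UPDATE;                          -- 1. claim candidate rows

UPDATE run SET thread_id = 999
 WHERE job_id = 111 AND thread_id = 0
   AND rec_id IN (/* rec_ids from step 1 */); -- 2. tag them

SELECT main.* FROM run, main
 WHERE run.job_id = 111 AND run.thread_id = 999
   AND run.rec_id = main.rec_id;              -- 3. save off the payload

DELETE FROM run
 WHERE job_id = 111 AND thread_id = 999;      -- 4. purge the claims
COMMIT;                                        -- locks released here
```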
As you can see, it's quite a few queries to prevent collisions.  Right
now the update and delete take the longest for obvious reasons.

What I'm wondering is if I can do the following:

select main.* from run,main where run.job_id = 111 and run.thread_id = 0
and run.rec_id = main.rec_id limit 1 for update;

delete from run where job_id = 111 and (rec_id = x or rec_id = x2 ...);

unlock tables;


Will the above 2 queries + unlock tables be as safe as the previous 4
queries that take so much longer?

Thanks for your help and very nice table handler.

Mike...







Re: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2002-01-02 Thread Michael T. Babcock

On Wed, Dec 26, 2001 at 04:23:46PM -0600, Philip Molter wrote:
 Are there guides out there for configuring these things?  What is
 a big enough log file?  Honestly, on a lot of stuff, I'm just
 guessing, but it takes a lot of time to fiddle with values, clean
 out the database, and then shove in 1 million rows to see the
 results.

I haven't found any really good guidelines, but I'm told that if you pay
for support the developers would be more than glad to work
out some optimal values for your data set.
-- 
Michael T. Babcock
CTO, FibreSpeed Ltd. (Hosting, Security, Consultation, Database, etc)
http://www.fibrespeed.net/~mbabcock/





RE: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2001-12-27 Thread Weaver, Walt

Thanks for the reply, Heikki.

That the MyISAM table would fit in the OS file cache didn't occur to me. That would
certainly help explain the big discrepancy. And yes, we had autocommit
turned on for InnoDB, not for Oracle. The InnoDB buffer pool size was 350mb,
not big enough to hold the whole table. The Oracle SGA size was 75mb. On the
new machine (with the RAID 1+0 disks) I'll be increasing the InnoDB buffer
pool size and the Oracle SGA to be large enough for the table to fit. Thanks
for the tips.

And, thanks for the info on the innodb_flush_log_at_trx_commit=0 parm.
Sounds similar to Oracle's log_checkpoint_interval parm. I'll add that to
the my.cnf file. The redo logs are 30mb for both InnoDB and Oracle (three
each). Oracle was cycling through a single log about every 2 minutes or so.
Not sure how to check that on InnoDB. 

With what I've learned about InnoDB during my initial testing I think I'll
be able to set up a better real world environment on the new machine, once
it's ready. I'm currently RTFM'ing the InnoDB manual to better familiarize
myself with the startup parms.

Thanks again,
-- Walt Weaver
   Bozeman, Montana



-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 26, 2001 3:19 PM
To: [EMAIL PROTECTED]
Subject: Re: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking
question


Walt,

thank you for the test!

This post is partially a reply to a request to share my
MyISAM/InnoDB/Oracle
comparison testing with you guys. In addition, I have a question about
locking mechanisms in InnoDB which I'll ask at the end of the post.

I've been comparison testing MyISAM, InnoDB, and Oracle for the past two
weeks or so. We're looking to replace MyISAM with InnoDB to eliminate table
locking problems we're experiencing on databases with high levels of
activity (dozens of queries a second). Currently we migrate these
high-activity databases to Oracle, where they perform well and very
reliably. In addition, if the performance and reliability is there we may
replace Oracle with InnoDB too.

My initial tests have been done on a VA Linux 2230, 2gb RAM, and a single
disk drive. The single drive puts InnoDB and Oracle at a disadvantage to
MyISAM, and it's evident from the tests. Updating a 600,000 row table took
about 16 minutes in MyISAM, 28 minutes in InnoDB, and 32 minutes in Oracle.
Inserting 1000 rows into a 5.6 million row table showed MyISAM doing 1750
inserts/second, InnoDB doing 72 inserts/second, and Oracle 45
inserts/second.

The MyISAM table obviously fit in the OS file cache, otherwise 1750 inserts
per second would not be possible. Did the table fit in the buffer pool of
InnoDB or the SGA of Oracle? Did you commit each insert individually in
InnoDB and Oracle?

Setting

innodb_flush_log_at_trx_commit=0

in my.cnf will speed up individual inserts if you can afford losing a few of
the last transactions in a crash. Did you configure the log files big enough
for InnoDB and Oracle?

I've also done some lock testing, and of course that's where InnoDB and
Oracle shine. Running several long-running selects and doing inserts at the
same time took MyISAM minutes to finish thanks to table-level locking while
InnoDB and Oracle finished everything up in a matter of seconds. This is
what we're really interested in as far as InnoDB is concerned. If InnoDB
performs as well as or better than Oracle in our testing (and it's as stable
and scalable as Oracle) we'll be happy since Oracle performs very well in
our production environment, which is a VA Linux/EMC Symmetrix environment.

I'm currently putting together another VA Linux 2230 which will have eight
disks configured for RAID 1+0. I want to see what the performance increase
we'll see with InnoDB by spreading I/O out on the four mirrored drives.
Other than the drives the 2230 is identical to the first one I've been
testing on.

SO, my question about InnoDB locking mechanisms: there is a blurb in the
InnoDB manual that states, "The lock table in InnoDB is stored so
space-efficiently that lock escalation is not needed: typically several
users are allowed to lock every row in the database, or any random subset of
the rows, without InnoDB running out of memory." This statement disturbs me
a bit. In the very near future we'll be running databases with potentially
several hundred queries per second, and if InnoDB will be running out of
memory because of the high locking requirements it encounters we could have
a real problem.

If you run queries where only a (non-locking) SELECT is used, then InnoDB
and Oracle as multiversioned databases can do without setting locks
altogether. So no lock table memory is consumed in SELECTs. Updates and
deletes consume memory in the lock table of InnoDB, but inserts do not.

Thus running out of memory really should not happen in real-world
applications of InnoDB. If you look at the user stories at
http://www.innodb.com/userstories.html you notice that over 1000 queries per
second is attainable if the working set of your application fits in the
buffer pool.

MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2001-12-26 Thread Weaver, Walt

This post is partially a reply to a request to share my MyISAM/InnoDB/Oracle
comparison testing with you guys. In addition, I have a question about
locking mechanisms in InnoDB which I'll ask at the end of the post.

I've been comparison testing MyISAM, InnoDB, and Oracle for the past two
weeks or so. We're looking to replace MyISAM with InnoDB to eliminate table
locking problems we're experiencing on databases with high levels of
activity (dozens of queries a second). Currently we migrate these
high-activity databases to Oracle, where they perform well and very
reliably. In addition, if the performance and reliability is there we may
replace Oracle with InnoDB too.

My initial tests have been done on a VA Linux 2230, 2gb RAM, and a single
disk drive. The single drive puts InnoDB and Oracle at a disadvantage to
MyISAM, and it's evident from the tests. Updating a 600,000 row table took
about 16 minutes in MyISAM, 28 minutes in InnoDB, and 32 minutes in Oracle.
Inserting 1000 rows into a 5.6 million row table showed MyISAM doing 1750
inserts/second, InnoDB doing 72 inserts/second, and Oracle 45
inserts/second.

I've also done some lock testing, and of course that's where InnoDB and
Oracle shine. Running several long-running selects and doing inserts at the
same time took MyISAM minutes to finish thanks to table-level locking while
InnoDB and Oracle finished everything up in a matter of seconds. This is
what we're really interested in as far as InnoDB is concerned. If InnoDB
performs as well as or better than Oracle in our testing (and it's as stable
and scalable as Oracle) we'll be happy since Oracle performs very well in
our production environment, which is a VA Linux/EMC Symmetrix environment.

I'm currently putting together another VA Linux 2230 which will have eight
disks configured for RAID 1+0. I want to see what the performance increase
we'll see with InnoDB by spreading I/O out on the four mirrored drives.
Other than the drives the 2230 is identical to the first one I've been
testing on.

SO, my question about InnoDB locking mechanisms: there is a blurb in the
InnoDB manual that states, "The lock table in InnoDB is stored so
space-efficiently that lock escalation is not needed: typically several
users are allowed to lock every row in the database, or any random subset of
the rows, without InnoDB running out of memory." This statement disturbs me
a bit. In the very near future we'll be running databases with potentially
several hundred queries per second, and if InnoDB will be running out of
memory because of the high locking requirements it encounters we could have
a real problem.

And this sort of begs the question: is there a way to tune InnoDB's use of
lock/latch memory and mechanisms? I'm familiar with latch tuning in Oracle
and what can cause latch contention. I'm familiar with the various buffers
that can be set in the my.cnf file for InnoDB, but I haven't found any
specific recommendations or instructions for lock/latch tuning with InnoDB.
Anybody know of such an animal?

Thanks much,
--Walt Weaver
  Bozeman, Montana






Re: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2001-12-26 Thread Heikki Tuuri

Walt,

thank you for the test!

This post is partially a reply to a request to share my MyISAM/InnoDB/Oracle
comparison testing with you guys. In addition, I have a question about
locking mechanisms in InnoDB which I'll ask at the end of the post.

I've been comparison testing MyISAM, InnoDB, and Oracle for the past two
weeks or so. We're looking to replace MyISAM with InnoDB to eliminate table
locking problems we're experiencing on databases with high levels of
activity (dozens of queries a second). Currently we migrate these
high-activity databases to Oracle, where they perform well and very
reliably. In addition, if the performance and reliability is there we may
replace Oracle with InnoDB too.

My initial tests have been done on a VA Linux 2230, 2gb RAM, and a single
disk drive. The single drive puts InnoDB and Oracle at a disadvantage to
MyISAM, and it's evident from the tests. Updating a 600,000 row table took
about 16 minutes in MyISAM, 28 minutes in InnoDB, and 32 minutes in Oracle.
Inserting 1000 rows into a 5.6 million row table showed MyISAM doing 1750
inserts/second, InnoDB doing 72 inserts/second, and Oracle 45
inserts/second.

The MyISAM table obviously fit in the OS file cache, otherwise 1750 inserts
per second would not be possible. Did the table fit in the buffer pool of
InnoDB or the SGA of Oracle? Did you commit each insert individually in
InnoDB and Oracle?

Setting

innodb_flush_log_at_trx_commit=0

in my.cnf will speed up individual inserts if you can afford losing a few of
the last transactions in a crash. Did you configure the log files big enough
for InnoDB and Oracle?
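The commit question matters because with per-row commits each INSERT pays a
log flush. A sketch of the two modes (table name `t` is illustrative, not
from the test):

```sql
-- Autocommit on: every INSERT is its own transaction, so with
-- innodb_flush_log_at_trx_commit=1 each row forces a log flush.
INSERT INTO t VALUES (1);

-- Batched: one log flush per COMMIT instead of one per row.
SET AUTOCOMMIT=0;
INSERT INTO t VALUES (2);
INSERT INTO t VALUES (3);
-- ... many more rows ...
COMMIT;
```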

I've also done some lock testing, and of course that's where InnoDB and
Oracle shine. Running several long-running selects and doing inserts at the
same time took MyISAM minutes to finish thanks to table-level locking while
InnoDB and Oracle finished everything up in a matter of seconds. This is
what we're really interested in as far as InnoDB is concerned. If InnoDB
performs as well as or better than Oracle in our testing (and it's as stable
and scalable as Oracle) we'll be happy since Oracle performs very well in
our production environment, which is a VA Linux/EMC Symmetrix environment.

I'm currently putting together another VA Linux 2230 which will have eight
disks configured for RAID 1+0. I want to see what the performance increase
we'll see with InnoDB by spreading I/O out on the four mirrored drives.
Other than the drives the 2230 is identical to the first one I've been
testing on.

SO, my question about InnoDB locking mechanisms: there is a blurb in the
InnoDB manual that states, "The lock table in InnoDB is stored so
space-efficiently that lock escalation is not needed: typically several
users are allowed to lock every row in the database, or any random subset of
the rows, without InnoDB running out of memory." This statement disturbs me
a bit. In the very near future we'll be running databases with potentially
several hundred queries per second, and if InnoDB will be running out of
memory because of the high locking requirements it encounters we could have
a real problem.

If you run queries where only a (non-locking) SELECT is used, then InnoDB
and Oracle as multiversioned databases can do without setting locks
altogether. So no lock table memory is consumed in SELECTs. Updates and
deletes consume memory in the lock table of InnoDB, but inserts do not.

Thus running out of memory really should not happen in real-world
applications of InnoDB. If you look at the user stories at
http://www.innodb.com/userstories.html you notice that over 1000 queries per
second is attainable if the working set of your application fits in the
buffer pool.

And this sort of begs the question: is there a way to tune InnoDB's use of
lock/latch memory and mechanisms? I'm familiar with latch tuning in Oracle
and what can cause latch contention. I'm familiar with the various buffers
that can be set in the my.cnf file for InnoDB, but I haven't found any
specific recommendations or instructions for lock/latch tuning with InnoDB.
Anybody know of such an animal?

The only latch tuning in InnoDB is through the parameter

innodb_thread_concurrency

in my.cnf. Latch contention can cause slow thread scheduling in the OS, and
lowering or raising its value from the default of 8 can help.

In Oracle you may have to tune, for example, the number of rollback segments
to reduce latch contention. In InnoDB rollback segment implementation is
different from Oracle, and no such tuning is needed.

You can use innodb_monitor to look at the number of spin waits and thread
suspensions that happen at latches.
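If memory serves from the InnoDB manual of that era, the monitor is switched
on by creating a specially named table; a syntax sketch:

```sql
-- Creating this table turns the monitor on; InnoDB then prints status
-- (including semaphore/latch spin waits and suspensions) to the
-- server's standard output roughly every 15 seconds.
CREATE TABLE innodb_monitor (a INT) TYPE = InnoDB;

-- Dropping it turns the monitor off again.
DROP TABLE innodb_monitor;
```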

In InnoDB the high-contention latches are the buffer pool latch and the DB
kernel latch. I studied extensively methods to reduce the buffer pool latch
contention. But I decided to keep a single latch, because when running
InnoDB in a multiprocessor Xeon computer the memory bus gets saturated
before any latch contention.

Thanks much,
--Walt Weaver

Re: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2001-12-26 Thread Philip Molter

On Thu, Dec 27, 2001 at 12:18:38AM +0200, Heikki Tuuri wrote:
: The MyISAM table obviously fit in the OS file cache, otherwise 1750 inserts
: per second would not be possible. Did the table fit in the buffer pool of
: InnoDB or the SGA of Oracle? Did you commit each insert individually in
: InnoDB and Oracle?
: 
: Setting
: 
: innodb_flush_log_at_trx_commit=0
: 
: in my.cnf will speed up individual inserts if you can afford losing a few of
: the last transactions in a crash. Did you configure the log files big enough
: for InnoDB and Oracle?

Are there guides out there for configuring these things?  What is
a big enough log file?  Honestly, on a lot of stuff, I'm just
guessing, but it takes a lot of time to fiddle with values, clean
out the database, and then shove in 1 million rows to see the
results.

* Philip Molter
* Texas.net Internet
* http://www.texas.net/
* [EMAIL PROTECTED]





Re: MyISAM/InnoDB/Oracle comparison, and a InnoDB table locking question

2001-12-26 Thread Heikki Tuuri

Philip,

On Thu, Dec 27, 2001 at 12:18:38AM +0200, Heikki Tuuri wrote:
: The MyISAM table obviously fit in the OS file cache, otherwise 1750 inserts
: per second would not be possible. Did the table fit in the buffer pool of
: InnoDB or the SGA of Oracle? Did you commit each insert individually in
: InnoDB and Oracle?
: 
: Setting
: 
: innodb_flush_log_at_trx_commit=0
: 
: in my.cnf will speed up individual inserts if you can afford losing a few of
: the last transactions in a crash. Did you configure the log files big enough
: for InnoDB and Oracle?

Are there guides out there for configuring these things?  What is
a big enough log file?  Honestly, on a lot of stuff, I'm just
guessing, but it takes a lot of time to fiddle with values, clean
out the database, and then shove in 1 million rows to see the
results.

yes, the manual at http://www.innodb.com/ibman.html contains instructions
and tuning tips.

I have just introduced a new sample my.cnf which will make setting the
values easier:
.
[mysqld]
# You can write your other MySQL server options here
# ...
#
innodb_data_home_dir = c:\ibdata
#Data files must be able to hold your data and indexes
innodb_data_file_path = ibdata1:2000M;ibdata2:2000M

#Set buffer pool size to 50 - 80 % of your computer's memory
set-variable = innodb_buffer_pool_size=70M
set-variable = innodb_additional_mem_pool_size=10M

innodb_log_group_home_dir = c:\iblogs
innodb_log_arch_dir = c:\iblogs
#.._arch_dir must be the same as .._log_group_home_dir
innodb_log_archive=0
set-variable = innodb_log_files_in_group=3
#Set the log file size to about 15 % of the buffer pool size
set-variable = innodb_log_file_size=10M
set-variable = innodb_log_buffer_size=8M
#Set ..flush_log_at_trx_commit to 0 if you can afford losing
#a few last transactions 
innodb_flush_log_at_trx_commit=1

set-variable = innodb_file_io_threads=4
set-variable = innodb_lock_wait_timeout=50
.

* Philip Molter
* Texas.net Internet
* http://www.texas.net/
* [EMAIL PROTECTED]

Regards,

Heikki
http://www.innodb.com


