Hi.

On Tue, Apr 30, 2002 at 04:54:27PM +0100, [EMAIL PROTECTED] wrote:
[...]
> >(From http://www.sleepycat.com/docs/ref/transapp/put.html - did not
> >find an explanation in the MySQL manual :-( )
> >
> This reference is very useful, actually.  If I modify my test program to 
> detect the deadlock as shown in the example in your reference (see code 
> below) then two instances of the program now seem to run quite happily 
> together, with one or the other occasionally going through phases like

Good to hear it helped.

[...]
> This is slightly tedious to do every time I wish to perform a 
> transaction.  It would be nice if the database automatically retried a 
> certain number of times itself before giving the error, and coming to 
> think of it I've tried fiddling with a flag called 
> "berkeley_trans_retry" in the MySQL source code (sql/ha_berkeley.cpp) 
> which looked like it might do just that, but it didn't seem to make much 
> difference.  Maybe I'll put some debug in to see if it really is 
> retrying 10 times when MySQL is built with the flag set to 10.

Sorry, I have never used BDB, so I am out on this...
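The retry pattern from the Sleepycat page is simple enough to sketch in a few lines. Here is a minimal, simulated Python version of the idea (the names `DeadlockError`, `run_transaction`, and `flaky_transaction` are illustrative inventions, not MySQL or BDB API; in real client code the exception would be the driver's deadlock error, e.g. MySQL error 1213):

```python
MAX_RETRIES = 10  # mirrors the "retry a certain number of times" idea


class DeadlockError(Exception):
    """Stand-in for the server reporting a deadlock (BDB DB_LOCK_DEADLOCK)."""


def run_transaction(do_work, max_retries=MAX_RETRIES):
    """Run do_work() until it succeeds or we give up.

    do_work should perform the whole transaction (BEGIN ... COMMIT)
    and raise DeadlockError when the server reports a deadlock.
    The server has already rolled the transaction back at that point,
    so it is safe to simply start over.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return do_work()
        except DeadlockError:
            if attempt == max_retries:
                raise  # no guarantee of eventual success, as the docs warn


# Simulated workload: deadlocks twice, then commits on the third try.
attempts = []

def flaky_transaction():
    attempts.append(1)
    if len(attempts) < 3:
        raise DeadlockError()
    return "committed"


print(run_transaction(flaky_transaction))  # -> committed
print(len(attempts))                       # -> 3
```

As the quoted docs note, a bounded retry loop can still exhaust its attempts under sustained contention, which is why the loop re-raises on the final failure instead of looping forever.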

> Also, as the BerkeleyDB docs say, there is no guarantee that the 
> transaction will ever succeed, which could be a bit of a problem (!).

Well, usually this is a worst case that should not happen in reality.

> Is there any way to "favour" the transactions being performed by one
> client over those being performed by another to make sure some
> chosen client's transactions *do* always succeed?  The docs simply
> say that one thread is selected to have its locks discarded (and
> receive a deadlock error), but can I control *which* thread?

I remember reading in the BDB docs about a way to influence which
locks get discarded, but I don't know whether MySQL offers a way to
pass that through.

> In the real application code that I'm writing, I have two processes 
> accessing the database (both of them both reading and writing), one of 
> which is fairly busy all the time, the other of which only springs to 
> life for a relatively short period of time about once an hour.  I would 
> like to "favour" the latter process during its short(ish) bursts of 
> activity if possible.  The other process could simply wait and retry, 
> knowing that sooner or later the hourly process will finish its burst 
> of activity, enabling it to continue.

In this case you could also let the second process take a table lock
for the duration of its burst, if that is acceptable to you.

> >This happens rather seldom, as BDB uses page locks, which only block a
> >small part of the table.
>
> What is a "page lock"?  Is it a lock on one row, a certain number of 
> rows, or the whole table?

A page is an internal storage unit, usually about the size of a disk
block. So "a certain number of rows" fits it best. I am too lazy
currently to attempt a full technical explanation.
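A back-of-the-envelope calculation shows why a page lock usually covers only a small slice of the table. The numbers below are illustrative assumptions, not BDB's actual defaults:

```python
# How many rows might share one page lock?
# Both figures are assumptions for illustration, not measured BDB values.
page_size_bytes = 8192   # a common page / disk-block size
row_size_bytes = 100     # an assumed average row size

rows_per_page = page_size_bytes // row_size_bytes
print(rows_per_page)  # -> 81
```

So with these (assumed) sizes, one page lock ties up roughly 81 rows, and two clients working on different parts of a large table will rarely contend for the same page.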

> OK, I'm now definitely running the two command line clients in NON 
> auto-commit mode, and it DOES make a difference.

Okay. Reassuring. I am not going to be mad. At least not now. ;-)

[...]
> So presumably DBD::ADO is also autocommitting.  (Perhaps it doesn't 
> honour the AutoCommit flag in my Perl program?)

Sorry, also no experience with DBD::ADO.

> If anybody else out there knows anything relevant about page locks in 
> MySQL/BDB, I'd be glad to hear it.

Seconded.

Bye,

        Benjamin.

-- 
[EMAIL PROTECTED]
