On 2018-08-18 23:59, shawn l.green wrote:
Hello Jeff,
On 8/13/2018 12:05 PM, j...@lxvi.net wrote:
Hello, I have read through several pages of the reference manual, and
I've seen several instances where it is stated that LOCK TABLES (and
UNLOCK TABLES) is not allowed in a stored procedure, but so far, I
haven't found an explanation as to *why* that is. Could someone please
enlighten me?
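For readers who have not hit the restriction themselves, a minimal sketch of what the manual describes (procedure and table names here are invented): the server rejects the procedure at CREATE time, typically with error 1314, "LOCK is not allowed in stored procedures".

```sql
-- Hypothetical procedure: refused at CREATE PROCEDURE time because
-- LOCK TABLES / UNLOCK TABLES are not permitted inside stored programs.
DELIMITER //
CREATE PROCEDURE bump_counter()
BEGIN
  LOCK TABLES counters WRITE;      -- <- rejected here (error 1314)
  UPDATE counters SET n = n + 1;
  UNLOCK TABLES;
END //
DELIMITER ;
```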
p routines dump, problem with lock tables.
>
> Hi All.
>
> I use:
>
> # rpm -qa | grep -i percona-server-server
> Percona-Server-server-55-5.5.28-rel29.3.388.rhel6.x86_64
>
> My system:
>
> # uname -a;cat /etc/redhat-release
> Linux prbc01.mg.local 2.6.32-279.19.1.
denied for user 'yyy'@'zzz' (using
password: YES) when using LOCK TABLES
So I think that mysqldump locks the table (--add-locks) by default.
But for this user:
mysql> s
- Original Message -
> From: "Hank"
>
> Just an update. Using the "load index into cache" statement for the
> 200 million row indexed "source" table, my correlated update
> statement ran in 1 hour, 45 minutes to update 144 million rows. A 50%
> increase in performance!
Good to hear :-)
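The statement Hank used preloads a MyISAM table's index blocks into the key cache ahead of the big correlated update; a sketch, using the "source" table name from his posts:

```sql
-- Preload the index blocks of the 200M-row MyISAM "source" table into
-- the key cache so the correlated UPDATE finds them in memory:
LOAD INDEX INTO CACHE source;
-- Variant that skips leaf blocks, trading coverage for cache space:
LOAD INDEX INTO CACHE source IGNORE LEAVES;
```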
Subject: Re: mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
Hello Shafi,
Adding to Prabhat alternatives, you can use --force to the mysqldump command
to ignore the errors and continue taking backup.
Regards,
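A sketch of that invocation (user is illustrative; the schema name comes from the error message in the thread):

```shell
# Keep dumping past per-table errors such as the errno 24 / missing-.frm
# failure above; failing tables are skipped instead of aborting the run.
mysqldump --force -u root -p ssconsole > ssconsole.sql
```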
correct. mysqldump by default has --lock-tables enabled, which means it
tries to lock all tables to be dumped before starting the dump. And doing
LOCK TABLES t1, t2, ... for really big number of tables will inevitably
exhaust all available file descriptors, as LOCK needs all tables to be
opened.
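Two commonly suggested workarounds, sketched here as assumptions rather than as the thread's confirmed fix:

```shell
# 1) For InnoDB tables, dump one consistent snapshot without issuing a
#    giant LOCK TABLES (so no file-descriptor blow-up):
mysqldump --single-transaction -u root -p ssconsole > dump.sql

# 2) Raise the descriptor limit for mysqld: the shell limit before
#    startup, plus a matching open_files_limit option in my.cnf.
ulimit -n 8192
```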
, the downloadable HTML help claims this only for MyISAM
tables, because between "LOCK TABLES" and "UNLOCK TABLES" there is no key-cache
flushing. InnoDB is not mentioned.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org
> know, except for what you want to know :-)
>
>> I am trying to find a logical or reasonable explanation WHY this would be the
>> case, despite the fact that the documentation states otherwise (see: Right
>> here:
>> http://dev.mysql.com/doc/refman/5.5/en/lock-tables-res
In the last episode (Sep 23), Shafi AHMED said:
> I have a mysql database of 200G size and the backup fails due to the
> following issue.
>
> mysqldump: Got error: 1017: Can't find file:
> './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
>
> C
re-create the index with a sort using myisamchk.
Second, the starting value of "dest.key" for all 144 million records
is "0" so an index on that field wouldn't really help, I think.
> > This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4
your mysqld to
see if it works.
- Original Message -
> From: "Shafi AHMED"
> To: mysql@lists.mysql.com
> Sent: Friday, 23 September, 2011 1:42:26 PM
> Subject: mysqldump: Got error: 1017: Can't find file:
> './ssconsole/ss_requestmaster.frm'
Folks
I have a mysql database of 200G size and the backup fails due to the
following issue.
mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
Can someone assist pls.?
Best Rgs,
Shafi AHMED
> I am trying to find a logical or reasonable explanation WHY this would be the
> case, despite the fact that the documentation states otherwise (see: Right
> here:
> http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html)
I believe you're misinterpreting that, as is the author from the blog you
originally referenced.
> > several dozen statements operating on tables with several hundred million
> > records. The problem is that I am finding that when I use LOCK TABLES,
> > these queries run slower (please read my ORIGINAL post with all this
> > information).
>
> Wandering out my area of expertise here :-) but have you done any
> key cache tuning?
> ( http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html )
>
> But if seeing some SQL will make you happy, here is just one example:
>
> UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;
>
> for 140 million records in "dest" and 220 million records
Like I said, the problem is not just one particular SQL statement. It is
several dozen statements operating on tables with several hundred million
records. The problem is that I am finding that when I use LOCK TABLES,
these queries run slower (please read my ORIGINAL post with all this
information).
>>>> a stored proc to update rows, where u commit for every 1k
>>>> or 10k rows.
>>>> This will be much faster than ur individual update stmt.
>>>>
>>>> regards
>>>> anandkl
>>>>
>>>> On Thu, Sep 22, 2011 at 8:24 PM, Ha
That is what I'm doing. I'm doing a correlated update on 200 million
records. One UPDATE statement.
Also, I'm not asking for a tutorial when not to use LOCK TABLES. I'm trying
to figure out why, despite what the documentation says, using LOCK TABLES
hinders performance for large update statements on MYISAM tables when it is
Even for MyISAM tables, LOCK TABLES is not usually the best solution
for increasing performance. When there is little to no contention,
LOCK TABLES doesn't offer much value.
MyISAM works best when you can get more work done in a statement:
instead of executing a bunch of INSERT statements, use a single
multi-row INSERT.
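A sketch of that advice (the table and columns are made up):

```sql
-- Instead of many single-row statements, each taking the table lock
-- and touching the index separately...
INSERT INTO log (level, msg) VALUES ('info', 'start');
INSERT INTO log (level, msg) VALUES ('info', 'step 1');
-- ...batch the rows into one statement:
INSERT INTO log (level, msg) VALUES
  ('info', 'start'),
  ('info', 'step 1'),
  ('info', 'done');
```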
tables with 200
million records each, so anything I can do to increase the performance of
these long running queries will shorten the migration running time.
What I was referring to was that, in the documentation, when using LOCK
TABLES, mysql does not update the key cache until the lock is released.
LOCK TABLES...WRITE is very likely to reduce performance if you are
using a transactional storage engine, such as InnoDB/XtraDB or PBXT.
The reason is that only one connection is holding the write lock and
no other concurrent operation may occur on the table.
LOCK TABLES is only really useful for non-transactional engines such as MyISAM.
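For contrast, a sketch of the transactional alternative (table and values are illustrative): row locks are taken only on the rows actually touched, so concurrent work on other rows can proceed.

```sql
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- atomic, without write-locking the whole table
```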
According to everything I've read, using LOCK TABLES...WRITE for updates,
inserts and deletes should improve performance of mysql server, but I think
I've been seeing the opposite effect.
I've been doing quite a bit of testing on a 64bit install of CentOS 5.5
installed as a guest
On Mon, 06 Jun 2011 18:54 +0200, "Johan De Meersman"
wrote:
> > Excluding 'performance_schema' appears to eliminate the error. And it
> > seems does NOT cause a reliability-of-the-backup problem.
>
> Hah, no, backing that up is utterly pointless.
that's a useful/final confirmation. thx.
> No,
- Original Message -
> From: ag...@airpost.net
>
> Excluding 'performance_schema' appears to eliminate the error. And it
> seems does NOT cause a reliability-of-the-backup problem.
Hah, no, backing that up is utterly pointless. Never noticed it doing that.
It's basically a virtual schema.
On Mon, 06 Jun 2011 12:44 +0200, "Johan De Meersman"
wrote:
I haven't bothered to look for the "bug", but it seems to me to be quite
reasonable default behaviour to lock the whole lot when you're dumping
transactional tables - it ensures you dump all tables from the same consistent
view.
I would rather take this up with the ZRM people - it should "just work".
Am 05.06.2011 23:55, schrieb ag...@airpost.net:
> i still have no idea why this is necessary.
take it or not:
it is a professional solution which works here every day for databases
of 20 GB, using rsync, without interrupting/locking mysqld for a second,
and it is much faster
> there seems to be a bug, prob
i still have no idea why this is necessary.
there seems to be a bug, problem, misconfiguration, etc.
wouldn't it make some sense to try to FIX it, rather than setting up a
completely different server?
perhaps someone with an idea of the problem and its solution will be
able to chime in.
On Sun, 05 Jun 2011 23:29 +0200, "Reindl Harald"
wrote:
> i would use a replication slave and stop him for consistent backups
> because dumb locks are not really a good solution independent
> if this works "normally"
unfortunately, i have no idea what that means.
something's apparently broken w
On Sun, 05 Jun 2011 23:30 +0200, "Reindl Harald"
wrote:
> BTW
> WHY is everybody answering to the list AND the author of the last post?
> this results in getting every message twice :-(
Reply -> sends to ONLY the From == h.rei...@thelounge.net
Reply to all -> sends to BOTH the From == h.rei...@thelounge.net and the list
BTW
WHY is everybody answering to the list AND the author of the last post?
this results in getting every message twice :-(
hm - bad
i would use a replication slave and stop him for consistent backups,
because dumb locks are not really a good solution, independent of
whether this works "normally"
fwiw, others are seeing this. e.g., in addition to the two bugs i'd
already referenced,
http://www.directadmin.com/forum/showthread.php?p=202053
and one
http://qa.lampcms.com/q122897/Can-t-backup-mysql-table-with-mysqldump-SELECT-LOCK-TABL-command
claims a solution
"Add --skip-add-locks to the mysqldump options."
nobody seems interested in the grant statements
maybe use phpmyadmin for a clearer display
mysql> select * from mysql.user where user='root' limit 1;
(wide tabular output elided)
| GRANT USAGE ON *.* TO 'drupal_admin'@'localhost' IDENTIFIED BY
PASSWORD '*D...D' |
drupal6 performance_schema >
"/var/mysql-bkup/manual/20110605131003/backup.sql"
mysqldump: Got error: 1142: SELECT,LOCK TABL command denied to
user 'root'@'localhost' for table 'cond_instances' when using
LOCK TABLES
--> manual:backup:
(the remaining privilege columns of the mysql.user row, mostly 'Y', elided)
However, I still gave the following cmd.
mysql> GRANT select, lock tables ON *.* TO 'root'@'localhost'
IDENTIFIED BY 'password';
or:
$ /usr/local/mysql/bin/mysqldump -h localhost -u root ABC_DATABASE>
abc.dump
mysqldump: Got error: 1449: The user specified as a definer
('root'@'%') does not exist when using LOCK TABLES
To fix this, you need to reset the DEFINER for a TRIGGER defined within
the database so that it references an existing account.
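A sketch of one way to do that (object names here are invented; only the definer value comes from the error message):

```sql
-- Find triggers whose definer is the missing 'root'@'%' account:
SELECT trigger_schema, trigger_name, definer
  FROM information_schema.triggers
 WHERE definer = 'root@%';
-- Then drop and recreate each one under an account that exists:
DROP TRIGGER abc_database.my_trigger;
CREATE DEFINER = 'root'@'localhost' TRIGGER abc_database.my_trigger
  BEFORE INSERT ON t FOR EACH ROW
  SET NEW.created_at = NOW();
```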
' AND host = '%';
mysql> FLUSH PRIVILEGES;
After this,
mysqldump failed with the following error:
$ /usr/local/mysql/bin/mysqldump -h localhost -u root ABC_DATABASE > abc.dump
mysqldump: Got error: 1449: The user specified as a definer
('root'@'%') does not exist when using LOCK TABLES
Hi David,
On Jun 5, 2007, at 3:55 PM, David T. Ashley wrote:
My only concern with GET_LOCK() is that lock is server-global
rather than
database-global. This makes attacks possible in a shared setting
(some bad
person could disable your database code by going after your lock).
My solution (which might actually be a LOT less bother!) is to use
GET_LOCK('database_name'). That should handle your requirement to make
locks 'database-local.'
In my experience, using LOCK TABLES becomes a spaghetti problem that begins to involve
more and more things until you are going through *serious* contortions. I would avoid
it at all costs.
Baron
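A sketch of the database-local naming trick (the lock name is invented; GET_LOCK names are server-global, which is why the schema prefix is the whole point):

```sql
-- Acquire an advisory lock namespaced by the current schema,
-- waiting up to 10 seconds:
SELECT GET_LOCK(CONCAT(DATABASE(), '.orders_maint'), 10);
-- ... critical section ...
SELECT RELEASE_LOCK(CONCAT(DATABASE(), '.orders_maint'));
```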
On 6/5/07, Brent Baisley <[EMAIL PROTECTED]> wrote:
I think you're missing the concept of a transaction in the database sense.
The idea behind a transaction is that you can perform multiple steps and if
you don't complete all steps, any changes are reversed. The reversal process
is handled by the database.
A good example is moving money from bank account A to account B.
Once you issue a LOCK TABLES command, you may not access any tables not
in the LOCK statement. You must lock *ALL* tables you will use, perform
your updates, and then UNLOCK TABLES.
I didn't know that. I reviewed the documentation. Thanks.
OK, then my only remaining question is how
LOCK TABLES;
My question is really whether MySQL might do some strange optimizations ...
or somehow buffer the middle query so that it completes after the UNLOCK.
Thanks, Dave.
On 6/4/07, Jerry Schwartz <[EMAIL PROTECTED]> wrote:
Whatever you do, make sure that every bit of code that locks multiple
resources locks them in the same order. That's the only way to avoid
deadlocks.
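A sketch of Jerry's rule (table names invented): if every code path locks its tables in one agreed order, two sessions can never each hold a piece of the other's set.

```sql
-- Every code path that needs these tables locks them in the same
-- (here alphabetical) order:
LOCK TABLES accounts WRITE, audit_log WRITE, orders WRITE;
-- ... multi-table updates ...
UNLOCK TABLES;
```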
Hi Jerry,
I really appreciate the good advice.
However, my original question is still unanswered.
> -Original Message-
> From: David T. Ashley [mailto:[EMAIL PROTECTED]
> Sent: Monday, June 04, 2007 3:54 PM
> To: mysql@lists.mysql.com
> Subject: Re: Lock Tables Question
On 6/4/07, Gerald L. Clark <[EMAIL PROTECTED]> wrote:
David T. Ashley wrote:
LOCK TABLE thistable, thattable, theothertable,
goshthislistcangetlongtable;
Do whatever is needed;
UNLOCK TABLES;
You could use a string lock for this.
Thanks for the suggestion. It looks logically correct.
I decided to go with a simple paradigm for my web-based database. Rather
than transactions, each process locks the entire database while it is
changing something, then unlocks it. This just serializes access (all other
processes will block until the one modifying the database has finished).
The
-Original Message-
From: mdpeters [mailto:[EMAIL PROTECTED]
Sent: Monday, October 16, 2006 9:19 PM
To: Dan Buettner
Cc: mysql@lists.mysql.com
Subject: Re: LOCK TABLES
> I tried mv archive.frm .archive.frm first. Then I ran
> mysqldump again.
> It moves past archive and onto another table. I did this 6
> times, each
time moving the next one it complained about.
I tried this first to no avail.
mysqldump --user root --password=password --skip-lock-tables horsewiki >
horsewiki.sql
mysqldump: mysqldump: Couldn't execute 'show create table `archive`':
Table 'horsewiki.archive' doesn't exist (1146)
I'll try the update.
mysqldump --user root --password=password horsewiki > horsewiki.sql
installer (rename
LocalSettings.php, then go to the wiki).
2. --opt is enabled by default with mysqldump, and part of what it does is
lock tables. So try the backup without lock tables, by adding
--skip-lock-tables.
Thanks
ViSolve DB Team.
- Original Message -
From: "mdpeters"
Hmmm, sounds like something's pretty abnormal here. Any idea what may
have been done here?
I wonder if you could step around this with a call to mysqldump that
doesn't explicitly lock tables ... what is the command you're running
again?
Dan
On 10/16/06, mdpeters <[EMAIL PRO
The database is one that is in production to support the
mediawiki wiki application. This is a Solaris Sparc 10 system using the
mysql-max-5.0.20a-solaris10-sparc version. My database name is horsewiki.
I execute this:
# mysqldump --user root --password=password horsewiki > horsewiki.sql
and get this:
mysqldump: Got error: 1146: Table 'horsewiki.archive' doesn't exist when
using LOCK TABLES
I have tried using phpMyAdmin-2.9.0.2. It seems to let me export the
database to an SQL file. When I
CREATE TABLE foo (column1 CHAR(1), column2 CHAR(1), UNIQUE KEY
`keyX`(`column1`));
I have to perform an update of the key to extend it to both columns (it's an
example, ignore the content of the key), and want to ensure data integrity
while I recreate it.
The following is what I thought I had to do:
LOCK TABLES foo WRITE;
DROP INDEX `keyX` ON `foo`;
CREATE UNIQUE INDEX `keyX` ON `foo` (`column1`,`column2`);
UNLOCK TABLES;
After much head-scratching due to "Error Code : 1100 Table
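Error 1100 is "Table ... was not locked with LOCK TABLES"; it seems to arise here because DROP INDEX / CREATE INDEX are executed as ALTER TABLE, which re-opens the table outside the held lock. A sketch of a workaround, assuming the goal is just the wider unique key: one combined ALTER TABLE, which needs no explicit lock because it blocks writers itself.

```sql
-- Drop and recreate the key in a single statement, no LOCK TABLES:
ALTER TABLE foo
  DROP INDEX `keyX`,
  ADD UNIQUE INDEX `keyX` (`column1`, `column2`);
```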
Thanks for your reply. I repeated your test with the same results on 4.1.21
(database in question is on 4.1). I'll give the ISP another kick and see what
they have to say.
So there's no other reason why an ISP might not want to grant LOCK TABLES in a
shared hosting environment?
I am not aware of any such bug related to the LOCK TABLES privilege.
Like you I could not find a mention in our bugs database, for any version.
It is easy to demonstrate that this is not the case. If permissions are
properly set up, LOCK TABLES can be restricted to a database just like
every other privilege.
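A sketch of such a restricted grant (the account and schema names are invented):

```sql
-- Grant LOCK TABLES (and SELECT, which LOCK TABLES also requires)
-- only on the one schema, not globally:
GRANT SELECT, LOCK TABLES ON drupal_db.* TO 'drupal_user'@'localhost';
SHOW GRANTS FOR 'drupal_user'@'localhost';
```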
I'm using MySQL as the db for Drupal (PHP based CMS), on shared hosting. There
are repeated errors because the db user does not have permission for LOCK
TABLES, which Drupal uses.
The ISP says that they don't grant this permission because ...
"MySQL has a bug which al
Slawomir Orlowski (CYMPAK) wrote:
hello,
I have mysql-4.1.8 compiled from source working on RH7.2.
I have tried to backup my database with mysqldump.
mysqldump --opt database_name > database_name.sql
mysqldump --lock-tables database_name > database_name.sql
mysqldump --skip-opt --lock-tables database_name > database_name.sql
mysqldump --lock-all-tables database_name > database_name.sql
Hello.
I think it is a weird behavior. I've reported a bug:
http://bugs.mysql.com/bug.php?id=9511
Bob O'Neill <[EMAIL PROTECTED]> wrote:
> If I try to read table 'b' after locking table 'a', I expect to get
> the error message
If I try to read table 'b' after locking table 'a', I expect to get
the error message "Table 'b' was not locked with LOCK TABLES".
However, if my query that accesses table b is stored in the query
cache, I don't get the error. This causes a problem.
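A sketch of the reported sequence (tables 'a' and 'b' as in the report; the expected-versus-observed note paraphrases bug #9511):

```sql
SELECT * FROM b;      -- run once, so the result lands in the query cache
LOCK TABLES a READ;
SELECT * FROM b;      -- should fail: "Table 'b' was not locked with
                      -- LOCK TABLES"; instead the cached result returns
UNLOCK TABLES;
```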
-- MySQL dump 8.23
--
-- Host: localhost Database:
--
-- Server version 3.23.58-log
/usr/local/mysql/bin/mysqldump:
Got error: 1105: File '/usr/local/mysql/var/atmail/Abook_vx.MYD' not found
(Errcode: 24) when
You only need to lock when you are going to run a series of actions
that all have to happen at the same time. As for single queries, they
are already atomic, so you don't need to put any locks around them.
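A sketch of the multi-statement case that does need a lock (table and column names invented), next to the single-statement case that needs none:

```sql
-- Read-then-write that must not interleave with other writers:
LOCK TABLES counters WRITE;
SELECT n FROM counters WHERE name = 'hits';
UPDATE counters SET n = n + 1 WHERE name = 'hits';
UNLOCK TABLES;
-- By contrast, this single UPDATE is atomic on its own:
UPDATE counters SET n = n + 1 WHERE name = 'hits';
```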
On Mon, 11 Oct 2004 11:14:36 +0100, Melanie Courtot <[EMAIL PRO
Hi,
I'm a bit confused by the lock mechanism under mysql.
When user A does an update on table 1, the table is automatically locked
by mysql? That means at the same time user B won't be able to modify the
same row?
Or do I have to specify the lock for each query?
And what about temporary tables?
If
Fernando Monteiro wrote:
Hello, Michael,
(...)
I asked them to give me the "lock tables" permission, but they
answered this permission
option isn't available on versions prior to 4.0.2.
Version 4.0.15 comes after version 4.0.2 (15 > 2), so the version is not a
problem here.
From the manual: "As of MySQL 4.0.2, to use LOCK TABLES you must have the
global LOCK TABLES privilege and a SELECT privilege for the involved
tables." <http://dev.mysql.com/doc/mysql/en/
Hello,
My ISP is using the old version 4.0.15a and have no early plans to upgrade it.
I'm trying to issue LOCK/UNLOCK TABLES commands at my ISP's MySQL, and I'm getting an
"access denied" error.
I asked them to give me the "lock tables" permission, but they answered
this permission option isn't available on versions prior to 4.0.2.
t;[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, June 03, 2004 1:25 PM
Subject: RE: VB .NET & MYSQL - LOCK TABLES
> You will need to reuse your database connection, do not open a new
> connection with each call or the lock will not be there.
> LOCK TABLE table1 READ;
Hello MySql List,
I have created a connection with VB .NET & MySql, and now I must use the
LOCK TABLES statement.
I want to know how I can use this syntax:
I must open the connection - begin the LOCK TABLES - begin the select
statement, and then UNLOCK TABLES.
I think that is not correct...
<< Looks like it's a query cache issue. In this case you get result from the
cache.>>
That was it.
THANKS
- Original Message -
From: "Victoria Reznichenko" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, May 10, 2004 1:20 PM
Subject
At 12:40 -0400 on 05/10/2004, Lou Olsten wrote about Blocking Selects
with LOCK TABLES:
According to the docs
(http://dev.mysql.com/doc/mysql/en/LOCK_TABLES.html) :
If a thread obtains a READ lock on a table, that thread (and all
other threads) can only read from the table. If a thread obtains
> If a thread obtains a WRITE lock on a table, only the thread holding
> the lock can read from or write to the table. Other threads are blocked.
table. Other threads are blocked.
So, I've got two threads going (T1, T2).
T1 issues LOCK TABLES transtest WRITE;
But when I go to T2, I can still issue: SELECT * FROM transtest; and retrieve all the
data. I CANNOT update, so I know the command is at least partially working. As I
understand
if (!$res = mysql_query($sql, $spoj)) { decho(' A ' . $sError = sql_error(mysql_error(), $sql)); }
$sql = "LOCK TABLES $gt_firmy WRITE";
if (!$res = mysql_query($sql, $spoj)) { decho(' A ' . $sError = sql_error(mysql_error(), $sql)); }
$sql = "SELECT * FROM $gt_firmy AS