mysql dump global read lock

2015-12-17 Thread Artem Kuchin

Hello!

Here is my mysqldump command line:

mysqldump -u root --events --complete-insert --skip-opt \
  --single-transaction --add-drop-table --add-locks --create-options \
  --disable-keys --extended-insert --quick --set-charset \
  --routines --triggers --hex-blob DB_NAME


But I see tons of "Waiting for global read lock" in SHOW PROCESSLIST,
for many tables in many different databases, for all modification
queries and locks.


Why? As I understood it, --skip-opt --single-transaction should disable
the global read lock.
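A sketch of how to see who actually holds or waits on the lock. Two caveats, both my own additions: --single-transaction only avoids locking for transactional engines such as InnoDB, not MyISAM; and performance_schema.metadata_locks only exists in MySQL 5.7+, so on the 5.6 server above you are limited to SHOW PROCESSLIST:

```sql
-- Inspect global-scope metadata locks and pending waiters
-- (requires MySQL 5.7+; not available on 5.6)
SELECT object_type, object_schema, object_name,
       lock_type, lock_duration, lock_status, owner_thread_id
FROM performance_schema.metadata_locks
WHERE object_type = 'GLOBAL'
   OR lock_status = 'PENDING';
```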



mysql version
Server version: 5.6.27-log Source distribution

Artem





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: Global read lock on delete

2015-12-09 Thread Artem Kuchin

On 09.12.2015 19:35, shawn l.green wrote:


 INSERT operations are special as you can enable a mode to allow 
INSERTs to happen only at the end of the file and not be blocked while 
one of the other two operations are in progress.




I cannot find anything about that. Can you be a little more specific? It
is unrelated to my question, but would be great to have too.
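For the archives: the mode Shawn describes is most likely MyISAM's concurrent_insert system variable (my inference; the thread never names it), which lets INSERTs append at the end of the data file while SELECTs run:

```sql
-- 0/NEVER: disabled; 1/AUTO (default): concurrent inserts when the
-- table has no deleted-row holes; 2/ALWAYS: append even with holes
SHOW VARIABLES LIKE 'concurrent_insert';
SET GLOBAL concurrent_insert = 2;
```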


Artem




Re: Global read lock on delete

2015-12-09 Thread Artem Kuchin

On 09.12.2015 19:35, shawn l.green wrote:



On 12/9/2015 9:59 AM, Artem Kuchin wrote:

Hello!
THREAD_ID  LOCK_MODE                 LOCK_DURATION    LOCK_TYPE             TABLE_SCHEMA  TABLE_NAME
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Global read lock
270022     MDL_INTENTION_EXCLUSIVE   MDL_STATEMENT    Global read lock
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searchsobjects
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searches
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Schema metadata lock  spru
270022     MDL_INTENTION_EXCLUSIVE   MDL_TRANSACTION  Schema metadata lock  spru




You gave the answer in your last statement: "All tables are myisam" .

The MyISAM storage engine is not transactional and it does not do 
row-level locking. All UPDATE and DELETE operations require a full 
table lock to perform and those must wait for all earlier readers or 
writers to exit the table before they can start.  INSERT operations 
are special as you can enable a mode to allow INSERTs to happen only 
at the end of the file and not be blocked while one of the other two 
operations are in progress.


A TABLE LOCK is okay, I understand that. But I see a GLOBAL READ LOCK -
not a table lock, but GLOBAL.

As I understand it, that means ALL TABLES IN ALL DATABASES. Why?

Artem




Global read lock on delete

2015-12-09 Thread Artem Kuchin

Hello!

I am actually using MariaDB, but they do not seem to have any public
discussion system, and I suppose the engine is basically the same, so
the problems are probably the same.


Today I set up the server to show locks and noticed this:

THREAD_ID  LOCK_MODE                 LOCK_DURATION    LOCK_TYPE             TABLE_SCHEMA  TABLE_NAME
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Global read lock
270022     MDL_INTENTION_EXCLUSIVE   MDL_STATEMENT    Global read lock
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searchsobjects
268871     MDL_SHARED_NO_READ_WRITE  MDL_EXPLICIT     Table metadata lock   spru          searches
268871     MDL_INTENTION_EXCLUSIVE   MDL_EXPLICIT     Schema metadata lock  spru
270022     MDL_INTENTION_EXCLUSIVE   MDL_TRANSACTION  Schema metadata lock  spru

The threads are
ID: 270022 TIME:185 COMMAND:Query STATE:Waiting for table metadata lock 
USER:spru DB:spru

LOCK TABLES searchsobjects WRITE, searches WRITE

ID: 268871 TIME:3 COMMAND:Query STATE:updating USER:spru DB:spru
DELETE FROM searchsobjects WHERE search_id IN (
3680622,3677720,3679348,3679347,3680621,3678106,3678105,3680597,3680596,3680595,
3676915,3676914,3676913,3677719,3677718,3677717,3677716,3676984,3677795,3677794,
3677793,3677792,3677796,3677802,3677801,3677800,3677799,3677798,3677797,3680580,
3676988,3677791,3680589,3677790,3677789,3677788,3677787,3677786,3677785,3677784,
3677783,3677782,3680575,3677781,3677780,369,368,367,366,365,364,363,362,361,360,
3677769,3677768,3677767,3677766,3677765,3677764,3680619,3680620,3682405,3677763,
3677762,3677761,3677760,3677759,3677758,3680601,3677757,3680627,3680628,3680576,
3680577,3680625,3680626,3680624,3680623,3677754,3679280,3679279,3679278,3679277,
3679276,3679867,3679890,3680588,3677753,3677064,3677752,3677751,3677750,3677749,
3679608,3679607,3679606,3679605,3680613 )


So, judging by the thread ids, it seems the DELETE started first and
then the LOCK TABLES was issued. However, I do not understand how a
GLOBAL READ LOCK became involved in all this. And both the LOCK TABLES
and the DELETE requested a global read lock.
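The listing above appears to come from MariaDB's metadata_lock_info plugin; a sketch of enabling and querying it (assuming MariaDB 10.0+):

```sql
INSTALL SONAME 'metadata_lock_info';

SELECT thread_id, lock_mode, lock_duration, lock_type,
       table_schema, table_name
FROM information_schema.metadata_lock_info;
```

As far as I know, the MDL_INTENTION_EXCLUSIVE rows on the "Global read lock" scope are taken by every data-modifying statement and conflict only with FLUSH TABLES WITH READ LOCK, so they do not by themselves lock all tables in all databases.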


All tables are MyISAM. MariaDB is 10.0.22 (MySQL 5.6 based, as I understand).

Artem







Fastest way to select on 0/1 flag

2009-06-15 Thread Artem Kuchin

Hello!

I cannot figure out the fastest way to do a select on the following field:

f_spec tinyint not null;

It is a table of 100 000 product records, and f_spec is set for only
about 200 products.


I figure it could be done in two ways:

1) create an index on f_spec and do simple
select * from products where f_spec=1;

2) create a separate table

create table specs (
   product_id   int,
   primary key (product_id)
);

then select the ids from this table and join with the products table if needed.

What is the best way?

Also, it is often needed to know only the fact that there is any product
with f_spec set.

Will using the index and doing
select id from products where f_spec=1 limit 1
be very fast?
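A sketch of option 1, assuming the products table above (the index name is mine):

```sql
ALTER TABLE products ADD INDEX idx_f_spec (f_spec);

-- existence check: with the index, LIMIT 1 (or EXISTS) lets the
-- server stop at the first matching index entry, so it stays fast
-- even though only ~200 of 100 000 rows have the flag set
SELECT id FROM products WHERE f_spec = 1 LIMIT 1;
SELECT EXISTS(SELECT 1 FROM products WHERE f_spec = 1) AS any_spec;
```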



Regards,
Artem




Left join does not work with Count() as expected

2009-02-19 Thread Artem Kuchin
I have two simple tables. One is a list of forums, the second a list of
messages, and I want to get the list of forums with the number of
messages in each.

Here is the query:

SELECT forums.id , COUNT( forum_msg.id ) AS cnt
FROM forums
LEFT JOIN forum_msg ON forums.id = forum_msg.forum_id
ORDER BY forums.sorder ASC

The problem is that if a forum does not have any messages then the row
with that forums.id does not appear at all.

If I delete COUNT( forum_msg.id ) AS cnt from the SELECT, I get all
forums, as expected. If I leave the COUNT() as shown, I get only forums
with messages in the result. As far as I can remember it was not like
this before. I am running

5.1.3
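The likely missing piece is a GROUP BY: without one, COUNT() collapses the whole join into a single aggregate row. A sketch of the query with grouping added (the LEFT JOIN then yields cnt = 0 for empty forums, since COUNT(forum_msg.id) ignores NULLs):

```sql
SELECT forums.id, COUNT( forum_msg.id ) AS cnt
FROM forums
LEFT JOIN forum_msg ON forums.id = forum_msg.forum_id
GROUP BY forums.id
ORDER BY forums.sorder ASC
```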

--
Artem Kuchin




Re: Indexing one byte flags - what implementation is better

2007-12-28 Thread Artem Kuchin

Garris, Nicole wrote:

Is id a sequential number? And is it referenced by other tables? If
so, and if over time new products become "old" products, then CASE 2
is more complex, because when moving a product (i.e., a row) from the
new product table to the old product table, the value of id needs to
stay the same. So for CASE 2 you'll need a third object to keep track
of the highest value for id.


I think you did not get it right. There is no MOVING of products.
All products are stored in the 'products' table only, and newproducts
is just a kind of FLAG table. It contains ONLY the ids of products
considered new, nothing else. So, if a product is new then its id is
of course in the products table and in the newproducts table, and if it
is not new then it is only in the 'products' table.

ID is a sequential number (but not auto_increment - I hate it).

PS: top posting is really popular in the mysql list!

--
Artem



-----Original Message-----
From: Artem Kuchin [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 27, 2007 1:19 PM
To: mysql@lists.mysql.com
Subject: Indexing one byte flags - what implementation is better

Maybe someone could provide good, reasonable input on this issue.

Let's say i have a table products

CASE 1:

table: products

id int unsigned not null,
name char(128) not null,
f_new tinyint not null

id - is basically the id of a product
name - is the name of a product
f_new - is a one byte flag. If it is 1 the product is considered new.


In this case to select all new products including name i need to do:

select id, name from products where f_new=1

CASE 2:

The above can be done another way - via two tables:
one products table and another one listing all the ids
of new products

create table products (
id int unsigned not null,
name char(128) not null,
primary key (id)
);

create table newproducts (
product_id int unsigned not null,
primary key (product_id)
);


If a product id is in the newproducts table then it is
a new product.

To choose all new products including name i need to do:

SELECT id,name
FROM newproducts
INNER JOIN products ON products.id=newproducts.product_id

The questions are:

1) which way is FASTER?
2) which way eats less memory?
3) which way eats less cpu?
4) which way eats less hdd io?

There are several cases for each question:
1) <1000 products - I think both methods are pretty much the same
in this case because all of the data would be cached in memory

2) 10 products, 3 new products - interesting to know
which method is better here and how each of them performs.

3) 10 products, 50 new products - interesting to know
which method is better here and how each of them performs.


I will greatly appreciate input on this issue.

--
Artem







Indexing one byte flags - what implementation is better

2007-12-27 Thread Artem Kuchin

Maybe someone could provide good, reasonable input on this issue.

Let's say i have a table products

CASE 1:

table: products

id int unsigned not null,
name char(128) not null,
f_new tinyint not null

id - is basically the id of a product
name - is the name of a product
f_new - is a one byte flag. If it is 1 the product is considered new.


In this case to select all new products including name i need to do:

select id, name from products where f_new=1

CASE 2:

The above can be done another way - via two tables:
one products table and another one listing all the ids
of new products

create table products (
   id int unsigned not null,
   name char(128) not null,
   primary key (id)
);

create table newproducts (
   product_id int unsigned not null,
   primary key (product_id)
);


If a product id is in the newproducts table then it is
a new product.

To choose all new products including name i need to do:

SELECT id,name 
FROM newproducts 
INNER JOIN products ON products.id=newproducts.product_id


The questions are:

1) which way is FASTER?
2) which way eats less memory?
3) which way eats less cpu?
4) which way eats less hdd io?

There are several cases for each question:
1) <1000 products - I think both methods are pretty much the same
in this case because all of the data would be cached in memory

2) 10 products, 3 new products - interesting to know
which method is better here and how each of them performs.

3) 10 products, 50 new products - interesting to know
which method is better here and how each of them performs.


I will greatly appreciate input on this issue.
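One way to compare the two designs on real data rather than in the abstract is to EXPLAIN both queries (a sketch, assuming the tables above; the index on f_new is my addition, since CASE 1 needs it to avoid a full scan):

```sql
-- CASE 1: index on the flag column
ALTER TABLE products ADD INDEX idx_f_new (f_new);
EXPLAIN SELECT id, name FROM products WHERE f_new = 1;

-- CASE 2: scan the small newproducts table, then do
-- primary-key lookups into products
EXPLAIN SELECT id, name
FROM newproducts
INNER JOIN products ON products.id = newproducts.product_id;
```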

--
Artem





Re: How to auto repair db on hosting with quota

2007-12-10 Thread Artem Kuchin

Russell E. Glaue wrote:

Hmm..,
I thought the temp tables were created in the temp directory.
What version of MySQL are you using?


5.1 latest


Perhaps you can temporarily increase the user's hard_limit, and not
increase the soft_limit. When they have finished recovering, you can
reset the hard_limit.  


I can repair all tables manually, and I have a script for doing it just
by chown-ing the db directory to mysql:mysql, repairing, and then back
to mysql:usergroup so the group quota is observed.
so group quota is observed.


Or perhaps it is possible to copy the corrupted database to the temp
directory (with increased hard_limit), perform the recovery there,
then copy it back. The data directory does not have to remain in the
same location in order to be recoverable.  


That's not the point. I need to make MySQL's AUTOMATIC REPAIR
work. Currently, if mysql tries to automatically repair a large broken
table it gets stuck because the user group quota is exceeded.

--
Artem



This would be the same as the last e-mailed proposed solution, but
with two extra steps to copy back and forth from the temp directory. 


-RG


----- Original Message -----
From: "Artem Kuchin" <[EMAIL PROTECTED]>
To: "Russell E Glaue" <[EMAIL PROTECTED]>, mysql@lists.mysql.com
Sent: Sunday, December 9, 2007 9:03:45 AM (GMT-0600) America/Chicago
Subject: Re: How to auto repair db on hosting with quota

I won't convert the thread style, so I'm top-posting :)

All this would be great if it worked. The main problem is that the
temporary directory option for mysqld IS NOT USED while doing
table repair. All files created while repairing tables are placed
in the same directory with the same owner and group as the original
database files (I mean, for example, .TMD files, which can be huge!).

So, I am currently stuck with no solution to the problem.

--
Artem



Russell E Glaue wrote:

Using Quota on Unix, you can specify a different quota soft_limit for
temporary space.
You can set MySQL to use /tmp as the temporary space, then set a
soft_limit of 10mb and a hard_limit of 300MB.
This quota should be separate from the normal user space quota.

With MySQL temp dir set to /tmp, and a soft/hard quota as 10MB/300MB,
the recovery should occur successfully.
And going above the soft_limit gives the user X amount of days to get
back down below the soft_limit. And the user cannot go above the
hard_limit.
You can set the soft_limit recovery days to like 1 day. They can go
over the 10MB soft_limit for 1 day, up to the hard_limit of 300MB,
but after one day the temp files will be deleted until the soft_limit
is satisfied.

This is a great way to control temp space, and not let users abuse
it, but still give them a lot of space to perform things like large
mysql database recoveries.


-RG


Artem Kuchin wrote:

I am asking this question as a hosting PROVIDER, not
as a hosting client.

The situation is like this (this is a simple example with small
numbers):

1) a client has a 100MB quota for site and db files (set on the unix group)
2) 80MB is already taken by a 75MB database in ONE TABLE and 5MB of files
3) the database gets broken
4) mysql tries to auto repair the table
5) to repair the data it must create a temporary file that takes
almost as much as 75MB
6) the user goes over quota because mysql needs another 75MB to repair
the table, and this sums up like 75+75+5, a lot more than the 100MB quota
7) mysql gets stuck on this db
8) people continue to access the site and connections build up, and each
connection waits for the repair to be finished
9) eventually mysql runs out of connections and this stops other
clients too.
10) the day is ruined.

SO the questions are:

1) Is there a way to specify the USER/GROUP and PLACE for temporary files
for table repair?
2) Maybe there is some OTHER way to set a quota for databases?
3) Is there a way to specify a max number of connections for a user?

The current solution as I see it is to make a daemon which connects to
the database with administrative privileges and monitors the number of
connections and repairs using the processlist. If some user makes more
than the allowed number of connections, just kill them off. If a repair
occurs, turn off that host for a while and turn it back on again when
the repair is finished.

I wonder how it is solved on mass hosting servers.

--
Regards,
Artem





Re: How to auto repair db on hosting with quota

2007-12-09 Thread Artem Kuchin

I won't convert the thread style, so I'm top-posting :)

All this would be great if it worked. The main problem is that the
temporary directory option for mysqld IS NOT USED while doing
table repair. All files created while repairing tables are placed
in the same directory with the same owner and group as the original
database files (I mean, for example, .TMD files, which can be huge!).

So, I am currently stuck with no solution to the problem.

--
Artem



Russell E Glaue wrote:

Using Quota on Unix, you can specify a different quota soft_limit for
temporary space.
You can set MySQL to use /tmp as the temporary space, then set a
soft_limit of 10mb and a hard_limit of 300MB.
This quota should be separate from the normal user space quota.

With MySQL temp dir set to /tmp, and a soft/hard quota as 10MB/300MB,
the recovery should occur successfully.
And going above the soft_limit gives the user X amount of days to get
back down below the soft_limit. And the user cannot go above the
hard_limit. 
You can set the soft_limit recovery days to like 1 day. They can go
over the 10MB soft_limit for 1 day, up to the hard_limit of 300MB,
but after one day the temp files will be deleted until the soft_limit
is satisfied.


This is a great way to control temp space, and not let users abuse
it, but still give them a lot of space to perform things like large
mysql database recoveries. 



-RG


Artem Kuchin wrote:

I am asking this question as a hosting PROVIDER, not
as a hosting client.

The situation is like this (this is a simple example with small
numbers):

1) a client has a 100MB quota for site and db files (set on the unix group)
2) 80MB is already taken by a 75MB database in ONE TABLE and 5MB of files
3) the database gets broken
4) mysql tries to auto repair the table
5) to repair the data it must create a temporary file that takes
almost as much as 75MB
6) the user goes over quota because mysql needs another 75MB to repair
the table, and this sums up like 75+75+5, a lot more than the 100MB quota
7) mysql gets stuck on this db
8) people continue to access the site and connections build up, and each
connection waits for the repair to be finished
9) eventually mysql runs out of connections and this stops other
clients too.
10) the day is ruined.

SO the questions are:

1) Is there a way to specify the USER/GROUP and PLACE for temporary files
for table repair?
2) Maybe there is some OTHER way to set a quota for databases?
3) Is there a way to specify a max number of connections for a user?

The current solution as I see it is to make a daemon which connects to
the database with administrative privileges and monitors the number of
connections and repairs using the processlist. If some user makes more
than the allowed number of connections, just kill them off. If a repair
occurs, turn off that host for a while and turn it back on again when
the repair is finished.

I wonder how it is solved on mass hosting servers.

--
Regards,
Artem





Re: How to auto repair db on hosting with quota

2007-12-07 Thread Artem Kuchin

Baron Schwartz wrote:

Regards,
Baron

Yes it is.  Even worse, people love to convert a top-posted thread to
bottom-posted and verse vice-a!


I see top-posting is popular in this mailing list :)


And now for the bottom-posting part of this reply...


Any idea on how to limit connection per user? (so one user
cannot abuse the server and take all max_connections connections
for himself).


See http://dev.mysql.com/doc/refman/5.0/en/grant.html and look for the
GRANT OPTION options.


Argh!!

"The MAX_USER_CONNECTIONS count option, implemented in MySQL 5.0.3, limits the maximum number of simultaneous connections that the 
account can make. If count is 0 (the default), the max_user_connections system variable determines the number of simultaneous 
connections for the account. "


We are still kinda stuck on 4.1. Well, it's time to upgrade.
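For reference, on 5.0.3+ the per-account limit from the quoted manual text is set through GRANT (the account name and limit below are made up for the example):

```sql
GRANT USAGE ON *.* TO 'someclient'@'localhost'
    WITH MAX_USER_CONNECTIONS 10;
```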

--
Regards,
Artem





Re: How to auto repair db on hosting with quota

2007-12-07 Thread Artem Kuchin


I see top-posting is popular in this mailing list :)

Well, thank you for the idea. It seems to be workable. I hope that
while the data is being repaired and located in tmp, the rights are
set correctly so no one else can read the files. I'll check it myself.
Thank you very much.

Any idea on how to limit connection per user? (so one user
cannot abuse the server and take all max_connections connections
for himself).

--
Regards,
Artem

Russell E Glaue wrote:

Using Quota on Unix, you can specify a different quota soft_limit for
temporary space.
You can set MySQL to use /tmp as the temporary space, then set a
soft_limit of 10mb and a hard_limit of 300MB.
This quota should be separate from the normal user space quota.

With MySQL temp dir set to /tmp, and a soft/hard quota as 10MB/300MB,
the recovery should occur successfully.
And going above the soft_limit gives the user X amount of days to get
back down below the soft_limit. And the user cannot go above the
hard_limit. 
You can set the soft_limit recovery days to like 1 day. They can go
over the 10MB soft_limit for 1 day, up to the hard_limit of 300MB,
but after one day the temp files will be deleted until the soft_limit
is satisfied.


This is a great way to control temp space, and not let users abuse
it, but still give them a lot of space to perform things like large
mysql database recoveries. 
Artem Kuchin wrote:

I am asking this question as a hosting PROVIDER, not
as a hosting client.

The situation is like this (this is a simple example with small
numbers):

1) a client has a 100MB quota for site and db files (set on the unix group)
2) 80MB is already taken by a 75MB database in ONE TABLE and 5MB of files
3) the database gets broken
4) mysql tries to auto repair the table
5) to repair the data it must create a temporary file that takes
almost as much as 75MB
6) the user goes over quota because mysql needs another 75MB to repair
the table, and this sums up like 75+75+5, a lot more than the 100MB quota
7) mysql gets stuck on this db
8) people continue to access the site and connections build up, and each
connection waits for the repair to be finished
9) eventually mysql runs out of connections and this stops other
clients too.
10) the day is ruined.

SO the questions are:

1) Is there a way to specify the USER/GROUP and PLACE for temporary files
for table repair?
2) Maybe there is some OTHER way to set a quota for databases?
3) Is there a way to specify a max number of connections for a user?

The current solution as I see it is to make a daemon which connects to
the database with administrative privileges and monitors the number of
connections and repairs using the processlist. If some user makes more
than the allowed number of connections, just kill them off. If a repair
occurs, turn off that host for a while and turn it back on again when
the repair is finished.

I wonder how it is solved on mass hosting servers.

--
Regards,
Artem





Re: How to auto repair db on hosting with quota

2007-12-07 Thread Artem Kuchin

Steven Buehler wrote:

-----Original Message-----
From: Artem Kuchin [mailto:[EMAIL PROTECTED]
Sent: Friday, December 07, 2007 4:35 AM
To: mysql@lists.mysql.com
Subject: How to auto repair db on hosting with quota

I am asking this question as a hosting PROVIDER, not
as a hosting client.

The situation is like this (this is a simple example with small
numbers):

1) a client has a 100MB quota for site and db files (set on the unix group)
2) 80MB is already taken by a 75MB database in ONE TABLE and 5MB of files
3) the database gets broken
4) mysql tries to auto repair the table
5) to repair the data it must create a temporary file that takes
almost as much as 75MB
6) the user goes over quota because mysql needs another 75MB to repair
the table, and this sums up like 75+75+5, a lot more than the 100MB quota
7) mysql gets stuck on this db
8) people continue to access the site and connections build up, and each
connection waits for the repair to be finished
9) eventually mysql runs out of connections and this stops other
clients too.
10) the day is ruined.

SO the questions are:

1) Is there a way to specify the USER/GROUP and PLACE for temporary files
for table repair?
2) Maybe there is some OTHER way to set a quota for databases?
3) Is there a way to specify a max number of connections for a user?

The current solution as I see it is to make a daemon which connects to
the database with administrative privileges and monitors the number of
connections and repairs using the processlist. If some user makes more
than the allowed number of connections, just kill them off. If a repair
occurs, turn off that host for a while and turn it back on again when
the repair is finished.

I wonder how it is solved on mass hosting servers.


Have you tried doing the repair as the root user for MySQL instead of
the client user?


It does not help. The quota is set per GROUP, so when mysql creates a
temporary file in the database directory, the group is inherited from
this directory, and that is the user group, so the temporary file is
counted against the user quota.

--
Regards,
Artem





How to auto repair db on hosting with quota

2007-12-07 Thread Artem Kuchin

I am asking this question as a hosting PROVIDER, not
as a hosting client.

The situation is like this (this is a simple example with small
numbers):

1) a client has a 100MB quota for site and db files (set on the unix group)
2) 80MB is already taken by a 75MB database in ONE TABLE and 5MB of files
3) the database gets broken
4) mysql tries to auto repair the table
5) to repair the data it must create a temporary file that takes
almost as much as 75MB
6) the user goes over quota because mysql needs another 75MB to repair
the table, and this sums up like 75+75+5, a lot more than the 100MB quota
7) mysql gets stuck on this db
8) people continue to access the site and connections build up, and each
connection waits for the repair to be finished
9) eventually mysql runs out of connections and this stops other
clients too.
10) the day is ruined.

SO the questions are:

1) Is there a way to specify the USER/GROUP and PLACE for temporary files
for table repair?
2) Maybe there is some OTHER way to set a quota for databases?
3) Is there a way to specify a max number of connections for a user?

The current solution as I see it is to make a daemon which connects to
the database with administrative privileges and monitors the number of
connections and repairs using the processlist. If some user makes more
than the allowed number of connections, just kill them off. If a repair
occurs, turn off that host for a while and turn it back on again when
the repair is finished.

I wonder how it is solved on mass hosting servers.
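The monitoring daemon described above could base its per-user kill decisions on the processlist; from 5.1 on it is queryable as a table (a sketch; the limit of 10 is arbitrary):

```sql
-- connections per user, flagging anyone over the allowed limit
SELECT user, COUNT(*) AS conns
FROM information_schema.PROCESSLIST
GROUP BY user
HAVING conns > 10;
-- each offending thread can then be terminated with KILL <thread_id>
```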

--
Regards,
Artem




Re: sporadic error 13 on windows

2007-11-02 Thread Artem Kuchin

Kristen G. Thorson wrote:

-Original Message-
From: Artem Kuchin [mailto:[EMAIL PROTECTED]
Sent: Friday, November 02, 2007 9:23 AM
To: Baron Schwartz
Cc: mysql@lists.mysql.com
Subject: Re: sporadic error 13 on windows

DBD::mysql::st execute failed: Can't create/write to file
'#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at
cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file
'#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at
cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file
'#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at
cleanfilestore.pl line 90,  line 1.





I found several places where anti-virus was referenced as the culprit.
Did you look into that yet?

http://forums.slimdevices.com/showthread.php?t=30543&page=2


Damn it! IT WAS McAfee. Even though McAfee did not say anything
about blocking files in its log - I checked it before. Stupid thing!


--
Artem





Re: sporadic error 13 on windows

2007-11-02 Thread Artem Kuchin

Baron Schwartz wrote:

What do you get from the following:

mysql> show variables like '%tmp%';


1) if I set
tmpdir=C:/temp

it shows:
mysql> show variables like '%tmp%';
+---+---+
| Variable_name | Value |
+---+---+
| max_tmp_tables| 32|
| slave_load_tmpdir | C:\temp\  |
| tmp_table_size| 527433728 |
| tmpdir| C:/temp   |
+---+---+
4 rows in set (0.00 sec)

Got error:
DBD::mysql::st execute failed: Can't create/write to file '#sql_8c4_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.

2) if I set
tmpdir=C:\\temp
(setting C:\temp results in C:emp, so it must be \\)

mysql> show variables like '%tmp%';
+---+---+
| Variable_name | Value |
+---+---+
| max_tmp_tables| 32|
| slave_load_tmpdir | C:\temp\  |
| tmp_table_size| 527433728 |
| tmpdir| C:\temp   |
+---+---+
4 rows in set (0.00 sec)

got errors:

DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_150c_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_150c_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_150c_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.

3) if I set
tmpdir=C:\\temp\\

+---+---+
| Variable_name | Value |
+---+---+
| max_tmp_tables| 32|
| slave_load_tmpdir | C:\temp\  |
| tmp_table_size| 527433728 |
| tmpdir| C:\temp\  |
+---+---+
4 rows in set (0.20 sec)

DBD::mysql::st execute failed: Can't create/write to file '#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_948_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.

Two points here:

1) The number of errors is different each time; rerunning the script
results in a different number of errors and sometimes no errors at all,
so this error is unstable and sporadic.

2) As you can see, the file mentioned in the error message is sometimes
in an unknown dir (like '#sql_948_0.MYD') and sometimes in
'C:\Program Files\MySQL\MySQL Server 5.0\Data\'.

HELP!

--
Artem
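For what it's worth, OS error code 13 is EACCES, "Permission denied": the server process could not create the implicit temporary table files (#sql_*.MYI / #sql_*.MYD) in whichever directory it resolved for them. A minimal Python sketch of that check (the `can_write` helper is illustrative, not MySQL code):

```python
import errno
import os
import tempfile

# OS error 13 is EACCES ("Permission denied") -- the code behind
# "Errcode: 13" in the mysqld messages above.
assert errno.EACCES == 13

def can_write(directory):
    """Try to create and delete a scratch file in `directory`,
    roughly what mysqld does when it materializes an implicit
    temporary table (#sql_*.MYI / #sql_*.MYD) in tmpdir."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False

print(os.strerror(errno.EACCES))        # human-readable text for error 13
print(can_write(tempfile.gettempdir()))
```

If a check like this fails for the configured tmpdir (or the datadir), granting the account mysqld runs under write permission on that directory is the usual fix for Errcode 13.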




Artem Kuchin wrote:

anyone any idea on this ?


On 10/31/07, Artem Kuchin <[EMAIL PROTECTED]> wrote:

I am running
Server version: 5.0.45-community-nt MySQL Community Edition (GPL)
on Windows 2003 R2 Service Pack 2 Standard Edition

I have a database dig perl script which basically does a lot of
SELECT [somefield] as tt FROM table;


When i run it i get:

DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_bdc_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.


I have
[mysqld]
tmpdir=C:/temp

in my.ini

So, first of all, why is it trying to create something in Program Files?
Why are there files without a path at all (Can't create/write to file '#sql_bdc_0.MYD')?

These errors occur at different moments every time the script is run.

Anyhow, any idea what's going on here?

TEMP has full rights for ALL. But anyway, as you see, it is not even mentioned here.

Re: sporadic error 13 on windows

2007-11-02 Thread Artem Kuchin

anyone any idea on this ?


On 10/31/07, Artem Kuchin <[EMAIL PROTECTED]> wrote:

I am running
Server version: 5.0.45-community-nt MySQL Community Edition (GPL)
on Windows 2003 R2 Service Pack 2 Standard Edition

I have a database dig perl script which basically does a lot of
SELECT [somefield] as tt FROM table;


When i run it i get:

DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_bdc_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.


I have
[mysqld]
tmpdir=C:/temp

in my.ini

So, first of all, why is it trying to create something in Program Files?
Why are there files without a path at all (Can't create/write to file '#sql_bdc_0.MYD')?

These errors occur at different moments every time the script is run.

Anyhow, any idea what's going on here?

TEMP has full rights for ALL. But anyway, as you see, it is not even mentioned here.





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: sporadic error 13 on windows

2007-10-31 Thread Artem Kuchin

Just a thought (I'm certainly no Windows guy), but perhaps your setting for the TMP directory would be respected if you used the DOS-format path with the backslash:

tmpdir=C:\temp
vs.
tmpdir=C:/temp



tried it, still:

DBD::mysql::st execute failed: Can't create/write to file '#sql_152c_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.

DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.

--
Artem
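As a side note on the separator suggestion: Windows itself accepts both separators, which is consistent with the backslash form making no difference here. Python's ntpath module (the Windows flavor of os.path) shows the two spellings name the same directory:

```python
import ntpath

# ntpath.normpath converts forward slashes to backslashes, so
# tmpdir=C:/temp and tmpdir=C:\temp refer to the same directory.
print(ntpath.normpath("C:/temp"))   # C:\temp
print(ntpath.normpath("C:\\temp"))  # C:\temp
```

That points away from path syntax and toward error 13 being a genuine permission problem on whichever directory the server ends up using.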




On 10/31/07, Artem Kuchin <[EMAIL PROTECTED]> wrote:

I am running
Server version: 5.0.45-community-nt MySQL Community Edition (GPL)
on Windows 2003 R2 Service Pack 2 Standard Edition

I have a database dig perl script which basically does a lot of
SELECT [somefield] as tt FROM table;


When i run it i get:

DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_bdc_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.


I have
[mysqld]
tmpdir=C:/temp

in my.ini

So, first of all, why is it trying to create something in Program Files?
Why are there files without a path at all (Can't create/write to file '#sql_bdc_0.MYD')?

These errors occur at different moments every time the script is run.

Anyhow, any idea what's going on here?

TEMP has full rights for ALL. But anyway, as you see, it is not even mentioned here.

--
Regards,
Artem


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
- michael dykman
- [EMAIL PROTECTED]

- All models are wrong.  Some models are useful. 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



sporadic error 13 on windows

2007-10-31 Thread Artem Kuchin

I am running
Server version: 5.0.45-community-nt MySQL Community Edition (GPL)
on Windows 2003 R2 Service Pack 2 Standard Edition

I have a database dig perl script which basically does a lot of
SELECT [somefield] as tt FROM table;



When i run it i get:

DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file 'C:\Program Files\MySQL\MySQL Server 5.0\Data\#sql_bdc_0.MYI' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.
DBD::mysql::st execute failed: Can't create/write to file '#sql_bdc_0.MYD' (Errcode: 13) at cleanfilestore.pl line 89,  line 1.
DBD::mysql::st fetchrow_array failed: fetch() without execute() at cleanfilestore.pl line 90,  line 1.


I have 
[mysqld]

tmpdir=C:/temp

in my.ini

So, first of all, why is it trying to create something in Program Files?
Why are there files without a path at all (Can't create/write to file '#sql_bdc_0.MYD')?

These errors occur at different moments every time the script is run.

Anyhow, any idea what's going on here?

TEMP has full rights for ALL. But anyway, as you see, it is not even mentioned here.

--
Regards,
Artem


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: BINARY LIKE '..' just does not work

2005-03-05 Thread Artem Kuchin
On Saturday, March 05, 2005 1:46 PM [GMT+1=CET],
Gleb Paharenko <[EMAIL PROTECTED]> wrote:
Hello.
The problem I think is in indexes. After dropping the unique key
the query gives the correct result. I've reported a bug:
 http://bugs.mysql.com/bug.php?id=8976
You may add your comments there.
I have added my comment. TOO BAD that you specified NON-CRITICAL.
It is critical, because after our provider upgraded to 4.1 all our engines stopped searching for products and texts, because the dictionary table for word and term search is built on binary char. I can only guess how many sites are now broken in this manner.

Artem
"Artem Kuchin" <[EMAIL PROTECTED]> wrote:
MySQL:  4.1.8a
OS: FreeBSD 5.3
The problem is that BINARY LIKE '..' returns no rows no matter what is specified in '..'. However, BINARY field_name LIKE '..' works, but I cannot use it because it does not use the index (EXPLAINs are below). According to the documentation, BINARY LIKE should work. But it does not.
Smells like a bug.
Default charset for mysql (it is specified at compile time) is cp1251, if it matters (the test uses only latin chars).
Below is the test case:
(maybe someone else could run it on their server and tell me if it works, and also report the mysql server version where it was tested).
CREATE TABLE voc2 (
 id int(10) unsigned NOT NULL default '0',
 word char(32) binary NOT NULL default '',
 counter int(10) unsigned NOT NULL default '0',
 PRIMARY KEY  (id),
 UNIQUE KEY i_vocabulary_word (word)
) TYPE=MyISAM;
insert into voc2 values(1,'falama',1);
insert into voc2 values(2,'lagraf',1);
insert into voc2 values(3,'folka',1);
insert into voc2 values(4,'pofik',1);
mysql> select * from voc2 where word like 'f%';
Empty set (0.00 sec)
mysql> select * from voc2 where word like '%f';
+----+--------+---------+
| id | word   | counter |
+----+--------+---------+
|  2 | lagraf |       1 |
+----+--------+---------+
1 row in set (0.00 sec)
mysql> select * from voc2 where word like '%f%';
+----+--------+---------+
| id | word   | counter |
+----+--------+---------+
|  1 | falama |       1 |
|  2 | lagraf |       1 |
|  3 | folka  |       1 |
|  4 | pofik  |       1 |
+----+--------+---------+
4 rows in set (0.00 sec)
mysql> select * from voc2 where word like BINARY 'f%';
Empty set (0.01 sec)
mysql> select * from voc2 where word like BINARY '%f';
+----+--------+---------+
| id | word   | counter |
+----+--------+---------+
|  2 | lagraf |       1 |
+----+--------+---------+
1 row in set (0.00 sec)
mysql> select * from voc2 where word like BINARY '%f%';
+----+--------+---------+
| id | word   | counter |
+----+--------+---------+
|  1 | falama |       1 |
|  2 | lagraf |       1 |
|  3 | folka  |       1 |
|  4 | pofik  |       1 |
+----+--------+---------+
4 rows in set (0.00 sec)
mysql> select * from voc2 where binary word like  'f%';
+----+--------+---------+
| id | word   | counter |
+----+--------+---------+
|  1 | falama |       1 |
|  3 | folka  |       1 |
+----+--------+---------+
2 rows in set (0.00 sec)
mysql> explain select * from voc2 where binary word like  'f%';
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
|  1 | SIMPLE      | voc2  | ALL  | NULL          | NULL |    NULL | NULL |    4 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------+
1 row in set (0.00 sec)
mysql> explain select * from voc2 where  word like binary 'f%';
+----+-------------+-------+-------+-------------------+-------------------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys     | key               | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+-------------------+-------------------+---------+------+------+-------------+
|  1 | SIMPLE      | voc2  | range | i_vocabulary_word | i_vocabulary_word |      32 | NULL |    1 | Using where |
+----+-------------+-------+-------+-------------------+-------------------+---------+------+------+-------------+
1 row in set (0.00 sec)
This sucks.
Regards,
Artem


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita This email is sponsored by
  Ensita.NET http://www.ensita.net/ __  ___ ___   __
 /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
/ /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
  <___/   www.mysql.com 

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


BINARY LIKE '..' just does not work

2005-03-04 Thread Artem Kuchin
MySQL:  4.1.8a
OS: FreeBSD 5.3
The problem is that BINARY LIKE '..' returns no rows no matter what is specified in '..'. However, BINARY field_name LIKE '..' works, but I cannot use it because it does not use the index (EXPLAINs are below). According to the documentation, BINARY LIKE should work. But it does not.
Smells like a bug.
Default charset for mysql (it is specified at compile time) is cp1251, if it matters (the test uses only latin chars).
Below is the test case:
(maybe someone else could run it on their server and tell me if it works, and also report the mysql server version where it was tested).
CREATE TABLE voc2 (
 id int(10) unsigned NOT NULL default '0',
 word char(32) binary NOT NULL default '',
 counter int(10) unsigned NOT NULL default '0',
 PRIMARY KEY  (id),
 UNIQUE KEY i_vocabulary_word (word)
) TYPE=MyISAM;
insert into voc2 values(1,'falama',1);
insert into voc2 values(2,'lagraf',1);
insert into voc2 values(3,'folka',1);
insert into voc2 values(4,'pofik',1);
mysql> select * from voc2 where word like 'f%';
Empty set (0.00 sec)
mysql> select * from voc2 where word like '%f';
+++-+
| id | word   | counter |
+++-+
|  2 | lagraf |   1 |
+++-+
1 row in set (0.00 sec)
mysql> select * from voc2 where word like '%f%';
+++-+
| id | word   | counter |
+++-+
|  1 | falama |   1 |
|  2 | lagraf |   1 |
|  3 | folka  |   1 |
|  4 | pofik  |   1 |
+++-+
4 rows in set (0.00 sec)
mysql> select * from voc2 where word like BINARY 'f%';
Empty set (0.01 sec)
mysql> select * from voc2 where word like BINARY '%f';
+++-+
| id | word   | counter |
+++-+
|  2 | lagraf |   1 |
+++-+
1 row in set (0.00 sec)
mysql> select * from voc2 where word like BINARY '%f%';
+++-+
| id | word   | counter |
+++-+
|  1 | falama |   1 |
|  2 | lagraf |   1 |
|  3 | folka  |   1 |
|  4 | pofik  |   1 |
+++-+
4 rows in set (0.00 sec)
mysql> select * from voc2 where binary word like  'f%';
+++-+
| id | word   | counter |
+++-+
|  1 | falama |   1 |
|  3 | folka  |   1 |
+++-+
2 rows in set (0.00 sec)
mysql> explain select * from voc2 where binary word like  'f%';
++-+---+--+---+--+-+--+--+-+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | 
rows | Extra   |
++-+---+--+---+--+-+--+--+-+
|  1 | SIMPLE  | voc2  | ALL  | NULL  | NULL |NULL | NULL | 
4 | Using where |
++-+---+--+---+--+-+--+--+-+
1 row in set (0.00 sec)

mysql> explain select * from voc2 where  word like binary 'f%';
++-+---+---+---+---+-+--+--+-+
| id | select_type | table | type  | possible_keys | key   | 
key_len | ref  | rows | Extra   |
++-+---+---+---+---+-+--+--+-+
|  1 | SIMPLE  | voc2  | range | i_vocabulary_word | i_vocabulary_word | 
32 | NULL |1 | Using where |
++-+---+---+---+---+-+--+--+-+
1 row in set (0.00 sec)

This sucks.
Regards,
Artem 
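As a cross-check of the expected result, here is a byte-wise prefix match over the voc2 test rows above, sketched in Python (`like_prefix_binary` is an illustrative helper, not MySQL code). It returns ids 1 and 3, exactly the rows that `word LIKE BINARY 'f%'` fails to find:

```python
# The voc2 test rows from the post above.
rows = {1: "falama", 2: "lagraf", 3: "folka", 4: "pofik"}

def like_prefix_binary(value, prefix):
    """Byte-wise prefix match: the semantics LIKE BINARY 'f%' should have."""
    return value.encode("latin-1").startswith(prefix.encode("latin-1"))

matching_ids = sorted(i for i, w in rows.items() if like_prefix_binary(w, "f"))
print(matching_ids)  # [1, 3]
```

This matches the output of the `where binary word like 'f%'` form, which is what makes the empty result from `like BINARY 'f%'` look like a bug rather than a collation subtlety.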

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]