Re: Really slow batch insert??

2013-12-30 Thread Cabbar Duzayak
Ah, forgot to mention that table is INNODB.


On Mon, Dec 30, 2013 at 1:59 PM, Cabbar Duzayak cab...@gmail.com wrote:

 Hi,


 We have a basic table, which is something like :


 (id varchar50, productId varchar50, category varchar50)


 In this table, ID is the primary key and we have a unique index on
 (category, productId). And, there is a case where we want to do bulk
 inserts (3000 inserts) using:


 INSERT INTO (id, productId, category) VALUES (‘x1’, ‘y1’, ‘z1’), (‘x2’,
 ‘y2’, ‘z3’) ….. ON DUPLICATE KEY productId = VALUES(productId), category =
 VALUES(category)


 So, when we try to insert 3000 rows using this syntax with a single
 statement, it takes ~ 3 seconds to execute this on an empty table.


 BTW, innodb_flush_log_at_trx_commit is set to  2 for us.


 I don’t have something factual data here, but it feels that this is too
 much just for 3000 rows.


 Isn’t this just too slow? Are we doing something wrong here? Or, can you
 advice as to which configuration parameters can we tweak here to make it
 faster?


 Thanks



Really slow batch insert??

2013-12-30 Thread Cabbar Duzayak
Hi,


We have a basic table, which is something like :


(id varchar50, productId varchar50, category varchar50)


In this table, ID is the primary key and we have a unique index on
(category, productId). And, there is a case where we want to do bulk
inserts (3000 inserts) using:


INSERT INTO (id, productId, category) VALUES (‘x1’, ‘y1’, ‘z1’), (‘x2’,
‘y2’, ‘z3’) ….. ON DUPLICATE KEY productId = VALUES(productId), category =
VALUES(category)


So, when we try to insert 3000 rows using this syntax with a single
statement, it takes ~ 3 seconds to execute this on an empty table.


BTW, innodb_flush_log_at_trx_commit is set to  2 for us.


I don’t have something factual data here, but it feels that this is too
much just for 3000 rows.


Isn’t this just too slow? Are we doing something wrong here? Or, can you
advice as to which configuration parameters can we tweak here to make it
faster?


Thanks


Re: Query Plan Analyzer

2012-11-22 Thread Cabbar Duzayak
Is this a joke?

On Thu, Nov 22, 2012 at 10:20 AM, Zhigang Zhang zzgang2...@gmail.comwrote:

 By experience!

 -Original Message-
 From: Cabbar Duzayak [mailto:cab...@gmail.com]
 Sent: Thursday, November 22, 2012 3:13 PM
 To: mysql@lists.mysql.com
 Subject: Query Plan Analyzer

 Hi All,

 Is there a way of looking at how mysql builds the query plan and executes
 it for a given query? EXPLAIN is definitely a useful tool, but it is not
 exact (shows approximations as far as I can see), furthermore I want
 something like how the predicates were applied (before or after JOINS),
 which indexes were used, etc.

 Basically, Oracle's EXPLAIN PLAN and SQL TRACE do this, and it is very very
 useful for optimizing queries. Don't get me wrong, I love mysql, but with
 mysql, you optimize queries more by trial and error, instead of
 understanding what exactly is going on.

 Thanks a ton.




Query Plan Analyzer

2012-11-21 Thread Cabbar Duzayak
Hi All,

Is there a way of looking at how mysql builds the query plan and executes
it for a given query? EXPLAIN is definitely a useful tool, but it is not
exact (shows approximations as far as I can see), furthermore I want
something like how the predicates were applied (before or after JOINS),
which indexes were used, etc.

Basically, Oracle's EXPLAIN PLAN and SQL TRACE do this, and it is very very
useful for optimizing queries. Don't get me wrong, I love mysql, but with
mysql, you optimize queries more by trial and error, instead of
understanding what exactly is going on.

Thanks a ton.


Re: Subquery taking too much time on 5.5.18?

2012-07-08 Thread Cabbar Duzayak
mysql SELECT @@optimizer_switch;
++
| @@optimizer_switch
  |
++
| 
index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on
|
++
1 row in set (0.00 sec)


On Sat, Jul 7, 2012 at 8:01 PM, Peter Brawley
peter.braw...@earthlink.net wrote:
 On 2012-07-07 9:52 AM, Cabbar Duzayak wrote:

 Hmm,

 Looking at the link http://www.artfulsoftware.com/infotree/queries.php
 and explanations here, EXISTS() should have performed better, but does
 not seem to??? I stopped it after about 5 minutes.

 I tried both:

SELECT * FROM A WHERE EXISTS (SELECT * FROM B WHERE A.id=B.A_ID and
 B.name like 'X%');

 and

SELECT * FROM A WHERE EXISTS (SELECT B.A_ID FROM B WHERE A.id=B.A_ID
 and B.name like 'X%');

 Both are slow...


 What's the optimizer_switch setting?

 PB


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Subquery taking too much time on 5.5.18?

2012-07-07 Thread Cabbar Duzayak
Hmm,

Looking at the link http://www.artfulsoftware.com/infotree/queries.php
and explanations here, EXISTS() should have performed better, but does
not seem to??? I stopped it after about 5 minutes.

I tried both:

  SELECT * FROM A WHERE EXISTS (SELECT * FROM B WHERE A.id=B.A_ID and
B.name like 'X%');

and

  SELECT * FROM A WHERE EXISTS (SELECT B.A_ID FROM B WHERE A.id=B.A_ID
and B.name like 'X%');

Both are slow...


On Sat, Jul 7, 2012 at 8:34 AM, Rik Wasmus r...@grib.nl wrote:
 See The unbearable slowness of IN() at
 http://www.artfulsoftware.com/infotree/queries.php


 Do you read your own links? Excerpt:

 In 5.0, EXISTS() is much faster than IN(), but slower than JOIN. In 5.5,
 EXISTS() performs about as well as JOIN.

 So judging by the subject line...

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Subquery taking too much time on 5.5.18?

2012-07-06 Thread Cabbar Duzayak
Hi Everyone,

I have been trying to understand why subqueries are taking tooo much
time on my installation of MySQL 5.5.18 on Ubuntu 11.10 release.

In a nutshell, I have 2 tables: A and B. And, I do something like this:

SELECT * FROM A WHERE A.id IN (SELECT A_ID FROM B WHERE B.name like 'X%');

Table A has 460,000 rows and Table B has 5000 rows, and A.id is my
primary key, and B.name is indexed. Also, the sub-query here (B.name
starts with X%) returns about 300 rows.

For some weird reason, this query takes a ton of time (I cancelled it
after 750 seconds). I looked at the query plan with EXPLAIN and it
could not find an index to use for table A and looks like it is doing
a table scan (even though A.id is the primary key)…

To understand it better, I divided it up, and sent two queries
separately as follows::

SELECT A_ID FROM B WHERE B.name like 'X%'
takes 0.002 second.

For testing purposes, I concatenated all ids from this query and send
a hard-coded query on A like:

SELECT * FROM A WHERE A.id in (1,2,3,4,5…..)
and this takes 0.002 second.



Basically, both queries are super fast, but when I combine them via IN
w/sub-query, the thing spends a lot more time?


As an alternative, I tried using JOIN as follows:
SELECT A.* FROM A INNER JOIN B ON (A.id = B.A_id) WHERE B.name like 'X%';
and this takes 0.04 seconds

JOIN is also fast, but there are cases where I really need IN subqueries.


I would really really appreciate it if you can shed some light on this
issue and tell me what I am doing wrong and/or how I can fix this?

Thanks a ton.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Subquery taking too much time on 5.5.18?

2012-07-06 Thread Cabbar Duzayak
Benjamin:

I tried this, but it is slow as well, but it went down to something
like 20 seconds.

Reindl:

I was thinking about doing this on the app server side, but it
consumes more memory + requires multiple roundtrips, that is why I
wanted to do it on the mysql side. Also, like uses the index if index
is not hash, and if cardinality value is more appropriate, ie. your
predicate cardinality is better with the index instead of table scan
and if like does not start with a percent.


Also, JOIN works if you have 2 tables involved, but it starts getting
messier and messier as you keep converting every IN to JOINs, and I
was thinking that optimizer should be able to optimize this much
better and faster than JOINs.

Thanks.


On Fri, Jul 6, 2012 at 7:37 PM, Stillman, Benjamin
bstill...@limitedbrands.com wrote:
 As far as I know, a B-tree index can be used by LIKE as long as the string 
 doesn't begin with a wildcard.  LIKE 'X%'  should be fine to use an index 
 on the name column. The index only includes results in the search base which 
 start with X.

 That said, I probably wouldn't use a subquery, either. But since the OP says 
 they'd prefer to use subqueries, try this and tell me what happens:

 SELECT * FROM A WHERE A.id IN (
 SELECT A_ID FROM (
 SELECT A_ID FROM B WHERE B.name LIKE 'X%'
 ) AS x
 );

 It's just wrapping the subquery within another subquery, forcing MySQL to run 
 from the inside out. I don't have a running instance nearby to test on, but I 
 hope it helps.






 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Friday, July 06, 2012 11:58 AM
 To: mysql@lists.mysql.com
 Subject: Re: Subquery taking too much time on 5.5.18?



 Am 06.07.2012 17:46, schrieb Cabbar Duzayak:
 Hi Everyone,

 I have been trying to understand why subqueries are taking tooo much
 time on my installation of MySQL 5.5.18 on Ubuntu 11.10 release.

 In a nutshell, I have 2 tables: A and B. And, I do something like this:

 SELECT * FROM A WHERE A.id IN (SELECT A_ID FROM B WHERE B.name like
 'X%');

 Table A has 460,000 rows and Table B has 5000 rows, and A.id is my
 primary key, and B.name is indexed. Also, the sub-query here (B.name
 starts with X%) returns about 300 rows.

 query B can not used any key because 'like' never can use any key

 i try to avoid subqueries wherever i can becasue the mysql query-optmizer is 
 really weak in most cases (it appears
 5.6 will be much better in many of them)

 such things i would always do with two queries in the application

 * first the sub-query
 * genearte the query above with the results in the app
 * fire up the final query


 

 Notice: This communication may contain privileged and/or confidential 
 information. If you are not the intended recipient, please notify the sender 
 by email, and immediately delete the message and any attachments without 
 copying or disclosing them. LBI may, for any reason, intercept, access, use, 
 and disclose any information that is communicated by or through, or which is 
 stored on, its networks, applications, services, and devices.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Indexed Query examining too many rows!

2012-02-13 Thread Cabbar Duzayak
Hi,

As you can see in my query, % is not in the beginning. Once again, it is :

select * from DataIndex where (searchKey like 'A%') order by searchKey limit 10
where searchKey has a btree on it.

As Peter was saying, percent in the beginning does a full table scan
as expected.

Thanks.

On Mon, Feb 13, 2012 at 12:57 AM, Peter Brawley
peter.braw...@earthlink.net wrote:
 On 2/12/2012 4:40 PM, Reindl Harald wrote:

 Am 12.02.2012 23:25, schrieb Cabbar Duzayak:

 Hi All,

 I have a table with a btree index on its searchKey column, and when I
 send a simple query on this table:

 explain select * from DataIndex where (searchKey like 'A%') order by
 searchKey limit 10

 rows is returning 59548 and it tells me that it is using the searchKey
 index.

 Also, a select count(*) on this table returns 32104 rows, i.e.

 select count(*) from DataIndex where searchKey like 'a%' -  gives
 32104 as its result

 Am I doing something wrong here? Given that the searched column is
 indexed, shouldn't it examine way less rows?

 LIKE does not benefit from keys!


 It does if the wildcard is not at the front, as indicated at
 http://dev.mysql.com/tech-resources/presentations/presentation-oscon2000-2719/
 ...

 *When MySQL uses indexes*
 ...
 When you use a LIKE that doesn't start with a wildcard.
 SELECT * FROM table_name WHERE key_part1 LIKE 'jani%'
 ...

 PB

 -




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Pointers about replication

2007-03-06 Thread Cabbar Duzayak

Hi All,

Would it be possible to provide some advanced pointers
(articles/books/tutorials/sites) for learning more about replication?

I am particularly interested in master-to-master replication (not even
sure if this is possible with mysql) and/or real-world usage
scenarios/examples as to how much load it can handle, how reliable it
is, etc?

Any help is appreciated.

Thank you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Pushing mysql to the limits

2006-11-01 Thread Cabbar Duzayak

Hi,

We have huge amount of data, and we are planning to use logical
partitioning to divide it over multiple machines/mysql instances. We
are planning to use Intel based machines and will prefer ISAM since
there is not much updates but mostly selects. The main table that
constitutes this much of data has about 5 columns, and rows are about
50 bytes in size, and 3 columns in this table need to be indexed.

So, what I wanted to learn is how much can we push it to the limits on
a single machine with about 2 gig rams? Do you think MYSQL can handle
~ 700-800 gigabyte on a single machine? And, is it OK to put this much
data in a single table, or should we divide it over multiple tables?
If that is the case, what would be the limit for a single table?

Any help/input on this is greatly appreciated.

Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL memory allocation

2006-10-31 Thread Cabbar Duzayak

Hi,

Could you please tell how I can tell how much memory does mysql server
allocate on a linux box? I tried doing:

top -b -n 1 | grep mysql

But, it printed out bunch of processes for mysql. Are these all using
shared memory so each line gives you the total amount for mysql? How
can one interpret the results of this top, i.e. how should I read
Virtual/Resident/Shared columns for all these processes and find out
how much shared + (resident + swap: virtual) memory it uses?

Results of TOP for mysql:
 PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
8205 mysql 12   0 75524  28M 26896 S 4.8  0.7   3:15   1 mysqld
2150 mysql  9   0 75524  28M 26896 S 0.4  0.7   3:04   1 mysqld
32011 root   9   0   628  500   500 S 0.0  0.0   0:00   0 mysqld_safe
32033 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:08   0 mysqld
32035 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:00   0 mysqld
32036 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:04   0 mysqld
1321 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:51   1 mysqld
1323 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:17   0 mysqld
2112 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:20   0 mysqld
2131 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:04   0 mysqld
2132 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:34   1 mysqld
2133 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:04   1 mysqld
2134 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:12   0 mysqld
2135 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:51   0 mysqld
2136 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:59   1 mysqld
2137 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:43   0 mysqld
2142 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:12   0 mysqld
8080 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:58   0 mysqld
8082 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:52   1 mysqld
8197 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:43   1 mysqld
8204 mysql  9   0 75524  28M 26896 S 0.0  0.7   3:16   1 mysqld
16010 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:51   0 mysqld
16011 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:58   1 mysqld
16138 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:52   1 mysqld
4074 mysql  9   0 75524  28M 26896 S 0.0  0.7   2:16   0 mysqld
17331 mysql  9   0 75524  28M 26896 S 0.0  0.7   1:44   0 mysqld
17337 mysql  9   0 75524  28M 26896 S 0.0  0.7   1:59   0 mysqld
22847 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:53   1 mysqld
22912 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:49   0 mysqld
22944 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:48   1 mysqld
23101 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:38   0 mysqld
23102 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:43   0 mysqld
23124 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:44   1 mysqld
23168 mysql  9   0 75524  28M 26896 S 0.0  0.7   0:46   1 mysqld

Thanks.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Too many open processes??

2006-10-07 Thread Cabbar Duzayak

I am using mysql_pconnect from PHP to connect to our mysql server.
However, whenever there is a temprorary surge in the number of users,
i.e. concurrent users jump from 30 to 200 for like 5 minutes, Apache
creates 200 processes and after the surge is over, they die
gracefully, and # of processes goes down to ~ 30.

However, this is not the case for MySQL. During the surge, it creates
200 processes and these processes stay there forever (till the next
re-start), even though there are only 20-30 concurrent users after the
surge.

Is there a way to configure mysql so that it will kill a process after
a certain period of idle time, just like Apache does?

Thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Queue implementation on MySQL

2005-11-03 Thread Cabbar Duzayak
Hi,

Can you please recommend an open-source, light-weight and more
importantly RELIABLE point-to-point queue implementation, built on
mysql? Something similar to Oracle's Advanced Queue? I don't need
publish/subscribe or a priority-based point-to-point queue, just a
simple one which provides a reliable way of enqueuing and dequeuing
with LIFO...

Thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Active user sessions

2005-10-30 Thread Cabbar Duzayak
Hi,

Is there a way of listing all the active db sessions and their IP
addresses and/or db user names? Something like v$session in oracle?

Thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQLDump - Command line password

2005-07-19 Thread Cabbar Duzayak
Hi,

I have setup cronjobs to take daily backups of my db using mysqldump.
But the problem is, mysqldump requires the password to be passed via
command line, which means anyone on the same machine can take a peek
at my password using top, ps -ef, etc.

Is there a way of avoiding this, i.e. making it read the password from
some file, etc? Or, is there any other alternative I can use?

Thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Storing huge amount of binary data

2005-07-11 Thread Cabbar Duzayak
Hi Everyone,

I will be starting a project for which I will need to store about 1
million image files all of which are about 50-100K in size. I will be
using Linux for this project. The database won't be very busy, there
will be batch image uploads 1-2 times a day, and around 100-200
concurrent users at most, most of which will be reading from the db
and writing only session information type of data, etc... And, I don't
really need transaction support (InnoDB)...

Adding this up, the image data size will be around 50-100 Gb, and I
will need to store a total of 1-2 Gb text information (1K for each
image) along with each of these images...

First of all, I heard that Mysql does not perform very well when
tablesize goes above 1 Gb. Is this a myth? Image table is not a big
deal, since I can partition/distribute it to ~ 100-200 tables, i.e. by
table_(id % 100). However, text information needs to stay in a single
table (since I need to do queries on it for information) and there
will be multiple indexes over this information.

And, as you can imagine, I am not sure if mysql can handle something
like this, and was wondering if you can provide some feedback.

So my questions are:

1. The main question is, do you guys have any experience with this
much binary and regular data? Do you think Mysql can handle this much
data in a reliable manner (without corrupting data and/or
degrading/terrible performance) ?

2. Can I implement this using regular SCSI disks with regular mysql?
Or do I have need advanced solutions such as clustered, replicated,
etc?

3. Again, as you can understand, I want to minimize the cost here. If
you don't think I can use mysql, do you think Microsoft SQL server is
good enough for this task?

Thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]