Google Cloud SQL: a MySQL database that lives in Google's cloud?

2012-09-05 Thread Ariz Jacinto
This might interest some of you: Google Cloud SQL is actually a MySQL database
that lives in Google's cloud.

Source: https://developers.google.com/cloud-sql/docs/introduction


Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Thanks, Rick - definitely something to think about. I've been
troubled by the pagination stuff in our code. This looks like
something I can definitely use!

andy

On 9/5/12 2:40 PM, Rick James wrote:

Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (eg, search engine) brought a site to 
its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  By the 
time the entire list has been paged through, about N*N/2 units of work have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Wednesday, September 05, 2012 2:05 PM
To: mysql@lists.mysql.com
Subject: Re: Understanding Slow Query Log

Ok, this raises a question for me - what's a better way to do
pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the
rest of the week debugging Monday's code."
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
   GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (eg, search engine) brought 
a site to its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  
By the time the entire list has been paged through, about N*N/2 units of work 
have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !
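The two approaches can be written out side by side (a sketch with hypothetical table and column names; a real query would keep whatever ORDER BY column its index covers):

```sql
-- Pagination via OFFSET: for ?page=5&size=50 the server must read and
-- throw away 250 rows before returning the 50 wanted, so the cost
-- grows with the page number.
SELECT id, title
FROM   items
ORDER  BY id
LIMIT  250, 50;

-- "Remember where you left off": for ?after_id=12345&size=50 the
-- server can seek directly into the index on id and read just 51 rows.
SELECT id, title
FROM   items
WHERE  id > 12345
ORDER  BY id
LIMIT  51;
-- Rows 1..50 fill the page; a 51st row means there is a "Next" page,
-- and its id becomes the after_id in the Next link.
```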

> -Original Message-
> From: Andy Wallace [mailto:awall...@ihouseweb.com]
> Sent: Wednesday, September 05, 2012 2:05 PM
> To: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> Ok, this raises a question for me - what's a better way to do
> pagination?
> 
> On 9/5/12 2:02 PM, Rick James wrote:
> > * LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.
> 
> 
> 
> --
> Andy Wallace
> iHOUSEweb, Inc.
> awall...@ihouseweb.com
> (866) 645-7700 ext 219
> --
> "Sometimes it pays to stay in bed on Monday, rather than spending the
> rest of the week debugging Monday's code."
> - Christopher Thompson
> 





Help with mysql connect_timeout

2012-09-05 Thread indrani gorti
Hi all,
I am a newbie at working with MySQL settings, although I have used MySQL in
very small applications before.
I am currently experimenting with very large tables and a few complicated
queries. I am using MySQL 5.5. My tables are about 1 million tuples in size.
The queries on these tables take more than 10 minutes, and hence the
connection times out.
I learned that there are various timeouts. I tried
SHOW VARIABLES LIKE '%TIMEOUT%'; and found that connect_timeout is 10
(the connection drops after about 600.495 secs).

Later I changed it with SET GLOBAL connect_timeout=60;
However, I still find that the query loses the connection to the MySQL server
after about 10 mins (600.495 secs).
I see that connect_timeout is 60, though.

Can anyone please let me know what's wrong?
Thanks in advance for the help.

Regards,
Indrani Gorti
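For reference, the variables involved can be inspected and changed like this (a sketch; note that connect_timeout only covers the initial connection handshake, so when an established connection dies mid-query it is usually wait_timeout, the net_read/net_write timeouts, or a client-side limit that matters):

```sql
-- List every server-side timeout:
SHOW GLOBAL VARIABLES LIKE '%timeout%';

-- Raise the ones that can cut off a long-running session
-- (values in seconds; GLOBAL changes apply to new connections only):
SET GLOBAL wait_timeout      = 28800;
SET GLOBAL net_read_timeout  = 600;
SET GLOBAL net_write_timeout = 600;
```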


Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Ok, this raises a question for me - what's a better way to do pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
* SHOW VARIABLES LIKE 'innodb%';  -- some of them may be hurting performance.

* More than 20% of the table has bean_type = 'Workflow'?  -- if so, it is more 
efficient to do a table scan than to use the index.

* KEY `I_WF_1_DTYPE` (`bean_type`) -->
KEY `bean_time` (`bean_type`, `created_time`)
A "compound" index may be your cure.

* Fields with low cardinality (bean_type, status) make very poor INDEXes.

* Consider using an ENUM instead of VARCHAR for status and bean_type (and 
others?).

* VARCHAR(255) is an awful PRIMARY KEY.  The PK is included implicitly (in 
InnoDB) in every secondary key.

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.

Lots more about these topics is discussed in similar questions at
http://forums.mysql.com/list.php?24
Lots more tips here:
http://mysql.rjweb.org/doc.php/ricksrots
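Applied to the slow query from this thread, the compound-index suggestion would look roughly like this (a sketch; on a ~30 GB table the ALTER will take a while, so trial it on a copy first):

```sql
-- Replace the low-cardinality single-column index with a compound one
-- that matches both the WHERE filter and the ORDER BY:
ALTER TABLE WF_1
  DROP KEY I_WF_1_DTYPE,
  ADD  KEY bean_time (bean_type, created_time);

-- The query can then filter on bean_type and walk the index in
-- created_time order, avoiding both the full scan and the filesort:
SELECT id, app_name, status, run, user_name, group_name, created_time,
       start_time, last_modified_time, end_time, external_id
FROM   WF_1
WHERE  bean_type = 'Workflow'
ORDER  BY created_time DESC
LIMIT  0, 50;
```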


> -Original Message-
> From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
> Sent: Wednesday, September 05, 2012 11:27 AM
> To: Michael Dykman
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> true Michael, pasting the output :
> 
> CREATE TABLE `WF_1` (
>   `id` varchar(255) NOT NULL,
>   `app_name` varchar(255) DEFAULT NULL,
>   `app_path` varchar(255) DEFAULT NULL,
>   `conf` text,
>   `group_name` varchar(255) DEFAULT NULL,
>   `parent_id` varchar(255) DEFAULT NULL,
>   `run` int(11) DEFAULT NULL,
>   `user_name` varchar(255) DEFAULT NULL,
>   `bean_type` varchar(31) DEFAULT NULL,
>   `auth_token` text,
>   `created_time` datetime DEFAULT NULL,
>   `end_time` datetime DEFAULT NULL,
>   `external_id` varchar(255) DEFAULT NULL,
>   `last_modified_time` datetime DEFAULT NULL,
>   `log_token` varchar(255) DEFAULT NULL,
>   `proto_action_conf` text,
>   `sla_xml` text,
>   `start_time` datetime DEFAULT NULL,
>   `status` varchar(255) DEFAULT NULL,
>   `wf_instance` mediumblob,
>   PRIMARY KEY (`id`),
>   KEY `I_WF_1_DTYPE` (`bean_type`),
>   KEY `I_WF_1_END_TIME` (`end_time`),
>   KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
>   KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
>   KEY `I_WF_1_STATUS` (`status`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
> 
> 
> show indexes from WF_1;
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
> | WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> 
> 
> Thanks
> 
> On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman 
> wrote:
> 
> > The attachments do not appear to be coming through.
> >
> > I am more curious what the cardinality of bean_type is.  What is the
> > result of select count(*) as cnt, bean_type from WS_1 group by
> > bean_type  ?
> >
> > Low cardinality can render an index useless.
> >
> > On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
> >
> > I already attached the list.
> >
> > Attaching one more time & thanks for the interest.
> >
> > Cheers
> >
> >
> >
> > On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> > wrote:
> > >
> > >
> > >
> > > 2012/9/5 Adar...
> >
> >
> >
> >
> >




Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
true Michael, pasting the output :

CREATE TABLE `WF_1` (
  `id` varchar(255) NOT NULL,
  `app_name` varchar(255) DEFAULT NULL,
  `app_path` varchar(255) DEFAULT NULL,
  `conf` text,
  `group_name` varchar(255) DEFAULT NULL,
  `parent_id` varchar(255) DEFAULT NULL,
  `run` int(11) DEFAULT NULL,
  `user_name` varchar(255) DEFAULT NULL,
  `bean_type` varchar(31) DEFAULT NULL,
  `auth_token` text,
  `created_time` datetime DEFAULT NULL,
  `end_time` datetime DEFAULT NULL,
  `external_id` varchar(255) DEFAULT NULL,
  `last_modified_time` datetime DEFAULT NULL,
  `log_token` varchar(255) DEFAULT NULL,
  `proto_action_conf` text,
  `sla_xml` text,
  `start_time` datetime DEFAULT NULL,
  `status` varchar(255) DEFAULT NULL,
  `wf_instance` mediumblob,
  PRIMARY KEY (`id`),
  KEY `I_WF_1_DTYPE` (`bean_type`),
  KEY `I_WF_1_END_TIME` (`end_time`),
  KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
  KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
  KEY `I_WF_1_STATUS` (`status`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |


show indexes from WF_1;
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
| WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+


Thanks

On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman  wrote:

> The attachments do not appear to be coming through.
>
> I am more curious what the cardinality of bean_type is.  What is the
> result of
> select count(*) as cnt, bean_type from WS_1 group by bean_type  ?
>
> Low cardinality can render an index useless.
>
> On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
>
> I already attached the list.
>
> Attaching one more time & thanks for the interest.
>
> Cheers
>
>
>
> On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> wrote:
> >
> >
> >
> > 2012/9/5 Adar...
>
>
>
>
>


ignore-db-dir

2012-09-05 Thread Noel Butler
Shaun,

Is this option planned for backport into 5.5.x ?


Cheers





Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
I already attached the list.

Attaching one more time & thanks for the interest.

Cheers

On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui  wrote:

>
>
> 2012/9/5 Adarsh Sharma 
>
>> Actually, that query is not my concern:
>>
>> I have a query that is taking far too much time.
>> Slow log output:
>> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
>> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
>> # Attribute  total min max avg 95%  stddev  median
>> #  === === === === === === ===
>> # Exec time 80887s   192us   2520s415s   1732s612s 80s
>> # Lock time   13ms   0   133us68us   103us23us69us
>> # Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
>> # Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
>> # Query size65.45k   6 577  343.70  563.87  171.06  246.02
>>
>> In the logs output :
>> # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
>>  Rows_examined: 471150
>> SET timestamp=1346655789;
>> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
>> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
>> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
>> ORDER BY t0.created_time DESC LIMIT 0, 50;
>>
>> The table is near about 30 GB and growing day by day.
>>
>
> Just out of curiosity, is that table too fragmented? 471k rows are quite a
> lot, but 488 seconds of query time is insane. It seems you're reading from
> disk too much!
>
>
>>
>> Attaching the table definition & indexes output. I have an index on the
>> bean_type column but can't understand why it examined all the rows of the
>> table.
>>
>
> Where's the table's schema so we can give it a try?
>
> Manu
>
