RE: slow queries not being logged

2010-02-23 Thread John Daisley



 From: machi...@rdc.co.za
 To: mysql@lists.mysql.com
 Subject: slow queries not being logged
 Date: Tue, 23 Feb 2010 09:59:13 +0200
 
 Good day all
 
  
 
 I hope you can assist me with this one...
 
  
 
 We have a client where the slow query log was disabled.
 

The slow query log is on the server only. If you are saying you have enabled the 
slow query log and the server's slow query log is empty, can you post your 
my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.
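
For reference, a minimal sketch of the relevant 5.0-era settings (the log path
here is illustrative, not from the original post):

[mysqld]
# enable the slow query log; without a value the default file is host_name-slow.log
log-slow-queries = /var/log/mysql/mysql-slow.log
# statements running longer than this many seconds are logged
long_query_time = 2
# optional: also log statements that use no indexes
log-queries-not-using-indexes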


  
 
 We noticed that the slow query count (when viewing global
 status) has skyrocketed during the last 2 weeks, going up to over 2 million
 (out of 160 million total queries).
 
  
 
 We wanted to look at these queries to see whether they can be
 optimised to reduce the count, and went through the whole database restart
 routine to enable the slow query log again (they are running version 5.0 so
 we had to restart).
 
  
 
  
 
 However, even though the slow query log is enabled, it is
 not logging the queries to the file specified.
 
  
 
 Can someone please help explain why this is not happening? I
 thought it might be logging to a default filename, but there is only one
 slow query log file in the directory and it is empty.
 
  
 
 Checking the global status again, it showed 29 000 slow
 queries since this morning (3 hours ago) but nothing in the logs.
 
  
 
  
 
 Your help will be appreciated.
 
  
 
  
 
 Regards
 
  
 
  

RE: slow queries not being logged

2010-02-23 Thread Machiel Richards
Hi All

 

I found my problem and this was kind of a blonde moment for
me...

 

When configuring the log_slow_queries parameter, it was
configured as follows:  log_slow_queries=1

Thus the file being created is called 1, and the 1 does not
mean it is enabled.

 

I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (We have to live with the filename 1 for the
time being.)

 

I did however find something interesting though, while
looking at the queries being logged.

 

The long_query_time is set to 2 (2 seconds, I am assuming);
however, all the queries being logged state that they ran for 0 seconds.

 

I am busy doing explain plans on some of them now but am not
really sure what to look for yet. (Rather new to MySQL and hoping Google
will have some answers :) )
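
As a minimal illustration of what to look for (the `orders` table here is
hypothetical, not from this thread): in EXPLAIN output, type: ALL together
with key: NULL indicates a full table scan.

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- type: ALL and key: NULL  => full table scan
-- an index on customer_id would typically change this to type: ref, key: customer_id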

 

 

Thank you

 

 

 

 



Re: slow queries not being logged

2010-02-23 Thread Ananda Kumar
The slow query log will also contain SQL statements that are not using
indexes (doing full table scans), if log-queries-not-using-indexes is
enabled. Maybe those zero-second queries ran on small tables without
using indexes.

regards
anandkl




Re: Partitioning

2010-02-23 Thread Johan De Meersman
That's very much going to depend on what your selects look like. For example, a
low-cardinality but often-WHERE'd field makes an interesting candidate, as
partitioning on it will cut the size of your table scans down. If you know
that you'll mostly access just last month's data, partition on year+month.

YMMV.
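
To make that concrete, a sketch on a hypothetical `events` table (the table
and names are invented here; assumes MySQL 5.1+ partitioning):

CREATE TABLE events (
  event_date DATE NOT NULL,
  payload VARCHAR(255)
)
PARTITION BY RANGE (YEAR(event_date) * 100 + MONTH(event_date)) (
  -- one partition per month; queries against recent months touch only those partitions
  PARTITION p201001 VALUES LESS THAN (201002),
  PARTITION p201002 VALUES LESS THAN (201003),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);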


On Mon, Feb 22, 2010 at 11:23 PM, Jerry Schwartz jschwa...@the-infoshop.com
 wrote:

 I’d like to know your opinions about partitioning the following table.
 Here’s the relevant snippet:



 Create Table: CREATE TABLE `prod_price` (

  `prod_price_id` varchar(15) NOT NULL DEFAULT '',

  `prod_id` varchar(15) DEFAULT NULL,

 …

  PRIMARY KEY (`prod_price_id`),

  KEY `prod_id` (`prod_id`)

 ) ENGINE=MyISAM DEFAULT CHARSET=utf8



 Here’s the deal. The primary key, `prod_price_id`, is rarely used. Prices,
 as you might expect, are fetched by `prod_id`. Both keys are randomly
  generated strings. (Before you ask, I am not a mental health professional
 and am therefore not qualified to judge my predecessor.)



 How could I partition this table in a useful way?



 Regards,



 Jerry Schwartz

 The Infoshop by Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341



 www.the-infoshop.com






-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: Partitioning

2010-02-23 Thread John Daisley
Hi Jerry,

I guess modification of the table is needed! What are you trying to achieve by 
partitioning?

If the primary key is rarely used then maybe adding another column with a 
numeric value based on `prod_id`, and adding that column to the primary key, 
would work and at least let you do some hash partitioning to distribute the 
data. It may take some time to run if the table is large, but something like 
this may work:

-- add an integer column derived from `prod_id` to hash on
ALTER TABLE `prod_price` ADD COLUMN `partition_key` INT UNSIGNED;

ALTER TABLE `prod_price` DROP PRIMARY KEY;

-- ASCII() returns the code of the first character: a crude but cheap hash source
UPDATE `prod_price` SET `partition_key` = ASCII(prod_id);

-- the partitioning column must be part of every unique key, including the PK
ALTER TABLE `prod_price` ADD PRIMARY KEY (prod_price_id, partition_key);

ALTER TABLE `prod_price` PARTITION BY HASH(partition_key) PARTITIONS 4;

Just a rough idea, based on me not knowing anything about your data and only a 
little about partitioning.
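
To sanity-check how the rows end up spread across the partitions, something
like this should work on 5.1+ (a sketch; it assumes the ALTER statements
above ran successfully):

SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 'prod_price';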

I'd be very interested to hear how you eventually overcome this issue, so please 
do let me know what you decide.

Regards

John Daisley

==
John Daisley
Certified MySQL DBA / Developer
IBM Cognos BI Developer

Tel: +44(0)1283 537111
Mobile: +44 (0)7819 621621
Email: j...@butterflysystems.co.uk

==

Sent via HP IPAQ mobile device.











Re: slow queries not being logged

2010-02-23 Thread Carsten Pedersen

You might want to read the comments to this posting:

http://www.bitbybit.dk/carsten/blog/?p=116

Several tools/methods for controlling and analyzing the slow query log are
suggested there.

Best,

/ Carsten






MySQL University session on February 25: Securich - Security Plugin for MySQL

2010-02-23 Thread Stefan Hinz
Securich - Security Plugin for MySQL
http://forge.mysql.com/wiki/Securich_-_Security_Plugin_for_MySQL

This Thursday (February 25th, 13:00 UTC - way earlier than usual!),
Darren Cassar will present Securich - Security Plugin for MySQL.
According to Darren, the author of the plugin, Securich is an
incredibly handy and versatile tool for managing user privileges on
MySQL through the use of roles. It basically makes granting and revoking
rights a piece of cake, not to mention the added security it provides
through password expiry and password history, the level of customization it
permits, and the fact that it runs on any MySQL 5.0 or later and is easily
deployable on any official MySQL binary, platform independent.

For MySQL University sessions, point your browser to this page:

http://webmeeting.dimdim.com/portal/JoinForm.action?confKey=mysqluniversity

You need a browser with a working Flash plugin. You may register for a
Dimdim account, but you don't have to. (Dimdim is the conferencing
system we're using for MySQL University sessions. It provides integrated
voice streaming, chat, whiteboard, session recording, and more.)

MySQL University is a free educational online program for
engineers/developers. MySQL University sessions are open to anyone, not
just Sun employees. Sessions are recorded (slides and audio), so if you
can't attend the live session you can look at the recording anytime
after the session.

Here's the schedule for the upcoming weeks:

* March 4: MySQL Column Databases (Robin Schumacher)
* March 11: Improving MySQL Full-Text Search (Kristofer Pettersson)

The schedule is not engraved in stone at this point. Please visit
http://forge.mysql.com/wiki/MySQL_University#Upcoming_Sessions for the
up-to-date list. On that page, you can also find the starting time for
many time zones.

Cheers,

Stefan
-- 
Regards,

Stefan Hinz stefan.h...@sun.com, Documentation Manager MySQL

Phone: +49-30-82702940, Fax: +49-30-82702941, http://dev.mysql.com/doc
Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels
Vorsitzender des Aufsichtsrates: Martin Haering




RE: Partitioning

2010-02-23 Thread Jerry Schwartz
-Original Message-
From: John Daisley [mailto:mg_s...@hotmail.com]
Sent: Tuesday, February 23, 2010 6:07 AM
To: jschwa...@the-infoshop.com ; mysql@lists.mysql.com
Subject: RE: Partitioning

Hi Jerry,

I guess modification of the table is needed! What are you trying to achieve by
partitioning?

[JS] I was trying to get a feel for how partitioning would work, that's all. I 
have no real need for partitioning. My tables are small, by most standards, 
and the cardinality is excellent.

If the primary key is rarely used then maybe adding another column with a
numeric value based on `prod_id`, and adding that column to the primary key,
would work and at least let you do some hash partitioning to distribute the
data. It may take some time to run if the table is large, but something like
this may work:

[JS] I'd dearly love to ditch this whole key structure, but it would require a 
lot of work for a relatively small return.

Thanks.

Regards,

Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com



ALTER TABLE `prod_price` ADD COLUMN `partition_key` int unsigned;

ALTER TABLE `prod_price` DROP primary key;

UPDATE `prod_price` set `partition_key` = ASCII(prod_id);

ALTER TABLE `prod_price` add primary key(prod_price_id,partition_key);

ALTER TABLE `prod_price` PARTITION BY HASH(partition_key) PARTITIONS 4;




RE: Partitioning

2010-02-23 Thread Jerry Schwartz
 

From: vegiv...@gmail.com [mailto:vegiv...@gmail.com] On Behalf Of Johan De 
Meersman
Sent: Tuesday, February 23, 2010 5:52 AM
To: Jerry Schwartz
Cc: MY SQL Mailing list
Subject: Re: Partitioning

 

That's very much going to depend on what your selects look like. For example, a 
low-cardinality but often-WHERE'd field makes an interesting candidate, as 
partitioning on it will cut the size of your table scans down. If you know that 
you'll mostly access just last month's data, partition on year+month.

YMMV.

[JS] This is a thought experiment. The cardinality is excellent, since a given 
product typically has one or two prices.

Thanks.

 

Regards,

 

Jerry Schwartz

The Infoshop by Global Information Incorporated

195 Farmington Ave.

Farmington, CT 06032

 

860.674.8796 / FAX: 860.674.8341

 

www.the-infoshop.com

 

 




Index analyser

2010-02-23 Thread Cantwell, Bryan
Is there still no such thing anywhere for MySQL as an index analyser?
Many other databases have such a tool that will sit and monitor DB activity over 
a period of time and suggest the exact indexes on each table, based on what it 
has seen, to improve performance.
Anyone got that for MySQL?



Re: Index analyser

2010-02-23 Thread edberg
On Tue, February 23, 2010 1:28 pm, Cantwell, Bryan wrote:
 Is there still no such thing anywhere for MySQL as an index analyser?
 Many other databases have such a tool that will sit and monitor DB activity
 over a period of time and suggest the exact indexes on each table, based on
 what it has seen, to improve performance. Anyone got that for MySQL?



I know of no daemon/service-style analyzer, but are you aware of the
ANALYZE/OPTIMIZE commands?

   http://dev.mysql.com/doc/refman/5.0/en/table-maintenance-sql.html

You could set up a scheduled job to run an ANALYZE during periods of low
activity, for example Saturday night at 11pm.
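
A minimal sketch of such a job using the event scheduler (MySQL 5.1+ only, so
not an option on a 5.0 server; the table name is illustrative, and the
scheduler must be enabled with SET GLOBAL event_scheduler = ON):

CREATE EVENT analyze_prod_price
ON SCHEDULE EVERY 1 WEEK
STARTS '2010-02-27 23:00:00'  -- a Saturday night at 11pm
DO ANALYZE TABLE prod_price;

On 5.0, a cron job running the same ANALYZE TABLE statement through the mysql
client does the equivalent.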

To examine performance of an individual SELECT query, there is the EXPLAIN
command.

   http://dev.mysql.com/doc/refman/5.0/en/explain.html

Also the optimization section may be of use:

   http://dev.mysql.com/doc/refman/5.0/en/optimization.html

(you can replace 5.0 with 5.1, 4.1, etc. depending on your version).

   - steve edberg





Re: Index analyser

2010-02-23 Thread mos

At 03:28 PM 2/23/2010, you wrote:

Is there still no such thing anywhere for MySQL as an index analyser?
Many other databases have such a tool that will sit and monitor DB activity
over a period of time and suggest the exact indexes on each table, based on
what it has seen, to improve performance.

Anyone got that for MySQL?


You can look at www.maatkit.org/doc/mk-query-digest.html


DESCRIPTION

This tool was formerly known as mk-log-parser.

mk-query-digest is a framework for doing things with events from a query 
source such as the slow query log or PROCESSLIST. By default it acts as a 
very sophisticated log analysis tool. You can group and sort queries in 
many different ways simultaneously and find the most expensive queries, or 
create a timeline of queries in the log, for example. It can also do a 
query review, which means to save a sample of each type of query into a 
MySQL table so you can easily see whether you've reviewed and analyzed a 
query before. The benefit of this is that you can keep track of changes to 
your server's queries and avoid repeated work. You can also save other 
information with the queries, such as comments, issue numbers in your 
ticketing system, and so on.



Mike 






RE: Index analyser

2010-02-23 Thread Cantwell, Bryan

Ya, that one is helpful... just trying to land on a solution like I've seen in 
other DBs that have an index advisor which listens and creates what it thinks 
are the perfect indexes... but thx...





Re: how things get messed up

2010-02-23 Thread Vikram A
Sirs,

Because one table will hold a large amount of data and only the recent data 
will be used for transactions, the rest of the old records remain unchanged, 
without any transactions. So we have decided to go for year-based storage; even 
old records can then be retrieved with join queries.

I hope you experts will agree with this; your comments and suggestions are 
welcome for a better design.

Thank you

VIKRAM A





From: Jerry Schwartz jschwa...@the-infoshop.com
To: Vikram A vikkiatb...@yahoo.in
Cc: Johan De Meersman vegiv...@tuxera.be; MY SQL Mailing list 
mysql@lists.mysql.com
Sent: Tue, 23 February, 2010 3:53:38 AM
Subject: RE: how things get messed up

 
I thought I had replied publicly to Johan’s suggestion, with
some personal experience.
 
He’s absolutely right, that would give you a solution that would
be completely transparent to your application and therefore much easier to
implement. You could keep re-arranging your partitions as necessary.
 
I, myself, have never used partitioning so I hope someone with
experience will chime in here.
 
One disadvantage is that all of your data would be in one
database, making your backups bigger and bigger. If you used a separate
database as an archive, the archive database wouldn’t have to be backed up very
often.
 
I never did get a good feel for how big your database will be.
Even if you're talking about IIT, and assuming 100,000 students, 6 classes per
semester, three semesters per year, and 20 years of history, you're going to
have 36 million class records. I think there are much bigger databases running
quite well.
 
Regards,
 
Jerry Schwartz
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
 
860.674.8796 / FAX: 860.674.8341
 
www.the-infoshop.com
 
From: Vikram A [mailto:vikkiatb...@yahoo.in]
Sent: Friday, February 19, 2010 11:17 PM
To: Jerry Schwartz
Cc: Johan De Meersman
Subject: Re: how things get messed up
 
Dear Sir,

I agree with the solution proposed. 

But one of the members of this list [Johan De Meersman, vegiv...@tuxera.be] has
commented on it.

Do you have any opposition/Suggestions?

Thank you

VIKRAM A


 
From: Jerry Schwartz jschwa...@the-infoshop.com
To: Vikram A vikkiatb...@yahoo.in
Cc: MY SQL Mailing list mysql@lists.mysql.com
Sent: Thu, 18 February, 2010 9:54:57 PM
Subject: RE: how things get messed up
From: Vikram A [mailto:vikkiatb...@yahoo.in]
Sent: Wednesday, February 17, 2010 11:41 PM
To: Jerry Schwartz
Cc: MY SQL Mailing list
Subject: Re: how things get messed up
 
Dear Jerry Schwartz

We have applications for colleges in India, with the same idea of having a
single table for manipulating student records, but we are not following an
archiving concept.

For example, stupersonal and stuclass: these tables play a wide role
in our application. After 7 years there are now 9000 records [postgresql
backend] in the table. Because of this, the performance of the entire
application [fees, attendance, exams etc.] is degrading. As a remedy I
proposed this year-wise architecture for our new version [mysql].
[JS] You have 9000 records? That should not slow down any
application. I must not understand you.

I have a problem with the year-wise approach also: I have a number of mutually
related tables for students, such as stu_last_studies, stu_family_details,
stu_address, stu_extracurri and so on. If I go for a year basis I have to make
all the above tables year-based as well.
Hence, I feel it will be difficult to have such a number of tables after a few years.
[JS] I did not mean that you should have tables for each year. I
was suggesting that you have tables for recent data and tables for archived
data. 

As you mentioned the archive system, can you give the idea behind the archive
system? [If needed I will give the table structures.]
[JS] This is best described with a picture. Here is a small
example of what I meant:

        `student_master_table`  (all years)
               /           \
              /             \
    `grades_current`   `grades_archive`
              \             /
               \           /
         `class_master_table`
 
The structures of the two grades tables should be almost the
same, something like
 
grade_id autoincrement in grades_current only
student_id index
class_id index
class_start_date
grade_received
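
A sketch of that structure in SQL (the column types here are assumptions; the
original lists only the column names):

CREATE TABLE `grades_current` (
  grade_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- autoincrement in grades_current only
  student_id INT UNSIGNED NOT NULL,
  class_id   INT UNSIGNED NOT NULL,
  class_start_date DATE,
  grade_received   VARCHAR(4),
  PRIMARY KEY (grade_id),
  KEY (student_id),
  KEY (class_id)
);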
 
You would add new grade records to the `grades_current` table.
 
Now, suppose that you don’t usually need data more than five
years old. Once a year you would run these queries:
 
INSERT INTO `grades_archive` SELECT * FROM `grades_current` WHERE
YEAR(`class_start_date`) < YEAR(DATE_SUB(NOW(), INTERVAL 4 YEAR));
DELETE FROM `grades_current` WHERE YEAR(`class_start_date`) <
YEAR(DATE_SUB(NOW(), INTERVAL 4 YEAR));
 
That would keep the `grades_current` table small. If you want to
find a student’s recent grade history, you would use a query like
 
SELECT * FROM `grades_current` WHERE `student_id` = 12345;
 
If you decide that you need a student’s complete history, you
could do 
 
SELECT * FROM `grades_current` WHERE 

Garbage collecting/trimming ibdata1

2010-02-23 Thread Yang Zhang
I recently tried to run

  INSERT INTO general_log SELECT * FROM mysql.general_log;

but that failed a few hours in because I ran out of disk space.
'SELECT COUNT(*) FROM general_log' returns 0, yet ibdata1 is still
49GB (started at 3GB before the INSERT; the source mysql.general_log,
a CSV table, was initially 43GB). I tried TRUNCATE then DROP on
general_log, then restarted mysqld, to no avail.

From Googling, the only thing that appears remotely relevant to
garbage collection is OPTIMIZE TABLE, but I'm not sure how to apply it
in this case (now that the table has been dropped). How do I reclaim
my disk space? Thanks in advance.
-- 
Yang Zhang
http://www.mit.edu/~y_z/




Re: Garbage collecting/trimming ibdata1

2010-02-23 Thread Jim Lyons
Your InnoDB data file just auto-extended until you either reached its max or
ran out of disk space if you had no max.

The only way I know to reduce it is to dump all the InnoDB tables, drop the
InnoDB data file and logs (and drop the InnoDB tables if you're using
file-per-table), restart mysql, let it rebuild the InnoDB files, and reload
the InnoDB tables from the dump file.
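
For what it's worth, a minimal sketch of the my.cnf setting that prevents a
recurrence (this assumes you are free to change server config; it only affects
tables created afterwards and does not shrink an existing ibdata1):

[mysqld]
# each new InnoDB table gets its own .ibd file, so dropping or
# rebuilding a table returns its space to the filesystem
innodb_file_per_table = 1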





-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com