Re: idle query

2010-07-27 Thread Dan Nelson
In the last episode (Jul 27), Mike Spreitzer said:
 Sure, `wc` is different from mysql --- but different enough to account for
 a 16000:75 ratio?

Most definitely.  wc is reading sequentially, and the OS is probably
coalescing those reads and prefetching disk blocks in 128KB chunks.
16000 reads/sec * 128KB is around 2GB/sec (iostat would tell you your
actual throughput).  You probably either have a 2Gb fibre-channel card, or
else wc is CPU-bound at this point, counting each character as it streams
past.  I bet `dd if=largefile of=/dev/null bs=8k` would give you even more
iops, and `dd ... bs=1m` would probably max out your fibre-channel card's
bandwidth.  None of those commands are doing random I/Os, though, so you
can't compare their numbers to your mysql query.
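The dd comparison above can be tried safely against a scratch file first; a minimal sketch (file name and sizes here are arbitrary, not from the thread):

```shell
#!/bin/sh
# Sequential-read sketch of the dd tests suggested above, run against a
# small scratch file instead of the real data file.
f=$(mktemp) || exit 1
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null   # build an 8MB scratch file
dd if="$f" of=/dev/null bs=8k 2>/dev/null           # many small sequential reads
dd if="$f" of=/dev/null bs=1M 2>/dev/null           # fewer, larger reads
rm -f "$f"
echo "done"
```

Watching `iostat -x 1` in another terminal while the reads run shows the difference in request size and throughput between the two block sizes.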
 
 Will iostat give a good utilization metric for GPFS?

For your particular query, yes.  You're doing single-threaded random IO, so
you are fetching a random disk block, waiting for the result, then fetching
another random block, etc.  100% of your time should be in iowait, waiting
for a disk head to seek to your data.  If it's not at least 80%, then your
query isn't waiting on disk I/O, and since you aren't CPU-bound, I'm not
sure what your bottleneck would be at that point...
 
 If I want to try to actually hold a 2GB table in RAM, is there anything I 
 need to set in my.cnf to enable that?

Just make sure your key_buffer_size is large enough to hold the index.  You
can find this number by setting key_buffer_size to a huge number (32GB for
example), running LOAD INDEX INTO CACHE for your index, then running
SHOW STATUS LIKE 'key_blocks_used';
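A minimal SQL sketch of that sizing check (the table name is a placeholder; run it against your own table):

```sql
-- Temporarily oversize the key buffer, load the index, then measure it.
SET GLOBAL key_buffer_size = 34359738368;  -- 32GB, deliberately huge
LOAD INDEX INTO CACHE mytable;             -- 'mytable' is a placeholder
SHOW STATUS LIKE 'key_blocks_used';
-- key_blocks_used * key_cache_block_size (1024 by default) = bytes needed
```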

http://dev.mysql.com/doc/refman/5.1/en/load-index.html

-- 
Dan Nelson
dnel...@allantgroup.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



query results group/summed by interval

2010-07-27 Thread Ghulam Mustafa
Hi everyone,

I have two columns (seconds, number of calls).  I need to produce a
report which will show the total number of calls in intervals (let's say
10-second intervals).  I know I can do this programmatically in my script,
but I was wondering if it's possible to accomplish this within mysql.
For example, I have the following data.

+--+---+
|  calls   | queue_seconds |
+--+---+
|  250 |  0.00 |
|   28 |  1.00 |
|   30 |  2.00 |
|   56 |  3.00 |
|   23 |  4.00 |
|   31 |  5.00 |
|   33 |  6.00 |
|   50 |  7.00 |
|   49 |  8.00 |
|   62 |  9.00 |
|   74 | 10.00 |
...
... and so on...
...
+--+---+

Now the result should look like this with a 5-second interval.

+--+---+
| count(*) | queue_seconds |
+--+---+
|  250 |  0.00 |
|  168 |  5.00 |
|  268 | 10.00 |
...
... and so on...
...
+--+---+

i would really appreciate your help.

Best Regards.

-- 
Ghulam Mustafa




RE: query results group/summed by interval

2010-07-27 Thread Aveek Misra
try this ...

 select 5 * floor(queue_seconds/5) as start, 5 * floor(queue_seconds/5) + 5
as `end`, sum(calls) from calls group by 5 * floor(queue_seconds/5);

(Note that END is a reserved word in MySQL, so the alias needs backquotes.)

This should give you an output of the type

+-------+------+------------+
| start | end  | sum(calls) |
+-------+------+------------+
|     0 |    5 |        387 |
|     5 |   10 |        225 |
|    10 |   15 |         74 |
+-------+------+------------+
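The bucketing arithmetic can also be sanity-checked outside MySQL; a small shell/awk sketch using the sample rows from the thread, mirroring 5 * floor(queue_seconds/5):

```shell
#!/bin/sh
# Recompute the 5-second buckets from the thread's sample rows
# ("calls queue_seconds" pairs), mirroring the SQL GROUP BY expression.
buckets=$(printf '%s\n' \
    '250 0' '28 1' '30 2' '56 3' '23 4' \
    '31 5' '33 6' '50 7' '49 8' '62 9' '74 10' |
  awk '{ b = 5 * int($2 / 5); sum[b] += $1 }      # bucket lower bound
       END { for (k in sum) print k, sum[k] }' | sort -n)
echo "$buckets"
```

This prints one line per bucket (start and total calls), matching the output table above.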


Thanks
Aveek

From: Ghulam Mustafa [mustafa...@gmail.com]
Sent: Tuesday, July 27, 2010 3:53 PM
To: mysql@lists.mysql.com
Subject: query results group/summed by interval




Re: idle query

2010-07-27 Thread Mike Spreitzer
Does `iostat` consider GPFS mounts at all?  If so, how can I tell which 
line of `iostat` output is about the GPFS mounted at /dev/gpfscf ?  I do 
not see such a thing mentioned in the iostat output.

In `vmstat` output, I thought bi is in terms of fixed-size blocks, not 
I/O commands.

Thanks,
Mike Spreitzer


More Tools to Work with MySQL Databases in Visual Studio Provided by dbForge Fusion!

2010-07-27 Thread Julia Samarska
Devart
Email: i...@devart.com
Web: http://www.devart.com

FOR IMMEDIATE RELEASE

CONTACT INFORMATION:
Julia Samarska
jul...@devart.com

27-Jul-2010
More Tools to Work with MySQL Databases in Visual Studio Provided by dbForge 
Fusion! 

Devart today releases dbForge Fusion for MySQL, v4.50 - a powerful add-in 
designed to simplify MySQL database development and enhance data management 
capabilities. 

With dbForge Fusion, Devart continues its initiative to improve the
database development experience for everyone in the MySQL world.

New features in dbForge Fusion for MySQL, v4.50 include:

Support of MS Visual Studio 2010

Now all the tools of dbForge Fusion for MySQL are available in MS Visual Studio 
2010.

More freedom for backing up schemas

The Schema Export wizard has been redesigned into Database Backup, enabling
users to back up schemas automatically via the Windows Task Scheduler, save
backup options for future use, and view an automatically compiled log file.
In addition, old backup files are removed automatically based on age or count.

New tool for database developers - Query Profiler

dbForge Fusion presents the results of internal MySQL tools such as SHOW
PROFILE and EXPLAIN in a convenient, clear GUI.  In addition, STATUS
variables for the profiled query are calculated automatically.

Additional benefits:

  * Plan of the query displayed in a tree view for easy review
  * Profiling history that can be saved for further analysis
  * Capability to compare profiling results in two clicks
  * Capability to print profiling results

Data comparison and synchronization of any databases

Diverse testing and close interaction with database developers, admins, and
casual users resulted in a thoughtful redesign and enhancement of the Data
Comparison tool.  It now compares and synchronizes databases of any size,
with significant performance improvements.  To customize comparison and
synchronization, users can apply new options, change the synchronization
direction in one click, and quickly filter tables in the comparison results.
An additional benefit is the generation of accurate comparison reports in
HTML and Excel formats.

Advanced query building

Query Builder, a powerful tool for visual query creation, is now tailored
for building complex conditions in a few clicks.  The improvements include
optimized performance of the Selection tab in the expression editor, visual
addition of subqueries to any part of the main query, a new Wrap to Subquery
option to wrap tables into a subquery, and optimized navigation in the
editor, particularly between subqueries.

Quick generation of template SQL scripts for database objects

Thanks to this new functionality, you can save time while working with
database objects.  For example, you can quickly generate template CREATE,
DROP, SELECT, INSERT, UPDATE, or DELETE scripts for tables.  This option is
available in the context menu of Database Explorer under 'Generate Script
As'.

Improved schema comparison

  * Extended capabilities of the Schema Comparison wizard
  * New comparison options to ignore some table options, DEFINER and SQL
SECURITY expressions, and default values for columns

Price and Availability

The fully-functional 30-day trial version of dbForge Fusion for MySQL 4.50 is 
available for download at www.devart.com/dbforge/mysql/fusion/download.html . 
The license price starts at $99.95. 

For more information about dbForge Fusion for MySQL, visit the product's web 
site - www.devart.com/dbforge/mysql/fusion/ .

About Devart

Devart is a software development company with 11 years of experience in the
software market and over 20 thousand devoted users.

We specialize in providing comprehensive development and management tools as 
well as native connectivity solutions for the most popular databases, including 
Oracle, SQL Server, MySQL, PostgreSQL, InterBase, Firebird, and SQLite.




For additional information about Devart, visit www.devart.com/company/ .

# # #

EVALUATION COPY AVAILABLE ON REQUEST


RE: idle query

2010-07-27 Thread Jerry Schwartz
-Original Message-
From: Dan Nelson [mailto:dnel...@allantgroup.com]
Sent: Monday, July 26, 2010 11:31 PM
To: Mike Spreitzer
Cc: MySql
Subject: Re: idle query


iostat -x output would be helpful here, too, so we can see whether your
disks are at 100% busy.

[JS] 100% busy would be a disaster! Anything more than about 80% busy is going 
to slow things down rapidly, but by busy I mean not idle. The number of I/Os 
per second is not a good indicator, because there's a big difference between 
sequential reads, repeatable reads (that could be satisfied out of the disk's 
or controller's buffers), random reads, and writes of any of the same flavors. 
It also matters whether or not you have write caching (delayed writes) turned 
on.

Then you have to consider the effect of striping, the number of controllers 
across which the disks are spread, the smarts in the controllers (request 
chaining?) and the like.

There's also the behavior of the file system at the OS level.  If reads are
unlikely to be satisfied in the file system's buffers, then a small buffer
pool would be marginally more efficient than a large one; if they are fairly
localized or repeatable, a bigger buffer pool is better.  A journaling file
system is better for writing, under most circumstances.

In most cases, when a system is slow but the CPUs are all loafing, it's a
good guess that the bottleneck is in the disk subsystem.

[I was a performance consultant in a past life. In another past life I 
invented the sharp stick.]

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com




You're doing ~75 I/Os per second, so it looks like you're running on slow
SATA drives (a 4500 RPM drive tops out at about 75 iops); with drives like
that, anything that forces random I/O to a large table is going to kill you.
MySQL runs a single thread for each submitted query (i.e. no Parallel Query
option like Oracle has), so if a query needs to do 14 million random reads,
that'll be 14000000/75/3600 ~= 52 hours just to fetch table data (let alone
the index lookups required to find the table rows).

MySQL also only has a simple index join type on disk-based tables, so no
hash joins (which would load your 2nd table into RAM temporarily as a hash
table).  It does use hash indexes on memory tables though, so if you load
fldrcv into a memory table and join fldsnd against it, that might work
pretty well.
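A hedged SQL sketch of that memory-table route (the `fldrcv_mem` name is invented, and the size is a guess based on the 2GB figure from the thread):

```sql
SET max_heap_table_size = 4294967295;        -- room for ~2GB of rows plus overhead
CREATE TABLE fldrcv_mem ENGINE=MEMORY
  SELECT p, cd, cms, msgid, q, qboot FROM fldrcv;
ALTER TABLE fldrcv_mem ADD INDEX q (q);      -- MEMORY tables use HASH indexes by default
-- then run the join against fldrcv_mem instead of fldrcv
```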

Another option would be to create an index on fldrcv that covers both your
WHERE clause fields and your SELECT clause fields.  MySQL will be able to
use that index to satisfy the query's requirements without going to the
table itself.  Then you can use the LOAD INDEX INTO CACHE command to preload
that index into memory.  This ends up working similar to the memory table
option (no disk I/O needed for the 2nd table), with tradeoffs on both sides
(a preloaded index consumes disk space and mysql has to keep it updated when
the table is updated, but a memory table has to be dropped and rebuilt every
time you run your main query if the fldrcv table changes often).  Which
option you choose depends on how often you run the query, and how often
fldrcv changes.
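A sketch of the covering-index route, using column names that appear in the query quoted later in this message (the index name is invented, and the long VARCHAR(200) columns may need prefix lengths to stay within key-length limits):

```sql
-- Covering index: join/WHERE columns first, then the SELECTed columns,
-- so MySQL can answer the query from the index without touching the table.
ALTER TABLE fldrcv
  ADD INDEX q_cover (q, qboot, msgid, p, cd, cms);

LOAD INDEX INTO CACHE fldrcv;   -- preload the index into the key buffer
```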

 create table fldpar (p VARCHAR(200) NOT NULL,
  rcd DATETIME NOT NULL, rcms SMALLINT NOT NULL, msgid BIGINT NOT  NULL, q
  VARCHAR(200) NOT NULL, scd DATETIME NOT NULL, scms SMALLINT NOT NULL,
  INDEX p(p), INDEX q(q) ) as select fldrcv.p, fldrcv.cd as rcd, fldrcv.cms
  as rcms, fldrcv.msgid as msgid, fldrcv.q, fldsnd.cd as scd, fldsnd.cms as
  scms from fldrcv, fldsnd where fldrcv.q=fldsnd.p AND
  fldrcv.qboot=fldsnd.pboot AND fldrcv.msgid=fldsnd.msgid;

 This statement makes a new table by joining two existing tables.  Here is
 what `explain` has to say about the select part of the statement:

 mysql> explain select fldrcv.p, fldrcv.cd as rcd, fldrcv.cms as rcms,
     ->   fldrcv.msgid as msgid, fldrcv.q, fldsnd.cd as scd,
     ->   fldsnd.cms as scms from fldrcv, fldsnd where
     ->   fldrcv.q=fldsnd.p AND fldrcv.qboot=fldsnd.pboot AND
     ->   fldrcv.msgid=fldsnd.msgid;
 +----+-------------+--------+------+---------------+------+---------+------------------------------------------------------------------------------------------+-------+-------------+
 | id | select_type | table  | type | possible_keys | key  | key_len | ref                                                                                      | rows  | Extra       |
 +----+-------------+--------+------+---------------+------+---------+------------------------------------------------------------------------------------------+-------+-------------+
 |  1 | SIMPLE      | fldsnd | ALL  | pec,pbm       | NULL | NULL    | NULL                                                                                     | 29036 |             |
 |  1 | SIMPLE      | fldrcv | ref  | qbm           | qbm  | 220     | bigCell2906_flood.fldsnd.p,bigCell2906_flood.fldsnd.pboot,bigCell2906_flood.fldsnd.msgid |   452 | Using where |
 +----+-------------+--------+------+---------------+------+---------+------------------------------------------------------------------------------------------+-------+-------------+
 

Re: idle query

2010-07-27 Thread Dan Nelson
In the last episode (Jul 27), Mike Spreitzer said:
 Does `iostat` consider GPFS mounts at all?  If so, how can I tell which 
 line of `iostat` output is about the GPFS mounted at /dev/gpfscf ?  I do 
 not see such a thing mentioned in the iostat output.

iostat works at the disk device level, not at the filesystem level.  If your
gpfs filesystem consists of one SAN device, then it'll be easy to find in
your iostat -x output :)  If it is spread over multiple SAN devices, then
you'll only see per-device stats.  There may be a gpfs-specific command that
can give you summary IO stats for the filesystem as a whole.
 
 In `vmstat` output, I thought bi is in terms of fixed-size blocks, not 
 I/O commands.

It looks like on Linux, bi and bo are the total disk throughput in
kbytes/sec.

-- 
Dan Nelson
dnel...@allantgroup.com




Re: idle query

2010-07-27 Thread Dan Nelson
In the last episode (Jul 27), Dan Nelson said:
 In the last episode (Jul 27), Mike Spreitzer said:
  If I want to try to actually hold a 2GB table in RAM, is there anything I 
  need to set in my.cnf to enable that?
 
 Just make sure your key_buffer_size is large enough to hold the index.  You
 can find this number by setting key_buffer_size to a huge number (32GB for
 example), running LOAD INDEX INTO CACHE for your index, then running show
 status like 'key_blocks_used'; .
 
 http://dev.mysql.com/doc/refman/5.1/en/load-index.html

Oops.  My previous email described the requirements for the "prefetch index
into RAM" option.  For the "load table into RAM" option, you'll need to
raise max_heap_table_size to a size large enough to hold your table.

http://dev.mysql.com/doc/refman/5.1/en/memory-storage-engine.html

-- 
Dan Nelson
dnel...@allantgroup.com




Automate Install/Configuration of MySQL on Linux

2010-07-27 Thread Todd E Thomas

I'm looking for automation direction...

I've found many packages that sit on top of MySQL. For the purposes of 
consistency I'd like to automate these installs.


I've been able to automate the install and configuration of everything 
except the mysql part.


I'm using CentOS 5.5. Installing/verifying is no big deal.

It's the MySQL configuration that's holding me up.

Basically I've created an expect script. It works 99% but it's a PITA to 
finish. Here's what I'd like to accomplish:

  * Set the default admin password:
# mysqladmin -u root password 'root-password'

  * Log in to mysql:
# mysql mysql -u root -p

  * Drop the anonymous accounts:
mysql> DELETE FROM mysql.user WHERE user = '';

  * Sync all of the root passwords:
mysql> UPDATE mysql.user SET Password = PASSWORD('root-password')
WHERE User = 'root';

  * Remove the test database:
mysql> drop database test;


In another script I would like to create databases for specific 
packages. EG:

Concrete5, for example needs:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, ALTER ON 
concrete5.db TO 'admin'@'localhost' IDENTIFIED BY 'admin-password';




If there is a better way to do this than using expect I would greatly 
appreciate any pointers in the right direction. Bash is comfortable for 
me and perl is within reach. I'm not much versed in anything else right now.
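One expect-free approach is to feed the statements to the mysql client in batch mode; a sketch under the thread's placeholder password (all names here are examples, not a tested installer):

```shell
#!/bin/sh
# Emit the post-install hardening SQL; piping it into `mysql` in batch mode
# avoids driving an interactive session with expect.
ROOT_PW='root-password'   # placeholder from the thread

secure_sql() {
    cat <<SQL
DELETE FROM mysql.user WHERE User = '';
UPDATE mysql.user SET Password = PASSWORD('${ROOT_PW}') WHERE User = 'root';
DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;
SQL
}

# Real invocation (commented out so the sketch runs without a server):
#   mysqladmin -u root password "$ROOT_PW"
#   secure_sql | mysql -u root -p"$ROOT_PW" mysql
secure_sql
```

The same pattern works for the per-package GRANT statements: generate the SQL in a function, pipe it into `mysql` non-interactively.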



--
Thanks for the assist,

Todd E Thomas
It's a frail music knits the world together.
-Robert Dana
