5.1 to 5.6 upgrade: is it possible?

2013-12-04 Thread Ilya Kazakevich
Hello,

Has anybody tried to upgrade 5.1 to 5.6? I believe running mysql_upgrade
should be enough, but are there any caveats?

Ilya Kazakevich.


RE: Design help

2013-04-21 Thread Ilya Kazakevich
Hello, 

Many thanks for your response.  Can you offer any advice with regard to the
usage of country_codes, e.g. gb, and regions, cities, etc.?  I've been reading
up on http://en.wikipedia.org/wiki/ISO_3166 etc.  Should I be looking to use a
surrogate key for countries?  Or the country code, like fr for France?

Same with regions/states and cities and districts?

I do not think you need a surrogate key for countries. A country code is 2
letters, so if you use CHAR(2) with the ASCII charset you only need 2 bytes.
That is fine, and your queries will be easier to read.
You do not need surrogate keys for US states either, but if you are speaking
about states in general you may need one (many countries around the world
have states).
The same goes for cities: a city name is too long to be used as a primary
key, and there may be many cities with similar names.
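A minimal sketch of that layout (illustrated with SQLite so the example is
self-contained; in MySQL the code column would be CHAR(2) CHARACTER SET ascii
and the city id an AUTO_INCREMENT key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Natural 2-letter ISO 3166-1 code as the primary key for countries...
cur.execute("""
    CREATE TABLE countries (
        code CHAR(2) PRIMARY KEY,   -- e.g. 'fr', 'gb'
        name TEXT NOT NULL
    )
""")

# ...but a surrogate key for cities, since names are long and not unique.
cur.execute("""
    CREATE TABLE cities (
        id INTEGER PRIMARY KEY,     -- surrogate key (AUTO_INCREMENT in MySQL)
        country_code CHAR(2) NOT NULL REFERENCES countries(code),
        name TEXT NOT NULL
    )
""")

cur.execute("INSERT INTO countries VALUES ('fr', 'France'), ('gb', 'United Kingdom')")
cur.execute("INSERT INTO cities (country_code, name) VALUES ('fr', 'Paris'), ('gb', 'London')")

# Queries stay readable: the country code is right in the cities table.
rows = cur.execute("SELECT name, country_code FROM cities ORDER BY name").fetchall()
print(rows)  # [('London', 'gb'), ('Paris', 'fr')]
```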

Ilya.




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



RE: Query Help

2013-04-19 Thread Ilya Kazakevich
Hello Richard, 

> the count(*) for each week of 2013 so that I end up with:
http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html
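That GROUP BY pattern can be sketched like this (using SQLite here so the
example is self-contained; in MySQL you would typically group on
YEARWEEK(created) or WEEK(created) as described on that page):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created DATE NOT NULL)")
cur.executemany(
    "INSERT INTO orders (created) VALUES (?)",
    [("2013-01-01",), ("2013-01-02",), ("2013-01-10",), ("2013-01-15",)],
)

# One row per week with its COUNT(*); '%Y-%W' is SQLite's year-week format.
per_week = cur.execute("""
    SELECT strftime('%Y-%W', created) AS week, COUNT(*) AS cnt
    FROM orders
    GROUP BY week
    ORDER BY week
""").fetchall()
print(per_week)
```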

Ilya.





RE: Troubleshoot excessive memory usage in InnoDB

2013-04-19 Thread Ilya Kazakevich
Hello,

Try tuning-primer.sh: this script reads your server variables and prints the
amount of memory they require.

Here is an example of its output:

MEMORY USAGE
Max Memory Ever Allocated : 2.86 G
Configured Max Per-thread Buffers : 1.80 G
Configured Max Global Buffers : 2.10 G
Configured Max Memory Limit : 3.91 G
Physical Memory : 5.82 G

I am not sure whether it works correctly with 5.6.
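The arithmetic behind those figures is roughly: global buffers are allocated
once, while per-thread buffers can each be allocated per connection. A hedged
sketch of that estimate (the variable lists are abbreviated; real tools
consult more settings, and the values below are just examples):

```python
# Rough MySQL memory estimate: global buffers + per-thread buffers * max_connections.
# The names are real MySQL settings; missing ones are treated as 0.
GLOBAL_BUFFERS = ["innodb_buffer_pool_size", "key_buffer_size", "query_cache_size",
                  "innodb_log_buffer_size"]
PER_THREAD_BUFFERS = ["sort_buffer_size", "read_buffer_size", "read_rnd_buffer_size",
                      "join_buffer_size", "thread_stack"]

def max_memory(variables: dict) -> int:
    """Worst case: every connection allocates all of its per-thread buffers."""
    global_total = sum(variables.get(v, 0) for v in GLOBAL_BUFFERS)
    per_thread = sum(variables.get(v, 0) for v in PER_THREAD_BUFFERS)
    return global_total + per_thread * variables.get("max_connections", 0)

MB = 1024 * 1024
example = {
    "innodb_buffer_pool_size": 2048 * MB,
    "query_cache_size": 64 * MB,
    "sort_buffer_size": 2 * MB,
    "read_buffer_size": 1 * MB,
    "join_buffer_size": 1 * MB,
    "max_connections": 100,
}
print(max_memory(example) / MB)  # 2112 MB global + 4 MB * 100 connections = 2512.0
```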


Ilya

-Original Message-
From: Denis Jedig [mailto:d...@syneticon.net]
Sent: Saturday, April 20, 2013 1:17 AM
To: mysql@lists.mysql.com
Subject: Troubleshoot excessive memory usage in InnoDB

Hi all.

In a specific MySQL installation of 5.6.10 using InnoDB tables, I
am observing unusual memory consumption patterns. The memory
usage is growing constantly - even beyond the physical memory
limits. The entire on-disk storage is 41 GB (uncompressed), yet
memory usage is happily growing to values larger than 50 GB.

The databases mainly experience read load with complex queries
and subSELECTs running ~60-100 connection threads.

Although the docs state that there should be no memory leaks,
this case certainly looks like one at first glance.

http://dev.mysql.com/doc/refman/5.6/en/memory-use.html suggests
that temporary in-memory tables would be used  for this purpose
so I suspected unfreed temporary tables to be the culprit. But
memory usage growth rates did not change significantly even after
lowering tmp_table_size to 2M (from 64M). Also, I have been
unable to find a way to determine the size of in-memory temporary
tables at any given time.

Some of the STATUS counters:

| Com_select| 424614  |
| Com_update| 3444|
| Created_tmp_disk_tables   | 1716|
| Created_tmp_files | 43  |
| Created_tmp_tables| 4002|
| Uptime| 5112|

The total number of tables over all databases is 1370. my.cnf
contains the following memory-related values:

max_allowed_packet  = 16M
thread_stack= 192K
thread_cache_size   = 8
max_connections= 1000
innodb_buffer_pool_size = 5000M
innodb_log_file_size= 256M
innodb_flush_method = O_DIRECT
query_cache_limit   = 1M
query_cache_size= 256M
join_buffer_size= 256k
tmp_table_size  = 2M
max_heap_table_size = 64M
read_buffer_size= 1M
ft_min_word_len = 3
open_files_limit= 1

A replication slave of this very host is running 5.6.10 with
MyISAM tables and the mysqld process does not exceed 1 GB in
memory utilization even after several hours of operation under
similar load.

I have posted a question to
http://dba.stackexchange.com/questions/40413 which I will
update with further information as I get it.

Any hints on how to hunt the resource hog greatly appreciated,
--
Denis Jedig




RE: Measure query speed and InnoDB pool

2013-04-17 Thread Ilya Kazakevich
Hello Rick,

> Run your query twice; take the second time.  For most queries the first
> run brings everything into the cache, then the second gives you a
> repeatable, though cached, timing.
Yes, but I need the cache to be as large as my database to prevent other
pages from pushing out the pages my query needs, right? Or I need to do it
on a dedicated server.

> Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique
> your indexes and query plan.
I am speaking about query optimization in general.

> Handler* is another way to get consistent values.  These numbers are
> unaffected by caching.
Which variable exactly should I take?
Why can't I use Innodb_pages_read? That is the number of page reads
regardless of the source (pool or disk), is it not?
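Either way, the counter-based measurement boils down to snapshotting the
status counters before and after the query and diffing; a sketch of that diff
(the snapshot values here are made up, not real output; in practice they come
from SHOW SESSION STATUS LIKE 'Handler%'):

```python
def diff_status(before: dict, after: dict) -> dict:
    """Return only the counters that changed between two SHOW STATUS snapshots."""
    return {k: after[k] - before[k] for k in after if after[k] != before.get(k, 0)}

# Hypothetical snapshots taken around a single query.
before = {"Handler_read_next": 1000, "Handler_read_key": 50, "Innodb_pages_read": 200}
after  = {"Handler_read_next": 251000, "Handler_read_key": 51, "Innodb_pages_read": 4200}

print(diff_status(before, after))
# A large Handler_read_next delta means a long index/table scan, and the
# delta is repeatable regardless of whether pages came from the pool or disk.
```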


> 1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you
> are running only InnoDB, the buffer_pool should be set to about 70% of
> available RAM.
I will increase it now.
But I will also need to disable swapping to prevent the OS from swapping
out InnoDB pages.

Ilya.


 -Original Message-
 From: Ananda Kumar [mailto:anan...@gmail.com]
 Sent: Tuesday, April 16, 2013 2:06 AM
 To: Ilya Kazakevich
 Cc: MySQL
 Subject: Re: Measure query speed and InnoDB pool

 Does your query use proper indexes?
 Does your query scan a small number of blocks/rows? Can you share the
 explain plan of the SQL?


 On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich 
 ilya.kazakev...@jetbrains.com wrote:

  Hello,
 
  I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
  it reads data from disk and about 2 seconds when data already exists
  in pool. And it may take 10 seconds when _some_ pages are on disk
  and
 some are in pool.
  So, what is the best way to test query performance? I have several
 ideas:
  * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
  time
  * Set pool as small as possible to reduce its effect on query speed
  * Set pool larger than my db and run query to load all data into
  pool and measure speed then
 
  How do you measure your queries' speed?
 
  Ilya Kazakevich
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 





Measure query speed and InnoDB pool

2013-04-16 Thread Ilya Kazakevich
Hello,

I have a 12 GB database and a 1 GB InnoDB buffer pool. My query takes 50
seconds when it reads data from disk and about 2 seconds when the data is
already in the pool; it may take 10 seconds when _some_ pages are on disk and
some are in the pool.
So, what is the best way to test query performance? I have several ideas:
* Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual time
* Set the pool as small as possible to reduce its effect on query speed
* Set the pool larger than my DB, run the query to load all data into the
pool, and measure the speed then

How do you measure your queries' speed?

Ilya Kazakevich





RE: how things get messed up

2010-02-10 Thread Ilya Kazakevich
Hello John, 

There was a joke in a Russian PHP club: why store images in a database? Are
you going to run full-text search on them?

IMHO:
Storing files in a DB is probably a bad idea.
Here are some advantages of storing files on the filesystem:
1) Theoretically it takes more time to load data from a database than to load
it directly from the filesystem when the data is huge (we are not speaking
about databases installed on raw disks).
2) When storing files on the fs, you can access them via a web server, FTP, a
network fs, and whatever else you like. Almost every server for your platform
supports the filesystem, but not your database.
Yes, it is possible to write a script to serve data from the DB over the web,
but you have to write it, support HTTP caching (If-Modified-Since, ETag and
so on), and also store the modification date in the DB (to make that caching
work). If you store files as files, you have the filesystem and existing
servers to access them; if you use the DB as a filesystem, you have to write
the servers yourself. That looks like reinventing the wheel.
3) Tables with BLOBs are not as flexible; for example, HEAP (MEMORY) tables
do not support such fields, so in some situations you will get a filesort
that you cannot fix.
4) You cannot easily do SELECT * on a table with 10,000 rows if each row
contains a 200 KB PDF file.

And there is only one disadvantage: consistency. When deleting a user, all
her data can be deleted automatically using foreign keys, but you have to do
it in your app if the files are stored externally.

If somebody knows more reasons to store files in a DB, please post them here.
It would be interesting. Thanks.
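Point 2 above, the HTTP-cache plumbing you get for free from the filesystem,
can be sketched like this: the validators a web server derives from file
metadata are exactly what you would have to store and maintain yourself when
serving from a DB (a minimal illustration, not how any particular server
implements it):

```python
import hashlib
import os
import tempfile
from email.utils import formatdate

def cache_headers(path: str) -> dict:
    """Derive HTTP cache validators from filesystem metadata, as web servers do."""
    st = os.stat(path)
    # A content-hash ETag; real servers often build it from inode/size/mtime.
    with open(path, "rb") as f:
        etag = hashlib.md5(f.read()).hexdigest()
    return {
        "Last-Modified": formatdate(st.st_mtime, usegmt=True),
        "ETag": f'"{etag}"',
        "Content-Length": str(st.st_size),
    }

with tempfile.NamedTemporaryFile(delete=False, suffix=".pdf") as f:
    f.write(b"%PDF-1.4 example")
    path = f.name

headers = cache_headers(path)
print(headers["Content-Length"])  # prints 16
os.remove(path)
```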

Ilya.

-Original Message-
From: John G. Heim [mailto:jh...@math.wisc.edu] 
Sent: Wednesday, February 10, 2010 10:09 PM
To: mysql@lists.mysql.com
Subject: how things get messed up

About 5 years ago, I was asked to write a php app for my department. The app
keeps track of graduate school applicants to my department at the
university. The main data elements are the scores each professor gives to
each applicant. There are only about 400 applicants each year so even with
all the personal data, scores, transcripts,  etc for each student, it's not
much. for the first 2 years, it was under a meg of data. Well, then the
selection committee asked me to add something so that if a student e-mailed
the department a document, say a paper he'd written or a photo of himself,
or whatever, it could be tacked on to the info they saw about him while
grading the applicant.

So I said, "Well, there are only going to be maybe 10 or 20 of those a year.
And even if all 400 applicants submit a PDF of a paper they'd written, it
would be only 400 docs; 4,000 after 10 years. Yeah, let's just create a
documents table in the database and store them in MySQL."

For the first 2 years, only 2 students sent in documents to attach to their
application. I figured I'd wasted my time. Then the next year, the graduate
school changed their web application form to allow students to upload
documents. "Fine," I said, "my worst case scenario has already come true."
But, well, this is why you plan for the worst case.

Then they started taking letters of recommendation as PDF documents. In
fact, they started requiring PDF docs. Each student has 3 to 6 letters of
recommendation. All in all, I figure we're at about 100 times as many docs
in our database as I originally expected and about 10x my worst case
scenario.

I should either be fired or shot.  Maybe fired *then* shot. Actually, it's
not as bad as all that. I can pretty easily write a Perl script to export the
docs to files and access them via a network-mounted filesystem. After all,
saving myself 5 hours of work 5 years ago is worth what -- maybe 10 hours
today? It is amazing how often quick & dirty turns out just being dirty in
the end.







RE: Help optimizing settings?

2010-02-05 Thread Ilya Kazakevich
Which MySQL server version do you use?
What is your storage engine?
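Depending on the answers, different settings matter: MyISAM caches only
indexes in the key buffer, while InnoDB caches data and indexes in the buffer
pool. A rough sketch for a 6 GB box (illustrative starting values, not
recommendations):

```ini
# MyISAM: only indexes are cached by the server.
key_buffer_size         = 2G

# InnoDB: data *and* indexes are cached in the buffer pool
# (a common rule of thumb is ~70% of RAM on a dedicated server).
innodb_buffer_pool_size = 4G
```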

-Original Message-
From: Ken D'Ambrosio [mailto:k...@jots.org] 
Sent: Friday, February 05, 2010 5:26 PM
To: mysql@lists.mysql.com
Subject: Help optimizing settings?

I've got a fairly large -- 100+ GB -- MySQL database.  It isn't accessed
often -- it's acting more as an archive right now than anything else. 
That being said, when it does get accessed, the indices seem to take forever
to load.  Seeing as I just bumped the RAM from 2 GB to 6 GB, what,
generically, would be the best way to take advantage of the extra RAM?

Thanks!

-Ken



