Best Configuration on Production Server
Hi list, I am having a problem with one of our production servers. Performance has decreased considerably on the production server. We are using a MySQL database for an authentication server. When we tested the same setup in a test environment (2 GB RAM, 2 CPUs) we got response times of 5 ms, but in production the same request takes 50 ms.

The production server configuration:
No. of CPUs : 8
RAM : 8 GB
OS : CentOS

Here is the SHOW VARIABLES output on the system. Please help me find the best configuration for my MySQL server in production. We are using both the MyISAM and InnoDB engines.

| auto_increment_offset | 1 |
| automatic_sp_privileges | ON |
| back_log | 50 |
| basedir | /usr/ |
| bdb_cache_size | 8384512 |
| bdb_home | /var/lib/mysql/ |
| bdb_log_buffer_size | 262144 |
| bdb_logdir | |
| bdb_max_lock | 1 |
| bdb_shared_data | OFF |
| bdb_tmpdir | /tmp/ |
| binlog_cache_size | 32768 |
| bulk_insert_buffer_size | 8388608 |
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/share/mysql/charsets/ |
| collation_connection | latin1_swedish_ci |
| collation_database | latin1_swedish_ci |
| collation_server | latin1_swedish_ci |
| completion_type | 0 |
| concurrent_insert | 1 |
| connect_timeout | 10 |
| datadir | /var/lib/mysql/ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | ON |
| delayed_insert_limit | 100 |
| delayed_insert_timeout | 300 |
| delayed_queue_size | 1000 |
| div_precision_increment | 4 |
| keep_files_on_create | OFF |
| engine_condition_pushdown | OFF |
| expire_logs_days | 0 |
| flush | OFF |
| flush_time | 0 |
| ft_boolean_syntax | + -()~*: |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit | 20
Slow queries when using left join
Dear MySQL forum. I have performance problems when using LEFT JOIN x combined with WHERE x.y IS NULL, particularly when combining three tables this way. Please contact me by e-mail if you are familiar with these issues and know how to eliminate the slow queries. I would really appreciate your help.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
Re: MySQL Encryption
Jim,

I tend to derive a key based on a separate character string plus the contents of the data in the same or a related table. This means each row has a unique encryption key and you never have to store the whole key anywhere (you don't even know it :p ). The biggest advantage is that should someone get hold of your data, they have to work out both your character string and the logic for deriving the key, or attempt to crack each and every individual row of the table, because no two rows will ever have the same key.

For example, in a table with the columns `username`, `email_address`, `password`, `jointime` (where password is encrypted with AES_ENCRYPT), I might use a character string of 'awfully_complex_char_string-' and derive the key like so:

CONCAT('awfully_complex_char_string-', SUBSTRING(`email_address`, 1, LOCATE('@', `email_address`) - 1), CAST(`jointime` AS CHAR))

I then store the logic in a database stored procedure and use database security to prevent unauthorised access. At no point do I have this logic outside the database in any external application or script! That would be silly :)

Regards
John Daisley

On Thu, Mar 18, 2010 at 7:26 PM, Jim j...@lowcarbfriends.com wrote:

In terms of the encryption functions AES_DECRYPT and AES_ENCRYPT, can anyone point to any good links or offer any suggestions on best practices for storage of the associated symmetric key? I've found very little information on this when searching. Does MySQL offer any asymmetric encryption capabilities? What are people using as a good solution for encrypting specific columns of table data while providing protection of the key?

Thanks,
Jim
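To make the per-row derivation concrete, here is a minimal application-side sketch of the same idea in Python. The names (SECRET_PREFIX, derive_row_key) and the SHA-256 hashing step are my own illustration, not part of John's approach; MySQL's AES_ENCRYPT would instead take the concatenated string directly as key material.

```python
import hashlib

# Hypothetical secret prefix, standing in for the 'awfully_complex_char_string-'
# in the message above.
SECRET_PREFIX = "awfully_complex_char_string-"

def derive_row_key(email_address: str, jointime: str) -> bytes:
    # Same recipe as the CONCAT() example: secret prefix + the part of the
    # email address before the '@' + the join time, hashed down to a
    # fixed-length 128-bit key suitable for AES.
    local_part = email_address.split("@", 1)[0]
    material = SECRET_PREFIX + local_part + jointime
    return hashlib.sha256(material.encode()).digest()[:16]

# Two rows with different data get two different keys.
k1 = derive_row_key("alice@example.com", "2010-03-18 19:26:00")
k2 = derive_row_key("bob@example.com", "2010-03-18 19:26:00")
```

Because the derivation is deterministic, decryption only requires re-running it against the row's own plaintext columns; no key ever needs to be stored.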
Re: Slow queries when using left join
2010/3/19 Olav Mørkrid olav.mork...@gmail.com

Dear MySQL forum. I have performance problems when using left join x combined with where x.y is null, particularly when combining three tables this way.

With a left join, particularly when you're using *is (not) null*, you can't use index selection on your right table. That is, you're bound to do a table scan on what is essentially the cartesian product of your tables. Every additional table only compounds the problem: 100x100 is 10,000, but 100x100x100 is 1,000,000.

Avoid left joins whenever possible. In some cases it's quicker to split up the complex query and implement it in code with loops - not always, though; you'll have to apply some elbow grease to find the optimal solution. The most recent example of this was a hierarchical lookup query in Drupal's taxonomy module: the hierarchy table was left-joined to itself five times. Execution time on an unloaded machine was 0.54 seconds. By doing individual lookups in a code loop until I got to the top level, I replaced that query with a maximum of five (and usually fewer) 0.00-second ones over an existing covering index.

Another thing - and maybe the one you should look at first - is whether you can add more selective where-clauses for your base table. That doesn't always stop at the actual data you want, either. Another example from here: for a radio station, there was a multiple left-join query to display the last 20 played songs on the homepage. However, the playlist table keeps growing, so I got the website people to agree that it's pretty unlikely that songs from yesterday end up in those 20: we added an index on the playdate and selected on that. Boom, execution time down from 0.35 to 0.01. In addition, killing off old playlist items would have been very beneficial, but this was not an option due to business requirements. Shame, I love deleting people's data :-D

And, of course, check that you have indexes on the major parts of your where clause.
Selectivity brings speed. I seem to have the order of obviousness in this mail wrong, though. Please read it from bottom to top :-) -- Bier met grenadyn Is als mosterd by den wyn Sy die't drinkt, is eene kwezel Hy die't drinkt, is ras een ezel
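The "lookups in a code loop" rewrite described above can be sketched as follows. This is an illustrative reconstruction (using Python's sqlite3 for portability, with made-up table and column names), not the actual Drupal taxonomy code: instead of left-joining a hierarchy table to itself N times, each step is a single primary-key lookup.

```python
import sqlite3

# Toy hierarchy: each row points at its parent; parent 0 means "top level".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hierarchy (tid INTEGER PRIMARY KEY, parent INTEGER)")
conn.executemany("INSERT INTO hierarchy VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 2), (4, 3)])

def ancestors(tid):
    # Walk up the tree one indexed lookup at a time. Each iteration is a
    # cheap primary-key probe, and usually far fewer steps are needed
    # than the worst-case join depth.
    path = []
    while tid:
        row = conn.execute(
            "SELECT parent FROM hierarchy WHERE tid = ?", (tid,)).fetchone()
        if row is None:
            break
        path.append(tid)
        tid = row[0]
    return path

print(ancestors(4))  # → [4, 3, 2, 1]
```

The trade-off is N round trips instead of one query, which is why this only wins when each lookup is index-covered and the join alternative forces a scan.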
setting auto_increment value with a local variable
Hi,

Is there any way to set the auto_increment value with a variable, like below?

mysql> set @id=10;
mysql> alter table suresh_copy auto_increment=@id;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '@id' at line 1

It works as below, but I need it to work as above.

mysql> alter table suresh_copy auto_increment=1000;
Query OK, 0 rows affected (0.01 sec)
Records: 0 Duplicates: 0 Warnings: 0

From the MySQL documentation:

To change the value of the AUTO_INCREMENT counter to be used for new rows, do this:
ALTER TABLE t2 AUTO_INCREMENT = value;
You cannot reset the counter to a value less than or equal to any that have already been used.

Thanks
Suresh Kuna
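One possible workaround (a sketch, not from the thread): DDL statements such as ALTER TABLE cannot reference user variables directly, but MySQL's server-side prepared statements can execute a statement string built with CONCAT, which lets the variable supply the value.

```sql
-- Build the ALTER TABLE text as a string, then prepare and execute it.
SET @id = 10;
SET @sql = CONCAT('ALTER TABLE suresh_copy AUTO_INCREMENT = ', @id);
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```

The usual caveat applies: the documentation note quoted above still holds, so the counter cannot be moved below a value that has already been used.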
Re: Innodb and bulk writes
Hi Raj, Ananda,

The schema is very simple; we don't have any blob or text columns. I thought the same about the log files, so I tried different sizes, but nothing changed.

This is the output of iostat -x 1 when the performance is slow while running a restore:

avg-cpu:  %user  %nice %system %iowait %steal  %idle
           0.00   0.00    0.00    0.00   0.00 100.00

Device: rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda1      0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdd       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sde       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
md0       0.00   0.00 0.00 3.92   0.00  59.80    15.25     0.00  0.00  0.00  0.00
sdh       0.00   0.00 0.00 0.98   0.00  31.37    32.00     0.00  0.00  0.00  0.00
sdi       0.00   0.00 0.00 0.98   0.00  23.53    24.00     0.00  0.00  0.00  0.00
sdj       0.00   0.00 0.00 1.96   0.00   4.90     2.50     0.00  0.00  0.00  0.00
sdk       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00

avg-cpu:  %user  %nice %system %iowait %steal  %idle
           0.00   0.00    0.00    0.00   0.00 100.00

Device: rrqm/s wrqm/s  r/s  w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda1      0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdb       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdc       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdd       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sde       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
md0       0.00   0.00 0.00 1.98   0.00  17.82     9.00     0.00  0.00  0.00  0.00
sdh       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00
sdi       0.00   0.00 0.00 0.99   0.00  15.84    16.00     0.00  0.00  0.00  0.00
sdj       0.00   0.00 0.00 0.99   0.00   1.98     2.00     0.00  0.00  0.00  0.00
sdk       0.00   0.00 0.00 0.00   0.00   0.00     0.00     0.00  0.00  0.00  0.00

2010/3/18 Raj Shekhar spa...@rajshekhar.net

Max Bube maxbube at gmail.com writes:

The problem starts when I run bulk writes like an alter table or a restore from mysqldump; it starts processing more than 5 rows/s but suddenly the ratio goes down to 100 rows/sec.
and then it's stuck at this ratio even if I restart MySQL. The only way to get good performance again is to delete all the InnoDB files (ibdata, iblog files) and restore the DB again. The DBs are relatively small, about 70M rows and 10 GB in size. I can repeat this behavior every time just by running 2 restores of the same database. Another example of when it gets stuck: I want to delete 1M rows. delete from table where id IN (select id from ...) deletes 100 rows/sec, but if I run 1 million delete from table where id = xxx statements, it deletes 1 row/sec.

How busy are your disks when you start seeing the slowdown in the delete process? Are there blobs or big varchars in the rows you are deleting? InnoDB might be filling up its log files, and when you see a slowdown it might be flushing the log to disk. One workaround for this is to not delete a million rows at once, but to delete in batches of 1000 rows. My guess would be that if each row is of size B, and you delete in a batch size of [innodb_log_file_size (in bytes) - 100 MB (in bytes)]/B, you should not see a slowdown.
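The batching advice above can be sketched in code. This is an illustrative example (using Python's sqlite3 so it is self-contained; against MySQL/InnoDB the same chunking keeps each transaction's log footprint small). The batch size and helper name are my own choices.

```python
import sqlite3

# Toy table with 10,000 rows standing in for the large production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10_000)])

def delete_in_batches(ids, batch_size=500):
    # Delete in fixed-size chunks, committing after each one, instead of
    # issuing one huge IN (...) delete or a million single-row deletes.
    for i in range(0, len(ids), batch_size):
        chunk = ids[i:i + batch_size]
        placeholders = ",".join("?" * len(chunk))
        conn.execute(f"DELETE FROM t WHERE id IN ({placeholders})", chunk)
        conn.commit()  # per-batch commit bounds the work queued in the log

delete_in_batches(list(range(5000)))
remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(remaining)  # → 5000
```

Per Raj's sizing rule, the batch size would ideally be derived from innodb_log_file_size and the average row size rather than fixed at 500.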
Re: MySQL Encryption
Thanks for the reply, John. What you are describing seems to be the approach I've seen in the few places I've seen this topic discussed. I've been considering something along those lines, essentially a two-part key. Part one of the key is made from some data in the record I want to protect, and it is different for each record, very much like you suggest. Part two of the key is some constant key value I store somewhere. The full key is created by some defined manipulation of the two parts, much like you suggest, I believe.

But then the issue arises of where to store part two of the key. In your case, you are storing it in a stored procedure, and I assume that stored procedure resides on the same MySQL server that holds the data you want to protect. That's where I start questioning the security of that approach: if someone got full control of that MySQL box, then essentially all your eggs are in one basket.

I was thinking that in the most secure solution you could have a separate server (perhaps a MySQL server) which, for the purposes of this example, only serves part two of the key. That server is well protected and non-public, as is the MySQL server that stores the data. This way, two servers have to be compromised in order to gain all the parts of the key and the data. But, of course, that's kind of a waste of a server, and can you afford it and the extra resources that go along with maintaining another server?

So I was thinking: is it really so bad to store just one part of the key in source code? That source code resides on a separate server from the MySQL server. Yes, the server that stores the source code is a public server, but at least two servers have to be compromised to give up all the components needed to gain access to the encrypted data.
I suppose it would help me understand your approach further if you could expand on what you mean by the following:

"I then store the logic in a database stored procedure and use database security to prevent unauthorised access."

Thanks,
Jim

On 3/19/2010 6:39 AM, John Daisley wrote:
> Jim, I tend to derive a key based on a separate character string and the contents of the data in the same or a related table. [snip]
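Jim's two-part scheme can be sketched as follows. This is a hypothetical illustration, not code from the thread: the constant part (here a placeholder byte string that would in practice be fetched from the separate, non-public key server) is combined with per-row data via an HMAC, so neither the database nor the application server alone holds a usable key.

```python
import hashlib
import hmac

# Assumption: this constant would live on (and be fetched from) a separate,
# well-protected key server, never stored next to the encrypted data.
KEY_PART_FROM_KEY_SERVER = b"constant-part-fetched-over-tls"

def full_key(row_specific: str) -> bytes:
    # HMAC binds the constant part to the per-row part; an attacker needs
    # both servers to reconstruct any row's key. Truncated to 128 bits
    # for use as an AES key.
    return hmac.new(KEY_PART_FROM_KEY_SERVER, row_specific.encode(),
                    hashlib.sha256).digest()[:16]

ka = full_key("alice@example.com|2010-03-18")
kb = full_key("bob@example.com|2010-03-18")
```

The derivation stays deterministic, so decryption just repeats it; only the constant part ever crosses the network, and only between the application and the key server.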
RE: Best Configuration on Production Server
What changed between your test that took 5ms and the current production system?

-----Original Message-----
From: Abdul Mohid Khan [mailto:abdulmohid.k...@magnaquest.com]
Sent: Friday, March 19, 2010 12:27 AM
To: mysql@lists.mysql.com
Cc: Abdul Mohid Khan
Subject: Best Configuration on Production Server

Hi list, I am having a problem with one of our production server. [snip]
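A practical way to answer that question is to dump SHOW GLOBAL VARIABLES from both servers and diff the results. The snippet below simulates the comparison with two tiny stand-in files (the variable names and values are made up); the commented lines show how the real files would be produced.

```shell
# In practice each file would come from its server, e.g.:
#   mysql -h <test-host> -N -e 'SHOW GLOBAL VARIABLES' > test.vars
#   mysql -h <prod-host> -N -e 'SHOW GLOBAL VARIABLES' > prod.vars
# Simulated here so the diff step itself is runnable:
printf 'key_buffer_size\t8388608\nmax_connections\t100\n' > test.vars
printf 'key_buffer_size\t268435456\nmax_connections\t500\n' > prod.vars

# Lines differing between the two environments are the first suspects.
diff test.vars prod.vars || true
```

Comparing the actual workload (concurrent clients, dataset size) between the two environments matters just as much as comparing the configuration.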