Re: Doubts tuning MySQL Percona Server 5.5
Hi, I was checking MySQL performance ... Sometimes my database works slowly. I have some queries that take 9-10 seconds to update some columns by primary key, and I'm not sure whether it is a database problem ... Moreover, I have checked the tuning scripts and these variables show up. Thanks in advance. Regards, Antonio.
Doubts tuning MySQL Percona Server 5.5
Hi list, I have some doubts about adjusting some MySQL variables. I have checked the MySQL status and maybe I should increase some of them ... For example:
- Innodb_log_waits is 103 and innodb_log_buffer_size is 8M --> maybe the next value to try is 16M?
- Table cache hit rate is 12%, Open_tables is 627 and table_open_cache is 1024 --> maybe the next value to try is 2048?
- Key buffer hit rate is 93.7%. I have some queries that are not using indexes, and join_buffer_size is 4M --> what would be the next value to try? Maybe 8M, and then check again?
That's all. I look forward to your advice. Regards, Antonio.
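For reference, this is how the changes above would look in my.cnf. These are only the next values I would try (doubling the current ones), not tested recommendations:

```
[mysqld]
# next values to try, then re-check the status counters
innodb_log_buffer_size = 16M    # was 8M;  Innodb_log_waits = 103
table_open_cache       = 2048   # was 1024; table cache hit rate 12%
join_buffer_size       = 8M     # was 4M;  some queries without indexes
```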
Re: Optimizing InnoDB tables
Hi Johan, Thanks for your reply. In theory, fragmented tables do not offer the best performance with the InnoDB engine; is that correct or not? I don't know whether it is a problem; it is a doubt/question for me. I'm not sure whether it is atypical behaviour. Thanks in advance. Regards, Antonio.
Re: Optimizing InnoDB tables
Hi Reindl, Thanks for your attention. Following up on the previous mail, I have checked my MySQL configuration and innodb_file_per_table is enabled, so I think that this parameter does not directly affect fragmented tables in InnoDB (in this case). I would like to know, if possible, why after executing an ANALYZE TABLE command on a fragmented table it appears fragmented again. Regards, Antonio.
Re: Optimizing InnoDB tables
Hi Andre, Thanks for your reply. I have checked the link and my configuration. innodb_file_per_table is enabled, and a set of files for each table appears in the data directory. Any ideas? Thanks in advance. Regards, Antonio.
Problem with INSERT INTO and UPDATE queries
Hi list, I have some problems with INSERT INTO and UPDATE queries on a big table. Let me paste the code and explain it ... I have copied the CREATE statement of the table. This table has more than 1500 rows.

Create Table:
CREATE TABLE `radacct` (
  `RadAcctId` bigint(21) NOT NULL AUTO_INCREMENT,
  `AcctSessionId` varchar(32) NOT NULL DEFAULT '',
  `AcctUniqueId` varchar(32) NOT NULL DEFAULT '',
  `UserName` varchar(64) NOT NULL DEFAULT '',
  `Realm` varchar(64) DEFAULT '',
  `NASIPAddress` varchar(15) NOT NULL DEFAULT '',
  `NASPortId` varchar(15) DEFAULT NULL,
  `NASPortType` varchar(32) DEFAULT NULL,
  `AcctStartTime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `AcctStopTime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `AcctSessionTime` int(12) DEFAULT NULL,
  `AcctAuthentic` varchar(32) DEFAULT NULL,
  `ConnectInfo_start` varchar(50) DEFAULT NULL,
  `ConnectInfo_stop` varchar(50) DEFAULT NULL,
  `AcctInputOctets` bigint(20) DEFAULT NULL,
  `AcctOutputOctets` bigint(20) DEFAULT NULL,
  `CalledStationId` varchar(50) NOT NULL DEFAULT '',
  `CallingStationId` varchar(50) NOT NULL DEFAULT '',
  `AcctTerminateCause` varchar(32) NOT NULL DEFAULT '',
  `ServiceType` varchar(32) DEFAULT NULL,
  `FramedProtocol` varchar(32) DEFAULT NULL,
  `FramedIPAddress` varchar(15) NOT NULL DEFAULT '',
  `AcctStartDelay` int(12) DEFAULT NULL,
  `AcctStopDelay` int(12) DEFAULT NULL,
  `XAscendSessionSvrKey` varchar(10) DEFAULT NULL,
  PRIMARY KEY (`RadAcctId`),
  KEY `user_start` (`UserName`,`AcctStartTime`),
  KEY `nasip_starttime` (`NASIPAddress`,`AcctStartTime`),
  KEY `stop_nasip_start` (`AcctStopTime`,`NASIPAddress`,`AcctStartTime`),
  KEY `acctsesid_user_nasip` (`UserName`,`AcctSessionId`,`NASIPAddress`),
  KEY `user_stop` (`UserName`,`AcctStopTime`)
) ENGINE=InnoDB AUTO_INCREMENT=17694651 DEFAULT CHARSET=utf8

### The next text shows the entries in mysql-slow.log.
###
# Time: 140625  9:37:45
# User@Host: radius[radius] @ [192.168.0.30]
# Thread_id: 94892163  Schema: radius  Last_errno: 0  Killed: 0
# Query_time: 2.327159  Lock_time: 0.86  Rows_sent: 0  Rows_examined: 0  Rows_affected: 1  Rows_read: 0
# Bytes_sent: 19
use radius;
SET timestamp=1403681865;
INSERT INTO radacct (acctsessionid, acctuniqueid, username, realm, nasipaddress, nasportid, nasporttype, acctstarttime, acctstoptime, acctsessiontime, acctauthentic, connectinfo_start, connectinfo_stop, acctinputoctets, acctoutputoctets, calledstationid, callingstationid, acctterminatecause, servicetype, framedprotocol, framedipaddress, acctstartdelay, acctstopdelay, xascendsessionsvrkey)
VALUES ('80004ef0', '78d3fc2661258da5', 'zu629LAYUT', '', '178.136.71.251', '2147503856', 'Wireless-802.11', '2014-06-25 09:37:26', '0000-00-00 00:00:00', '0', '', '', '', '0', '0', 'tururu', '00-00-11-11-11-11', '', '', '', '178.136.71.1', '0', '0', '');

# User@Host: radius[radius] @ [192.168.0.31]
# Thread_id: 97905294  Schema: radius  Last_errno: 0  Killed: 0
# Query_time: 2.397604  Lock_time: 0.62  Rows_sent: 0  Rows_examined: 1  Rows_affected: 1  Rows_read: 1
# Bytes_sent: 52
SET timestamp=1403681865;
UPDATE radacct
SET framedipaddress = '182.138.214.240',
    acctsessiontime = '4199',
    acctinputoctets = '0' << 32 | '12327909',
    acctoutputoctets = '0' << 32 | '294177486'
WHERE acctsessionid = '805063b1'
  AND username = 'fa239DADUX'
  AND nasipaddress = '182.138.214.50';

### I converted the previous query to a SELECT because I want to use EXPLAIN ... ###
SELECT framedipaddress = '172.21.13.152',
       acctsessiontime = '4199',
       acctinputoctets = '0' << 32 | '12327909',
       acctoutputoctets = '0' << 32 | '294177486'
FROM radacct
WHERE acctsessionid = '805063b1'
  AND username = 'fa239DADUX'
  AND nasipaddress = '192.168.254.10';

+----+-------------+-------+------+---------------+-----+---------+-----+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------+-----+---------+-----+------+-------+
| 1  | SIMPLE      | rad
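For reference, a simpler way to get the same EXPLAIN (a sketch: the comparisons from the UPDATE's SET list don't affect index choice, only the WHERE clause does):

```sql
EXPLAIN SELECT *
FROM radacct
WHERE acctsessionid = '805063b1'
  AND username = 'fa239DADUX'
  AND nasipaddress = '182.138.214.50';
-- the acctsesid_user_nasip (UserName, AcctSessionId, NASIPAddress)
-- index covers exactly these three columns
```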
Re: Optimizing InnoDB tables
Hi again, I have enabled innodb_file_per_table (its value is ON). It is not clear to me what I should do next ... Thanks in advance. Regards, Antonio.
Re: Optimizing InnoDB tables
Hi Wagner, I'm running MySQL Percona Server 5.5.30 64-bit. No, I have not tried to execute ALTER TABLE (doesn't ANALYZE on InnoDB tables already do that, or not?). Thanks in advance. Regards, Antonio.
Optimizing InnoDB tables
Hi list, I was trying to optimize my InnoDB tables. I executed the following query to detect which tables are fragmented:

SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND Data_free > 0;

After that, I saw that there are 49 fragmented tables. On one of them I executed "OPTIMIZE TABLE table_name;" and "ANALYZE TABLE table_name;". The result is the same: the table remains fragmented. Any ideas? I have followed the mysqltuner recommendations ... Thanks in advance. Regards, Antonio.
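For reference, this is a slightly extended version of the detection query I used (a sketch, same idea, but it also shows how much free space each table reports, which helps to see whether the "fragmentation" is significant at all):

```sql
SELECT TABLE_SCHEMA, TABLE_NAME,
       ROUND(DATA_FREE / 1024 / 1024, 1) AS free_mb,
       ROUND(100 * DATA_FREE / (DATA_LENGTH + INDEX_LENGTH + DATA_FREE), 1) AS free_pct
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND DATA_FREE > 0
ORDER BY free_pct DESC;
```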
Re: Order column in the second table
Hi Lay, If I'm not mistaken, you usually can't eliminate "Using temporary" and "Using filesort" while you are using an ORDER BY, unless an index can already return the rows in the required order. Try the EXPLAIN again with the ORDER BY removed and check the output. When you use an ORDER BY, MySQL needs to sort the result set with filesort, which takes some time, and it may also create a temporary table with the rows first. This is the reason. Regards, Antonio.
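If I'm not mistaken, the exception looks like this (a sketch with invented table and column names): when an index covers the WHERE column and then the ORDER BY column, MySQL can read the rows already sorted and filesort disappears from EXPLAIN:

```sql
-- hypothetical example
CREATE INDEX idx_user_created ON orders (user_id, created_at);

-- rows come back in index order, so EXPLAIN should no longer
-- show "Using filesort" for this query
SELECT * FROM orders
WHERE user_id = 42
ORDER BY created_at;
```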
Re: Big innodb tables, how can I work with them?
Hi, Thanks for your replies. In our case we can't implement a NoSQL solution; that would require modifying/checking our whole application and all services (including FreeRADIUS, which I'm not sure is compatible). Andrew, I have heard about people who have much more data than I do. I know that MySQL supports this amount, but in this case, and thinking about the future, I have this problem with my architecture: how can I scale out to more database servers without deleting rows from the tables? I have checked for slow queries and right now there are none. These tables serve queries from the FreeRADIUS service. For example, SUMs, COUNTs, normal SELECTs ... always with a WHERE condition. Excuse me, what does IMO mean? Thanks. Regards, Antonio.
Advice for working with big tables
Hi, I'm writing to the list because I need your advice. I'm working with a database where some tables have a lot of rows; for example, I have a table with 8GB of data. What can I do to work fluidly with this table? My server uses a disk array, and I think that sharding and partitioning are techniques that do not apply here. Working with so much data produces some slow queries, even with the correct indexes created. One option is to delete data, but I use a RADIUS system to authenticate and authorize users to connect to the Internet, and for this reason I need to work with almost all the data. Another solution is to increase the server resources. Any ideas? Thanks in advance. Regards, Antonio.
Big innodb tables, how can I work with them?
Hi, I have some tables in my database server that are too big and produce some slow queries, even with the correct indexes created. For my application it's necessary to keep all the data, because we run an authentication process for RADIUS users (AAA protocol) to determine whether a user can browse the Internet or not (depending on the time of all his sessions). So, with 8GB of data in one table, what do you advise? Fragmentation and sharding are discarded because we are working with disk arrays, so they do not apply. Another option is to delete rows, but in this case I can't. On the other hand, maybe the only possible solution is to increase the resources (RAM). Any ideas? Thanks in advance. Regards, Antonio.
Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions
Hi Lukas, In that case, as Adarsh said, you can configure up to 70% of your RAM for innodb_buffer_pool_size. In your case, with 3GB RAM, I suggest you configure up to 2GB for MySQL: a minimal amount for MyISAM (maybe 32MB) and the rest for InnoDB. Your problem will be loading data. Maybe your application will work slowly loading data, because there is more data than RAM. By executing the following script, you can see an optimal InnoDB buffer pool size for your data.

SELECT CONCAT(ROUND(KBS/POWER(1024,
           IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024)))+0.4),
       SUBSTR(' KMG',
           IF(PowerOf1024<0,0,IF(PowerOf1024>3,0,PowerOf1024))+1,1))
       recommended_innodb_buffer_pool_size
FROM (SELECT SUM(data_length+index_length) KBS
      FROM information_schema.tables
      WHERE engine='InnoDB') A,
     (SELECT 3 PowerOf1024) B;

SELECT CONCAT(CEILING(RIBPS/POWER(1024,pw)), SUBSTR(' KMGT',pw+1,1))
       Recommended_InnoDB_Buffer_Pool_Size
FROM (SELECT RIBPS, FLOOR(LOG(RIBPS)/LOG(1024)) pw
      FROM (SELECT SUM(data_length+index_length)*1.1*growth RIBPS
            FROM information_schema.tables AAA,
                 (SELECT 1 growth) BBB
            WHERE ENGINE='InnoDB') AA
     ) A;

Good luck! Regards, Antonio.
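In case it helps to see the arithmetic of the second script outside SQL, here is a small sketch in Python (my own rewrite, not the original script; the function name is mine):

```python
import math

def recommended_buffer_pool(total_bytes, growth=1.0):
    """Same arithmetic as the SQL script: total InnoDB data+index size,
    plus 10% headroom, rounded up and expressed in its natural unit."""
    ribps = total_bytes * 1.1 * growth                      # size with 10% headroom
    pw = math.floor(math.log(ribps) / math.log(1024))       # unit: 0=B, 1=K, 2=M, 3=G, 4=T
    units = " KMGT"
    return "%d%s" % (math.ceil(ribps / 1024 ** pw), units[pw].strip())

# e.g. 1.5 GB of InnoDB data -> "2G"
print(recommended_buffer_pool(int(1.5 * 1024 ** 3)))
```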
Re: DBA: please review my.cnf [for Java Hibernate application] and give suggestions
Hi Lukas, What is your default engine? In MySQL there are a lot of parameters that configure the engine's behaviour, and depending on the engine I would suggest adding some parameters or others. It's also important to know the size of your data. Your configuration is minimal, and the defaults are not optimal. Regards, Antonio.
Re: LOAD DATA in replication
If I'm not mistaken, there are some parameters to do what you are describing. Check statement-based replication and row-based replication. I think this could help you. Regards, Antonio.
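For reference, the replication format is chosen with the binlog_format variable; a minimal my.cnf sketch (the ROW choice here is just an example, not a recommendation for your case):

```
[mysqld]
# choose one of STATEMENT, ROW or MIXED
binlog_format = ROW
```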
Doubt with stored procedures - II
I have discovered that in the MySQL client program I can define the "DEFINER" and "SQL SECURITY" clauses. That was the detail. Thanks everybody. Best regards, Antonio.
Re: Doubt with stored procedures
Hi everybody, Firstly, thanks for your replies. I'm using MySQL on only one server (I have only one), without replication and without a cluster configuration. I defined the stored procedure as a normal user, not as root, and the call is made as a normal user as well. This occurs with MySQL 5.1.49 on Debian 6. This normal user has the CREATE PROCEDURE privilege and the EXECUTE privilege. Thanks. Best regards, Antonio.
Doubt with stored procedures
Hi everybody, I have a doubt about stored procedure functionality. Is it possible for a stored procedure to work with all databases on the server? I created a stored procedure on dataBaseA and it also works with dataBaseB. Is that correct, independently of the user privileges defined? Thanks everybody. Best regards, Antonio.
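If it helps, here is a small sketch of what I mean (the procedure and table names are invented): the procedure lives in one database, but its body references another database with a qualified name:

```sql
-- hypothetical example: created in dataBaseA, reads from dataBaseB
CREATE PROCEDURE dataBaseA.count_other()
  SELECT COUNT(*) FROM dataBaseB.some_table;

-- callable regardless of the current default database
CALL dataBaseA.count_other();
```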
Re: Defragmentation of MySQL tables, how often do I have to do it?
Hello Manuel, Exactly. My objective in tuning MySQL performance isn't to free disk space (that isn't the problem). I would like to get good performance by using the cache and setting good values for the configuration variables. I will defragment the tables once per month. Best regards, Antonio.
Re: Defragmentation of MySQL tables, how often do I have to do it?
Ok. Thank you very much. Best regards, Antonio.
Re: Defragmentation of MySQL tables, how often do I have to do it?
Hi Reindl, Thanks for your reply. So, is it unnecessary (not recommended) to defragment tables if I have a lot of write operations (inserts or deletes)? Best regards, Antonio.
Defragmentation of MySQL tables, how often do I have to do it?
Hi everybody, Once I have defragmented the MySQL tables, mysqltuner.pl suggests that I do it again. Is this correct? I thought the idea was that, after running the script, there would be no fragmented tables in its report ... Any ideas? Thank you very much. Best regards, Antonio.