MySQL performance with the C API
Hi, I've got a couple of questions concerning the speed of INSERT queries when using the C API for MySQL. I've written an application that receives data through a CORBA event channel and stores it in a MySQL database. The problem is that a lot of events are dropped. I think (I'm quite sure, actually) that the MySQL database is the bottleneck, because it's a slow consumer.

Now, if I use a benchmark to check how many inserts the database can handle per second, will that result be about the same as the number of inserts the same database can handle through the C API? I guess not. Using a MySQL server on a machine with a 1000 MHz AMD processor, I can insert 200 strings + timestamps per second through the API. Does that sound too low, or am I expecting too much? Do benchmarks or test results for the C API exist? If someone has more experience with this or knows a place where I can find out more, please let me know.

Greetings, Raf

-
Before posting, please check:
http://www.mysql.com/manual.php (the manual)
http://lists.mysql.com/ (the list archive)
To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
C-API performance
Hi, I'm building a DBMS for distributed platforms using MySQL and CORBA. It's completely written, but I have a question about the performance: does it sound normal that I can call the following function only 150-200 times a second on a PIII 700 MHz with 128 MB RAM?

int resolvequery(char * vraag, MYSQL_RES& toReturn, int& affected, int control)
{
    MYSQL_RES * result;
    MYSQL_ROW row;
    int state, counter;
    char * query;
    char * db;
    MYSQL * connect;
    int * temporal;

    connect = standardcon.connection;
    db = standardcon.database;
    state = mysql_real_query(connect, vraag, 1024);
    if (state != 0) {
        printf(mysql_error(connect));
        return 1;
    }
    if ((strncmp(vraag,"SEL",3)==0) || (strncmp(vraag,"SHO",3)==0)
        || (strncmp(vraag,"DES",3)==0)) {
        result = mysql_store_result(connect);
        affected = mysql_num_rows(result);
        toReturn = *result;
    }
    else
        affected = 0;
    return 0;
}

If it's not normal: how should I make my program better? I'm also still looking for a good and easy benchmark to test the MySQL server.

Thanks, Raf
Benchmark
Hi, I want to know how many INSERT queries my MySQL server can resolve per second. What benchmark should I use, or how should I do this? I've read something in the manual about looking for sql-bench, but it's nowhere on my PC. Is it possible that it's not installed?

Greetings, Raf
MySQL performance
Hi, I'm currently writing my final-year thesis and I'm in need of some information on the efficiency of MySQL. If anyone knows a good site, book, paper, etc., please let me know. I'm wondering how many requests a MySQL server can resolve per second, and that kind of thing.

Thanks in advance, Raf
Performance question
Hi, I know I've already posted this question, but it's rather important for my final work; I'll add some extra information at the bottom. I've got a question about the number of writes a MySQL server can handle in a second. I've written a program which contains the following code:

typedef struct {
    MYSQL * connection;
    char * database;
} con;

int resolvequery(char * vraag, MYSQL_RES& toReturn, int& affected, int control)
{
    MYSQL_RES * result;
    MYSQL_ROW row;
    int state, counter;
    char * query;
    char * db;
    MYSQL * connect;
    int * temporal;

    connect = standardcon.connection;
    db = standardcon.database;
    state = mysql_real_query(connect, vraag, 1024);
    if (state != 0) {
        printf(mysql_error(connect));
        return 1;
    }
    if ((strncmp(vraag,"SEL",3)==0) || (strncmp(vraag,"SHO",3)==0)
        || (strncmp(vraag,"DES",3)==0)) {
        result = mysql_store_result(connect);
        affected = mysql_num_rows(result);
        toReturn = *result;
    }
    else
        affected = 0;
    return 0;
}

When I call this procedure 50 times/second (from one client) or 100 times/second (two clients at the same time), it stores all the data without generating an error, but immediately when I start a third client it crashes with the usual segmentation fault. In case you need it, the calls are made from this function:

CORBA::Short i_DBM_impl::storeData(const DINA::t_Table& table, const char * e)
    throw(CORBA::SystemException)
{
    int state;
    char * mission;
    MYSQL_RES temp;

    mission = (char *) malloc(256 * sizeof(char));
    sprintf(mission, "INSERT DELAYED INTO %s VALUES ('%s',CURRENT_TIMESTAMP())",
            (char *) table.name, e);
    cout << mission << endl;
    if (resolvequery(mission, temp, state, 0) == 0)
        return 1;
    else
        return 0;
}

So is it possible that some buffer is full or something when more than 100 inserts per second are requested? What can I do to improve the performance, or did I make other mistakes? I'm using mysql Ver 9.38 Distrib 3.22.32, for pc-linux-gnu (i586) on a Debian Linux machine.
Please, can you also mail to me ([EMAIL PROTECTED]) and not only to the list? (I'm receiving the digest version, and otherwise I have to wait too long. :-)

Extra info: SHOW STATUS:

+--------------------------+--------+
| Variable_name            | Value  |
+--------------------------+--------+
| Aborted_clients          | 0      |
| Aborted_connects         | 0      |
| Created_tmp_tables       | 0      |
| Delayed_insert_threads   | 0      |
| Delayed_writes           | 10101  |
| Delayed_errors           | 60     |
| Flush_commands           | 1      |
| Handler_delete           | 0      |
| Handler_read_first       | 0      |
| Handler_read_key         | 68198  |
| Handler_read_next        | 1332   |
| Handler_read_rnd         | 788092 |
| Handler_update           | 68194  |
| Handler_write            | 107755 |
| Key_blocks_used          | 3466   |
| Key_read_requests        | 354147 |
| Key_reads                | 1      |
| Key_write_requests       | 114449 |
| Key_writes               | 114426 |
| Max_used_connections     | 3      |
| Not_flushed_key_blocks   | 0      |
| Not_flushed_delayed_rows | 0      |
| Open_tables              | 2      |
| Open_files               | 4      |
| Open_streams             | 1      |
| Opened_tables            | 91     |
| Questions                | 449077 |
| Running_threads          | 1      |
| Slow_queries             | 0      |
| Uptime                   | 331347 |
+--------------------------+--------+

and SHOW VARIABLES:

+------------------------+---------------------------+
| Variable_name          | Value                     |
+------------------------+---------------------------+
| back_log               | 5                         |
| connect_timeout        | 5                         |
| basedir                | /usr/                     |
| datadir                | /var/lib/mysql/           |
| delayed_insert_limit   | 100                       |
| delayed_insert_timeout | 300                       |
| delayed_queue_size     | 1000                      |
| join_buffer            | 131072                    |
| flush_time             | 0                         |
| key_buffer             | 16773120                  |
| language               | /usr/share/mysql/english/ |
| log                    | ON                        |
| log_update             | OFF                       |
| long_query_time        | 10                        |
| low_prior