At 03:28 PM 9/24/2004, you wrote:
Hello,
I have a C++ application which spits out data continuously, and I need to
load that data into a database. The data rate is roughly 50,000 rows * 50
bytes/row per second. I use LOAD DATA INFILE (the quickest way I can find
to load data into the db) to load this data into MyISAM tables, which is
accomplished by continuously running another C++ application that uses
the MySQL C API (and ultimately runs LOAD DATA INFILE). To do this, I have
modified the first app to write the data to ASCII files as required by
LOAD DATA INFILE. The problem is that there is a bottleneck in organizing
the data (which I do by appending to C++ strings) and then dumping the
strings to the flat files, so I'm looking for ways to improve performance
in this area. Writing binary data would be faster. Does MySQL support
something similar to LOAD DATA INFILE for loading data from binary files?
Is there a faster way of getting the data into the db without having to
dump it to flat files? I've tried INSERTs, writing code which essentially
bridges the two apps, but of course INSERTs are much slower than LOAD DATA
INFILE. The data types themselves are simple: ints, smallints, text,
doubles, and floats.
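
For reference, the loader app essentially just issues the statement through
mysql_query; roughly like this (table name, delimiter, and error handling
are simplified placeholders):

    /* Rough sketch of what the loader app does; table and file names
       are placeholders and error handling is trimmed. */
    #include <mysql.h>
    #include <cstdio>

    void load_file(MYSQL *conn, const char *path)
    {
        char sql[512];
        std::snprintf(sql, sizeof(sql),
                      "LOAD DATA INFILE '%s' INTO TABLE my_table "
                      "FIELDS TERMINATED BY ','", path);
        if (mysql_query(conn, sql) != 0)
            std::fprintf(stderr, "LOAD DATA failed: %s\n", mysql_error(conn));
    }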
Thanks for any help.


David,
I do the same thing: I write comma-delimited text files using Delphi (instead of C++) and then use LOAD DATA INFILE to load the data. If the bottleneck is in your C++ application, the problem could be string allocation. Since your rows are only about 50 characters long, defining a fixed-length string of, say, 50 or 60 bytes should speed things up compared to a variable-length string, because of the way memory gets reallocated as a string grows. I routinely write text files with 8 million rows without any problem and have gone up to 100 million rows at times.
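
In C++ terms, a minimal sketch of that idea (the column layout here is made up) is to format each row into a fixed-size buffer instead of growing a string field by field:

    /* Sketch only: the columns (an int, a double, a short text field)
       are made up.  The point is that each row goes into a fixed
       64-byte buffer, so nothing gets reallocated per row. */
    #include <cstdio>
    #include <cstddef>

    int format_row(char *out, std::size_t outlen,
                   int id, double value, const char *label)
    {
        /* snprintf returns the length of the formatted row
           (rows of ~50 characters fit comfortably in 64 bytes). */
        return std::snprintf(out, outlen, "%d,%f,%s\n", id, value, label);
    }

    /* usage:
         char row[64];
         int len = format_row(row, sizeof(row), 42, 3.14, "abc");
         ...append the len bytes to your output buffer or file...   */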


If you want to write the data out faster, you could try using BlockWrite (or whatever your C++ library offers as an equivalent block-write call). You simply move the text strings into a buffer, adding a CR/LF to each line, and do one block write whenever the buffer fills up. But first I would profile the application to find the bottleneck, and then try to optimize it.
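
In C++ the rough equivalent is a plain fwrite of a big buffer; a minimal sketch (buffer size and helper names are just for illustration) would look like:

    /* Sketch only: accumulate formatted rows (each already ending in
       CR/LF or LF) in a 1 MB buffer and flush it with a single fwrite
       when it fills up.  Buffer size and helper names are arbitrary. */
    #include <cstdio>
    #include <cstring>
    #include <cstddef>

    const std::size_t kBufSize = 1 << 20;   /* 1 MB write buffer */
    static char buf[kBufSize];
    static std::size_t used = 0;

    void flush_buffer(std::FILE *fp)
    {
        if (used > 0) {
            std::fwrite(buf, 1, used, fp);
            used = 0;
        }
    }

    void append_row(std::FILE *fp, const char *row, std::size_t len)
    {
        if (used + len > kBufSize)      /* rows are ~50 bytes, so they */
            flush_buffer(fp);           /* always fit after a flush    */
        std::memcpy(buf + used, row, len);
        used += len;
    }

If you'd rather not manage the buffer yourself, calling setvbuf on the output file with a large buffer gets you much of the same effect.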

Mike

