I am proposing this as a hypothetical situation and I would like the full
feedback of the group:
Could Alejandro re-use the sections of the MySQL source code that handle
replication and bin-logging to make his data capture application appear as
a "Master" server and have his MySQL database act as its slave?
Would that be faster than periodic batch files with LOAD DATA INFILE?
Respectfully,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine
From: Alejandro Heyworth <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Date: 06/25/2004 03:48 PM
Subject: Memory to Memory INSERTS
Hi!
I'm looking for a better way to insert large numbers of rows from a client
application that is sampling physical data in real-time. In our case, we
are using a C "double hipvalues[1000000]" cyclical array to buffer our
sampled values.
We're currently creating large query strings similar to:
INSERT DELAYED INTO hipjoint VALUES
(hipvalues[0]),(hipvalues[1]),(hipvalues[2]),(hipvalues[3]),etc...
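A simplified sketch of what that string building amounts to through the
MySQL C API (the flush_rows helper, the connection handling, and the
single-DOUBLE-column layout of hipjoint are assumptions for illustration,
not our exact code):

/* Flush rows [start, start+count) of the circular buffer in one
 * multi-row INSERT DELAYED statement. */
#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>

#define BUF_ROWS 1000000
double hipvalues[BUF_ROWS];

static int flush_rows(MYSQL *conn, size_t start, size_t count)
{
    /* Each value needs at most ~27 chars: "(-1.7976931348623157e+308)," */
    char *sql = malloc(count * 32 + 64);
    if (sql == NULL)
        return -1;

    size_t len = (size_t)sprintf(sql, "INSERT DELAYED INTO hipjoint VALUES ");
    for (size_t i = 0; i < count; i++) {
        size_t idx = (start + i) % BUF_ROWS;   /* wrap around the buffer */
        len += (size_t)sprintf(sql + len, "%s(%.17g)",
                               i ? "," : "", hipvalues[idx]);
    }

    int rc = mysql_real_query(conn, sql, (unsigned long)len);
    free(sql);
    return rc;
}

Packing many rows into one statement keeps the per-row parsing and network
overhead low, which is the whole point of the multi-row form.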
We would like to continue to insert our values directly from our client app
without first having to dump the data to a temp file and periodically load
it with LOAD DATA INFILE.
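For comparison, the periodic batch path we want to avoid would look roughly
like this (the temp-file name, the flush_via_infile helper, and the use of
LOCAL are assumptions for illustration):

/* Dump the buffered samples to a temp file, then load the whole file
 * in one statement. */
#include <mysql.h>
#include <stdio.h>

static int flush_via_infile(MYSQL *conn, const double *vals, size_t count)
{
    FILE *fp = fopen("/tmp/hipjoint.dat", "w");
    if (fp == NULL)
        return -1;
    for (size_t i = 0; i < count; i++)
        fprintf(fp, "%.17g\n", vals[i]);   /* one sample per line */
    fclose(fp);

    /* LOCAL makes the client send the file; plain LOAD DATA INFILE
     * would need the file to sit on the server host. */
    return mysql_query(conn,
        "LOAD DATA LOCAL INFILE '/tmp/hipjoint.dat' INTO TABLE hipjoint");
}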
Any ideas?
Config values of interest:
key_buffer_size = 4G
bulk_insert_buffer_size = 1024M
We are using MySQL 4.1.2.
Thanks.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]