: [EMAIL PROTECTED]
Sent: Thursday, May 30, 2002 9:04 PM
Subject: Re: I need 50.000 inserts / second
On Fri, May 31, 2002 at 01:49:11AM -0300, Cesar Mello - Axi wrote:
Hello,
I intend to use MySQL in data acquisition software. The current
version stores the acquired data straight
[snip]
INSERT INTO my_table VALUES (foo, foo), (bar, bar), (z, z)...
[snip]
From my own experience, using this method, that is, doing one INSERT per
200 or more rows, I can increase the insert speed by 5x to 10x. A
couple more performance improvements were made as well.
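The batching advice above can be sketched in C++ as a small helper that folds many rows into one multi-row INSERT, so each round trip to the server carries a whole batch. This is only an illustration; the function and column layout (timestamp/value pairs) are hypothetical, and real code would escape or format values to match the table's types.

```cpp
#include <string>
#include <utility>
#include <vector>

// Build one multi-row INSERT statement from a batch of
// (timestamp, value) pairs, as suggested in the thread.
std::string build_multi_insert(const std::string& table,
                               const std::vector<std::pair<double, double>>& rows) {
    std::string sql = "INSERT INTO " + table + " VALUES ";
    for (std::size_t i = 0; i < rows.size(); ++i) {
        if (i > 0) sql += ",";
        sql += "(" + std::to_string(rows[i].first) + "," +
               std::to_string(rows[i].second) + ")";
    }
    return sql;
}
```

Sending one such statement per 200 rows instead of 200 single-row INSERTs is what yields the 5x to 10x speedup reported above.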
On Fri, May 31, 2002 at 01:56:12PM -0500, Dan Nelson wrote:
In the last episode (May 31), Mark said:
Cesar, you really should consider using placeholders and bind_param
(if available). Without using placeholders, the insert statement will
contain the literal values to be inserted and has
Hi !!
You could maybe buffer the data in your application
and then run inserts later... like this:

struct oneRow {
    double timestamp;
    double data;
    /* etc., etc. */
};

struct oneRow rows[num_of_rows];
for (int i = 0; i < num_of_rows; i++)
{
    // collect data
    rows[i].timestamp = (double) i;
Depending on your available RAM and the length of your sampling runs, you
could write records to heap (in-memory) tables -
http://www.mysql.com/doc/H/E/HEAP.html
- and then dump those to disk after the sample was done. You might
even be able to use heap tables as a buffer with one process
Hello,
You could maybe buffer the data in your application
and then run inserts later... like this.
This is not a solution for me as the data acquisition can take hours without
any break.
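For acquisition that runs for hours without a break, buffering need not mean pausing: one common pattern is double buffering, where the sampling loop fills one buffer while a second thread flushes the previously filled one (e.g. as a multi-row INSERT). A minimal single-threaded sketch of the buffer-swap mechanics, with all type and member names hypothetical:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Sample {
    double timestamp;
    double value;
};

// Double buffer: the acquisition loop appends to the active buffer;
// when it is full, take_filled() hands the filled buffer over for
// flushing while sampling continues into the other buffer.
class DoubleBuffer {
public:
    explicit DoubleBuffer(std::size_t capacity) : capacity_(capacity) {
        active_.reserve(capacity);
        spare_.reserve(capacity);
    }

    // Append one sample; returns true when the active buffer is full
    // and ready to be swapped out.
    bool append(const Sample& s) {
        active_.push_back(s);
        return active_.size() >= capacity_;
    }

    // Swap buffers and return the filled rows. In a real program the
    // returned batch would be written by a separate flusher thread.
    std::vector<Sample> take_filled() {
        spare_.clear();
        std::swap(active_, spare_);
        return std::move(spare_);
    }

private:
    std::size_t capacity_;
    std::vector<Sample> active_, spare_;
};
```

At 50 kHz, a buffer of a few thousand samples keeps the swap frequency low while bounding how much data is lost if the process dies.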
I might be misunderstanding you, since I can't quite make sense of this...
You wrote:
The following C++
On Fri, 2002-05-31 at 20:54, Cesar Mello - Axi wrote:
Hello,
You could maybe buffer the data in your application
and then run inserts later... like this.
This is not a solution for me as the data acquisition can take hours without
any break.
Whatever you do, with any SQL you get stuck
If a HEAP table won't do because of memory constraints, why not use straight
flat files like typical Unix or Web logs? Start a new file at set intervals,
whether it be time intervals or record intervals. Then you can use the
LOAD DATA command to load the files.
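A sketch of the flat-file idea: write records in LOAD DATA's default on-disk format (tab-separated fields, newline-terminated lines), so the finished files can be bulk-loaded as-is. The record layout and file/table names here are hypothetical.

```cpp
#include <cstdio>
#include <string>

// Format one (timestamp, value) record as a line for a flat log file.
// LOAD DATA INFILE's defaults are fields terminated by '\t' and lines
// terminated by '\n', so a file of such lines can be loaded with e.g.
//   LOAD DATA INFILE '/path/to/chunk.log' INTO TABLE my_table;
std::string format_record(double timestamp, double value) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%.6f\t%.6f\n", timestamp, value);
    return std::string(buf);
}
```

Rotating to a new file every N seconds or N records, as suggested above, gives natural load units and lets loading lag behind acquisition without blocking it.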
Your situation seems similar to that
Hi.
On Fri, May 31, 2002 at 03:24:52PM +0200, [EMAIL PROTECTED] wrote:
Cesar, you really should consider using placeholders and bind_param (if
available). Without using placeholders, the insert statement will contain
the literal values to be inserted and has to be re-prepared and re-executed
- Original Message -
From: Benjamin Pflugmann [EMAIL PROTECTED]
To: Mark [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, May 31, 2002 8:42 PM
Subject: Re: I need 50.000 inserts / second
The general idea is correct, but note that MySQL does not support
this (yet)...
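Since server-side prepared statements were not available in MySQL at the time, drivers that offered placeholders emulated them on the client side by splicing values into the statement text. A minimal sketch of that emulation (helper name hypothetical; real drivers also quote and escape each value, which is omitted here):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Substitute each '?' placeholder in the statement with the matching
// value, in order. Quoting/escaping of values is deliberately omitted.
std::string bind_params(const std::string& stmt,
                        const std::vector<std::string>& values) {
    std::string out;
    std::size_t next = 0;
    for (char c : stmt) {
        if (c == '?' && next < values.size())
            out += values[next++];
        else
            out += c;
    }
    return out;
}
```

With emulated placeholders the server still parses every statement from scratch, so the win is convenience and safety rather than the re-prepare savings described above.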
Really?
In the last episode (May 31), Mark said:
Cesar, you really should consider using placeholders and bind_param
(if available). Without using placeholders, the insert statement will
contain the literal values to be inserted and has to be re-prepared
and re-executed for each row. With
Subject: Re: I need 50.000 inserts / second
On Fri, May 31, 2002 at 01:49:11AM -0300, Cesar Mello - Axi wrote:
Hello,
I intend to use MySQL in data acquisition software. The current
version stores the acquired data straight in files. The sample rate
can get up to 50 kHz. I would like to know if there is some way to
improve MySQL