Re: Memory to Memory INSERTS

2004-06-29 Thread Alejandro Heyworth
Shawn,
Very interesting idea.  I definitely want to look into this a bit more.
I fear, though, that the binlogs might be written to disk first before they 
are copied over to the replicas.

Another member of my team mentioned there might be a way to issue direct 
MyISAM table INSERTS.  She suggested that we could bypass the parsing and 
param checking of our large string-based INSERT queries.

Since our sampling application can guarantee the validity of the INSERT 
inputs, this would offer us a potential performance boost, as there is no 
reason for MySQL to verify them a second time.

Does anyone know if the direct table/index insert idea is valid?
At 09:43 AM 6/29/04, you wrote:
Could Alejandro re-use the sections of the MySQL source code that handle
replication and bin-logging to make his data capture application appear as
a Master server and have his MySQL database act as its slave?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Re: Memory to Memory INSERTS

2004-06-29 Thread Alejandro Heyworth
Eric,
I'm looking for a way to eliminate the construction, transmission, and 
parsing of the long multi-row INSERT queries that we are issuing from our 
client app.  Since we are inserting 200k rows a shot, we're looking for 
every boost that we can find.
   * Connecting: (3) [want to use a connection pool]
   * Sending query to server: (2) [want to eliminate]
   * Parsing query: (2) [want to eliminate]
   * Inserting record: (1 x size of record) [no way around this]
   * Inserting indexes: (1 x number of indexes) [no way around this]
   * Closing: (1) [want to use a connection pool]
Since we have already tuned the server, I'm looking for other ideas.

Radical ideas are welcome!
At 07:34 PM 6/29/04, you wrote:
http://dev.mysql.com/doc/mysql/en/Insert_speed.html


Memory to Memory INSERTS

2004-06-25 Thread Alejandro Heyworth
Hi!
I'm looking for a better way to insert large numbers of rows from a client 
application that is sampling physical data in real time.  In our case, we 
are using a C array (double hipvalues[100]) as a circular buffer for our 
sampled values.

We're currently creating large query strings similar to:
INSERT DELAYED INTO hipjoint VALUES 
(hipvalues[0]),(hipvalues[1]),(hipvalues[2]),(hipvalues[3]),etc...

We would like to continue to insert our values directly from our client app 
without first having to dump the data to a temp file and LOAD DATA 
INFILEing it periodically.

Any ideas?
Config values of interest:
key_buffer_size = 4G
bulk_insert_buffer_size = 1024M
We are using MySQL 4.1.2.
Thanks.


INDEX DESC

2004-06-23 Thread Alejandro Heyworth
Does anyone know when INDEX DESC will be implemented?
I'm storing time values and want to access the data from the most recent 
time value without sorting the result set.

Thanks!


Re: INDEX DESC

2004-06-23 Thread Alejandro Heyworth
I don't see any relationship between your two sentences.
Thanks for the responses.
I guess I was not clear enough in my last post.
You can define an index to sort values in a particular order...  ASC is the 
default.
DESC is an option, but it is not implemented yet.  I want to know when it 
will be implemented.

I would like the values in my table to be stored in DESC order.
For example, I want to read ranges of values from my db in the opposite 
order of how they are currently being indexed without using an extra ORDER 
BY in my select. This ORDER BY forces an internal sort to put the result 
set in DESC order which slows my SELECT.

Currently, if I insert integer values into a table with an index ASC (or 
DESC) on the INT column, a general SELECT will return the values in 
ascending order.
1 2 3 4 5

I want to naturally store the data as:
5 4 3 2 1
without adding an ORDER BY int_col DESC
It would make a world of difference to our app since we are dealing with 
huge ranges of joint compression information that need to be read out 
sequentially.

I hope this is clearer.
So, does anyone know when it will be implemented?


Re: INDEX DESC

2004-06-23 Thread Alejandro Heyworth

That could be.  If so, it's a MySQL 3.23 limitation that was fixed in
MySQL 4.0.
So, what we are agreeing on is that MySQL 4.x does in fact support both 
DESC and ASC indexes?

If this is the case and we're doing something wrong here, cool!
I definitely think the docs should reflect this functionality!
BUT,
ALTER TABLE kneejoint ADD INDEX testindex (compression DESC);
1) SHOW INDEX FROM kneejoint only shows ASC collation.
2) SELECT compression FROM kneejoint still returns values in ASC order.
We are currently using the 4.1.2 binaries (MyISAM) and the DESC index 
functionality isn't working for us yet.

I hear all of your comments on the actual storage of the data and 
understand that.  However, the indexes should return values in the proper 
indexed order if the indexes are being used, and they are.

Our experience is that ORDER BY ___ ASC on an ASC index is faster than an 
ORDER BY DESC on the same ASC index when the data sets are large.

Thanks for the help by the way!  We are rapidly migrating our experiments 
to MySQL and trying to dump SQL Server forever! 



Data spikes and buffering inserts...

2004-06-22 Thread Alejandro Heyworth
Hi!
My MyISAM database works perfectly for 98% of the cases that we encounter.
However, there are a few cases during our experiment that we know will 
cause spikes in dataflow.  We know both the why and when, so we are trying 
to plan appropriately for it.

Is there a way that I can mix in-memory inserts for that period (maybe 
using the MEMORY/HEAP engine) directly with another MyISAM table for the 
rest of the experiment?

I'm looking for a way to buffer the data while still keeping the data 
available in real-time.

I have access to a RH9 8GB SMP system.
Using INSERT DELAYED seems to kill the server when it hits these data spikes.
I guess what I'm looking for is a way to insert 1,000,000 rows from a HEAP 
engine table to the beginning of a MyISAM table as a block instead of using 
something like this:

INSERT INTO hipjoint SELECT * FROM memhipjoint
Any help would be great! 



thread_stack and out of memory issues...

2004-06-21 Thread Alejandro Heyworth
Hi!
My Config:
Running Redhat 9.0 Linux 2.4.20-8bigmem SMP
8GB mem
250GB raid array
My server:
MySQL 4.1.2-alpha-standard
We are running into strange memory related errors with our system.
1) Warning: Asked for 262144 thread stack, but got 126876
On start up, we get the above warning.
How can I help MySQL get the stack space that it expects?
I changed thread_stack in my.cnf to 256K (from 192K) just to test the param 
out, and it had no effect.

2) Out of Memory; check if mysqld or some other process uses all available 
memory -and- Out of memory (needed 56 bytes)

These errors are usually generated after MySQL dies.
They are usually produced after a large insert of, say, 100,000 records 
from our C client app using:

INSERT DELAYED INTO kneejoint VALUES (),(),(), etc...
Our C client inserter application works well, so I am leaning toward our 
server/distro configuration as the problem.

I'm inserting the records using our client because we are receiving 
real-time stress/strain data from a remote input source.

By the way, max_allowed_packet is large enough, so that is not the problem.
3) Can't create new thread (errno 11)
I am assuming that the server can't start a new client thread because of 
memory issues.

I have monitored the system and it doesn't look like any other app is using 
much memory at all.  In fact, it looks like most mem is cached when 
examined using top.

I haven't been able to interpret any of these errors properly, so any help 
would be appreciated!