Can you skip ORDER BY and get rows back in inserted order?

2002-03-05 Thread Sam Lam

I have a table like so :

CREATE TABLE album_rank(
album_id INT NOT NULL,
rank INT NOT NULL,
KEY(album_id)
)

I want to query on album_id and get the results ordered by rank, but I
want to avoid an ORDER BY in the query because of the filesort it
usually triggers, so I pre-ordered the rows in my table by inserting
them in the order I wanted with a CREATE TABLE ... SELECT ... ORDER BY.
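
Concretely, this is what I did, and how I then query without an ORDER BY
(the 123 is just a placeholder album_id):

CREATE TABLE ordered
SELECT * FROM unordered
ORDER BY album_id, rank ASC;

SELECT * FROM ordered WHERE album_id = 123;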

For some reason I get the data back in a different order. I've tried the
ORDER BY album_id, rank with both ASC and DESC in case it's FILO or FIFO.

Is there some way that I can avoid doing an ORDER BY and get my
rows back ordered by album_id, rank the way I inserted them?





InnoDB : Lock wait timeout exceeded; Try restarting transaction

2002-01-09 Thread Sam Lam

I recently switched to InnoDB and persistent connections from PHP.

Lately I've been getting "Lock wait timeout exceeded; Try restarting
transaction" errors on an UPDATE on a table. The system is in
development, so there is at most one other user (a back-end Perl script).

When I switched PHP back to non-persistent connections I stopped getting 
that error.

How does one use persistent PHP connections with InnoDB and avoid this
error?
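
My guess, and I may be off here, is that a transaction left open on a
reused persistent connection keeps holding its row locks, so the UPDATE
on the other connection waits until it hits the timeout. The pattern I
have been trying to stick to on every request is simply (the table and
WHERE clause are placeholders):

BEGIN;
UPDATE some_table SET some_col = some_col + 1 WHERE id = 42;
COMMIT;  -- or ROLLBACK on error; until then InnoDB keeps the row locks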






InnoDB ibd file hit 400MB default

2001-12-30 Thread Sam Lam

I've been using InnoDB for the last week or so with good results, and
then all of a sudden I started getting errors telling me a table is
full. I finally tracked it down to the size of the IBD file, which I had
left at the 400MB default.

Does an IBD file expand columns out to their data type's maximum size? I
have some sparsely used TEXT columns that I suspect are wasting a lot of
space.

What can I do when my IBD file reaches the 2GB Linux file size limit?
Which alternate file system should I use for files over 2GB, or should I
switch to BSD?

Is there any way to have separate IBD files for each MySQL table, or at
least for each database?
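
For reference, roughly what I have now, and the workaround I am guessing
at for the per-file limit (paths and sizes here are made up), is a
my.cnf along these lines:

# roughly what I have now: a single 400MB data file
[mysqld]
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:400M

# my guessed workaround: several data files, each kept under the 2GB limit
# innodb_data_file_path = ibdata1:2000M;ibdata2:2000M
# (I have read that newer versions allow :autoextend on the last file,
# but I have not verified that on my version)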






tables locked during long compound INSERT/DELETE?

2001-12-01 Thread Sam Lam

I have web sites where there are periodic inserts (and deletes) of many
new records generated by crawlers.

To reduce the number of queries I do compound operations like
INSERT ... VALUES (a),(b),(c),... and DELETE FROM table WHERE record_id
IN ($LONG_ID_LIST).
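
Concretely, the statements look roughly like this (the table and column
names are placeholders for the real ones):

-- one multi-row INSERT instead of many single-row ones
INSERT INTO records (record_id, data) VALUES (1,'a'), (2,'b'), (3,'c');

-- one DELETE covering the whole id list instead of one DELETE per id
DELETE FROM records WHERE record_id IN (101, 102, 103);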

At the same time as these operations are occurring, users are running
queries on these tables.

I've noticed that during these long compound operations, SELECT queries
seem to be slow, as if the tables are read locked. Does this sound like
what is happening?

If so, to prevent the tables from being locked, should I:

a) return to single INSERT and DELETE operations and suffer the query
overhead for each single operation?

or

b) do the compound INSERT and DELETE with LOW_PRIORITY (see the syntax
sketch after this list)? Does that mean the tables won't get locked?
Will that result in the INSERTs or DELETEs being spread over time, such
that SELECTs could potentially get different counts over that spread of
time?

or

c) something else?
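
For option b), the syntax I have in mind is below (same placeholder
table as above). My understanding, which may well be wrong, is that
LOW_PRIORITY only delays the start of the write until no clients are
reading the table; it does not stop the write from locking the table
once it is running:

INSERT LOW_PRIORITY INTO records (record_id, data) VALUES (1,'a'), (2,'b'), (3,'c');

DELETE LOW_PRIORITY FROM records WHERE record_id IN (101, 102, 103);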






tables locked during long compound INSERT/DELETE? (correction)

2001-12-01 Thread Sam Lam

Sorry, I confused a read lock with a lock that prevents other threads
from reading.

This is what I meant:

I have web sites where there are periodic inserts (and deletes) of many
new records generated by crawlers.

To reduce the number of queries I do compound operations like
INSERT ... VALUES (a),(b),(c),... and DELETE FROM table WHERE record_id
IN ($LONG_ID_LIST).

At the same time as these operations are occurring, users are running
queries on these tables.

I've noticed that during these long compound operations, SELECT queries
seem to be slow, as if other threads can't read the tables. Does this
sound like what is happening?

If so, to prevent the tables from being locked in a way that blocks
other threads from reading, should I:

a) return to single INSERT and DELETE operations and suffer the query
overhead for each single operation?

or

b) do the compound INSERT and DELETE with LOW_PRIORITY? Does that mean
the tables won't get locked in a way that prevents other threads from
reading? Will that result in the INSERTs or DELETEs being spread over
time, such that SELECTs could potentially get different counts over that
spread of time?

or

c) something else?

