innodb_data_file_path tuning

2007-11-08 Thread Russell Uman


what are the performance implications of different settings for 
innodb_data_file_path?


if i only have one partition or filesystem, does it ever make sense to 
define more than one data file?


iirc, there's some way to have a single file per table - when is this 
advisable?
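

for reference, these are the settings i mean - as i understand it, 
innodb_file_per_table is the option that gives one tablespace file per 
table, but correct me if i'm wrong:

SHOW VARIABLES LIKE 'innodb_data_file_path';
SHOW VARIABLES LIKE 'innodb_file_per_table';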


i've had trouble finding resources on tuning innodb - i'd appreciate 
links as much as answers :)


thank you!





Re: correct way to simulate 'except' query in mysql 4.1

2007-10-27 Thread Russell Uman


huh. it's a varchar(50) on table1 and a varchar(50) on table2. i 
wonder why explain is reporting 150 as key_len?


utf8?


yes. that does make sense - utf8 is up to 3 bytes per character, so 
varchar(50) comes out to a key_len of 150.


is there anything else i can investigate?


Do you need utf8? :-)


yes. it's an internationalized application :)

Check your cache hits.  I can't remember if you said, but is it an 
InnoDB table?  I'm guessing MyISAM since you have a 2G key buffer.


yes. we do have some tables as innodb - the ones that get many, many inserts 
and don't require any count(*) queries, which as i understand it are slow in 
innodb. if there's some reason this kind of query would be faster under innodb, 
i'm happy to give it a try...


Check 
key_read_requests and key_reads for the query (mysql-query-profiler is a 
handy way to do this).


awesome. i will look into it.
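

in the meantime i guess i can eyeball the counters by hand - something like 
this before and after the query, comparing the deltas (my understanding is 
that a key_reads delta close to the key_read_requests delta means the key 
buffer is missing and we're hitting disk for index blocks):

SHOW STATUS LIKE 'Key_read%';
-- run the slow query, then check the counters again and compare
SHOW STATUS LIKE 'Key_read%';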




Re: correct way to simulate 'except' query in mysql 4.1

2007-10-27 Thread Russell Uman


Baron Schwartz wrote:
I don't think it will be any better to count distinct values.  I think 
the query is just slow because the index lookups are slow.  Is the 
'word' column really 150 bytes?


huh. it's a varchar(50) on table1 and a varchar(50) on table2. i wonder why 
explain is reporting 150 as key_len?


 That's probably the culprit.  How slow 
is this, by the way?  


this is also interesting. as you can see in the slow query log reported before, 
it took 94 seconds. i'd say i see between 15 and 90 seconds in the slow query 
log for this normally.


however, i just ran the query now, at a time when the application is not heavily 
loaded, and it finished quickly - less than a second.


another run a few minutes later took around 3 seconds. so there seems to be some 
interaction with load.


370k rows in one table, verifying the 
non-existence of index records in a 4M-row table with 150-byte index 
values... what does "slow" mean for your application?  How big is the 
index for the 4M-row table (use SHOW TABLE STATUS)?


the larger table has a 95M index; the smaller has a 5M index. key_buffer is set to 
2G, and when i look at top, mysql never actually gets above 1.5G, so i'm under 
the impression that all the indexes are in memory.
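

(for reference, i pulled those numbers from SHOW TABLE STATUS as suggested - 
the Index_length column, which is the index size in bytes; the table name here 
is just the placeholder from the query:)

SHOW TABLE STATUS LIKE 'table2';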


it's a search table, so it does get a lot of inserts, but the slow query log 
never reports any lock time.


is there anything else i can investigate?




Re: correct way to simulate 'except' query in mysql 4.1

2007-10-25 Thread Russell Uman


There's no "using distinct", but there is "not exists", and in fact no rows are
returned. Slow query log reports "#Query_time: 94  Lock_time: 0  Rows_sent: 0
Rows_examined: 370220"

EXPLAIN:

id: 1   select_type: SIMPLE   table: t1   type: index   possible_keys: NULL
key: PRIMARY   key_len: 150   ref: NULL   rows: 338451   Extra: Using index

id: 1   select_type: SIMPLE   table: t2   type: ref   possible_keys: word
key: word   key_len: 150   ref: t2.field   rows: 4
Extra: Using where; Using index; Not exists

These are two search tables (hence the large key_len, i believe): one with ~400K
rows, one row per search term; the other with ~4M rows, relating search terms to
content.

Perhaps I could optimize by doing a count(distinct) on each table and only
running the expensive query if the counts don't match?
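
Something along these lines, I mean (assuming 'word' is the join column in both
tables, as in the quoted query below):

SELECT COUNT(DISTINCT word) FROM table1;
SELECT COUNT(DISTINCT word) FROM table2;
-- only run the expensive LEFT JOIN ... IS NULL query when the counts differ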

Would I see any benefit by making these InnoDB tables?

Thanks for your help with this!

Baron Schwartz wrote:

Hi,

That is the right way, but if you show us the exact output of EXPLAIN we can 
help more.  In particular, does it say "Using distinct/not exists" in Extra?


Russell Uman wrote:


howdy.

i'm trying to find items in one table that don't exist in another.
i'm using a left join with a where clause to do it:

SELECT t1.field, t2.field FROM table1 t1 LEFT JOIN table2 t2 ON
t1.word = t2.word WHERE t2.word IS NULL;

both tables are quite large and the query is quite slow.

the field column is indexed in both tables, and explain shows the
indexes being used.

is there a better way to construct this kind of query?







correct way to simulate 'except' query in mysql 4.1

2007-10-24 Thread Russell Uman


howdy.

i'm trying to find items in one table that don't exist in another.
i'm using a left join with a where clause to do it:

SELECT t1.field, t2.field FROM table1 t1 LEFT JOIN table2 t2 ON t1.word = 
t2.word WHERE t2.word IS NULL;


both tables are quite large and the query is quite slow.

the field column is indexed in both tables, and explain shows the indexes being 
used.


is there a better way to construct this kind of query?
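
the only other formulation i could come up with is a correlated subquery, 
which i assume would be no faster, but for completeness:

SELECT t1.field
FROM table1 t1
WHERE NOT EXISTS (SELECT 1 FROM table2 t2 WHERE t2.word = t1.word);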

thank you!




RESET MASTER during daily backups

2002-10-08 Thread Russell Uman


howdy.

we use the binary log as a crash recovery tool.
therefore, once we have backed up the db (we use the excellent mysql_backup for this) 
we can happily discard yesterday's binlog.
the only correct way i've found to get rid of old binlogs is to issue RESET MASTER, 
and i figure i should do this at the same time that the logs get flushed.
however, neither mysqldump nor mysqladmin have a RESET MASTER option.
has anyone found a good way to do this?
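
to be concrete, what i want to run right after the nightly backup finishes is 
just this, so the new binlog starts clean:

-- discards all existing binary logs and starts a fresh one
RESET MASTER;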







undo

2001-11-08 Thread Russell Uman


howdy.

i have a mysql-backed php application.
has anyone ever come up with a (however kludgy) undo functionality for mysql in a php 
app? based on keeping a history somewhere?
based on the update log?

i'm thinking of just making some innodb tables, using transactions, and forcing my 
users to commit after each write.
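
roughly what i have in mind - table and column names are made up, and i 
realize the transaction would have to stay open for as long as the user's 
edit is pending:

BEGIN;
UPDATE articles SET body = 'edited text...' WHERE article_id = 42;
-- if the user hits undo before confirming:
ROLLBACK;
-- otherwise, when they confirm:
COMMIT;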

please let me know if there are any other possibilities...

thanks!






update/join question

2001-10-26 Thread Russell Uman

i feel like i must be missing something simple, because i keep wanting to do things 
like this but i can't find a one-step way to do it.

whether or not there is an easy way, can someone tell me the best way to do it?

i want to update a field in one table based on data in another table.

example:

CREATE TABLE users (userid INT(11) PRIMARY KEY, userinfo_set CHAR(1) NOT NULL DEFAULT 0);
CREATE TABLE userinfo (userid INT(11) NOT NULL, userinfo TEXT);



what i want is:

UPDATE users SET userinfo_set=1 where (users.userid=userinfo.userid);

in other words, i want to set a flag on every row in users for which there is a row in 
userinfo with the same userid.

i've been SELECT INTO OUTFILEing (with a join on the two tables) and LOAD DATA 
INFILEing to deal with this. is there a better way?
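
roughly what i'm doing now - the file path is just a placeholder, and REPLACE 
relies on the primary key on userid:

SELECT u.userid, 1
INTO OUTFILE '/tmp/users_with_info.txt'
FROM users u, userinfo ui
WHERE u.userid = ui.userid;

LOAD DATA INFILE '/tmp/users_with_info.txt'
REPLACE INTO TABLE users (userid, userinfo_set);
-- duplicate userids from the join are harmless; REPLACE just overwrites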

thanks!






novice join question

2001-08-31 Thread Russell Uman


i have three tables. products, class, and dept.
class and dept each have two fields - code and text.
products has a field named class (containing class codes) and a field named dept 
(containing dept codes).

for some depts, all the products that have that dept have the same class; 
for other depts, there are products in all classes with that dept.

i want to generate a list of depts based on product class/dept combinations. for 
example:
if there is a product with dept 1 and class 1, then dept 1 should be on the list;
if there are no products in dept 2 with class 1, then dept 2 should not be on the list.

i've tried

select dp.name, dp.dept
from dept.dept as dp
inner join products as pr on dp.dept=pr.dept where pr.class = '1';

i think this filters out the right depts, but of course it gives me multiple rows for 
each dept. how can i remove the duplicate rows?

can anyone give me a clue? should i be organizing my tables differently if i want this 
kind of result?

thanks

