Re: merge table: speed benefits?

2005-12-25 Thread Tom Cunningham
…, right? Tom. On 24 Dec 2005 02:08:12 -0000, [EMAIL PROTECTED] wrote: > Tom Cunningham writes: > > (a) The docs say that spreading the underlying tables across different disks can make queries faster. I don't quite understand how this will w…

MERGE tables considered harmful for data warehouse fact tables

2005-12-22 Thread Tom Cunningham
The script to prove it follows. Splitting a million-row fact table into a 5-part merge table makes 3-second queries take 9 seconds. The basic problem is this: fact tables are generally referenced by unique combinations of dimensions, and though often only one underlying table needs to be referenced…
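
A minimal sketch of the kind of split being described, with hypothetical table and column names (not from the original post):

-- Five identical MyISAM pieces plus a MERGE table over them.
CREATE TABLE sales_part1 (
  day_id INT NOT NULL,
  product_id INT NOT NULL,
  qty INT NOT NULL,
  KEY (day_id, product_id)
) ENGINE=MyISAM;
-- ... sales_part2 through sales_part5 defined identically ...
CREATE TABLE sales_all (
  day_id INT NOT NULL,
  product_id INT NOT NULL,
  qty INT NOT NULL,
  KEY (day_id, product_id)
) ENGINE=MERGE UNION=(sales_part1, sales_part2, sales_part3, sales_part4, sales_part5)
  INSERT_METHOD=LAST;
-- A lookup by (day_id, product_id) must probe the index of every underlying
-- table, which is the slowdown the post measures.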

Re: merge table: speed benefits?

2005-12-21 Thread Tom Cunningham
OK thanks for your help, so my summary: (1) spreading merged tables across disks will only help concurrent queries (in my data-warehouse application I'm doing things serially). (2) there's no efficiency in the way a merge table splits the indexes into smaller files -- if anything, it will be less…

merge table: speed benefits?

2005-12-20 Thread Tom Cunningham
Hi all. I'm working on splitting a 5G MyISAM fact table into separate parts, and putting a merge table on top. This will definitely help with table management, but I am hoping that a merge table might help me with query speed: (a) The docs say that spreading the underlying tables across different…

sudden mysqldump errors

2005-08-04 Thread Tom Cunningham
Hi list, our overnight backup scripts have suddenly been getting a lot of mysqldump errors like: mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `cs_email_things` at row: 512234 I've checked the obvious things, and no luck: * disk space is OK * nothing in…
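
One common cause - an assumption here, not something confirmed in the thread - is a packet or timeout limit being hit partway through a large table. A sketch of the server-side knobs to check (values illustrative):

SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_write_timeout';
SET GLOBAL max_allowed_packet = 64*1024*1024;  -- allow larger rows/packets
SET GLOBAL net_write_timeout = 600;            -- give a slow dump more time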

cross-reference of field names in query

2005-06-20 Thread Tom Cunningham
Here's something I've been curious about: referring to one field from another field in a 'select' query: SELECT data*2 AS intermediate_result, intermediate_result + 1 AS final_result FROM table; I know that MySQL 4.1 can't handle it, but: (a) Is it part of standard SQL? (I hav…
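
For what it's worth, standard SQL does not let a select-list alias be referenced from the same select list either. Two workarounds, sketched with a hypothetical table t:

-- 1. Repeat the expression:
SELECT data*2 AS intermediate_result, data*2 + 1 AS final_result FROM t;
-- 2. Compute the alias in a derived table (MySQL 4.1+):
SELECT intermediate_result, intermediate_result + 1 AS final_result
FROM (SELECT data*2 AS intermediate_result FROM t) AS sub;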

Re: resolving ambiguous column name in subquery

2005-04-04 Thread Tom Cunningham
I take it all back. I see now why this behaviour is desirable, or at least standard. E.g., see: https://aurora.vcu.edu/db2help/db2s0/c2corr.htm On Apr 4, 2005 2:40 PM, Tom Cunningham <[EMAIL PROTECTED]> wrote: > Hi, the following seems wrong to me. Not sure whether this is a bug. >…

resolving ambiguous column name in subquery

2005-04-04 Thread Tom Cunningham
Hi, the following seems wrong to me. Not sure whether this is a bug. In short: a column-name in a subquery can refer to a table *outside* of the subquery (fair enough), but if the column-name is ambiguous between *inside* and *outside*, the parser assumes that it refers to the *inside* context. I…

Re: INSERT .. SELECT ... ON DUPLICATE KEY UPDATE

2005-02-27 Thread Tom Cunningham
Thanks Shawn, Sergei. I'll get onto the new version as soon as I can. Tom.

INSERT .. SELECT ... ON DUPLICATE KEY UPDATE

2005-02-24 Thread Tom Cunningham
It appears you can't combine an INSERT...SELECT with an ON DUPLICATE KEY UPDATE. I would find it very useful if you *could* do this. I know it would complicate how you handle the syntax for what to do when you hit a duplicate key; you could do this: update all the columns that are *not* involve…
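
Later MySQL releases do accept the combination; a sketch with hypothetical tables, assuming a unique key on totals.user_id:

INSERT INTO totals (user_id, amount)
SELECT user_id, SUM(amount) FROM orders GROUP BY user_id
ON DUPLICATE KEY UPDATE amount = VALUES(amount);
-- VALUES(amount) refers to the value the SELECT tried to insert.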

Re: passing environment variable to an SQL script

2005-02-04 Thread Tom Cunningham
I found an answer: mysql -e "SET @VAR:=1234; SOURCE xxx.sql"

passing environment variable to an SQL script

2005-02-01 Thread Tom Cunningham
Hi everybody. I do a lot of maintenance things on the DB by piping SQL files to my MySQL DB. What would be very useful is to find a way of passing very simple *arguments* to these scripts. The only way I can think of is a nasty trick: > mysql --set-variable=wait_timeout=1234 < file.sql
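
The follow-up above shows the trick that was eventually found. For illustration, how the variable can then be consumed inside the script (the script body here is hypothetical):

-- xxx.sql, invoked as:  mysql -e "SET @VAR:=1234; SOURCE xxx.sql"
-- @VAR is visible because SOURCE runs in the same session.
SELECT * FROM jobs WHERE batch_id = @VAR;
UPDATE jobs SET processed = 1 WHERE batch_id = @VAR;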

data-warehouse ETL with MySQL

2005-01-14 Thread Tom Cunningham
Hi everybody. This is just a note to tell people what I'm doing, hoping I might get some comments. In short: I have been using MySQL SQL scripts to do data-warehouse ETL for about 6 months. It's gone well, but perhaps getting a little creaky now. The general format is this: 1. Add any new records…
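
A minimal sketch of what step 1 might look like (all names hypothetical):

-- Add any new records from staging to the dimension table,
-- skipping rows whose natural key is already present.
INSERT INTO dim_customer (customer_code, name)
SELECT s.customer_code, s.name
FROM staging_customer s
LEFT JOIN dim_customer d ON d.customer_code = s.customer_code
WHERE d.customer_code IS NULL;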

not recognizing an index (possible_keys)

2004-11-12 Thread Tom Cunningham
I've had something funny happen to me this morning: I did an EXPLAIN - it couldn't find an index that was there. Then I did a DESCRIBE. Then I did an EXPLAIN again, and it *could* find the index. Is this expected behaviour? (4.0.22-standard) 1st EXPLAIN: …
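
One plausible explanation - an assumption, not confirmed in the thread - is stale index statistics. These can be refreshed explicitly rather than waiting for a DESCRIBE to shake things loose (table and column names hypothetical):

ANALYZE TABLE my_table;  -- rebuilds key-distribution stats for the optimizer
EXPLAIN SELECT * FROM my_table WHERE indexed_col = 42;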

size of indexes with prefixes

2004-11-02 Thread Tom Cunningham
Hi - I've been trying different prefixes on my index & I'm getting funny results: in short: >> an index on a column-prefix is bigger than an index on the whole column << using "show table status" I get total index size as: 12147712 - without index, 13239296 - with full index, 13455360 - with in…
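
For reference, the two variants being compared, with hypothetical names:

ALTER TABLE t ADD INDEX idx_full (long_col);        -- index on whole column
ALTER TABLE t DROP INDEX idx_full;
ALTER TABLE t ADD INDEX idx_prefix (long_col(10));  -- index on 10-char prefix
SHOW TABLE STATUS LIKE 't';                         -- compare Index_length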

Re: "Data Truncated" warning about join, not insert?

2004-10-27 Thread Tom Cunningham
Sorry I forgot to include table-defs: … OK, I've got it entirely reproducible now, without giving away any sensitive data. No, the column-lengths are the same. But doing this has shown me that it's something to do with the partial key on the 'email' column in the 'user' table. If you remove it,…

"Data Truncated" warning about join, not insert?

2004-10-26 Thread Tom Cunningham
Hi Everybody. I have a funny problem: I'm getting thousands of "Data Truncated" warnings when I insert a single row of data into a table, using an INSERT...SELECT. The funny thing is, the warning is for a column I'm not even inserting. It's only a column that I'm doing a join on. The query is th…

Re: Partial Row Reads?

2004-10-25 Thread Tom Cunningham
I think what Harald is saying (& if he's not, then I say this): you could have an alternative table structure like this - it should make queries much quicker: create table raddata_2004_10_ONE ( granID integer not null, scanID tinyint unsigned not null, fpID…

Large amounts of SQL

2004-07-22 Thread Tom Cunningham
Hi, this is just a general question - how do other people organise a large amount of SQL? I have begun to have a lot of SQL files lying around, used for some data-mining type stuff, & things are getting a bit messy. I do my best with: (a) putting user variables at the start of files for anything…

User variables & optimization of IF()

2004-07-07 Thread Tom Cunningham
Hi everybody. I generally try to parametrize out my queries as much as possible with user-variables. So, say you have a general query for all users: --- SELECT ... FROM users --- It's nice to do this: SET @USER_ID:= NULL; SELECT ... FROM users WHERE IF(@USER_ID IS NOT NULL, user_id=@USER_ID,…

User variables & optimization of IF()

2004-07-06 Thread Tom Cunningham
Hi everybody. I often try to parametrize out my queries as much as possible with user-variables. So, say you have a general query for all users: --- SELECT ... FROM users --- It's nice to do this: SET @USER_ID:= NULL; SELECT ... FROM users WHERE IF(@USER_ID IS NOT NULL, user_id=@USER_ID, 1)…
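
The completed pattern, sketched against a hypothetical users table:

SET @USER_ID := NULL;
SELECT * FROM users WHERE IF(@USER_ID IS NOT NULL, user_id = @USER_ID, 1);  -- all users
SET @USER_ID := 42;
SELECT * FROM users WHERE IF(@USER_ID IS NOT NULL, user_id = @USER_ID, 1);  -- one user
-- Note: wrapped in IF(), the comparison generally cannot use an index on
-- user_id, which is presumably the optimization question being asked.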

replication corruption because of max_packet_size

2004-05-21 Thread Tom Cunningham
Hi everybody, just a note of a problem I found - to see whether it's worth reporting as a bug, or suggesting that someone mention it in the manual. In short: *when replication fails because 'max_packet_size' is too small, that failure can corrupt the relay logs*. So even when you fix the max_packe…
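
A sketch of the usual recovery, assuming the corrupt relay logs just need to be discarded and re-fetched (file name and position are illustrative; read the real ones from SHOW SLAVE STATUS):

STOP SLAVE;
SET GLOBAL max_allowed_packet = 64*1024*1024;
SHOW SLAVE STATUS;  -- note Relay_Master_Log_File and Exec_Master_Log_Pos
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;
-- CHANGE MASTER TO throws away the existing relay logs, so the corrupted
-- events are re-fetched from the master.
START SLAVE;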

replication of temporary tables

2004-05-12 Thread Tom Cunningham
Hi, In short: what is the standard MySQL handling of temporary tables, *especially* when you've got a slave filter on to only replicate certain tables? My replication stopped over the weekend, because a query was executed which created a temporary table, then used that table to update one of t…

Re: sql prob

2004-01-30 Thread Tom Cunningham
Try this: CREATE TEMPORARY TABLE max_entry SELECT NoPeople, Max(Date_entry) AS Date_entry FROM A GROUP BY NoPeople; SELECT NoPeople FROM A NATURAL JOIN max_entry WHERE Date_departure IS NULL; > Hi all :) > > I have 1 relation A. > > Structure of A: > NoPeople > Date_entry > Date_departure > > and key is (…

Re: Indexes to speed up a duplicates query.

2003-07-27 Thread Tom Cunningham
…users.o_last_name) or (users.o_email_address=duplicateusers.o_email_address)) group by oid --- Plus a supplementary question: whether I have 'limit 1' or 'limit 100' doesn't seem to make any difference. I thought it must be my 'order by' clause, but I got rid of that &…

Indexes to speed up a duplicates query.

2003-07-27 Thread Tom Cunningham
OK, say you're trying to find all the non-exact duplicates in a table, and especially do it with a single query, so that the check can be part of a user interface. Then naturally you're going to have a slow query, because it has to compare every row with every other row. You ought to be able to…
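
A sketch of the shape of such a query (hypothetical column names; the real one appears, truncated, in the follow-up above):

-- Self-join: each row against every other row sharing a last name or email.
SELECT users.oid
FROM users
JOIN users AS duplicateusers
  ON users.oid <> duplicateusers.oid
 AND ((users.o_last_name = duplicateusers.o_last_name)
   OR (users.o_email_address = duplicateusers.o_email_address))
GROUP BY users.oid;
-- Separate indexes on o_last_name and o_email_address may let newer versions
-- probe each OR branch (index_merge); without them every pair of rows is
-- compared.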