Re: INDEX DESC

2004-06-23 Thread Matt W
Hi Michael,

- Original Message -
From: Michael Stassen
Sent: Wednesday, June 23, 2004 2:30 PM
Subject: Re: INDEX DESC

 Jeremy Zawodny wrote:
  Why is sorting required at all?  Indexes *are* sorted already.

 I expect he's referring to mysql's poor performance when doing ORDER BY
 indexed_column DESC relative to ORDER BY indexed_column ASC.

The performance is only poor when using an index for DESC *if the index is
PACKED*.  (There are different criteria that determine whether a certain index
is packed.)  Otherwise, it should be pretty much exactly the same as ASC.
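For reference, you can check whether a given index is packed without guessing;
SHOW INDEX reports it (the table name here is just a placeholder):

SHOW INDEX FROM mytable;
-- The Packed column is NULL for ordinary fixed-length keys; anything else
-- means the key is packed, which is the case where a reverse (DESC) scan
-- can be slower.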


Matt





Re: INDEX DESC

2004-06-23 Thread Matt W
Hi Gerald,

- Original Message -
From: gerald_clark
Sent: Wednesday, June 23, 2004 2:28 PM
Subject: Re: INDEX DESC


 I suspect he is referring to 3.23's inability to use an index on an ORDER
 BY xxx DESC

That's not always true.  3.23 WILL use the index for ORDER BY ... DESC in a
query like this:

SELECT * FROM table ORDER BY key DESC LIMIT 10

but it won't for this:

SELECT * FROM table WHERE key_part1=123
ORDER BY key_part2 DESC LIMIT 10

i.e. when the index is already used for the WHERE.  Of course, this was
fixed in 4.0.


Matt





Re: Full text search problem

2004-06-21 Thread Matt W
Hi Pieter,

That's because "may" is a stopword in MySQL's full-text indexing, by
default (like "can", "the", etc.).  You can define your own stopword file
with the ft_stopword_file variable.  And you can find the default, built-in
list of stopwords in the file myisam/ft_static.c of the source
distribution.
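For what it's worth, the usual recipe (just a sketch; the path is only an
example, and I believe the variable needs 4.0.10 or newer) is to point
ft_stopword_file at your own list in my.cnf, e.g.
ft_stopword_file=/etc/mysql/ft_stopwords.txt, restart the server, and then
rebuild the FULLTEXT index so the new list takes effect:

REPAIR TABLE dogslost QUICK;
-- QUICK rebuilds only the indexes, without touching the data file.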

Hope that helps.

(Oh, also what Terry said in his reply!)


Matt


- Original Message -
From: Pieter Botha
Sent: Monday, June 21, 2004 3:54 AM
Subject: Full text search problem


 Hi

 I have a fulltext search on a dbase for lost pets.
 My problem is the following:

 I have a dog in the database called "May May" which doesn't show up in the
 search results. A dog called "Doggy Doggy" does show up, however. I guess
 the problem is that MySQL sees "May May" as being a date or something
 and doesn't do a text compare.

 Here is my query, from php.
 $query_results = sprintf("SELECT * FROM dogslost WHERE MATCH
 (`name`,`colour`,`gender`,`breed`,`location`,`description`) AGAINST
 ('%s' IN BOOLEAN MODE)", $crit_results);

 any ideas?

 Regards
 Pieter





Re: Help with apostrophe and FTS

2004-06-11 Thread Matt W
Hi Andrea,

The problem isn't really whether ' is a stopword; it's that ' is not treated
as a word-boundary character, and making it one is, I think, what you want.
And there is no way I know of to change that in MySQL... unless you edit the
source of course and compile it yourself. :-)  Is that an option for you?  If
so, I think you just need to change 1 line in myisam/ftdefs.h:

#define misc_word_char(X)   ((X)=='\'')

change that to:

#define misc_word_char(X)   (0)

I HOPE that is correct! ;-)
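If you do try it, remember that existing FULLTEXT indexes were built with the
old word-splitting rules, so they need to be rebuilt after the recompile
(table name is just an example):

REPAIR TABLE articles QUICK;
-- rebuilds the index so "dell'amore" is now indexed as "dell" and "amore"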


Matt


- Original Message -
From: Andrea Gangini
Sent: Thursday, June 10, 2004 9:44 AM
Subject: Help with apostrophe and FTS


 Is there the possibility of making the apostrophe char ( ' ) a stopword in
 mysql?
 Full text search queries in Italian or other European languages are greatly
 affected by that; for example, searching "amore" will not return
 "dell'amore" as a match.
 Any workaround suggested?

 Andrea Gangini [EMAIL PROTECTED] Mimesi Srl





Re: Column's DataType -- TEXT vs BLOB...

2004-06-10 Thread Matt W
Hi Scott,

No, TEXT and BLOB are the same except for the case-sensitivity differences.
Neither is like VARCHAR (except the with/without BINARY attribute part) in
that TEXT/BLOB columns don't lose trailing spaces when inserted, as VARCHAR
will -- just to clear that up.

That article is wrong, at least for MySQL.  In MySQL, unlike some other
DB systems, neither TEXT nor BLOB data is stored separately from the table.
If you want to have the non-TEXT/BLOB part of your table smaller and faster
(usually for full table scans), you have to manually create a separate
table for your TEXT/BLOB columns.
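Something like this is what I mean (just a sketch, with made-up table names):

CREATE TABLE docs (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200) NOT NULL
) TYPE=MyISAM;

CREATE TABLE docs_body (
  doc_id INT UNSIGNED NOT NULL PRIMARY KEY,
  body TEXT NOT NULL
) TYPE=MyISAM;

-- Full scans and frequent queries touch only the narrow docs table;
-- the big TEXT data is pulled in only when you actually need it:
SELECT d.title, b.body
FROM docs d, docs_body b
WHERE d.id = 42 AND b.doc_id = d.id;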


Hope that helps.

Matt


- Original Message -
From: Scott Fletcher
Sent: Thursday, June 10, 2004 2:47 PM
Subject: Column's DataType -- TEXT vs BLOB...


I'm wrestling over deciding on which data type to go with, TEXT or BLOB.  I
have one table with one column of 400 characters, I was thinking that TEXT
may be the way to go for that one.

I also have another table that uses 4 columns of 800 characters along with 5
columns that use 250 characters.  I'm thinking of using TEXT for 9 of those
columns.

The reason is that I read in the MySQL Manual that TEXT and BLOB
are pretty much the same in many ways; the only difference is that
BLOB uses VARCHAR BINARY while TEXT uses VARCHAR.  But an
article I read somewhere (not part of MySQL's Manual) says this...

--snip--
If it doesn't have to be searchable then a BLOB might be more efficient and
you shouldn't have to worry about size (Like size is important?  ). The
reason being that BLOB information is stored separately from the table data
and is related by a reference number in the table. This keeps the table
smaller and faster as I understand.
--snip--

So, I don't feel too sure what to decide on...  Care for some advice or
recommendation??

Thanks,
 Scott Fletcher





Re: Table types

2004-06-03 Thread Matt W
Hi Ronan,

Yes, it's fine to mix table types in databases and queries.
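For example (table names made up), this kind of mix is perfectly legal, and
you can join across the two storage engines like any other join:

CREATE TABLE categories (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  name VARCHAR(50) NOT NULL
) TYPE=MyISAM;            -- mostly read, rarely written

CREATE TABLE orders (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  category_id INT UNSIGNED NOT NULL,
  amount DECIMAL(10,2) NOT NULL
) TYPE=InnoDB;            -- heavily written, needs transactions

SELECT c.name, SUM(o.amount)
FROM orders o, categories c
WHERE c.id = o.category_id
GROUP BY c.name;

The one thing to keep in mind is that the MyISAM table isn't transactional,
so a ROLLBACK only undoes the InnoDB side.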


Matt


- Original Message -
From: Ronan Lucio
Sent: Thursday, June 03, 2004 2:44 PM
Subject: Table types


 Hi,

 Is it wise to have a database with hybrid table types?

 In other words: if I have a table that wouldn´t have many
 INSERT/DELETE/UPDATE queries, may I create it
 as MyISAM type and even interact (make JOINs) with
 other InnoBD and MyISAM tables?

 Or is it better (faster) to create all columns with the same type
(InnoDB)?

 Thanks,
 Ronan





Re: Tuning MySQL 4.0.20 for large full table scans

2004-06-03 Thread Matt W
Hi Dan,

- Original Message -
From: Dan Nelson
Sent: Thursday, June 03, 2004 12:34 PM
Subject: Re: Tuning MySQL 4.0.20 for large full table scans

[snip]

  Not sure what can be done about making it not go straight to tmpdir
  with a BLOB column in the SELECT clause, though.  Probably nothing,
  in 4.0.

 Do you actually see a temp file being created?  With tmp_table_size set
 to 2gb, it shouldn't have to go to disk.  Some more interesting data
 would be the State column from show processlist during the query,
 every 10 seconds or so.

Currently, temporary tables with TEXT/BLOB columns are always created on
disk because HEAP tables don't support variable-length rows.  I think this
limitation will be lifted in 5.0 or 5.1.
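You can see it happening on a running server by watching the tmp-table
counters before and after the query:

SHOW STATUS LIKE 'Created_tmp%';
-- If Created_tmp_disk_tables climbs in step with Created_tmp_tables,
-- the temporary tables are going to disk (i.e. to tmpdir).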

For the original poster, maybe this is one of the times that a RAM disk
could be useful for MySQL's tmpdir.


Matt





Re: Specifying an index length and the default value

2004-05-30 Thread Matt W
Hi David,

Great questions:

- Original Message -
From: David Griffiths
Sent: Friday, May 28, 2004 6:05 PM
Subject: Specifying an index length and the default value


 The length of indexes on varchar and char indexes can be specified at
 index creation.

 What is the default length of an index if no length is provided?

The default is to index the whole column length (of course that's not
possible with TEXT/BLOB columns).
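For comparison, specifying a prefix length explicitly looks like this (table
and column names are placeholders):

ALTER TABLE t ADD INDEX name_prefix (name(10));
-- only the first 10 characters of `name` go into the index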


 The High Performance MySQL book hints that the index-length used is
 specific for each entry in the indexed column (ie an index on a column
 where the average length of the data is 8 bytes would take up (8 x
 number-of-rows) bytes).

Well, maybe.  It depends... see below.


 If a column was a varchar(128), would the index use 128 bytes per entry,
 or would it use the number of bytes in each row of the indexed column.
 So if each row had exactly four characters, the index would use four
 bytes per row, but if a row was added with 8 characters, that one row
 would have an index entry that was 8 bytes in size and the rest would
 remain at 4?

I don't know that it's in the manual anywhere, but from experimenting and
stuff, I've found that, generally, indexes with a TOTAL length (if there are
multiple columns) of <= 8 bytes are fixed-length.  That is, they'll always use
the full key length per row, even if a string doesn't take up that much space.
Actually, this "up to 8 bytes, fixed length" behavior might only occur with
indexed character columns (e.g. 100% numeric indexes may stay fixed-length
regardless of their size).  I'm not sure...

You can change the behavior, however, by setting PACK_KEYS in ALTER/CREATE
TABLE.  The default, in MySQL 4+, is DEFAULT, where MySQL decides whether
to use fixed-length keys (faster) or packed variable-length keys (space
saving) depending on the index.  Setting PACK_KEYS to 0 forces all
fixed-length keys (I *think* 0 in 3.23 is like DEFAULT in 4+).  Setting
PACK_KEYS to 1 packs all *non-unique* indexes, even if they're numeric or
<= 8 bytes.
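In other words, something like this (`t` is just a placeholder):

ALTER TABLE t PACK_KEYS=1;        -- force packed non-unique keys (smaller, a bit slower)
ALTER TABLE t PACK_KEYS=0;        -- force fixed-length keys (faster)
ALTER TABLE t PACK_KEYS=DEFAULT;  -- let MySQL decide per index (the 4.x default)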

It's useful to use myisamchk to find out the properties of columns in an
index:

myisamchk -dv /path/to/db/table

In the second half of the output, you will see information about the
table's indexes.  Some things you may see in the "Type" column are:

"packed" - I think this is for character indexes that have prefix compression
(multiple index entries that start with the same characters are
compressed).  Any unused space at the end of the index (storing 10 chars in
a 32-character index) is also not stored (like you were talking about
above).

"prefix" - I think this one is for numeric indexes that have prefix
compression (in an INT index, values 0 - 255 use the same 3 bytes, so those
can be compressed).

"stripped" - This is for character indexes that have unused trailing space
stripped (again, like you were talking about above).

Now, having said that, there are still some things in the myisamchk output
that I can't figure out: like sometimes there will be "packed" and
"stripped" on the same column; sometimes not.  And other things I can't
remember now that don't seem consistent.  I just kinda figured it out on my
own since I don't know that there are official descriptions anywhere. :-(
But at least it gives you more of an idea of what's going on internally
than you can get from a SQL query. :-)


 Thanks for any input.
 David.

Hope that helps somewhat.

Matt









Re: Delayed insert record visibility

2004-05-02 Thread Matt W
Hi Peter,

- Original Message -
From: Peter Thomas
Sent: Saturday, May 01, 2004 11:24 PM
Subject: Delayed insert record visibility


 I'm trying to understand the delayed insert process to see whether I can
 use it to reduce the load on mysql, and have the following question.

 Are 'delayed inserts' which are queued but not yet actually added to the
 database by the handler visible to selects on the table?

 I'm assuming they are not. Hence, if I am using a table effectively as a
 queue, where I insert records on one end and select.. limit 1 to pull them
 off the other end, I could end up being told by the select that there is
 nothing left in the queue, when in reality there could be a dozen records
 waiting in the delayed insert handler waiting to be put into the table.

 Is my assumption correct?

Yes it is.
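One thing that may help if you go this route (a side note, not something you
asked): the server keeps a counter of rows that are still sitting in the
delayed-insert queue.

SHOW STATUS LIKE 'Not_flushed_delayed_rows';
-- non-zero means rows have been queued by INSERT DELAYED but not yet
-- written to the table, so a SELECT won't see them yet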


 Cheers
 Peter

Matt





Re: Indexing

2004-05-02 Thread Matt W
Hi John,

- Original Message -
From: John Mistler
Sent: Sunday, May 02, 2004 12:50 AM
Subject: Indexing


 I know this is an elementary question, but I am getting two sets of
 instructions from different MySQL manuals about setting an index on a
prefix
 of a column of a table.  One says to use:

 KEY indexName (colName(length))

 and the other says to use

 INDEX indexName (colName(length))

 Are both all right?  Any light shed on indexing columns would be much
 appreciated.

INDEX is a synonym for KEY... or vice versa.  So yes, they're both the
same.  In fact, you'll see that if you use INDEX, and then use SHOW
CREATE TABLE, MySQL will have it specified as KEY. :-)
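For example (made-up table and column names):

ALTER TABLE t ADD INDEX name_idx (name(20));
ALTER TABLE t ADD KEY name_idx2 (name(20));
-- SHOW CREATE TABLE t will list both of these as KEY `...` (`name`(20))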


 Thanks,

 John

Hope that helps.


Matt





Re: Another Trailing Spaces Issue

2004-05-02 Thread Matt W
Hi John,

What version do you use?  In 4.0.18, they fixed some bugs that were
introduced in 4.0.17 related to trailing spaces on indexed TEXT-family
columns: http://dev.mysql.com/doc/mysql/en/News-4.0.18.html

I see 3 "Bugs fixed" entries with "trailing spaces" in them.  If you're
not using 4.0.17, what you're seeing IS a bug and should be reported if
it hasn't already been.
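Whether or not a bug is involved, one thing to know: the = comparison on
non-binary strings ignores trailing spaces by design.  If you need them to
count, LIKE with no wildcards or a BINARY comparison treats them literally
(table/column names are placeholders):

SELECT * FROM t WHERE col LIKE 'theString ';
SELECT * FROM t WHERE BINARY col = 'theString ';
-- both of these only match the row that really ends with the space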


Matt


- Original Message -
From: John Mistler
Sent: Friday, April 30, 2004 1:39 PM
Subject: Another Trailing Spaces Issue


 The TINYTEXT format solves the problem of storing the string with spaces at
 the end.  Now, I'm having trouble SELECTing a row WHERE the TINYTEXT column =
 theStringWithTheSpacesAtTheEnd;

 If the string is "theString" + space and another row has an entry with
 "theString" (+ no space), the query returns BOTH rows.

 Is there a way to get around returning the latter entry?

 Thanks,

 John





Re: Storing a space

2004-04-30 Thread Matt W
Hi John,

I *think* VARCHAR is *supposed* to work that way, but doesn't in MySQL.
So you'll have to use TINYTEXT.  Its storage requirements are the same
as VARCHAR(255) and it behaves the same way, except for, I think, 3
things: 1) the trailing space thing, obviously; 2) it can't have a
DEFAULT value; and 3) you can't index the whole column -- but you can
INDEX (col(255)), which has the same effect. :-)
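Putting that together, the change would look something like this (column name
made up):

ALTER TABLE t
    MODIFY note TINYTEXT NOT NULL,
    ADD INDEX note_idx (note(255));
-- trailing spaces are now stored, and the 255-char prefix index covers
-- the whole column anyway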

Hope that helps.


Matt


- Original Message -
From: John Mistler
Sent: Friday, April 30, 2004 3:54 AM
Subject: Storing a space


 Is there a way to force an invisible space character to be stored at
the
 END of a string in a column (either VARCHAR or CHAR), so that when it
is
 retrieved the space at the end is not cut off?

 theString + space

 or even,

 theString + space + space + space, etc.

 Currently, I can only get the string back as theString

 Thanks,

 John





Re: fulltext index -- word needs not found

2004-04-28 Thread Matt W
Hi Joyce,

"needs" is a stopword, that's why it's not indexed or found.  You can
use your own ft_stopword_file to define the list without "needs".  The
default, built-in stopword list is defined in, I think, the
myisam/ft_static.c file of the source distribution, for reference.
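You can also check what the server is currently using for its full-text
settings (ft_stopword_file shows up here in 4.0.10 and later, I believe):

SHOW VARIABLES LIKE 'ft%';
-- ft_min_word_len, ft_max_word_len, ft_stopword_file, etc.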

Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, April 28, 2004 12:01 AM
Subject: fulltext index -- word needs not found


 Description:
 We have three different unrelated tables, each with one field that
 has a fulltext index. In each table, in the field with the fulltext
 index, there are records with the word needs in them, and in each
case no
 records are found when doing this type of query (using a fulltext
index):

   select * from testdb where match(highergeog) against('needs' in
boolean mode);

 However, records are found when doing substring searches:

   select * from testdb where highergeog like '%needs%';

snip

 Also I know someone running 4.0.15 on linux, and needs can't be
found
 in fulltext queries on his data also, even though the word exists in
the data.

 Perhaps this is all solved in 4.0.18?


 Fix:
 Don't use fulltext index--use substring search.
 But substring is slower than fulltext.






Re: Slow Query Question - Need help of Gurus.

2004-04-19 Thread Matt W
Ben,


- Original Message -
From: Ben Dinnerville
Sent: Monday, April 19, 2004 1:49 AM
Subject: RE: Slow Query Question - Need help of Gurus.



snip

Then try again:

   SELECT `Call Svc Tag ID`,
   Count(*) as counter,
   `Journal Create Date`
   FROM 31909_859552
   WHERE `Journal Create Date` between '2004-03-13' AND '2004-03-16'
   AND `Call Svc Tag ID` <> 'null'
   GROUP BY `Call Svc Tag ID`
   HAVING counter > 3
   ORDER BY counter;

 The count(*) will be causing some havoc here, as all columns in the
 underlying table(s) will have to be read in, negating any direct hits to the
 indexes and causing a lot more IO than is needed. Change it to a count on
 one of the columns in the result set, or simply a "count(1) as counter" -
 that will give you the same result without any IO.

COUNT(*) is not a problem.  It won't cause the data file to be read if
just the index can be used.  EXPLAIN will show the same plan for
COUNT(*) and COUNT(1).  :-)
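Easy to verify (table and column names are placeholders):

EXPLAIN SELECT COUNT(*) FROM t WHERE indexed_col BETWEEN 1 AND 100;
EXPLAIN SELECT COUNT(1) FROM t WHERE indexed_col BETWEEN 1 AND 100;
-- both produce the same plan, and both can show "Using index" when only
-- indexed columns are referenced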


Matt





Re: Altering MAX_DATA_LENGTH

2004-04-13 Thread Matt W
Hi Dan,

(Sending to the General list too, since this isn't a Windows-specific
thing.)

SHOW TABLE STATUS LIKE 'tbl_name'

will show you the current Avg_row_length.

But the specific values for MAX_ROWS and AVG_ROW_LENGTH when you CREATE
or ALTER the table don't matter (except for looking correct :-)) as
long as their product is greater than 4GB.  BTW, you can't have the
limit be 8GB -- when you go greater than 4GB, the Max_data_length will
be 1TB.
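So, for example, any combination whose product clears 4GB will do (the numbers
here are only illustrative):

ALTER TABLE t MAX_ROWS=100000000 AVG_ROW_LENGTH=1000;
SHOW TABLE STATUS LIKE 't';
-- Max_data_length should now read about 1TB instead of 4GB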


Hope that helps.


Matt


- Original Message -
From: Dan
Sent: Tuesday, April 13, 2004 3:58 PM
Subject: Altering MAX_DATA_LENGTH


 If I have a table that has two fields: Char(100), Blob

 How do I determine the avg_row_length value if I want to increase the
size limit to 8GB?

 Thanks
 Dan





Re: Question regarding defaults

2004-04-10 Thread Matt W
Hi Boyd,

Can I ask why it really matters? :-)  I would assume the DEFAULT value
is stored at creation time; but the end result would be the same either
way.

BTW, I hate how MySQL's SHOW CREATE TABLE quotes DEFAULT INT-family
values. :-(  It shouldn't do that.


Matt


- Original Message -
From: Boyd E. Hemphill
Sent: Friday, April 09, 2004 9:49 PM
Subject: Question regarding defaults


 Hello:

 I have need to declare a column as type integer then default is at '0'
 (that is a string with a zero in it).

 An example may be:
 Create table foo (
 foo_id  int not null default '0'
 )

 My question centers on the notion of implicit type conversion.  Is the
 server converting the type at the time the ddl (not really too big a
 deal) or is it doing the conversion at run time (i.e. each time a row
is
 inserted in the DB).

 Thanks for your time and expertise!

 Best Regards,
 Boyd E. Hemphill





Re: Fulltext index is not being built with large database

2004-04-10 Thread Matt W
Hi sascha,

How's the space on your datadir partition (or wherever this table is)?
I believe MySQL creates the temp tables during ALTER in the DB
directory, not the tmpdir.

If the space there is OK, have you checked the error log for anything
related?


Matt


- Original Message -
From: sascha mantscheff
Sent: Friday, April 09, 2004 4:21 PM
Subject: Fulltext index is not being built with large database


 I'm trying to build a fulltext index on a table with about 4 million
 entries with 2 varchar and one text field.
 The indexing starts and runs for about 1/2 to 1 hour, then the process
 stops without any error message. And leaves me with no index. I
checked
 for the size in tmp and redirected it to a partition with 50GB space
 (about 15 times as much as the database tables).
 Mysql runs on gentoo-linux 2.4 with 800MB RAM with server version
4.0.17.

 Any clues, hints or tips?
 Thank you.


 sascha mantscheff





Re: backup

2004-04-10 Thread Matt W
Hi Steve,

You might want to look at FLUSH TABLES WITH READ LOCK.  That's a query
to run from mysql, but I'm sure you can get it to work in your shell
script (you need to maintain the MySQL connection while doing the
backup).  I don't know much about that, though.  I think you just run
UNLOCK TABLES when you're finished.
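The sequence is basically:

FLUSH TABLES WITH READ LOCK;
-- ...keep this connection open and run rsync (or copy the data
--    directory) from another shell while the lock is held...
UNLOCK TABLES;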


Matt


- Original Message -
From: Steve Sills
Sent: Tuesday, April 06, 2004 8:17 PM
Subject: backup


I want to use rsync to backup my db server, how do i lock all the tables
for all the db's to read only so i cando my backup, then unlock them
again.  It needs to be done from the command line, not the mysql
program.  Anyone have any ideas?  I have looked and couldn't find the
answer i was looking before.  Its running from a shell script, from my
backup machine.  Its currently setup to shut down the server, however i
don't want to have to do this.  Thanks in advance.

Steve Sills
Platnum Computers, President
http://www.platnum.com
[EMAIL PROTECTED]





Re: Why can't I use an AS value in the WHERE clause.

2004-04-05 Thread Matt W
Hi,

This is what HAVING is for.  :-)
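I.e. Daevid's query from the message below, rewritten (keeping his < 600 test):

SELECT *, IF((UNIX_TIMESTAMP() - UNIX_TIMESTAMP(last_seen)) < 600, 1, 0) AS active
FROM wifi_table
HAVING active = 1;
-- HAVING is applied after the select list is evaluated, so it can refer
-- to the alias (a MySQL extension when there's no GROUP BY)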


Matt


- Original Message -
From: Joe Rhett
Sent: Monday, April 05, 2004 8:59 PM
Subject: Re: Why can't I use an AS value in the WHERE clause.


 On Mon, Apr 05, 2004 at 08:03:33PM -0500, Paul DuBois wrote:
  At 17:29 -0700 4/5/04, Daevid Vincent wrote:
  I'm curious when will I be able to do something like this:
  
  SELECT *, IF(( (unix_timestamp()-unix_timestamp(last_seen)) < 600),1,0) as
  active FROM wifi_table WHERE active = 1;
 
  I think you'll never be able to do it.
 
  The stuff after the SELECT is calculated based on the rows selected
  by the WHERE.  The WHERE therefore cannot be based on the stuff
after
  the SELECT.

 So why not the reverse?  Allow aliasing in the WHERE clause, that we
can
 use in the select clause.  Probably not ANSI, but MySQL isn't shy
about that.





Re: Is this possible?

2004-03-31 Thread Matt W
Hi,

GROUP_CONCAT() is in 4.1. :-)
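Using the tables from Chris' message below, the 4.1 query would look something
like this:

SELECT i.name,
       GROUP_CONCAT(a.attribute SEPARATOR '/') AS attribute
FROM test_items i, test_attributes a
WHERE a.item_rec_num = i.rec_num
GROUP BY i.name;
-- gives one row per item with the attributes run together,
-- e.g. Book | Thick/Tall/Green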


Matt


- Original Message - 
From: m.pheasant
Sent: Wednesday, March 31, 2004 5:26 PM
Subject: RE: Is this possible?


 You would need an aggregate concat() function. I think it's in 5.0
 m
 
 -Original Message-
 From: Chris Boget [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, April 01, 2004 7:08 AM
 To: MySQL
 Subject: Is this possible?
 
 I have 2 tables (looks best in fixed width font):
 
 test_items
 +---+--+
 | name  | rec_num  |
 +---+--+
 | Book  | 1|
 | Game  | 2|
 +---+--+
 
 test_attributes
 +---++
 | item_rec_num  | attribute  |
 +---++
 | 1 | Thick  |
 | 1 | Tall   |
 | 1 | Green  |
 | 2 | Narrow |
 | 2 | Yellow |
 +---++
 
 How can I query the above tables so that if I select name and
 attribute, the result set comes out looking like this:
 
 +---+---+
 | name  | attribute |
 +---+---+
 | Book  | Thick/Tall/Green  |
 | Game  | Narrow/Yellow |
 +---+---+
 
 Because every join query I've been able to think of always
 returns the result set as follows:
 
 +--+---+
 | name | attribute |
 +--+---+
 | Book | Thick |
 | Book | Tall  |
 | Book | Green |
 | Game | Narrow|
 | Game | Yellow|
 +--+---+
 
 So my question, is it even possible to do what I'd like to do?
 
 thnx,
 Chris





Re: ORDER DESC vs. ORDER ASC exec time

2004-03-29 Thread Matt W
Hi,

MySQL 4+ can use indexes for ORDER BY ... DESC (3.23 can only in some
cases) in every case that ASC can.  However, reading a packed index in
reverse order is slower.  I don't think your index is packed, though, if
it's a date-type column, unless you've specified PACK_KEYS in your
CREATE TABLE.  Better check with SHOW INDEX FROM PhoneCalls.

That leaves the DISTINCT clause as the suspect.  The EXPLAINs look the
same with ASC/DESC on an equivalent query I just tried.  So maybe
something is making DISTINCT + reverse index scan slow even if it's not
packed...


Matt


- Original Message -
From: Vadim P.
Sent: Monday, March 29, 2004 8:15 PM
Subject: ORDER DESC vs. ORDER ASC exec time


 Hi all,

 Just noticed that a simple query that returns only 14 rows is 10X
slower
 when ORDER .. DESC is used compared to ORDER .. ASC.
 The table has about 700,000 records, indexed on the field the table is
 being ordered by.

 Is this expected behavior?

 MySQL  4.0.18 running under OpenBSD 3.4 Intel/PIII 900MHz/2GB RAM

 =

 mysql SELECT distinct billingCycle FROM PhoneCalls ORDER BY
 billingCycle DESC;
 +--+
 | billingCycle |
 +--+
 | 2004-04-01   |
 | 2004-03-01   |
 | 2004-02-01   |
 | 2004-01-01   |
 | 2003-12-01   |
 | 2003-11-01   |
 | 2003-10-01   |
 | 2003-09-01   |
 | 2003-08-01   |
 | 2003-07-01   |
 | 2003-06-01   |
 | 2003-05-01   |
 | 2003-04-01   |
 | 2003-01-01   |
 +--+
 14 rows in set (14.77 sec)

 mysql SELECT distinct billingCycle FROM PhoneCalls ORDER BY
billingCycle;
 +--+
 | billingCycle |
 +--+
 | 2003-01-01   |
 | 2003-04-01   |
 | 2003-05-01   |
 | 2003-06-01   |
 | 2003-07-01   |
 | 2003-08-01   |
 | 2003-09-01   |
 | 2003-10-01   |
 | 2003-11-01   |
 | 2003-12-01   |
 | 2004-01-01   |
 | 2004-02-01   |
 | 2004-03-01   |
 | 2004-04-01   |
 +--+
 14 rows in set (1.06 sec)





Re: mysqld keeps crashing

2004-03-29 Thread Matt W
Hi Joshua,

First thing I'd try is upgrading to 4.1.1!  And/or 4.1.2 when it's
released in a couple weeks.


Matt


- Original Message -
From: Joshua Thomas
Sent: Monday, March 29, 2004 10:51 AM
Subject: mysqld keeps crashing


 Hello all,

 I'm running mysql 4.1.0-alpha-log on FreeBSD 5.1, with InnoDB.

 In the last month, I've had a number of crashes and table corruption. My
 application is a website of medium volume, and the database is not more than
 a few hundred MB. The errors I receive are almost all signal 11 errors.

 Here's some of what I have, from mysql_error_log:

 --
 InnoDB: Scanning backward trying to find previous allocated mem blocks
 Mem block at - 68, file w0sel.c, line 2457
 Freed mem block at - 324, file w0sel.c, line 2457
 Mem block at - 580, file mysql.c, line 324
 Mem block at - 836, file w0ins.c, line 82
 Mem block at - 964, file m0rec.c, line 443
 Mem block at - 1092, file m0rec.c, line 443
 Mem block at - 1348, file 0pcur.c, line 28
 Mem block at - 1604, file w0upd.c, line 287
 Mem block at - 1732, file 0pcur.c, line 162
 Mem block at - 1860, file m0rec.c, line 443
 InnoDB: Scanning forward trying to find next allocated mem blocks
 Mem block at + 188, file mysql.c, line 324
 Freed mem block at + 2236, file w0sel.c, line 2128
 Mem block at + 8380, file w0ins.c, line 82
 Mem block at + 10428, file w0ins.c, line 82
 Mem block at + 12476, file w0ins.c, line 82
 Mem block at + 16572, file mysql.c, line 324
 Mem block at + 24764, file w0ins.c, line 82
 Freed mem block at + 28860, file x0trx.c, line 78
 Mem block at + 29372, file t0mem.c, line 197
 Mem block at + 29884, file t0mem.c, line 197
 040328 12:49:29  InnoDB: Assertion failure in thread 358640640 in file
 row0mysql.c line 452
 InnoDB: We intentionally generate a memory trap.
 InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
 mysqld got signal 11;
 This could be because you hit a bug. It is also possible that this binary
 or one of the libraries it was linked against is corrupt, improperly
built,
 or misconfigured. This error can also be caused by malfunctioning
hardware.
 We will try our best to scrape up some info that will hopefully help
 diagnose
 the problem, but since we have already crashed, something is
definitely
 wrong
 and this may fail.

 key_buffer_size=33554432
 read_buffer_size=131072
 sort_buffer_size=2097144
 max_used_connections=46
 max_connections=500
 threads_connected=1
 It is possible that mysqld could use up to
 key_buffer_size + (read_buffer_size +
sort_buffer_size)*max_connections =
 1120764 K
 bytes of memory
 Hope that's ok; if not, decrease some variables in the equation.

 040328 12:49:29  mysqld restarted
 --


 Thanks in advance,
 Joshua Thomas





Re: String Concatenation Operator?

2004-03-20 Thread Matt W
Hi Jim,

Unfortunately you do have to use the CONCAT() function to make sure it
works on all MySQL installations.  The operator used in other DBs, and
which can be used in MySQL when running in ANSI mode, is ||, not +:

SELECT firstname || ' ' || lastname AS fullname
 FROM customers

But if MySQL isn't in ANSI mode (specifically, the PIPES_AS_CONCAT
part), which is typical since it's not enabled by default, || is logical
OR. :-(
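So the portable version of that query is simply:

SELECT CONCAT(firstname, ' ', lastname) AS fullname
FROM customers;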


Matt


- Original Message -
From: Jim McAtee
Sent: Thursday, March 18, 2004 4:29 PM
Subject: String Concatenation Operator?


 Does MySQL have a string contatenation operator, or must you use the
 CONCAT() function?

 SELECT firstname + ' ' + lastname AS fullname
 FROM customers





Re: Stored Procs and Commit/Rollback Transactions

2004-03-20 Thread Matt W
Hi Laphan,

(I'm sending this to the general list too, since this isn't Windows
specific and more people will see it.)

MySQL 5.0, which is an early Alpha, does now support stored procedures.
http://www.mysql.com/doc/en/Stored_Procedures.html

And MySQL also supports transactions with the InnoDB (most common) and
BDB table types.  MyISAM doesn't.
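A minimal transaction sketch on an InnoDB table (names made up):

CREATE TABLE accounts (
  id INT NOT NULL PRIMARY KEY,
  balance INT NOT NULL
) TYPE=InnoDB;

BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;    -- or ROLLBACK; to undo both updates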


Hope that helps.


Matt


- Original Message -
From: Laphan
Sent: Thursday, March 18, 2004 5:19 PM
Subject: Stored Procs and Commit/Rollback Transactions


 Hi All

 OK I admit it I'm a complete MSV (MySQL Virgin), so I might be asking
some
 very stupid questions!!!

 I'm used to SQL Server so I think I should have a basic understanding,
but
 I'm sure you'll tell me different!!

 Basically I just want to confirm that the latest release of MySQL
doesn't
 offer stored procs or commit/rollback functionality - right?

 How does a MySQL-er get round this?

 I'm wanting to develop my ASP/Cart with MySQL as the back-bone and I'm
 trying to find out what the generic do's and dont's are when using
this
 collaboration.

 Any feedback would be very much appreciated.

 Rgds

 Laphan





Re: BETWEEN

2004-03-17 Thread Matt W
Hi Michael,


- Original Message -
From: Michael Stassen
Sent: Tuesday, March 16, 2004 9:45 AM
Subject: Re: BETWEEN



 Matt W wrote:
 
  The query using 2 BETWEENs with OR is exactly how it should be.  It
will
  be fast even in MySQL 3.23.  OR is not a problem when the OR parts
  involve the same index. :-)

 Well, that makes sense, and it fits my own experience, but is it
 documented anywhere?  Or is that just supposed to be common sense?

 So far as I can see, the manual does not mention OR or BETWEEN in the
 context of using an index, except for the mention of a new feature in
 5.0 http://www.mysql.com/doc/en/OR_optimizations.html.  There have
 been so many threads about slowness of OR or BETWEEN, often with UNION
 as a suggested work-around, that in the absence of a clear description
 of how this works and when UNION helps, it is easy to get confused.

http://www.mysql.com/doc/en/MySQL_indexes.html  - Under the example
WHERE clauses that use indexes:

/* index = 1 OR index = 2 */
... WHERE index=1 OR A=10 AND index=2

And for clarity, that should really have parentheses: index=1 OR (A=10
AND index=2)

It's not a lot, but I don't think there ever was much about it in the
docs -- except maybe pointing out that indexes wouldn't be used if the
[top-level] OR branches don't reference the same indexed column (before
5.0).  I don't see that mentioned anymore however... except indirectly
in that "How MySQL Optimizes OR Clauses" section you mentioned.


Matt





Re: BETWEEN

2004-03-17 Thread Matt W
Hi Michael, Jochem,

- Original Message -
From: Michael Stassen
Sent: Tuesday, March 16, 2004 10:00 AM
Subject: Re: BETWEEN



 Jochem van Dieten wrote:

  However, I expect that would result in doing 2 rangescans and a
  merge. It might be even faster to use:
  SELECT * FROM sys
  WHERE sectorID BETWEEN 1 AND 42
 AND sectorID NOT BETWEEN 21 AND 29;
 
  That would result in 1 rangescan and a filter. But a lot depends
  on the schema and cardinality.

 I'm curious.  Could you explain this further?  If this works by grabbing
 1 to 42, then dropping 21 to 29, wouldn't this increase the likelihood
 (without knowing anything about the data) of crossing the 30% threshold
 and doing a tablescan instead of using the index, relative to asking for
 1 to 20 plus 30 to 42?

 On the other hand, if you already know that 1 to 20 and/or 30 to 42
 exceeds 30%, then one tablescan is better than 2.  Is that what you
mean?

 Also, if we ignore the UNION version and look at the two BETWEENs, are
 you saying that

WHERE sectorID BETWEEN 1 AND 20
  OR sectorID BETWEEN 30 AND 42;

 will be treated differently than

WHERE sectorID BETWEEN 1 AND 42
  AND sectorID NOT BETWEEN 21 AND 29;

 by the optimizer?

No, no, no. :-)  Well, yes, it will be treated differently, in that the
latter won't be as optimized.  You should always only use

WHERE sectorID BETWEEN 1 AND 20
OR sectorID BETWEEN 30 AND 42;

Which, assuming sectorID is an integer (not a float etc.), will be
treated exactly like an IN (...) list with 33 values (or 33 ORs; but
BETWEEN is the simplest and parses fastest).  Using the larger BETWEEN
range with NOT BETWEEN will cause who-knows-how-many rows with sectorID
between 21 and 29 to be read, only to be discarded!

Of course, it's possible that MySQL could remove the range values that
won't be found because they conflict, but it only does that type of
thing with constants, not ranges; at least not yet.

Compare the EXPLAINs for these 2 WHEREs:

... WHERE col=123 AND col <> 123;
... WHERE col IN (123) and col NOT IN (123);
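For example (placeholders again), the constant conflict is caught outright:

EXPLAIN SELECT * FROM sys WHERE sectorID = 123 AND sectorID <> 123;
-- EXPLAIN reports an impossible WHERE, whereas the conflicting
-- BETWEEN / NOT BETWEEN ranges above are not folded away like that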


Matt





Re: BETWEEN

2004-03-13 Thread Matt W
Hi Michael,

- Original Message -
From: Michael Stassen
Sent: Saturday, March 13, 2004 10:48 AM
Subject: Re: BETWEEN



 Keith wrote:

  g'day,
 
  i'm looking for a way to do two BETWEEN ranges. Currently I have
  sys.sectorID BETWEEN 1 AND 20 but I want it so that I can search
between
  1 and 20 and also between 30 and 42 but all my efforts net an error
and
  the manual doesn't go into a lot of detail. If there's a faster way than
  BETWEEN then I'll take it.
 
  Cheers,
  Keith

 SELECT * FROM sys
 WHERE sectorID BETWEEN 1 AND 20
 OR sectorID BETWEEN 30 AND 42;

 If that's slow (the optimizer doesn't like ORs) and you are using at
 least mysql 4.0.0, you can change this to

The query using 2 BETWEENs with OR is exactly how it should be.  It will
be fast even in MySQL 3.23.  OR is not a problem when the OR parts
involve the same index. :-)


 SELECT * FROM sys WHERE sectorID BETWEEN 1 AND 20
 UNION
 SELECT * FROM sys WHERE sectorID BETWEEN 30 AND 42;

 Michael





Re: query question using REGEXP

2004-03-13 Thread Matt W
Hi Anthony,

You don't need REGEXP for this; LIKE will do.  Try something like this:

... WHERE CONCAT(',', Column, ',') LIKE '%,2,%'

to search for rows that contain 2.
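Another option, since the column is a comma-separated list, is FIND_IN_SET(),
which exists in 3.23/4.0 as well and does the same job without the CONCAT
trick (table name is a placeholder):

SELECT * FROM t WHERE FIND_IN_SET('2', Column) > 0;
-- returns the 1-based position of '2' in the list, or 0 if it isn't there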

Hope that helps.


Matt

- Original Message -
From: award
Sent: Saturday, March 13, 2004 2:16 PM
Subject: query question using REGEXP


 Hi,

 I'm storing numbers in a database column, separated by commas if there is
 more than one number, i.e.:

 Record #   Column
 1          12
 2          1,2,5,6
 3          1,12,24,45
 4          2,6

 Now if I want to search for records that have the number 1, it will find
 records 1,2,3, but what I want returned is only records 2,3.
 For example, if I'm looking for a record that has the number 2,
 it will print records 1,2,3,4, but what I want is only records 2,4.

 Any help is appreciated,
 anthony





Re: query question using REGEXP

2004-03-13 Thread Matt W
Hi Anthony,

Please reply to the list also (and not just me) so others can follow the
discussion. :-)

Well, if the commas are sometimes there, sometimes not (*with multiple
numbers*), that's a problem.  However, if you just mean that the commas
aren't there when it's just one number, then the query I gave will work
fine for that because it adds a comma to the beginning and end of the
column (with CONCAT()) before doing the LIKE comparison.
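You can convince yourself with a couple of one-liners:

SELECT CONCAT(',', '2', ',') LIKE '%,2,%';    -- 1: a lone 2 still matches
SELECT CONCAT(',', '12', ',') LIKE '%,2,%';   -- 0: no false hit on 12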


Matt


- Original Message -
From: award
Sent: Saturday, March 13, 2004 5:10 PM
Subject: RE: query question using REGEXP


 Hi thanks for the help

 But the problem is that the column can take various forms,
 just as:
 1
 1,2
 12
 1,22,4

 sometimes I have the comma and sometimes I do not have them.

 So if I do  WHERE column LIKE '%2%'

 would it work??
 thank you

 anthony

 -Original Message-
 From: Matt W [mailto:[EMAIL PROTECTED]
 Sent: 13 March 2004 22:47
 To: award; [EMAIL PROTECTED]
 Subject: Re: query question using REGEXP

 Hi Anthony,

 You don't need REGEXP for this; LIKE will do.  Try something like
this:

 ... WHERE CONCAT(',', Column, ',') LIKE '%,2,%'

 to search for rows that contain 2.

 Hope that helps.


 Matt

 - Original Message -
 From: award
 Sent: Saturday, March 13, 2004 2:16 PM
 Subject: query question using REGEXP


  Hi,
 
  I'm storing in a database  numbers separated by comma if more than
one
  number i.e
  Record #   Column
  112
  21,2,5,6
  31,12,24,45
  4 2,6
 
  Now if I want to search for records that has number 1 it will find
  records 1,2,3
  but what I want to return only record 2,3
  an example is that if I'm looking for a record that has a number 2
  it will print record 1,2,3,4 but what I want is only record 2,4
 
  Any help is appreciated,
  anthony





Re: 3 000 000 requests for last 14 days...

2004-03-10 Thread Matt W
Hi Patrick,

No, you can't get per database statistics in MySQL. :-(


Matt


- Original Message -
From: Patrick Gelin
Sent: Wednesday, March 10, 2004 1:45 AM
Subject: 3 000 000 requests for last 14 days...


 Hi,

 I was very astonished to see with phpMyAdmin that my MySQL database has
 received more than 3 million connections over the last 14 days. But I don't
 know which database is responsible for this. Is there a way to know the
 connection rate per database?

 Thanks.





Re: mysqladmin processlist and pid

2004-03-09 Thread Matt W
Hi Tom,

You can't.  MySQL's own thread ids are sequential.  The OS pids are
random.  There's no connection between them.  Besides, mysqld is really
only running in a single real process, it's just that LinuxThreads
shows each thread as a process.


Matt


- Original Message -
From: Tom Roos
Sent: Tuesday, March 09, 2004 6:05 AM
Subject: mysqladmin processlist and pid


hi listers

how would i determine the association between the id from mysqladmin
processlist to a pid?

what i'm after is that i notice some of the mysql threads have a high
cpu utilization. i see this using top (on a linux box). i would like to
know which user/program is responsible for tuning purposes. i can use
mysqladmin processlist and it gives me a list (including an id) of what
processes are running but how do i tie this in with unix' pid?

tks
tom





Re: Join Definitions

2004-03-06 Thread Matt W
Hi Rhino,

- Original Message -
From: Benoit St-Jean
Sent: Saturday, March 06, 2004 9:00 AM
Subject: Re: Join Definitions


 Rhino wrote:

  Can anyone point me to documentation describing the concepts behind
  MySQL's different join types?
[snip]

 http://www.mysql.com/doc/en/JOIN.html

Yeah, and MySQL doesn't yet support FULL OUTER JOIN. :-(  It's listed on
the TODO under Features Planned for the Mid-Term Future:
http://www.mysql.com/doc/en/TODO_sometime.html  And it's listed for
version 5.1 on the Development Roadmap:
http://www.mysql.com/doc/en/Roadmap.html
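Until then, the usual workaround is to UNION a LEFT and a RIGHT join (sketch
only; t1/t2/id are placeholders, and UNION needs 4.0+):

SELECT t1.*, t2.* FROM t1 LEFT JOIN t2 ON t1.id = t2.id
UNION
SELECT t1.*, t2.* FROM t1 RIGHT JOIN t2 ON t1.id = t2.id;
-- UNION removes the duplicated matched rows, leaving the full outer result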


Matt






Re: Corrupt full text index

2004-03-04 Thread Matt W
Hi Dave,

- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, March 04, 2004 7:43 AM
Subject: Corrupt full text index


 Description:
 When updating a table with a fulltext index, the fulltext index
 becomes corrupted. The error "ERROR 1034 at line 76: Incorrect key file
 for table: 'test'; try to repair it" is displayed. The error is not
 produced when there is no fulltext index.
 How-To-Repeat:
 Download sql script from http://www.kirkpatrick.me.uk/bug.sql.tar.gz
 I have replicated this on 4.1.1 on both Linux and Win32. The problem
does not happen on 4.0

 Fix:
 If you drop the fulltext index before the update and then recreate it,
 no error is produced.

Like Sergei said, there's no problem in 4.1.2.  There are a few
full-text corruption bugs in 4.1.1:

http://bugs.mysql.com/1977
http://bugs.mysql.com/2190
http://bugs.mysql.com/2417

And with multi-byte character sets:

http://bugs.mysql.com/2033
http://bugs.mysql.com/2065

Hopefully any problems are ironed out in 4.1.2! :-)


Matt






Re: HOWTO add Primary Key to Existing Table

2004-02-26 Thread Matt W
Hi Paul,

ALTER TABLE table_name ADD id_column_name INT UNSIGNED NOT NULL
AUTO_INCREMENT PRIMARY KEY FIRST;

The FIRST word at the end just makes it the first column in the table if
that's what you want.


Hope that helps.


Matt


- Original Message -
From: Paul Maine
Sent: Thursday, February 26, 2004 7:08 PM
Subject: HOWTO add Primary Key to Existing Table


 How can I add an auto-incrementing primary key to an existing table?

 MySQL version 4.0

 Thank You





Re: fulltext search always returns no results

2004-02-25 Thread Matt W
Hi Don,

No, full-text search was added in MySQL 3.23.23, I believe (4.0.1 just
added boolean searches along with more speed overall).  It doesn't need
to be compiled in or anything, it's there by default.  Unless someone
compiled it and actually *removed* the full-text code or something. :-)

Also, key_len of 0 in EXPLAIN is normal.

It sounded like you are getting some kind of error in your first
message?  If so, what is it?  Are you SURE that the EXACT word you're
searching for is present in the table (for example, with a space, etc.
on either side of it)?


Matt


- Original Message -
From: Don Dikunetsis
Sent: Thursday, February 26, 2004 12:21 AM
Subject: Re: fulltext search always returns no results



 Hi, thanks for your reply, but it looks like:

   As of Version 3.23.23, MySQL has support for full-text indexing and
 searching.

 --according to:

 http://www.mysql.com/doc/en/Fulltext_Search.html

 However, I would be unsurprised (though disappointed) to find that the
 answer is some variant of "this won't work with your configuration" -- my
 setup certainly doesn't seem to be responding to the normal setup
 instructions for fulltext search.


 From: [EMAIL PROTECTED]
 Subject: Re: fulltext search always returns no results
 Date: Thu, 26 Feb 2004 10:12:42 +1100 (EST)
 
 Excuse if i'm not correct but this may be your problemo ?
 
 MySQL 3.23.55 running on my webhost's Linux box
 phpMyAdmin 2.1.0
 
 
 I didnt think fulltext was in 3.23 wasnt this a Mysql 4 feature ??
 
 
   Summary: When I run a fulltext search, it always returns no
results. I
   have  added a fulltext index to the column being searched. Also, I
am
   searching  for a term that is in the table, but not in more than
50% of
   the rows.
  
   I notice that when I add EXPLAIN to my search, the key_len of my
   fulltext  index is 0. Does that mean my fulltext index is empty,
   explaining why my  searches never return any results?
  
   Either way, I'd be grateful for any suggestions on how to get
fulltext
   search to work!





Re: run query second time

2004-02-23 Thread Matt W
Hi Mike,


- Original Message -
From: Mike Mapsnac
Sent: Monday, February 23, 2004 5:49 PM
Subject: run query second time


 Hello

 Today I ran a large query. It took more than 1 minute to start printing the
 results. The output was about 5 rows. However, when I ran the query a
 second time, it took a couple of seconds before printing the results. Why is
 that? Because the data is still in RAM?

 Thanks

Yep, that's usually why. :-)


Matt





Re: HEAP tables vs MYISAM on ramdisk

2004-02-23 Thread Matt W
Hi Mark,

- Original Message -
From: Mark Maunder
Sent: Monday, February 23, 2004 4:17 PM
Subject: Re: HEAP tables vs MYISAM on ramdisk


 411 is packed with features I'm dying to have on my production server,
 but I had it on my dev box, and I got some table corruption which,
 admittedly, I was too lazy to try to reproduce. So I've downgraded to
 production 4 again. I have a heavily updated fulltext index which may
be
 the root of the evil.

Yeah, there are a few bugs with full-text indexes in 4.1.1!  Hopefully
4.1.2 (which should be out soon I think) will be ready to rock in
production. :-)

Here's the 4.1.1 full-text bugs I know of:

http://bugs.mysql.com/1977
http://bugs.mysql.com/2190
http://bugs.mysql.com/2417

And with multi-byte character sets:

http://bugs.mysql.com/2033
http://bugs.mysql.com/2065


Matt





Re: Encryption Issue

2004-02-22 Thread Matt W
Hi,


- Original Message -
From: [EMAIL PROTECTED]
Sent: Sunday, February 22, 2004 3:18 PM
Subject: Re: Encryption Issue


 According to the documentation there is a query log which logs established
 connections and executed queries; also there is the binary log which
 stores all statements that change something (used for replication).

 So... if i do something like

 update myTable set field1=AES_ENCRYPT('information', 'key')

 Any one who looks into the log file will be able to see the query, the
 information and the key, and all my information would be
compromised... am
 i wrong?

You're absolutely right. :-)  Query contents can be seen in logs.
That's why I do any encryption in the client code and only use the
finished result in queries.  Not sure how possible it is if you want to
use AES encryption, though.


Matt






Re: Indexed searching with OR ?

2004-02-20 Thread Matt W
Chris,

The good news is that MySQL 5.0 can finally use multiple indexes per
table.  I just noticed this page in the manual a few days ago:
http://www.mysql.com/doc/en/OR_optimizations.html

:-)


Matt


- Original Message -
From: Chris Nolan
Sent: Monday, February 16, 2004 7:13 AM
Subject: Re: Indexed searching with OR ?


 Hi!

 MySQL's optimizer has a slight problem. OR queries cause it to get
very
 confused.

 Try the following to get the best performance:

 Rewrite SELECT FROM table WHERE (condition1) OR (condition2);

 As:

 (SELECT FROM table WHERE condition1) UNION (SELECT FROM table WHERE
 condition2);

 Hope this helps!

 Regards,

 Chris

 On Tue, 2004-02-17 at 00:05, Andreas Pardeike wrote:
  Hi List,
 
  Can someone explain the results below? It seems that MySQL has a
hard
  time choosing keys for 'or' searches. The example here is very
simple
  but reflects the more complex cases where lots of rows or joins are
  used perfectly:
 
snip
 
  What's the point of indices if I cannot combine two indexed fields
with
  OR ?
 
  Any help appreciated,
  Andreas Pardeike






Re: Transferring comma-delimited list imto mysql table

2004-02-20 Thread Matt W
Hi Eve,

That error is because the LOCAL part of LOAD DATA is disabled.  See
here: http://www.mysql.com/doc/en/LOAD_DATA_LOCAL.html

Since your file is probably on the same system as the MySQL server, it
should work if you remove the LOCAL word.
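I.e. the same statement minus LOCAL (without LOCAL, the file is read by the
server itself, so the path must be readable by mysqld and your account needs
the FILE privilege):

LOAD DATA INFILE 'C:\\PHP\\uploadtemp\\php7.tmp' REPLACE INTO TABLE `email`
FIELDS TERMINATED BY ';';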


Hope that helps.


Matt


- Original Message -
From: Eve Atley
Sent: Friday, February 20, 2004 12:58 PM
Subject: Transferring comma-delimited list imto mysql table



 I have a large email database that has been translated into a CSV,
simply
 like this:
 [EMAIL PROTECTED];[EMAIL PROTECTED]; (etc.)

 I need to get this into SOME format. I have tried Access, which said
it was
 too large - so I am trying my mysql. I created a table with fields ID,
and
 email. I just want to put each email into the email field. I get this
error:

 LOAD DATA LOCAL INFILE 'C:\\PHP\\uploadtemp\\php7.tmp' REPLACE INTO
TABLE
 `email`
 FIELDS TERMINATED BY ';'

 MySQL said:


 #1148 - The used command is not allowed with this MySQL version

 Can anyone help me? Excel choked, Access choked. My intent is to pull
all
 these from MySQL and into Outlook. If MySQL isn't the app to use,
perhaps
 another solution can be recommended.

 Thanks so much,
 Eve





Improving seek/access times -- does RAID help?

2004-02-20 Thread Matt W
Hi all,

Can anyone tell me whether or not some kind of RAID will improve the seek/access times 
during lots of random reads from, say, MyISAM data files?  I *do not care* about 
improved [sequential] transfer rates; I want the fastest possible random access.

I'm thinking that RAID won't give an improvement in this case, because the disks can't 
know where to read from until MySQL issues the seek calls. :-(  About the only thing I 
can think of that may help is if you're using striping, there won't be as much data on 
each disk so the head would need shorter seeks.

If RAID doesn't help the situation, any other ideas if the sub-6ms access times of the 
fastest 15K SCSI drives isn't fast enough? :-)


Thanks,

Matt


Re: Improving seek/access times -- does RAID help?

2004-02-20 Thread Matt W
Hi Ted,

Heh. :-)  This could be many GBs.  There's no problem reading rows that
are in RAM (cached by the OS) -- can read over 10,000/second.  If
there's enough RAM, the OS will take care of it (you could cat table.MYD
to /dev/null). No ramdisk necessary. :-)

BTW, this is for MySQL's full-text search.  It works pretty well (fast)
as far as doing the lookups and searching in the index.  That's not a
concern at all.  The problem is that it *has to* read the data file for
each matching row (and possibly non-matching rows, depending on the
search). :-(  Searches need to be reasonably fast on millions of rows,
while possibly reading 10s of thousands of data rows.  It takes a lot
more time when those rows aren't cached.

The only thing I've thought of so far is symlinking the data file on a
separate drive, but I'm not sure how much that will actually help.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Friday, February 20, 2004 7:24 PM
Subject: RE: Improving seek/access times -- does RAID help?


 Run everything off a ramdisk ;-)

 Ted Gifford

 -Original Message-
 From: Matt W
 Sent: Friday, February 20, 2004 5:21 PM
 Subject: Improving seek/access times -- does RAID help?

 Hi all,

 Can anyone tell me whether or not some kind of RAID will improve the
 seek/access times during lots of random reads from, say, MyISAM data
files?
 I *do not care* about improved [sequential] transfer rates; I want the
 fastest possible random access.

 I'm thinking that RAID won't give an improvement in this case, because
the
 disks can't know where to read from until MySQL issues the seek calls.
:-(
 About the only thing I can think of that may help is if you're using
 striping, there won't be as much data on each disk so the head would
need
 shorter seeks.

 If RAID doesn't help the situation, any other ideas if the sub-6ms
access
 times of the fastest 15K SCSI drives isn't fast enough? :-)


 Thanks,

 Matt





Re: Massive memory utiliazation

2004-02-19 Thread Matt W
Hi James,

Your key_buffer is using tons of memory at 1.5 GB!  table_cache is
probably too big, too.  With key_buffer at 1500M plus roughly 768K of
per-connection buffers (sort + record + record_rnd) times max_connections=800,
the theoretical worst case is over 2 GB, which is right where you're seeing
the instability.


Matt


- Original Message -
From: James Kelty
Sent: Saturday, February 14, 2004 3:03 AM
Subject: Massive memory utiliazation


 Hello,



 We have currently tuned MySQL for a high rate of traffic. But, now we
are
 seeing issues with memory usage. It reaches about 2GB and the server
becomed
 wildly unstable. Below is our my.cnf file. Can anyone point out any
glarring
 errors? We are running this on a Dell 2650 with Red Had Advanced
Server v2.1
 with Kernel 2.4.9-e.25smp and Hyper threading.



 Thanks a lot!





 [client]

 port= 3306



 # The MySQL server

 [mysqld]

 datadir = /var/lib/mysql

 port= 3306

 skip-locking

 set-variable= max_connections=800

 set-variable= key_buffer=1500M

 set-variable= max_allowed_packet=1M

 set-variable= table_cache=16384

 set-variable= sort_buffer=256k

 set-variable= record_buffer=256k

 set-variable= record_rnd_buffer=256k

 set-variable= thread_cache=64

 set-variable= thread_concurrency=32

 set-variable= myisam_sort_buffer_size=64M

 set-variable= interactive_timeout=300

 set-variable= open_files_limit=6

 set-variable= wait_timeout=300

 set-variable= long_query_time=5

 set-variable= tmp_table_size=16M



 server-id   = 0



 # Adding bin log for PIT recovery

 log-bin



 #set-variable   = bdb_cache_size=768M

 #set-variable   = bdb_max_lock=10



 log-slow-queries=/var/log/slowqueries.log



 [safe_mysqld]

 open-files-limit=6



 [mysqldump]

 quick

 set-variable= max_allowed_packet=16M



 [mysql]

 no-auto-rehash



 [isamchk]

 set-variable= key_buffer=512M

 set-variable= sort_buffer=512M

 set-variable= read_buffer=2M

 set-variable= write_buffer=2M



 [myisamchk]

 set-variable= key_buffer=512M

 set-variable= sort_buffer=512M

 set-variable= read_buffer=2M

 set-variable= write_buffer=2M



 [mysqlhotcopy]

 interactive-timeout














Re: mysqldump via tcp/ip memory problem

2004-02-19 Thread Matt W
Hi,

Yeah, by default mysqldump buffers the result of the SELECT * FROM
table query in memory before writing the SQL statements (using
mysql_store_result()). If you use the --opt option (or at least -q
or --quick), it dumps the data as it gets it (using mysql_use_result()).


Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, February 19, 2004 1:23 PM
Subject: mysqldump via tcp/ip memory problem



 I've dumped alot of databases before using mysqldump, and am trying to
 dump a larger database than normal, about 2.2GB in size..  The largest
 table just over 12 million rows...  It's dumping over a network to a
 tape backup server..

 I start the job off:

 /usr/local/bin/mysqldump -c -F --host=prv-master1 \
 --password=blahblah --port=3306 --user=blahblah --verbose mdb1 
 /tapesource/MDB1/mdb1.db

 It runs for bit, dumping some smaller tables, then gets the the
largest
 table (12mil row) .. runs for a bit and reports Killed

 Dmesg shows:

 __alloc_pages: 0-order allocation failed (gfp=0x1d2/0)
 VM: killing process mysqldump

 Which leads to a memory problem, or lack of...  The box does have
approx.
 500MB of free ram...

 Is it just eating it up buffering the network response from the
server?

 Mysqldump on client is Ver 8.22 Distrib 3.23.57
 Mysqld on server is 3.23.55-log

 Thoughts?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: key_reads key_read_requests

2004-02-07 Thread Matt W
Hi,

You're probably right.  All the status variables seem to start over
after hitting 4,294,967,295. :-(  I don't get why they're only using 32
bit integers for the variables that they know can go WAY over that
amount. :-/


Matt


- Original Message -
From: Mikhail Entaltsev
Sent: Friday, January 30, 2004 8:39 AM
Subject: Re: key_reads  key_read_requests


 Hi,

 I am not sure, but may be you had Key_read_requests overflow.

 Best regards,
 Mikhail.


 - Original Message -
 From: John David Duncan
 Sent: Thursday, January 29, 2004 11:40 PM
 Subject: key_reads  key_read_requests


  Hi,
 
  key_reads is usually a small fraction of key_read_requests,
  but in the case of the server below, key_reads is actually
  GREATER than key_read_requests.  Can anyone explain what would
  cause that to happen?
 
  - JD
 
  mysql show status like 'key%';
  ++---+
  | Variable_name  | Value |
  ++---+
  | Key_blocks_used| 997521|
  | Key_read_requests  | 42804277  |
  | Key_reads  | 70150022  |
  | Key_write_requests | 236384514 |
  | Key_writes | 130961162 |
  ++---+
  5 rows in set (0.00 sec)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query the data of a fulltext index directly from index?

2004-02-04 Thread Matt W
Hi Sergei!

Great news. Thanks very much! :-)


Matt


- Original Message -
From: Sergei Golubchik
Sent: Tuesday, February 03, 2004 1:54 PM
Subject: Re: query the data of a fulltext index directly from index?


 Hi!

 On Feb 02, Matt W wrote:
  Sergei,
 
  Any chance of getting a ft_dump Windows binary in the distribution?
:-)

 Chances are good :)
 It was added to rpms and binary unix distributions 5 min ago,
 and it should be added to windows distro too.

 Note - the new name is myisam_ftdump.

 Regards,
 Sergei


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Server Behavior.

2004-02-04 Thread Matt W
Hi,

- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, February 04, 2004 12:12 PM
Subject: Server Behavior.


 Seeking opinions on this.

 Server is Dual Pentium Xeon 2.8, 6 GB RAM, running RedHat Linux 7.2,
 MySQL 4.0.17, all installed and tested with no problems.

 I had a small database for testing purposes, and then dropped it,
 leaving the default installation databases, mysql and test.

 If I run top from shell I get the following:


 19683 root   9   0   956  956   772 S 0.0  0.0   0:00
mysqld_safe
 19716 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld
 19718 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld
 19719 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld
 19720 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld
 19721 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld
 19722 mysql  9   0 12960  12M  1612 S 0.0  0.2   0:00 mysqld

 I checked the connections to the server and there are absolutely none.
 No one else is on a shell session except me.
 This is going on for the last 48 hrs.
 Is this normal?
 Do I need to do anything about this?

Those are just the minimum mysqld threads running (they appear as
processes with LinuxThreads). You'd probably see the same thing right
after starting the server. :-)

BTW, the size of the whole process is 12M in your case; NOT 12M for each
thread.

Hope that helps.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mySQL autogenerate, update table

2004-02-04 Thread Matt W
Hi David,

ALTER TABLE table ADD ListingID MEDIUMINT UNSIGNED NOT NULL
AUTO_INCREMENT PRIMARY KEY FIRST;

That will add the column at the beginning of the table (first column).
Remove FIRST from the end if you don't want that (it will then go at
the end) or replace it with: AFTER some_other_column

Also sending this to the General list since it isn't a Windows specific
question. :-)


Matt


- Original Message -
From: tooptoosh
Sent: Wednesday, February 04, 2004 1:54 PM
Subject: mySQL autogenerate, update table


Hi all,

I have a mySQL table with 75,000 records in it, but the table has no
primary key (autogenerate) column in it.  I want to add this field
column ListingID to the table.  How do I do that?

Cheers,
David


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query the data of a fulltext index directly from index?

2004-02-04 Thread Matt W
Sergei,

Any chance of getting a ft_dump Windows binary in the distribution? :-)


Regards,

Matt


- Original Message -
From: Sergei Golubchik
Sent: Monday, February 02, 2004 11:33 AM
Subject: Re: query the data of a fulltext index directly from index?


 Hi!

 On Feb 02, Alexander Bauer wrote:
  Hello,
 
  is there any way to get the fulltext index contents directly? I'm
looking
  for a way to list all indexed words from a column to provide a
filter
  selection.
 
  How can I access the index data without walking through all table
rows, get
  the column and tokenize and collect words?

 Use the ft_dump utility program that comes from MySQL source
 distribution.

 Regards,
 Sergei


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: A fun one

2004-01-24 Thread Matt W
Hey Roger,


- Original Message -
From: Roger Baklund
Sent: Saturday, January 24, 2004 7:09 PM
Subject: Re: A fun one

 You shouldn't use text columns for ip/name... ip addresses fits in an
 unsigned int

Yeah, I want to use an INT UNSIGNED column for IPs, which is great for
the space savings... except people have brought up the issue of IPv6
addresses (128-bit; 32 hex chars IIRC).  I've never personally seen an
IPv6 address yet, but I guess I will in the future.  And I'm concerned
about an INT not handling them. :-(  Of course, the people that mention
IPv6 are using VARCHAR(15) columns, which won't hold the addresses
either. :-D

So what do you think about the situation?  If we want to stick with
INT-based columns and handle IPv6, I guess we could use 2 BIGINT
columns. :-/  I just don't know when we're going to *need to* handle
IPv6...
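
For IPv4 at least, INET_ATON()/INET_NTOA() (around since 3.23) make the
INT UNSIGNED approach painless.  Rough sketch -- the table/column names
here are just made up:

CREATE TABLE hits (ip INT UNSIGNED NOT NULL);
INSERT INTO hits (ip) VALUES (INET_ATON('192.168.0.1'));
SELECT INET_NTOA(ip) FROM hits;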


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow query times

2004-01-20 Thread Matt W
Hi Balazs,

The likely answer is the one that nobody mentioned: it's an optimizer
bug in 4.0.16.  If you look at the EXPLAIN output for the second query,
it's probably using a ref type on the sex column, instead of the more
restrictive id index.  If so, that's the bug.  From
http://www.mysql.com/doc/en/News-4.0.17.html

Fixed optimizer bug, introduced in 4.0.16, when REF access plan was
preferred to more efficient RANGE on another column.

If that's the problem, upgrading will fix it. :-)
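
And if you can't upgrade right away, you could probably work around it
by hinting the index yourself.  Something like this (untested, but USE
INDEX has been around since 3.23):

SELECT COUNT(*) FROM sex USE INDEX (id_index)
WHERE id>459000 AND id<=460000 AND sex='M';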


Matt


- Original Message -
From: Balazs Rauznitz
Sent: Monday, January 19, 2004 9:39 AM
Subject: Slow query times



 While doing some benchmarks the other day, I saw surprisingly slow
 query results on columns that were indexed.

 Here's the table definition:

 create table sex (
 id integer,
 sex char(1));
 create index id_index on sex (id);
 create index sex_index on sex (sex);

 Then I loaded a million rows, id was from 1 to 1_000_000, sex was
 randomly 'F' or 'M'.

 When searching on 'id' everything is snappy:

 mysql> select count(*) from sex where id>459000 and id<=460000;
 +--+
 | count(*) |
 +--+
 | 1000 |
 +--+
 1 row in set (0.00 sec)

 However when the 'sex' column is involved:

 mysql> select count(*) from sex where id>459000 and id<=460000 and
sex = 'M';
 +--+
 | count(*) |
 +--+
 |  504 |
 +--+
 1 row in set (5.09 sec)

 Any way to make this faster ?

 I'm using MySQL 4.0.16 on Linux 2.4.x with a 1GHz AMD CPU and 640M
RAM.

 Insert jokes about sex making MySQL slow here ;-)

 Thanks,

 Balazs


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow query times

2004-01-20 Thread Matt W

- Original Message - 
From: Peter J Milanese
Sent: Tuesday, January 20, 2004 1:37 PM
Subject: RE: Slow query times



 You may also want to try :
 
 count(1)
 
 instead of
 
 count(*)
 
 
 count(*) pulls back the data while count(1) does not.

Completely untrue.  Neither form pulls back any row data -- MySQL treats
COUNT(1) and COUNT(*) the same, and COUNT(*) with no WHERE clause is even
specially optimized on MyISAM.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How does key buffer work ?

2004-01-16 Thread Matt W
Hi John,

- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, January 15, 2004 2:28 AM
Subject: Re: How does key buffer work ?


 Matt,

 One last question and then I promise to drop the topic ... what would
be
 the best way to force a complete load of an index into the key buffer
?

It's no problem.  :-)   Sorry for the late reply.

Off hand, to force an index to be loaded I would say run queries that
scan each index.  e.g. SELECT index_col FROM table;

But remember the indexes will be unloaded from the key_buffer if the
table is closed -- after things like FLUSH, ALTER, OPTIMIZE, and maybe
more.

BTW, MySQL 4.1.1 added a new key cache system that looks like it has
more tunable stuff. From http://www.mysql.com/doc/en/News-4.1.x.html

New key cache for MyISAM tables with many tunable parameters. You can
have multiple key caches, preload index into caches for batches...

But I don't see anything documented about it yet. :-(
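
From what I can tell, the syntax is going to be something like this -- I
haven't tried it, and it could easily change before 4.1 goes stable, so
treat it as a guess (hot_cache and my_table are made-up names):

SET GLOBAL hot_cache.key_buffer_size = 128*1024*1024;
CACHE INDEX my_table IN hot_cache;
LOAD INDEX INTO CACHE my_table;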


 Thanks very much for your time.

 John

You're welcome.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How does key buffer work ?

2004-01-14 Thread Matt W
Hi John,

I'll give my comments. :-)


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, January 14, 2004 2:04 AM
Subject: How does key buffer work ?


 I've been trying to optimise the operation of a MySQL (4.0.13)
 installation on a Windows 2000 based web server.

First, I'd upgrade MySQL to the latest 4.0.x for bug fixes, etc.  The
biggest of which may be possible index corruption (I think) in versions
before 4.0.14 (or is it .15?).


 After going through the
 docs and browsing the net for some time, it seems that after ensuring
that
 your database design is sound and your queries correctly structured
with
 the right indexes present then further optimisation can be realised by
 tinkering with the MySQL server's startup parameters, principally the
key
 buffer size. It seems that the key buffer is solely used as an index
cache
 and that extending this, up to a point, potentially will significantly
 improve performance.

Yup, table/index design and optimized queries are very important for
performance.

I'm not of the opinion that a huge key_buffer is as important as a lot
of people make it out to be. :-)  Sure, it matters, but I doubt that just
making it bigger will significantly improve performance in most cases.
:-)  Making it too large may actually hurt overall performance.

You see, MySQL doesn't cache any row data from the .MYD files.  The OS
will use any free RAM to cache file data such as that (to save costly
disk seeks/reads).  (BTW, in Win2k, the Performance tab of Task Manager,
where it says System Cache, I *think* is a good indicator of how much
file data is cached.)  And if you make your key_buffer too big, this
will be [more] memory that a program (MySQL) has allocated, from the
OS's view.  That's that much less free RAM that could be used to cache
the data file.

Compared to randomly reading data file rows (especially larger, variable
length ones) after index lookups, it's much faster to read the index
file, even from disk (if key_buffer is too small).  That's because the
index file is smaller and everything is in order, to be read more
sequentially, thus saving random disk seeks.  Besides, even if the
key_buffer is too small, the OS will also cache the index file data,
so it may not actually have to be read from disk.


 However, after playing with this value on my system
 for a while, I have a number of questions about how it works...
 1) I assume that the key buffer caches the contents of the myi files
(I'm
 only talking MyISAM tables here) but is this a direct copy of the
contents?

Yes.


 i.e. if you extend the key buffer so that it is bigger than the sum of
 the size of the myi files on your system, then will this be sufficient
to
 be able to cache all the indexes all the time ?

Yes it will.  Making it as big as your .MYI files is the *maximum* you
should use.  BUT, it's probably not the best. :-)  It's more like, How
much of those .MYI files are accessed *regularly*?  Probably not all of
them.

And remember about leaving enough memory to cache row data.

After the server's been running awhile, I think if Key_reads divided by
Key_read_requests (from SHOW STATUS) is less than 0.01 like it says in
the manual, you should be pretty good.
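
To check, something like this shows just the two counters, and you do
the division yourself:

SHOW STATUS LIKE 'Key_read%';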


 2) Does the whole index get loaded into the cache on the first time
it's
 accessed or do only 'parts' of the index get loaded as they are used
for
 various queries ?

Only parts. :-)  Blocks, actually.  A block is usually 1024 bytes;
though if you have an index more than like 255 bytes long, the blocks
will be 2048 bytes.

They are loaded on demand when they're not in the key_buffer (Key_reads
status var).  The status variable Key_blocks_used is like a high water
mark.  It's the most blocks that were ever in the key_buffer (not
necessarily currently for some reason *shrug*) since the server was
started.  If the blocks are the usual 1K size, then 16384
Key_blocks_used, for example, would mean 16MB of indexes were in the
key_buffer at some point -- and may still be, of course.

Again, after MySQL's been running awhile (doing typical queries), if
Key_blocks_used divided by 1024 is LESS THAN your key_buffer_size (in
MB), your key_buffer is probably too big -- as it's never getting
filled.


 3) If an index is updated for any reason, is the whole cache copy of
the
 index then invalidated or is the cache copy updated at the same time
as
 the disk file?

I think the block in the key_buffer is updated first, then on disk.
Don't hold me to this, though. :-)  If anything was invalidated, it
would just be the block(s) that were updated.


 One idea I was toying with was to 'delay' all inserts to the sensitive
 tables (an update is done every five minutes for my particular system)
so
 that the tables are updated pretty much in one single go and then
ensure
 the key buffer is refreshed so that all select queries on these tables
for
 the next five minute period will use only cached indexes. Does this
sound
 plausible and or sensible ?

To me, not really for the sake of 

Re: How does key buffer work ?

2004-01-14 Thread Matt W
Hi John,


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, January 14, 2004 6:37 AM
Subject: Re: How does key buffer work ?


 Matt,

 Many thanks for the answer. It has helped enormously.

 First, I have been getting the odd index corruption that has proved to
be
 very annoying. I had checked the changes document for releases since
 4.0.13 and there didn't seem to be any mention of an index problem but
now
 I'll upgrade asap. Thanks for that.

It was in 4.0.15: http://www.mysql.com/doc/en/News-4.0.15.html

Fixed rare bug in MyISAM introduced in 4.0.3 where the index file
header was not updated directly after an UPDATE of split dynamic rows.
The symptom was that the table had a corrupted delete-link if mysqld was
shut down or the table was checked directly after the update.


 Next, I had extended my key buffer too much. When I calculated the
high
 water mark for key buffer usage, I found that indeed it was
considerably
 less than the space I had allocated. I will modify accordingly.
However, I
 was just thinking about what you said about this only being a high
water
 mark ... I can't see any way, apart from dropping an index or table,
that
 information is going to be purged from the cache especially as you say
 that MySQL updates the contents of the cache when an index is
modified, so
 won't that mean that during normal operation the key_blocks_used
should
 indicate exactly how many blocks are currently in use ? Some of the
 contents may of not been used for a while but still they won't be
purged
 unless the maximum extent of the cache is reached ?

When a table is closed, its blocks are released from the key_buffer.  So
after running FLUSH TABLES, for example, Key_blocks_used should be 0 if
it was current.

You can see that the blocks are removed from the buffer by running a
query that uses an index. Look at Key_reads. Run it again and Key_reads
shouldn't change. Use FLUSH TABLES and run it again.  Key_reads will be
increased since the blocks were reloaded.
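
In other words, something like this (made-up table/column, as long as
the query actually uses an index):

SELECT id FROM some_table WHERE id BETWEEN 1 AND 1000;
SHOW STATUS LIKE 'Key_reads';
-- run the SELECT again: Key_reads shouldn't move
FLUSH TABLES;
-- run the SELECT again: Key_reads goes up as the blocks get reloaded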


 With regards to the caching on myd data, is the fact that MySQL
doesn't
 cache myd data a design choice ? It makes perfect sense for MySQL
 installations on a dedicated machine as its a fair assumption that
there's
 no other nasty apps around filling your system cache with other data
and
 the OS is probably in the best position to cache the disks. However,
in my
 case (and I would guess in the proportion of the cases), the database
 shares the machine and in my case this is with Apache which depending
on
 the usage on the website, is likely to flush the cache reasonably
quickly.
 It would be easier to get more consistent query execution times if
MySQL
 maintained it's own caches (for myd data as well) so that more control
 could be kept on cache contents. In a previous life I worked with
Sybase
 and one of the advantages (only ?) is that the administrator has
control
 on the caching of index and data for each individual table. Very handy
if
 it was benficial to ensure the contents of specific tables were
available
 in a cache.

InnoDB may cache full row data too with its buffer_pool.  But I don't
know much about that. :-)

But when you have something like Apache running, you want it to be able
to use the memory it needs. Isn't it better to not have .MYD data cached
than to have other processes swapping?


 Lastly, I'd love to use the query cache but I do have to update the
 indexes every 5 minutes (the system revolves around retrieving SNMP
data
 from a bunch of routers every 5 mins then dumping it into the db ... a
 user then requests a report periodically) so the QCache is invalidated
 every 5 mins anyway.

If you can get [repeated] queries to use the query cache for 5 minutes,
I'd say that's a pretty long time. :-)


 So, a) do  you (or anyone else) know of any plans to
 extend the caching functionality

No.


 and b) are there any other parameters
 that may be worth a tweak ? One thing I had considered was to extend a
 composite index to incorporate the data that is required in the
 problematic query then the query should be able to extract the data
 required without having to search the myd file. I understand that this
 will increase the index size and slow the inserts but otherwise is
this a
 legitamate solution i.e. there must be some other downside surely ?

No, that would probably work pretty well if you don't mind making the
index a bit bigger (there's a limit of 16 cols/index or 512 (?) bytes,
and no [full] TEXT/BLOB columns).  I've done this on one of my tables.
If EXPLAIN on your SELECTs says Using index, then it's getting the
data without going to the .MYD file (I guess you already know that).
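
Just to illustrate -- completely made-up table/column names, but the
idea is to tack the SELECTed column onto the end of the index your
WHERE already uses:

ALTER TABLE samples ADD INDEX cover_idx (router_id, sample_time, value);
EXPLAIN SELECT value FROM samples
WHERE router_id = 42 AND sample_time > '2004-01-01';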

Making an index bigger (by adding columns) is not as bad as adding
another separate index.  With a bigger index, the only slowdown on
inserts is writing the extra bytes -- not much.  But for each separate
index, MySQL needs to find where in the index to put the new row.  So I
don't think you'll notice any slowdown or 

What full-text improvements are next?

2004-01-12 Thread Matt W
Hi,

Sorry, I guess this is yet another question for Sergei! :-)

Since the full-text search TODO in the manual is a little vague (and
hasn't been updated much) and it was kind of a surprise when
multi-byte character-set support was added to 4.1 a couple months ago,
I'm wondering what surprises will be coming next? ;-)  I call MB
char-set support a surprise because it was just there all of a sudden
without hearing anything right beforehand.  Though maybe it's an
exception since it's needed for compatibility with 4.1's per table, etc.
char-sets.

Anyway, I wish the manual's TODO could be updated with a more detailed
road map.  But maybe no more details are known. :-(  I know it's hard
to give dates, but in relative time, or such and such order would be
nice.

Basically I'm really wondering what, if any more, improvements will be
added to 4.1 -- say, before it's declared Production?

The biggest little thing I've been hoping for is more control over the
minimum word length!  I'd like to be able to set it on a per table basis
(in CREATE/ALTER), if not per index.  It seems like this would be simple
to add with 4.1's extended .frm files (if they're even needed for this).
But what do I know? :-)  And if not, it'd be nice for the default
min_word_len to be lowered to 3 -- not sure what Sergei said about this.
Besides, hasn't the default max_word_len been lowered to 80-something?

Per table/index stopword file is probably not so simple.  And I'm sure
stemming is a ways off.

Just very curious about ANY coming improvements, besides what I
mentioned.  Any more information that you can give us about anything
(and/or put in the manual)?  If not, that's OK. :-/

Keep up the good work!


Regards,

Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: 4.1.1 FTS 2-level?

2004-01-12 Thread Matt W
Hi,

- Original Message -
From: Sergei Golubchik
To: Steven Roussey
Sent: Wednesday, December 10, 2003 7:44 AM
Subject: Re: 4.1.1 FTS 2-level?


 Hi!

 On Dec 09, Steven Roussey wrote:
  Does Mysql 4.1.1 have the two level index system integrated into it
for full
  text searches?

 What do you mean ?
 Is it used to optimize searches ? No.

 Still there could be some speedup because, e.g, MyISAM will use binary
 search in the key pages instead of linear one, etc.

 Regards,
 Sergei

You're right.  Wow!

I assume you were comparing it to 4.0.x, not 4.1.0.  Once the data rows
are cached, the index search in boolean mode seems to be about *7 times*
faster than 4.0. :-)  2 test searches on the same data went from 48s to
7s and 35s to 5s.  Nice!  Of course, if the data rows have to be read from
disk, the full-text code can do nothing to improve those reads. :*(

So are these faster index searches only the result of binary vs linear
search?  (I don't know the exact difference, but binary sounds good.
;-))  The actual full-text code itself is NOT any more optimized than
4.0?

Are the 2-level indexes solely for FTS to use, or can MyISAM use them in
general for any indexes?  Just wondering, since you said Is it used to
optimize searches? No.  Which sounds like it's being used for
*storage*, just not the word count statistics for optimization, etc.
And my index file was reduced from 1.74G in 4.0 to 1.59G, so I thought
maybe this is where some space was saved.

BTW, would (re)building the index be slower with 4.1 for any reason?  I
thought maybe it was, but I'd have to try again to be sure.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Which one is better: CHAR or VARCHAR?

2004-01-12 Thread Matt W
Hi Hassan,

In a case like that where you know the data will always be a certain
length, CHAR is definitely better.  VARCHAR will actually waste space (1
byte) when the data is always a certain length.

And yes, if using the CHAR allows your table to have fixed-length rows,
there will be a speed improvement.

Of course, if you have other variable length columns in your table,
MySQL will be stupid and change your CHAR to VARCHAR, thinking it's
helping you. :-(
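
For example (made-up tables; check the result with SHOW CREATE TABLE or
DESCRIBE):

CREATE TABLE t1 (code CHAR(10) NOT NULL);
-- code stays CHAR(10), so the rows are fixed length

CREATE TABLE t2 (code CHAR(10) NOT NULL, note VARCHAR(50));
-- the VARCHAR makes the rows dynamic, so code is silently changed
-- to VARCHAR(10)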


Matt


- Original Message -
From: Hassan Shaikh
Sent: Sunday, January 11, 2004 7:04 AM
Subject: Which one is better: CHAR or VARCHAR?


Hi,

I've a column of type VARCHAR(10) where I know the data would be 10-char
in length - always. Apart from the fact that VARCHAR saves space as
compared to CHAR, is there any performance benefit? If yes, is it
significant enough?

Thanks.


Hassan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Automatic conversion from `char` TO `varchar`

2004-01-12 Thread Matt W
Hi,

- Original Message -
From: Michael Stassen
Sent: Sunday, January 11, 2004 5:10 PM
Subject: Re: Automatic conversion from `char` TO `varchar`



 Martijn Tonies wrote:

  Hi,
 
 The manual http://www.mysql.com/doc/en/Open_bugs.html says
 
 
 The following problems are known and will be fixed in due time:
 [...]
 All string columns, except BLOB and TEXT columns, automatically
have
 all trailing spaces removed when retrieved. For CHAR types this is
okay,
 and may be regarded as a feature according to SQL-92. The bug is
that in
 MySQL Server, VARCHAR columns are treated the same way.
 
 That seems the reverse of what you are saying.
 
 
  Indeed. Nevertheless, I'm right at this one :-)

 I defer to your expertise on SQL standards.  Regardless of the
standard,
 however, mysql does not pad CHARs with spaces.  Thus, CHARs and
VARCHARs
 are identical from the client's point of view, so silently changing
 CHARs to VARCHARs for tables with variable length rows does not affect
 the client, but does save space and time.  As I understand it, a
string
 is a string in mysql.  CHAR and VARCHAR are just two string storage
 methods.

Not sure about any of this other stuff, but AFAIK the problem in MySQL
is that it strips trailing spaces from VARCHARs when storing, which
shouldn't happen.  This will be fixed sometime.  I thought CHARs were
handled correctly -- e.g. it's OK that they're not padded on retrieval.


 So long as that's true, mysql is doing you a favor when it
 makes this change.

No, it's NOT!  If *I* specify CHAR, it should be CHAR, period.  When
storing md5 values in a variable row length table, MySQL's favor of
changing CHAR(32) to VARCHAR(32) wastes 1 byte per row. :-(

It's also done a favor causing me to screw up my table design because
I wanted to use a CHAR column with a TEXT column for speed.  That
favor causes slower row access as soon as that changed-to-VARCHAR
column splits the row by being updated to a longer length. :-(  So... I
had to change the TEXT column to CHAR(255) and hope it's long enough.
Though in most cases, it's too long and makes the table bigger than it
would be variable-length with a never-updated TEXT column.  Bigger,
because MySQL did me a favor.

It sucks. :-(


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query efficiency

2004-01-12 Thread Matt W
Hi Dan,

Sending this to general list too since it's not a Windows specific
question.

Just run the UPDATE with all the columns you want to update. :-)  MySQL
won't update the ones that haven't changed (which may have been said).
Don't waste time trying to determine whether you should include a column
in the UPDATE or not.

The only time it could be an issue is if you're setting a column to a
*REALLY* long value.  It will take longer to send the query to the
server (especially over a network) and MySQL will take a little more
time to parse it.


Hope that helps.


Matt


- Original Message -
From: Daniel Canham
Sent: Monday, January 12, 2004 2:48 AM
Subject: Re: query efficiency


 Thats not really what I meant.  I have 5 (or whatever) columns in a
table.
 I need to update values changed in those columns during processing.
But I
 have no way of knowing what column values need updating.  Is it worth
my
 while to write a dirty field checker (value has changed) or doesn't
the
 mysql engine care on UPDATE whether it is UPDATEing 1 or all 10
columns.
 The way you have it, you are doing all UPDATEs in one statement, or
each
 column on a seperate UPDATE.  In that case obviously it is faster to
run one
 update.  My question was is it more efficient to execute an single
statement
 UPDATE that has just the changed columns, or a single statement UPDATE
that
 contains all columns every time because the mysql engine treats them
the
 same.

 -Dan




 -Original Message-
 From: robert_rowe [mailto:[EMAIL PROTECTED]
 Sent: Saturday, January 10, 2004 1:17 PM
 To: [EMAIL PROTECTED]
 Subject: [mysql] Re: query efficiency



 By the way, MySQL won't actually do the update is the existing value
is the
 same as what is already in the fields. Try something like this:

 pseudocode
 print now

 for i=1 to 1
 update table1 set col1=i+4, col2=i+3,col3=i+2,col4=i+1,col5=i;
 next i

 print now

 for i=1 to 1
 update table1 set col1=i+4
 update table1 set col2=i+3
 update table1 set col3=i+2
 update table1 set col4=i+1
 update table1 set col5=i
 next i

 /pseudocode

 I suspect that the first way will be faster.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: JOIN types

2004-01-12 Thread Matt W
Hi Keith,

I would assume it's because LEFT JOIN forced a change in the join order
(in EXPLAIN).  Does using STRAIGHT_JOIN give the same result?
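
e.g. something like this (the STRAIGHT_JOIN option makes the tables get
joined in the order they're listed), just to compare the EXPLAIN output:

EXPLAIN SELECT STRAIGHT_JOIN a.field
FROM table1 a, table2 b
WHERE a.field2 = b.field2
ORDER BY b.field3 DESC;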

So your query was this?

SELECT a.field
FROM table1 a
LEFT JOIN table2 b USING (field2)
ORDER BY b.field3 DESC

If table1 is read first (which it should be), then I don't see how
there's no temp table/filesort because the column(s) you're ordering by
don't come from the first used table.

Oh yeah, and is there an index on field2 in both tables?  It'd be better
to see the EXPLAIN output for the different queries. :-)

It could also be an optimizer bug.  What version of MySQL are you using?


Matt


- Original Message -
From: Keith Bussey
Sent: Monday, January 12, 2004 4:49 PM
Subject: JOIN types


 Hey all,

 I've read the pages in the MySQL manual that explain the types of
JOINs many
 times, but still think I'm missing something.

 I usually was always using INNER JOIN in all my join queries, and in a
few
 cases LEFT JOIN (in cases I wanted the rows in one table that were not
in the
 other one).

 I've started to discover, with the help of EXPLAIN, that the join type
can
 seriously affect performance. For example, I had a query such as this:

 SELECT a.field
 FROM table1 a
 INNER JOIN table2 b
 USING (field2)
 ORDER BY b.field3 DESC

 It was using both filesort and a temporary table (in EXPLAIN) and took
about
 4.50 seconds to run. I switched the order of the tables in the join,
putting
 table2 first, and nothing changed in my EXPLAIN.

 I then changed the join to LEFT JOIN, and suddenly I had no more
filesort or
 temporary table, and the query took 0.05 seconds !

 I compared that I got the same rows in my result both ways, and indeed
I did.

 I'm positive I can improve many of my queries this way, but feel I
need to
 understand the JOINs better. If someone can point me to a guide on
this
 someplace, other than the mysql manual (as Ive already read it few
times but
 it didnt explain their differences and uses very well), I'd greatly
appreciate
 it =)

 Or, if you simply want to give the explanation yourself, that's fine
too.

 Thanks,


 --
 Keith Bussey


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Loading the .myd into memory

2004-01-12 Thread Matt W
Hi Trevor,

MySQL itself doesn't cache any of the data (.MYD) file.  The operating
system uses any free RAM to cache that file data.  This is why I don't
think it's that important to have such a huge key_buffer, because some
of that memory would probably be better used for caching the data file.
Index data can be read from disk a lot faster than rows can -- and the
OS will also cache the index data even when MySQL's key_buffer does.


Hope that helps.


Matt


- Original Message -
From: trevor%tribenetwork.com
Sent: Monday, January 12, 2004 5:20 PM
Subject: Loading the .myd into memory


 Mysqlians,



 Greetings.  Besides the query cache is their a buffer
which
 holds the data portion of MyISAM tables.  All the buffers seem to hold
key
 information or query processing information.



 Thanks,



 Trevor


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: FULLTEXT across two tables

2004-01-03 Thread Matt W
Hi Ladd,

How about SELECT DISTINCT?


Hope that helps.


Matt


- Original Message -
From: Ladd J. Epp
Sent: Saturday, January 03, 2004 11:39 AM
Subject: FULLTEXT across two tables


 Hello,

 I would like to do a FULLTEXT search across two tables.  I run an
artist
 website, so I need to search across the user list and the users'
associated
 art pieces list. I've come up with this query (fulltext indexes for
these
 precise values have been created on both tables):

 SELECT * from users INNER JOIN art ON (users.user_id = art.user_id)
 WHERE MATCH (nickname, name_first, name_last, name_middle, city,
state,
 zip_area, country, bio_short, bio_desc) AGAINST ('kansas')
 OR MATCH (title, medium, commentary) AGAINST ('kansas');

 This query is very close to what I need, except that it returns
redundant
 rows.  For example, if users.state='kansas' it returns every record
from art
 where users.user_id=art.user_id.  How do I return records that have
'kansas'
 in either users, or art, or both, only once? I think a UNION might
help me
 here, but my provider uses MySQL v.3.22 so that is not an option...

 I apologize if I am not being clear about something ... If you need
more
 detail I would be happy to provide it.

 Thanks,
 --
 Ladd J. Epp


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Converting MyISAM to InnoDB type.

2004-01-03 Thread Matt W
Hi Fred,

InnoDB does not support AUTO_INCREMENT on secondary columns of a
multi-column index.

  `id_registro` int(11) NOT NULL auto_increment,
  PRIMARY KEY  (`id_formula`,`id_registro`)

There: id_registro is the second column of the index.


Matt


- Original Message -
From: Fred
Sent: Saturday, January 03, 2004 5:38 PM
Subject: Re: Converting MyISAM to InnoDB type.




 
  If you change the Type=MyISAM to Type=InnoDB and execute the
  create query, what message do you get?
 

 Hi you all .. again 

 This is the error message I get if I try to create
 this table in InnoDB type.

 CREATE TABLE `test' (
   `id_formula` int(11) NOT NULL default '0',
   `tp_posologia` int(11) default NULL,
   `qt_dias` int(11) default NULL,
   `id_registro` int(11) NOT NULL auto_increment,
   PRIMARY KEY  (`id_formula`,`id_registro`)
 ) TYPE=InnoDB DEFAULT CHARSET=latin1;

 [EMAIL PROTECTED]:3306] ERROR 1075: Incorrect table definition;
 There can only be one auto column and it must be defined as a key

 If I drop the AUTOINCREMENT, I can change the table type, but
 in this case, I will have to change the application source-code in
 Delphi.

 Thanks,
 Fred


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Default DATE field values

2004-01-03 Thread Matt W
Hi Chris,

Nope, DEFAULT values have to be constants; no functions or anything.
:-/

What are you trying to do?  And what's wrong with using TIMESTAMP since
you want a default of NOW()?  If it's because you don't want it updated
when you UPDATE the row, you can just set it to its current value, if
you weren't aware of that.
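
i.e. something like this (made-up names) leaves the TIMESTAMP alone when
you UPDATE the row:

UPDATE mytable SET some_col = 'whatever', created = created WHERE id = 1;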


Matt


- Original Message -
From: Chris Nolan
Sent: Saturday, January 03, 2004 10:34 AM
Subject: Default DATE field values


 Hi all,

 Upon reading the funky manual, I have discovered the following things:

 1. TIMESTAMP fields can be set so that their default value is NOW().
 2. DATE and TIMESTAMP fields are related.

 Given the two above facts, is there a way to set DATE columns so the
 default value is NOW()? My playing around seems to have not produced
any
 fruitful results.

 Regards,

 Chris


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Converting MyISAM to InnoDB type.

2004-01-03 Thread Matt W
Hi Fred,

Also, you may be able to swap the order of those columns in the index.
I think that would work, but don't know if it would cause other
problems -- like for the way your app uses the index, etc.
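
That is, something like this should be accepted, since the
auto_increment column is then the first column of the key:

CREATE TABLE test (
  id_formula int(11) NOT NULL default '0',
  tp_posologia int(11) default NULL,
  qt_dias int(11) default NULL,
  id_registro int(11) NOT NULL auto_increment,
  PRIMARY KEY (id_registro, id_formula)
) TYPE=InnoDB;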


Matt


- Original Message -
From: Fred
Sent: Saturday, January 03, 2004 6:11 PM
Subject: Re: Converting MyISAM to InnoDB type.


 Hi Matt and thank you very much 

 Now I get the point  I'll have to decide if I maintain these
 two tables in MyISAM or if I drop the AUTO_INCREMENT and
 change the source-code in Delphi.

 It's clear now that the problem is the lack of support of InnoDB.

 Thanks for your help and regards,
 Fred.

  Hi Fred,
 
  InnoDB does not support AUTO_INCREMENT on secondary columns of a
  multi-column index.
 
`id_registro` int(11) NOT NULL auto_increment,
PRIMARY KEY  (`id_formula`,`id_registro`)
 
  There: id_registro is the second column of the index.
 
 
  Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Alter table and setup Default value

2004-01-03 Thread Matt W
Hi Mike,

It's just part of modifying the column to change the DEFAULT value.
e.g. you might use this (makes it NOT NULL with a DEFAULT value of
'used'):

ALTER TABLE table MODIFY type ENUM('new','used') NOT NULL DEFAULT 'used';


Hope that helps.


Matt


- Original Message -
From: Mike Mapsnac
Sent: Saturday, January 03, 2004 5:37 PM
Subject: Alter table and setup Default value


 I use MYSQL 4. I can  modify or alter a column with no problems. But
how I
 can alter table and setup Default value. The column below has default
value
 NULL, but I want to setup default value used. How that's can be done?

 type  | enum('new',used') | YES  | | NULL||


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Subtracting date fields

2004-01-01 Thread Matt W
Dan,

DATEDIFF() only works in MySQL 4.1.1+.

RTFM!  ;-)
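
If you're on anything older, TO_DAYS() has been around since 3.23 (at
least) and does the same job:

SELECT id, TO_DAYS(firstdate) - TO_DAYS(postdate) AS diff FROM calendar;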


Matt


- Original Message - 
From: [EMAIL PROTECTED]
Sent: Wednesday, December 31, 2003 2:10 PM
Subject: RE: Subtracting date fields


 Kenneth,
 
 try
 
 SELECT id, DATEDIFF(firstdate, postdate) AS diff FROM
 calendar
 
 RTFM!
 
 hope that helps, dan
 
 -Original Message-
 From: Kenneth Letendre
 Sent: Saturday, January 31, 2004 1:51 PM
 Subject: Subtracting date fields
 
 
 Hello,
 
I'm trying to get the difference (in days) between
 dates stored in two 
 date fields.
My query:
 
 SELECT id,(firstdate- postdate) AS diff FROM calendar
 
This works fine if the two dates are in the same
 month, but not 
 otherwise.  MySQL appears to be treating the two dates
 as base-10 integers 
 rather than dates.  E.g.:
 
 2004-01-07 (20,040,107) - 2003-12-31 (20,031,231) =
 8876
 
How do I get MySQL to treat these date fields as
 date fields in this case?
 
 
 Thanks,
 
 Kenneth


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Subtracting date fields

2004-01-01 Thread Matt W
Hi Bob,

I did.  So it should be assumed he's using 4.1.1 or 5.0?  That's pretty
stupid.  What % of people do you think are using those versions?  1%?
0.1%?  0.01%?

I think most people want the most compatible code/syntax/functions if
they accomplish the same thing.  If you want things to be portable to
different systems, I suggest using things compatible with 3.23.  And
when you have to use something that's not, handle it in your code.


Matt


- Original Message -
From: Bob Terrell [EMAIL PROTECTED]
Sent: Thursday, January 01, 2004 5:35 PM
Subject: Re: Subtracting date fields


 on 1/1/04 5:42 PM, Matt W wrote:

  Dan,
 
  DATEDIFF() only works in MySQL 4.1.1+.
 
  RTFM!  ;-)

 Did you notice how the original poster didn't specify a version
number?

 RTFOP,YSSOS.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: special characters as field values

2004-01-01 Thread Matt W
Hi Chris,

You're fine with mysql_real_escape_string().  % or _ only need to be
escaped if you're using them in LIKE and want them to match
literally.  And -- never needs to be escaped inside a quoted string.
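
e.g. something like this (made-up table/column) only matches values that
literally start with 50%:

SELECT * FROM products WHERE description LIKE '50\%%';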

BTW, if you're using PHP and the stupid magic_quotes_gpc is on, you
don't want to escape stuff yourself again, or it will add too many
slashes!  So you need to check for that somehow.

Hope that helps.


Matt


- Original Message -
From: Chris W
Sent: Wednesday, December 31, 2003 4:44 PM
Subject: special characters as field values


 I am storing data from an html form and there could be any character
in
 there, including a % and an _.  I'm using mysql_real_escape_string in
 php to escape special characters, but it says it doesn't escape the %
 and _.  I was also reading something about escaping the -- .  If I
want
 all of these characters and character sequences to be allowable field
 values do I need to do more than what mysql_real_escape_string will do
 for me?  I am enclosing all values in ' like the following...

 $query = INSERT INTO user Values (NULL, '$UserID', '$Password',
 '$Email', '$FName', '$LName', ;
 $query .= '$StreetAddress', '$AddressLine2', '$City', '$State',
'$ZIP');
 $result = mysql_query($query);


 Chris W


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Benefits of MAX_ROWS and AVG_ROW_LENGTH

2003-12-23 Thread Matt W
Hi Jeremy,

- Original Message -
From: Jeremy Zawodny
Sent: Monday, December 22, 2003 2:20 PM
Subject: Re: Benefits of MAX_ROWS and AVG_ROW_LENGTH


 On Fri, Dec 19, 2003 at 06:40:17PM -0600, Matt W wrote:
  Hi Mark,
 
  I'll tell you what I know. :-)
 
  First, AVG_ROW_LENGTH is only needed for dynamic row-length tables
(it's
  ignored with fixed-length rows) -- more specifically, those with
  TEXT/BLOB columns.

 And VARCHAR/VARBINARY.

Yes, VARCHAR does make the rows variable length, but that doesn't mean
you *have to* (or even really should) specify AVG_ROW_LENGTH along with
MAX_ROWS.  Sure, include it if your VARCHARs aren't always going to be
filled to the max length.

Otherwise, MySQL will just assume that the rows will be as long as the
sum of the max length of all the VARCHAR (and other) columns.  When
VARCHAR(n) is specified, and n is the max length that will be stored in
the column, this should be a pretty accurate assumption, no?

But if you have TEXT/BLOB columns, their max length (well, plain TEXT
anyway; not TINY/MEDIUM/LONG) is equivalent to 255 VARCHAR(255) columns!
And again, MySQL will assume you're going to use it all.  That's fine if
you're actually storing 64K in each column, but that's hardly ever the
case.  Hence why I said, more specifically, those with TEXT/BLOB
columns.


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Benefits of MAX_ROWS and AVG_ROW_LENGTH

2003-12-23 Thread Matt W
Hi Mark,

Maybe you intentionally only replied to me (instead of the list too),
but I'm sending this to the list also so others can follow the
discussion. :-)

I never know how much I have to explain things for a person's knowledge
level, but it sounds like you understand what's going on very well. :-)

More below...

- Original Message -
From: Mark Hawkes
Sent: Saturday, December 20, 2003 3:50 PM
Subject: Re: Benefits of MAX_ROWS and AVG_ROW_LENGTH


 Hi Matt,

 Thanks very much for your thoughts and advice. I was going to ignore
using
 MAX_ROWS, MIN_ROWS and AVG_ROW_LENGTH because the tables I'll be
working
 with are small. That doesn't stop me wanting to tune them though, so
I've
 included them anyway. I figure it's better to give MySQL a clue -
better
 than specifying no size params whatsoever. Maybe future versions will
use
 them intelligently (?).

I doubt it.  It's already doing all it can.  You have to tell it the
rest.  4 byte pointers are the default since most tables don't have data
files  4GB.  It can't go smaller unless you give MySQL that
information -- otherwise people would be getting Table is full errors.
:-)


 The reason I asked is that I'm accustomed to tuning options for
filesystems
 (inode density, cluster size etc..), and hash tables when programming
(e.g.
 load factor and ensuring the number of buckets isn't a power of 2,
blah). I
 just wanted to make sure that something as important as a database
table
 would also be sized or tuned correctly.

Yeah, I'm always trying to optimize things as much as possible. :-D  And
I never see anyone use MAX_ROWS/AVG_ROW_LENGTH -- unless it's to get
around the 4GB limit.  He he.


 First, AVG_ROW_LENGTH is only needed for dynamic row-length tables
(it's
 ignored with fixed-length rows) -- more specifically, those with
 TEXT/BLOB columns.

 Using VARCHAR also makes a table dynamic doesn't it? (Unless it's
below 4
 chars.)

Yes, but see my reply to Jeremy Zawodny about that.  MySQL should still
be able to fairly accurately estimate the average row length if the only
dynamic columns are VARCHAR.


Otherwise, if MAX_ROWS is used, MySQL will assume
 that each TEXT/BLOB column will be filled completely, [probably]
making
 it think the data file will be  4GB.  I discovered this when I was
ONLY
 specifying MAX_ROWS.  It made the index file larger because 5 byte
data
 pointers were used instead of 4.

 I see, so if we had a table like this...

CREATE TABLE Foo ( essay TEXT NOT NULL ) MAX_ROWS = 25;

 then, without specifying AVG_ROW_LENGTH at all, MySQL would assume the
 table could get as large as 1.6MB (25 * 65536) and thus use  a 3 byte
 datafile pointer.

Right.  :-)  I said it would probably use 5 byte pointers without
AVG_ROW_LENGTH, unless MAX_ROWS is small enough.  If there's just 1
regular TEXT/BLOB column:

MAX_ROWS < ~65,532 --> 4 byte pointer
MAX_ROWS < ~256 --> 3 byte pointer

With 1 TEXT column like your example, the max row length would probably
be more like 65,540: 65,535 for TEXT data + 2 bytes to record that
length + ~3 bytes (I think) for the row header.  And that's not taking
into account possible split/fragmented rows which will take extra space
for the pointer to where the row continues. ;-)

 But if we did this...

CREATE TABLE Foo ( essay TEXT NOT NULL ) MAX_ROWS = 25
AVG_ROW_LENGTH = 2000;

 then 2 byte datafile pointers would be used because 50,000 bytes can
be
 addressed by a 16-bit pointer. Okay, I get it.

Yep, just verified that. :-)


 No, as far as I know, MySQL doesn't use MAX_ROWS or AVG_ROW_LENGTH to
do
 any optimizations -- only to decide the pointer size and therefore
the
 max size of the table.  BTW, I don't know what the use of MIN_ROWS
is,
 do you??

 I agree - looks like MAX_ROWS and AVG_ROW_LENGTH determine the
datafile
 pointer size and that's all. The only mention of MIN_ROWS in the
manual says

  Minimum number of rows you plan to store in the table

 Boy, that's really informative! I have no idea what it does but have
used
 it anyway.

Exactly what I was thinking!  I honestly can't think of any way that it
would be useful, though.  Don't know how the minimum number or rows
would change anything...


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How boolean full-text search finds matches?

2003-12-19 Thread Matt W
Hi Sergei!

Thanks for replying again.  I hope I'm not wasting too much of your time
with my questions! :-)

More below...

- Original Message -
From: Sergei Golubchik
Sent: Thursday, December 18, 2003 7:17 AM
Subject: Re: How boolean full-text search finds matches?


 Hi!

 On Dec 17, Matt W wrote:
  Hi,
 
  Just have a couple more full-text search inquiries here. :-)
 
  I'm not exactly clear on how matching rows are found when searching
for
  2 or more required words: '+word1 +word2'.  I understand that it
can't
  currently know which word occurs less, so that it can be searched
  first -- this optimization will come with 4.1's 2-level indexes. :-)
 
  I just want to know, when it finds a match for whichever word is
tried
  first, how does it check if the other required word(s) are present
in
  the same row?  Say that word1 and word2 are each present in 100,000
  rows.
 
  1) Surely it doesn't check the 100,000 entries for word2 for EACH
word1
  match to see if they're in the same row, does it?

 No it does not :)

Good!


  2) It *seems* the best way would be to do a lookup for (word2 +
rowid
  for word1) and see if there's a match.  Is this what's done?  I'm
not
  sure it's possible though with the way the index is structured...

 it is possible, but it is only sensible if word1 is much more rare
than
 word1. This could be done with 2-level indexes :)

I assume that should say if word2 is much more rare than word1.  I
guess that's because it would need too many [random?] index lookups
otherwise?


  3) Or, and I'm thinking *maybe* this is how it's done from what I've
  heard, does it get all the matches for word1, then for word2, and
then
  intersect them to find ones which are present in the same row?  If
so,
  how will the 2-level index optimization change things? Will it do
#2?

 Yes to both questions, without the word then.
 First, one match is found for each word. Then read_next is called for
 the word with the lowest rowid, etc.

Not completely clear on this. :-)  I get that one match is found for
each word... then whichever word occured first in the table (lowest
rowid) is... what? :-/

Oh, wait, I get what you mean! ;-)  You're saying that read_next is
called for the word with the lowest rowid until you see if the rowid
matches the rowid from the other word(s)?  Then if the rowid gets
greater than what you're looking for, you know that there's no matching
row?  (Since you say that each word is sorted by rowid -- see below
about that, though.)

Then I'm not sure what happens to find the next matching row.  Find a
match again for each word starting after the last found rowid?  I'm not
familiar enough with the MySQL code (or C) to understand what's going on
in ft_boolean_search.c. :-(


 The advantage is that matches are
 found and returned earlier - a user don't have to wait for the index
 scan to complete. Also LIMIT, if used, cuts off more work, that is
LIMIT
 is more effective.

Right, I know that LIMIT helps -- as long as there's no ORDER BY, etc.
that needs to find all rows to sort. :-)

That brings me to the main reason for asking these questions: for
searching on my forum system. There could be 5-10+ million posts, which
would put upwards of 250+ million entries in the full-text index.  I'll
probably use a LIMIT of 10-20,000 (w/o ORDER BY) to prevent searches
from taking too long (and returning way too many matches!).

However, with that many posts, I think it's possible that a search could
match 100k+ posts.  Then the search would have to be narrowed down to a
particular forum or date range.  If I add AND forumid=123 etc. to the
WHERE clause, that will make the search hit the data file for
who-knows-how-many posts that aren't in the desired forum -- thus
scanning more data rows than specified by LIMIT.

But you told list member Steven Roussey one time that he could add the
forum id to a character column and include it in the full-text index.  I
thought I'd try that too -- along with some text for the month and year
for date range searches.  Although now I'm not sure that's a good idea,
because what if there's 1 million posts in a single forum (is it bad to
have the same word for a forum id in 1 million rows??) and a search is
done that would actually find < LIMIT rows *across all forums*?  If I
include the forum id in the search, the current full-text code will look
at all 1M words for that forum id.  Much slower than just doing the
search and manually checking forum id. :-(  Of course, if the situation
is reversed (common search in a *small* forum), this method would be
faster than manually checking.

Any other ideas for improving performance when searching a small subset
of the full-text index?  Too bad we can't include numeric columns in the
full-text index (which would work as usual in the WHERE, not in MATCH
()).


 But when one word is much more common than the second one, it is
better
 to do #2, and it's what I'll probably do.

Yeah, like for my example. :-)


  Next

Re: Benefits of MAX_ROWS and AVG_ROW_LENGTH

2003-12-19 Thread Matt W
Hi Mark,

I'll tell you what I know. :-)

First, AVG_ROW_LENGTH is only needed for dynamic row-length tables (it's
ignored with fixed-length rows) -- more specifically, those with
TEXT/BLOB columns.  Otherwise, if MAX_ROWS is used, MySQL will assume
that each TEXT/BLOB column will be filled completely, [probably] making
it think the data file will be  4GB.  I discovered this when I was ONLY
specifying MAX_ROWS.  It made the index file larger because 5 byte data
pointers were used instead of 4.

Second, the 4GB data file limit also only applies to dynamic row-length
tables.  That's because their data pointer is in bytes.  The data
pointer for fixed row-length tables is just a *row number*.  Therefore,
fixed row-length tables are limited by the number of rows -- 4 billion
(4,294,967,295 actually) with 4 byte pointers.

No, as far as I know, MySQL doesn't use MAX_ROWS or AVG_ROW_LENGTH to do
any optimizations -- only to decide the pointer size and therefore the
max size of the table.  BTW, I don't know what the use of MIN_ROWS is,
do you??

Yes, I use MAX_ROWS and AVG_ROW_LENGTH for tables that I know won't be
over a certain size.  This makes the index file smaller if the data
pointer is  4 bytes.  To determine space savings, the number of data
pointers used is: number_of_rows * number_of_indexes.  The *index*
pointer may also be smaller, but I'm not sure what its purpose is -- I
think it points to other blocks in the index or something...  Its size
seems to depend on how many indexes there are, etc.

If you want to check the size of the pointers, use:

myisamchk -dv /path/to/database/table


So for fixed row-length tables:

MAX_ROWS <= 255 --> 1 byte pointer
MAX_ROWS <= 65,535 --> 2 byte pointer
MAX_ROWS <= 16,777,215 --> 3 byte pointer
MAX_ROWS > 16,777,215 or no MAX_ROWS --> 4 byte pointer

For dynamic row-length tables, the product of MAX_ROWS and
AVG_ROW_LENGTH determines the max data file size.  In other words, their
individual value doesn't matter, only the product (e.g. don't waste time
 trying to figure their exact values).  If AVG_ROW_LENGTH isn't given, I
think it's assumed to be the maximum length of a row.  product =
MAX_ROWS * AVG_ROW_LENGTH

product <= 255 -- 1 byte pointer; max data size: 255 (REALLY small table
:-))
product <= 65,535 -- 2 byte pointer; max data size: 64KB
product <= 16,777,215 -- 3 byte pointer; max data size: 16MB
product <= 4,294,967,295 or no MAX_ROWS -- 4 byte pointer; max data
size: 4GB
product > 4,294,967,295 -- 5 byte pointer; max data size: 1TB (or
4,294,967,295 rows is MySQL's internal limit I *think*)
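
For example, something like this (a made-up table, just to show the syntax;
200M rows * 300 bytes is ~60GB, so MySQL would use 5 byte data pointers):

CREATE TABLE big_log (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  msg TEXT NOT NULL
) TYPE=MyISAM MAX_ROWS=200000000 AVG_ROW_LENGTH=300;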


Hope that helps!


Matt


P.S.  You should upgrade MySQL (at least latest 3.23; preferably to
4.0). :-)  3.23.41 is over 2 years old!


- Original Message -
From: Mark Hawkes
Sent: Friday, December 19, 2003 10:01 AM
Subject: Benefits of MAX_ROWS and AVG_ROW_LENGTH


 Hi all,

 At table creation time I can use MAX_ROWS and AVG_ROW_LENGTH to

(a) limit the size of a HEAP table
(b) overcome MyISAM's default 4GB limit

 But are they used in any other ways? Does MySQL use them to improve
 performance by sizing buffers appropriately or to  prevent
fragmentation in
 dynamic tables?

 Is there any point in me using MAX_ROWS, MIN_ROWS or
 AVG_ROW_LENGTH if my
 tables will be small (20KB to 700KB with slow growth rates)? I'm
currently
 running 3.23.41.

 Mark


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: show processlist

2003-12-18 Thread Matt W
Hi Andrius,

Yes, I've wondered about this before too, but wasn't exactly sure what
it meant either. :-)  So I just decided to see where this state is set
in the code, and it's when the make_join_statistics() function is
called.  I think that function checks key distribution and things to see
which index to use, if any, when looking up rows in a table.

Now we both know. ;-)  Hope that helps.


Matt


- Original Message -
From: Andrius Jakas
Sent: Thursday, December 18, 2003 6:08 AM
Subject: show processlist


 Hi,

 show processlist displays processes with state statistics -- what does
 this status mean?
 Documentation doesn't say much

 A.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



How boolean full-text search finds matches?

2003-12-17 Thread Matt W
Hi,

Just have a couple more full-text search inquiries here. :-)

I'm not exactly clear on how matching rows are found when searching for
2 or more required words: '+word1 +word2'.  I understand that it can't
currently know which word occurs less, so that it can be searched
first -- this optimization will come with 4.1's 2-level indexes. :-)

I just want to know, when it finds a match for whichever word is tried
first, how does it check if the other required word(s) are present in
the same row?  Say that word1 and word2 are each present in 100,000
rows.

1) Surely it doesn't check the 100,000 entries for word2 for EACH word1
match to see if they're in the same row, does it?

2) It *seems* the best way would be to do a lookup for (word2 + rowid
for word1) and see if there's a match.  Is this what's done?  I'm not
sure it's possible though with the way the index is structured...

3) Or, and I'm thinking *maybe* this is how it's done from what I've
heard, does it get all the matches for word1, then for word2, and then
intersect them to find ones which are present in the same row?  If so,
how will the 2-level index optimization change things? Will it do #2?


Next question is... a few weeks ago I was doing some test searches like
'+word1 +word2'.  Actually, maybe I was only using 1 word, I can't
remember, but I don't think it matters.  Anyway, I happened to try
changing the query to '+word1* +word2*' -- e.g. adding a wild-card to
the end of the same word(s) -- and I was amazed at how much faster the
query was! (And no, there's no query cache; and they were both run many
times so the index was cached. :-)) Can't remember how much faster, but
it wasn't insignificant.  Then I tried adding a wild-card to the end of
words in another search (the wild-card did not make more rows match as
far as I know), but that made it a little slower (I'd expect that, if
anything).  Is there any explanation for why adding wild-cards would
make a search faster?


Thanks in advance!

Matt

P.S.  Sergei, if you see this, in one of your replies to my full-text
suggestions back in September ( http://lists.mysql.com/mysql/149644 ),
you said Another reply will follow...  I never saw another reply
though. :-/  It's OK, I was just wondering what other interesting things
you were going to say! :-)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL 4.0.17 has been released

2003-12-17 Thread Matt W
Hi,

I saw the change as soon as it was posted last week or whenever and
didn't think anything of it. But the point Yves brings up seems very
important!

Although, I'm not sure what to do then with bug #1812. Too bad MySQL's
code can't make database/table names case-sensitive like on *nix.  e.g.
*Force* the case used in queries match that of the directory/file
name...


Matt


- Original Message -
From: Reverend Deuce
Sent: Wednesday, December 17, 2003 3:41 PM
Subject: Re: MySQL 4.0.17 has been released


 I agree, 100%. We live in a mixed environment of UNIX and Windows and
as
 such, we've assumed case insensitivity in our apps. I know that this
is bad
 practice, but forcing this flag on us is and even worse practice. This
 should always, always be an option.

 I wont be able to upgrade until this is fixed. :(

 -- R

 - Original Message -
 From: Yves Goergen
 Sent: Wednesday, December 17, 2003 1:12 PM
 Subject: Re: MySQL 4.0.17 has been released


  On Wednesday, December 17, 2003 1:58 PM CET, Lenz Grimmer wrote:
   Functionality added or changed:
  
  * `lower_case_table_names' is now forced to 1 if the database
directory is located on a case-insensitive file system. (Bug
   #1812)
 
  Uh, _very_ bad. I know that my Windows filesystem is
case-insensitive and
  that I cannot create tables only differing by case of the name, but
I need
  to have this setting off to be able to dump my tables on Windows
with the
  correct (and not lower-cased!!) table names to import them on my
 webhoster's
  Linux server. If the setting was enabled, I'd get all wrong table
names
 and
  my application couldn't find its tables anymore (as
'bb1_GroupMembers'
 gets
  to 'bb1_groupmembers' and that's something else!).
 
  So please change this back again, I believe it's up to the server
  administrator to set this in a correct and reasonable way, don't
you?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: high water mark

2003-12-17 Thread Matt W
Hi Jamie,

Yes, when tables are made smaller in MySQL, the file size isn't
actually reduced. I guess it shouldn't usually make a speed difference
unless your SELECTs are doing a table scan. Reads that use an index
shouldn't really be affected, I don't think.

If you want to reclaim the deleted space, you can do so by running
OPTIMIZE TABLE.  Be aware that it may take awhile to run on a large
table!


Hope that helps.


Matt


- Original Message -
From: jamie murray
Sent: Wednesday, December 17, 2003 2:42 PM
Subject: high water mark


Guys,
Does mysql record the high water mark in it's tables. If so can it be
shrunken after a mass delete. I am new to mysql and haven't seen any
info on this so I'm not sure if it exists in this database. I am asking
this because after I deleted a lot of data from one table a simple
select is now very slow, this would make me think that like in oracle
the query is reading all blocks up to the high water mark even thought
there is no data there.
I am familiar with oracle but not MySql so excuse my ignorance if it is
showing with this particular topic.

Cheers


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Lost connection to MySQL server during query - pls help

2003-12-13 Thread Matt W
Hi Vanessa,

I don't think I saw a reply to this...

You can just reconnect to MySQL if you get this error. :-) Trying to
send the query a second or third time may also make the client try to
reconnect again.


Hope that helps.


Matt


- Original Message -
From: Kiky
Sent: Friday, December 05, 2003 12:14 AM
Subject: Lost connection to MySQL server during query - pls help


Hi Guys,
I have a problem with
Error 2013 - Lost connection to MySQL server during query

I'm using mysql 3.23.41 under Linux Mandrake, and mysql 4.0.15 under Win
XP Pro.
They turn out to have the same error.

My problem is:
I have a Java program which is actually a thread to send emails, so it
keeps running all the time.
When it's time to send emails, it will access MySQL database.
Based on my wait_timeout in mysql, I think the connection closes after 8
hrs.
If it's the time to send emails, and mysql has already closed the
connection, the first connection attemp will throw an error
Error 2013 - Lost connection to MySQL server during query

* Is there a way that I can avoid this error? Or to make the connection
keeps open all the the time?

Any help / suggestions will be very much appreciated :)

Thank you in advance.

Rgds,
Vanessa


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: newbie question

2003-12-13 Thread Matt W
Hi Peter,

You can probably safely have at least 1000-2000 tables in a single
database.


Hope that helps.


Matt


- Original Message -
From: peter
Sent: Friday, November 28, 2003 12:03 PM
Subject: newbie question


 Hi
 I am a webdesigner/hosting reseller
 my question is this:

 I am hosting various different CMS attached to mysql databases and
hope
 to host more in the future
 How many different tables can I safely store in the same database?
 I am currently storing tables from three different CMS on the same
 database with different prefix's
 Can I keep adding more? is there a significant performance issue with
 multiple sites using the same database?

 I'm not really that up to speed on mysql or databases in general
 so..

 any thoughts?

 thanks

 Peter


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Temporary tables rights

2003-12-13 Thread Matt W
Hi Alejandro,

Yeah, this issue has come up before. It's not possible to GRANT DROP on
temp tables without GRANTing DROP on the whole database.

The temp tables will be dropped when the client disconnects you know,
right? And if you want to empty the table or reuse it, you should be
able to TRUNCATE it, I think.
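
e.g. something like this should empty it for reuse (I haven't tested it
against exactly those privileges, though):

TRUNCATE TABLE tmp1;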

And actually, if you can TRUNCATE the other tables (if the DELETE
privilege allows it), isn't that just as bad as DROPping them? :-)


Matt


- Original Message -
From: adburne
To: [EMAIL PROTECTED]
Sent: Friday, December 12, 2003 11:31 AM
Subject: Temporary tables rights


Hi, I'm granting users to use temporary tables as:

GRANT CREATE TEMPORARY TABLES ON db1.* TO user1;

and having grants on many other tables as:

GRANT SELECT, INSERT, UPDATE, DELETE ON db1.table1 TO user1;
GRANT SELECT, INSERT, UPDATE, DELETE ON db1.table2 TO user1;


but how make this work

CREATE TEMPORARY TABLE tmp1 AS SELECT * FROM table1;
ok

SELECT * FROM tmp1;
Error: select command denied to user: [EMAIL PROTECTED] for table tmp1

also:
DROP TABLE tmp1;
Error: drop command denied to user: [EMAIL PROTECTED] for table tmp1

I don't want grant select and drop global privilege over db1 but I
want use temporary tables, there is a way to do this work?

Alejandro


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: ALTER TABLE .. ORDER BY

2003-12-13 Thread Matt W
Hi Chris,

I don't know exactly what you mean by ALTER being as good as OPTIMIZE...

But yes, an ALTER that recreates the data file (as ALTER ... ORDER BY
does) will defragment the data file too.

However, OPTIMIZE also analyzes the key distribution (I don't know if
it's remembered after an ALTER or not...) and sorts the index pages (but
that should be done pretty well I think when the index is rebuilt during
ALTER).

To summarize, if you just want to reclaim deleted rows, ALTER ... ORDER
BY is enough. If you want to make sure everything else is done, use
OPTIMIZE too afterwards. :-)
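
So, using the t and c from your message, I'd just do:

ALTER TABLE t ORDER BY c;   -- rebuilds the data file, reclaiming deleted rows
OPTIMIZE TABLE t;           -- then sorts index blocks / analyzes key distribution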


Hope that helps.


Matt


- Original Message -
From: Chris Elsworth
Sent: Wednesday, December 10, 2003 12:49 PM
Subject: ALTER TABLE .. ORDER BY


 Hello,

 Just a quickie. Does anyone know if issueing an ALTER TABLE t ORDER BY
c
 is as good as an OPTIMIZE TABLE if I know the order I'll mostly be
 sorting with? Does the ALTER TABLE operation reclaimed deleted rows,
 for example? Or should I OPTIMIZE and then ALTER?

 Cheers :)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: fulltext search speed issue with SQL_CALC_FOUND_ROWS

2003-12-04 Thread Matt W
Hi,

Yes, you would have similar results with any query that uses
SQL_CALC_FOUND_ROWS. That's because MySQL has to see how many rows would
be found without the LIMIT. So in your case, it can't just abort the
query after it finds 10 rows. All rows that match the WHERE need to be
found.

You might want to try your fulltext search IN BOOLEAN MODE to see if
that runs any faster. :-)
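
e.g. (same table/column names as your query -- note that boolean mode needs
MySQL 4.0.1 or newer, and it doesn't sort by relevance on its own):

SELECT SQL_CALC_FOUND_ROWS something
FROM some_table
WHERE MATCH (some_column) AGAINST ('+some_search_term' IN BOOLEAN MODE)
LIMIT 0,10;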


Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, December 04, 2003 9:13 AM
Subject: fulltext search speed issue with SQL_CALC_FOUND_ROWS


 I have some_table with 100,000 rows and with an
 average of 500 words in some_column of each row. When
 i do a fulltext search on this table using a query
 such as the following, all of my results are under 0.1
 seconds:

 SELECT something
 FROM some_table
 WHERE MATCH (some_column) AGAINST ('some_search_term')
 LIMIT 0,10

 However, when i add the SQL_CALC_FOUND_ROWS keyword
 like in the following query, some queries take longer
 than 1 minute:

 SELECT SQL_CALC_FOUND_ROWS something
 FROM some_table
 WHERE MATCH (some_column) AGAINST ('some_search_term')
 LIMIT 0,10

 How can there be a huge difference in speed if both
 queries always return the exact same results?

 Thanks,
 TK


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Named Pipe crashes on MySQL (4.1.1 alpha) WinXP

2003-12-04 Thread Matt W
Hi Ed,

Yeah, I just installed today's 4.1.1-alpha-nt on Win2k SP3 and get the
same thing. :-(  Sucks, 'cause named pipes are a lot faster for me than
TCP/IP.  And I was really looking forward to this release.  It's just
not the same with TCP/IP. :-(


Matt


- Original Message -
Subject: Named Pipe crashes on MySQL (4.1.1 alpha) WinXP


I have been unable to get
named pipes to work on mysqld-nt 4.1.1
Alpha(including today's official release). As soon as I attempt a
connect the server crashes. I can't create a debug trace because named
pipes aren't enabled during a --debug. I have enable-named-pipe turned
on and my client is connecting with hostname of . notation. The last
several weeks of bitkeeper source distributions haven't worked for me
either. The last time I can confirm it worked for me was a bitkeeper
source build I did on Sep 2 2003. Does anyone have an Alpha 4.1.1
working with named pipes on WinXP (DELL 2.6ghz, P4, 640mb ram, ServPack
1)?

Thanks,
Ed Mierzwa


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql 'start' spawns 10 instances of mysqld

2003-12-01 Thread Matt W
Hi Scott,

Those aren't processes. There is 1 process with many threads and your
system is reporting them as separate processes. :-)


Hope that helps.


Matt


- Original Message -
From: Scott Stingel
Sent: Monday, December 01, 2003 4:47 PM
Subject: mysql 'start' spawns 10 instances of mysqld


 Hi-

 having a problem with a new installation of MySQL 4 on a Xeon-based
system
 (Tyan S2723), single processor installed.  Whenever I start mysql, I
then
 check using ps -ef and find that not only has mysqld_safe started, but
in
 addition I have 10 new 'mysqld' processes.

 Why are there so many processes?  Is this a config error or an error
in
 mysqld_safe?

 Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Index before or after inserts?

2003-11-27 Thread Matt W
Hi,

Create the indexes right away and then use

ALTER TABLE table DISABLE KEYS;

Load your data and then

ALTER TABLE table ENABLE KEYS;

This will not make a tmp copy of the data file, but will simply start
rebuilding the index.

However, DISABLE KEYS doesn't disable unique indexes, so these still
have to be updated as opposed to adding them afterwards. This is
probably good though for integrity. Using LOCK TABLES around multi-row
INSERTs will make index updating much faster than single-row non-locked
INSERTs. And a large enough key_buffer will make the indexes be flushed
less often.
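
Something like this is the pattern I mean (tbl and the columns are just
placeholders):

LOCK TABLES tbl WRITE;
INSERT INTO tbl (col1, col2) VALUES (1,'a'), (2,'b'), (3,'c');
INSERT INTO tbl (col1, col2) VALUES (4,'d'), (5,'e'), (6,'f');
UNLOCK TABLES;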

For ENABLE KEYS, I think myisam_sort_buffer_size is the important
variable.

Also note: DISABLE/ENABLE KEYS doesn't work in 3.23, but you can do the
same thing using myisamchk.


Hope that helps.


Matt


- Original Message -
From: mos
Sent: Thursday, November 27, 2003 3:44 PM
Subject: RE: Index before or after inserts?


 At 03:19 PM 11/27/2003, you wrote:
 Mirza,
 
 Definitely, index after insert.
 
 Andy

 Maybe not. g I know this is the common sense approach that works
with
 many databases, but I'm not sure it is faster with MySQL.

 MySQL Manual on Alter Table 
  Note that if you use any other option to ALTER TABLE than RENAME,
MySQL
 will always create a temporary table, even if the data wouldn't
strictly
 need to be copied (like when you change the name of a column). We plan
to
 fix this in the future, but as one doesn't normally do ALTER TABLE
that
 often this isn't that high on our TODO. For MyISAM tables, you can
speed up
 the index recreation part (which is the slowest part of the recreation
 process) by setting the myisam_sort_buffer_size variable to a high
value. 

 So if the index is added later (after the data is inserted), a new
 temporary table is created and the data is reloaded. MySQL probably
does it
 this way to ensure the table isn't destroyed if something happens in
the
 middle of the table restructure.

 Now if you really, really want to add the indexes later, make sure you
add
 all the indexes in *one* alter table command so the data is reloaded
only
 once. Otherwise it will get loaded for each alter table

 Mike


   -Original Message-
   From: Mirza [mailto:[EMAIL PROTECTED]
   Sent: 27 November 2003 15:33
   Subject: Index before or after inserts?
  
  
   I need to insert hundreds of milions of records to a table and
make
   several indicies on it. Now, is it faster to make tables with
indicies
   and then fill tables or fill tables first, then make indicies? Any
   experiancies?
  
   regards,
  
   mirza


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: list, order and limit data

2003-11-27 Thread Matt W
Hi,

For the query that you would need, see this page in the manual:
http://www.mysql.com/doc/en/example-Maximum-column-group-row.html

Also see the comment from March 16, 2003 about the LEFT JOIN trick.

However, in your case, why don't you just add another column in the
tickets table? last_response or whatever. Then you have everything you
need right in 1 table and just have to UPDATE the last_response when a
response is made.
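
Roughly like this (the column name is just a suggestion):

ALTER TABLE tickets ADD last_response DATETIME;

-- run whenever a comment is added for a ticket:
UPDATE tickets SET last_response = NOW() WHERE ticketnumber = 540856;

-- then the listing is a single-table query again:
SELECT * FROM tickets ORDER BY ticketnumber DESC LIMIT 50;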


Hope that helps.


Matt


- Original Message -
From: brfg3 at yahoo
Sent: Thursday, November 27, 2003 1:00 PM
Subject: list, order and limit data


 MySQL version: 3.23.49
 OS: Debian3
 Scripting Language: PHP

 I'm working on a trouble ticket system. There are several tables
involved, but for this query only two tables are involved: tickets and
comments. They are related by the ticketnumber field. This client cannot
afford a high end database, and their host does not support MySQL 4 yet.

   I want to display the last 50 trouble tickets and the last response
from support for each ticket. I can pull the last fifty with this query:

 SELECT * FROM tickets LIMIT 50;

  and I can select the latest date of response from the comments table
for a given trouble ticket with this query:


 SELECT dtg FROM comments WHERE ticketnumber =540856 ORDER BY dtg
DESC LIMIT 1;

 My question is, how can I pull 50 rows from the tickets table and then
grab the last resonse date of each ticket from the comments table?. The
queries can be run individually easily, but I need them to run together
or I need some way of relating the comment table results and the ticket
table results. I plan to stick each row in an html table so the user is
presented with 50 ticket items, and a link to each item (that part is
easy, I just need to know how to pull that query). The reason for the
last resonse date is for informational purposes.

 Just to help you visualize (this is in an HTML table):

 | username | submit date | problem class | ticket status | last
response |

 the first four fields come from the tickets table, the last comes from
the comments table. There might be 20 commenst for each ticket, or there
may be none, but I only want to show the date of the last comment.
Hopefully I've been clear in what I'm trying to acomplish.

 Thanks in advance!


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Disorder result with ORDER BY with ENUM, INT

2003-11-27 Thread Matt W
Hi,

- Original Message -
From: Chuck Gadd
Sent: Wednesday, November 26, 2003 2:29 PM
Subject: Re: Disorder result with ORDER BY with ENUM, INT


 Kriengkrai J. wrote:

   -- System: MySQL 4.0.13, 4.0.16 on Linux x86
   -- Table type: MyISAM, InnoDB
   -- Description / How-To-Repeat:
   -- 1. When I use
   -- SELECT id, type FROM test.report ORDER BY type, id;
   -- the result is in wrong order
   -- --
   -- ++-+
   -- | id | type|
   -- ++-+
   -- |  4 | general |
   -- |  3 | general |
   -- |  1 | general |
   -- |  2 | general |
   -- |  5 | inhouse |
   -- |  6 | inhouse |
   -- ..

 It is sorted EXACTLY as you specified.  First by type, and then by ID.

It doesn't look sorted to me (generals aren't sorted by id). :-)
Shouldn't this be the order?

++-+
| id | type|
++-+
|  1 | general |
|  2 | general |
|  3 | general |
|  4 | general |
|  5 | inhouse |
|  6 | inhouse |


Matt


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Unique Index efficiency query

2003-11-26 Thread Matt W
Hi Chris,

It doesn't take MySQL any more or less time to update a unique index
than a non-unique one. :-)


Hope that helps.


Matt


- Original Message -
From: Chris Elsworth
Sent: Wednesday, November 26, 2003 12:14 PM
Subject: Unique Index efficiency query


 Hello,

 Let me just outline the table in question first.
 I have a rather large (40,000,000 rows) table as follows:

Table: MessageIDs
 Create Table: CREATE TABLE `MessageIDs` (
   `mid_msgid` char(96) NOT NULL default '',
   `mid_fileid` int(10) unsigned NOT NULL default '0',
   `mid_segment` smallint(5) unsigned NOT NULL default '0',
   `mid_date` int(10) unsigned NOT NULL default '0',
   `mid_bytes` mediumint(8) unsigned NOT NULL default '0',
   KEY `fid_msgid` (`mid_fileid`,`mid_msgid`(5)),
   KEY `fid_bytes` (`mid_fileid`,`mid_bytes`),
   KEY `mid_date` (`mid_date`)
 ) TYPE=MyISAM PACK_KEYS=1 DELAY_KEY_WRITE=1


 Index details:
 mysql> show indexes from MessageIDs;

 +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | Table      | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
 | MessageIDs |          1 | fid_msgid |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
 | MessageIDs |          1 | fid_msgid |            2 | mid_msgid   | A         |    20057449 |        5 | NULL   |      | BTREE      |         |
 | MessageIDs |          1 | fid_bytes |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
 | MessageIDs |          1 | fid_bytes |            2 | mid_bytes   | A         |    40114898 |     NULL | NULL   |      | BTREE      |         |
 | MessageIDs |          1 | mid_date  |            1 | mid_date    | A         |     1744126 |     NULL | NULL   |      | BTREE      |         |
 +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

 Now, what I want to do with this table is create a unique index on
 (mid_fileid, mid_segment).

 How does MySQL deal with ensuring that a unique index doesn't end up
 non-unique with a table this large? Is making this index going to
 proportionally slow down my inserts as the table grows? Would I be
 better making it a non-unique index, and doing a select to ensure I'm
 not inserting a duplicate?

 Thanks for any tips,

 --
 Chris


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why does -1 show up as 18446744073709551613?

2003-11-22 Thread Matt W
Hi Mark,

Keep in mind that the CAST() function doesn't work in MySQL 3.23. If you
want something that will work with 3.23 and 4.0+, you can just add 0.0
to your expression:

SELECT 0 - unsigned_col + 0.0 AS alias FROM ...

The result will have .0 on the end then, but I think you can take care
of that with FLOOR(expr) or TRUNCATE(expr, 0).
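
e.g. (same placeholder names as above):

SELECT FLOOR(0 - unsigned_col + 0.0) AS alias FROM ...
SELECT TRUNCATE(0 - unsigned_col + 0.0, 0) AS alias FROM ...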


Hope that helps.


Matt


- Original Message -
From: Mark Marshall
Sent: Thursday, November 20, 2003 11:05 AM
Subject: Re: Why does -1 show up as 18446744073709551613?


 That would be it!

 Not sure how I missed that.

 Thank you!
 Mark


  Mikael Fridh [EMAIL PROTECTED] 11/20/03 11:44AM 
 This is in the Upgrading from 3.23 manual:
 http://www.mysql.com/doc/en/Upgrading-from-3.23.html


 Note: when you use subtraction between integer values where one is of
 type
 UNSIGNED, the result will be unsigned. In other words, before
upgrading
 to
 MySQL 4.0, you should check your application for cases where you are
 subtracting a value from an unsigned entity and want a negative answer
 or
 subtracting an unsigned value from an integer column. You can disable
 this
 behaviour by using the --sql-mode=NO_UNSIGNED_SUBTRACTION option when
 starting mysqld. See section 6.3.5 Cast Functions.


 In order to get your selects to work without changing column types
look
 at the
 cast functions:

 http://www.mysql.com/doc/en/Cast_Functions.html

 ...If you are using numerical operations (like +) and one of the
 operands is
 unsigned integer, the result will be unsigned. You can override this
by
 using
 the SIGNED and UNSIGNED cast operators...


 Mike

 On Thursday 20 November 2003 17.10, Mark Marshall wrote:
  I've been running this query for quite some time that basically
 says:
 
  SELECT  (A + B + C) - (X + Y + Z) AS Variance
  FROM .
 
  Up until now, this has been working correctly and showing up as
  anything from -100 to +100.  Now all of a sudden, it's showing up as
  18446744073709551613 instead of -1, 18446744073709551614 instead of
 -2,
  etc.
 
  Now, A, B, C, X, Y  Z are all defined as UNSIGNED in the database
  table.  But again, this WAS working as of MySQL 3.23, and we've been
  running it for well over a year.  Now, on version 4.x, I started
 getting
  the big numbers.
 
  I can only assume that there is some sort of precedence thing that
  changed.  Can anyone shed some light on this? Or maybe tell me what
 I
  need to do differently to make this work now?
 
  Thanks,
  Mark


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: strange difference between a != b and (a < b OR a > b)

2003-11-22 Thread Matt W
Hi,

!= and <> are not optimized currently because I think it's assumed that
with

a <> b

more rows will NOT match b than do match. Therefore it's faster to do a
table scan.

That assumption is not true in all cases of course, which is why I think
it will be optimized in the future to estimate how many rows will match
and see if the index can be used. Until then, you have to use (a < b OR
a > b) if you want it to be optimized.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, November 20, 2003 10:12 AM
Subject: strange difference between a != b and (a < b OR a > b)


 Description:
 a simple select on a large table does not use an indexed column when
the WHERE clause uses a != b
 The same query using as WHERE a < b OR a > b  (which of course does
the same) *does* use an index.
 How-To-Repeat:

 mysql <<END_OF_FILE

 create database unequalproblem;
 use unequalproblem;

 -- MySQL dump 8.22
 --
 -- Host: localhost    Database: unequalproblem
 -- ------------------------------------------------------
 -- Server version 3.23.56

 --
 -- Table structure for table 'test'
 --

 CREATE TABLE test (
   id int(11) NOT NULL auto_increment,
   number int(11) NOT NULL default '0',
   PRIMARY KEY  (id),
   KEY k_number (number)
 ) TYPE=MyISAM;

 --
 -- Dumping data for table 'test'
 --


 INSERT INTO test VALUES (1,2);
 INSERT INTO test VALUES (2,4);
 INSERT INTO test VALUES (3,9);

 select('EXPLAIN SELECT id from test WHERE number != 1;   ***  uses NO
index');
 EXPLAIN SELECT id from test WHERE number != 1;
 select('EXPLAIN SELECT id from test WHERE number < 1 OR number > 1;
*** actually same query, index on number');
 EXPLAIN SELECT id from test WHERE number < 1 OR number > 1;

 drop database unequalproblem

 END_OF_FILE


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: UPDATE optimization?

2003-11-20 Thread Matt W
Hi,

You can combine those 2 UPDATEs like this:

UPDATE some_table SET some_field=IF(id=some_id, 1, 0);

Or, the standard SQL syntax:

UPDATE some_table
SET some_field=CASE id WHEN some_id THEN 1 ELSE 0 END;


Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Thursday, November 20, 2003 6:39 PM
Subject: UPDATE optimization?


Hello guys,

Let say:
UPDATE some_table SET some_field=1 WHERE id = some_id
and
UPDATE some_table SET some_field=0 WHERE id <> some_id

what I can do to merge these queries?
The first thing that came up in my mind was something like that:

UPDATE some_table SET some_field=1 WHERE id = some_id; SET some_field=0
WHERE id <> some_id;
so we can walk-through table only once...

What do you think?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Standard vs. Log

2003-11-19 Thread Matt W
Hi Jon,

The -log suffix is added when you're running with logging (log or
log-bin in my.cnf/my.ini). log-bin may be in use for replication, so
be careful about removing it. And if one server isn't using logging, you
probably don't need it.
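
If you want to double-check from the client, something like this should show
what logging is turned on:

SHOW VARIABLES LIKE 'log%';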


Hope that helps.


Matt



- Original Message -
From: Jonathan Rosenberg
Sent: Tuesday, November 18, 2003 11:21 PM
Subject: Standard vs. Log


 I have mysql on linux on two machines.  Both version 4.0.16.  On one
 machine, it is reported as 4.0.16-standard and on the other machine it
is
 reported as 4.0.16-log (according to PHPMyAdmin).  What is the
difference
 (obviously something with logging) and how can I change from standard
to log
 and vice versa?

 Thanks


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: using temporary / using filesort and disk tables

2003-11-19 Thread Matt W
Hi Arnaud,

A disk-based temp table is used if you're SELECTing a column [that can
be] longer than 255 characters (BLOB/TEXT to MySQL). This is because the
in memory HEAP tables don't currently support variable length rows.
Using something like LEFT(text_col, 255), if feasible, will get around
this problem.
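
e.g. instead of selecting the long column directly, something like this
(all names made up):

SELECT id, LEFT(text_col, 255) AS text_part
FROM some_table
GROUP BY some_col
ORDER BY other_col;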

Also could be disk based if the query examines many rows (large temp
table), but your tmp_table_size would probably cover that.

BTW, 512M is very, very high for tmp_table_size! Do you have enough
memory for 512M * number of connections? :-)


Matt


- Original Message -
From: Arnaud
Sent: Wednesday, November 19, 2003 2:18 AM
Subject: using temporary / using filesort and disk tables


 Hi!

 I have a query that allways creates temporary tables to disk (the
ratio
 created_tmp_disk_tables/created_tmp_tables is close to 0.8, but I
think it
 is 1 for this particular query).
 This query joins 4 tables, groups by a field a orders by another field
(or
 sometimes an sql variable).
 When I analyze it, I get where used, using temporary, using filesort
for
 the first table of the join (whatever the order of the join), and
where
 used for the other ones.
 I have only 2000 rows scanned for the first table, and 1 for the 3
other
 ones.
 The variables tmp_table_size and max_heap_table_size are both set very
high
 (~512M).

 I would like to get rid of those disk tables, to improve the
performance of
 my query. I understand that using a group by and order by on different
 fields implies the use of a temporary table. What I don't understand
is why
 this table is created on disk, and not in memory? Is it because of the
 filesort? If yes, how could I get rid of the filesort?

 If this is not clear enough, I can post a sample query and the result
of the
 explain.

 Thanks in advance!

 Arnaud


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: very slow delete queries - never ending

2003-11-19 Thread Matt W
Hi Richard,

As I think Gerald Clark said, you could run DELETEs with a LIMIT (a few
thousand rows at a time, say) in a loop until all rows are deleted.
This won't make the deletes any faster (probably slightly slower total,
actually), but will allow other clients to use the table in between.
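
e.g. something like this, repeated until it says 0 rows affected:

DELETE FROM datatable WHERE acttime < '2003-11-14 09:39:49' LIMIT 10000;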

Have you seen this page in the manual:
http://www.mysql.com/doc/en/Delete_speed.html What's the size of your
key_buffer? Might want to increase it.

Also make sure the table doesn't have any unnecessary indexes to make
DELETEs slower.


Hope that helps.


Matt


- Original Message -
From: [EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 4:12 AM
Subject: very slow delete queries - never ending


 I have a problem with a bigger table on mysql 4.0.16-log / debian
linux

 I played around with indexes, delete quick and such, but I just can't
get it
 to work.
 The following table holds 35mio rows and has 5mio inserts/replaces
per
 day. to clean it up I want to delete all rows older than X days.

 I would be very happy if somebody could help me on this. I'm stuck. I
worked
 with tables of that size with 3.23.49-log and didn't have problems,
although
 I must say that the amount of inserts is very high in this case.

 The server is a 2.5ghz pentium4, 1.5gb RAM, SCSI-RAID-disks and such
 hardware, so performance should not be a problem. what variables in
mysql should I
 modify, has anybody experience with that and can
 help?

 thanks!
 Richard

[snip]

 mysql> select count(*) from datatable where acttime < '2003-11-14
 09:39:49';
 +--+
 | count(*) |
 +--+
 |  7194367 |
 +--+
 1 row in set (3 min 22.15 sec)

 mysql> select count(*) from datatable;
 +--+
 | count(*) |
 +--+
 | 36003669 |
 +--+
 1 row in set (5.87 sec)

 mysql> delete quick from datatable where acttime < '2003-11-14
09:39:49';
 or
 mysql> delete from datatable where acttime < '2003-11-14 09:39:49';

 ...takes forever. I killed it after 20 hours...


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: JOIN vs INNER JOIN?

2003-11-19 Thread Matt W
Hi Yves,

http://www.mysql.com/doc/en/JOIN.html

table_reference [INNER | CROSS] JOIN table_reference [join_condition]

The [ ... ] means that INNER is optional -- in MySQL at least, not
sure about the SQL standard.
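
e.g. these two should be treated exactly the same (made-up tables/columns):

SELECT a.id, b.name FROM a JOIN b ON a.id = b.a_id;
SELECT a.id, b.name FROM a INNER JOIN b ON a.id = b.a_id;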


Hope that helps.


Matt


- Original Message -
From: Yves Goergen
Sent: Wednesday, November 19, 2003 8:12 AM
Subject: JOIN vs INNER JOIN?


 Can someone tell me the difference between a JOIN and an INNER JOIN of
two
 tables, please? I can't find the JOIN alone documented in the MySQL
 manual.

 --
 Yves Goergen
 [EMAIL PROTECTED]
 Please don't CC me (causes double mails)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizing Custom Full Text Index

2003-11-19 Thread Matt W
Hi Mike,

Those tables aren't that big for what you're doing (which is about how
I'd do it if I wasn't using built-in full-text :-)).

How many results are your searches returning? How long are the queries
taking?

The C table: Do you need the index on content_id? For deletes or
something? Doesn't really matter for speed, but you might as well dump
it if it's not used. What's the PRIMARY id there for? Just to have an id
column? :-) It looks like you can get rid of it. I'd make a composite
PRIMARY KEY on (stem_word_id, content_id). Otherwise make it UNIQUE if
you *need* the PRIMARY id. This will make EXPLAIN say Using index on C
for searches which will save a lot of random disk seeks to the data
file.
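
Roughly this is what I mean -- untested, and you'd also want to drop the old
single-column index on stem_word_id (whatever it's named) since the new
primary key starts with that column:

ALTER TABLE C
  DROP id,
  ADD PRIMARY KEY (stem_word_id, content_id);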

Can I ask what the problems are with MySQL's built-in full-text search?
I know there's a few since I've encountered them too, but I have some
ideas to work around them. Unfortunately, one that would be hard to work
around is stemming (waiting for that to be implemented internally).
:-( Or are you just doing stemming to save space in the index and not for
functionality?


Hope that helps.


Matt


- Original Message -
From: Mike Boone
Sent: Wednesday, November 19, 2003 12:08 PM
Subject: Optimizing Custom Full Text Index


 Hi all,

 I have used PHP and MySQL for years but have never tackled the amount
of
 data that I now have (and it's expected to grow considerably). My
queries
 run OK when the server is not busy but they just about kill the DB
when
 traffic picks up.

 My apologies if this is common knowledge...I've had trouble searching
on
 custom full text indexing because it generally brings up hits
regarding the
 built-in full text indexing for various DB servers. MySQL's built-in
 fulltext doesn't quite do what we want.

 We are currently running MySQL 4.0.16 compiled with LinuxThreads on
FreeBSD.


 Basically, I'm trying to optimize a search involving three tables.

 Table A: (content table...currently nearly 40,000 rows and 62 MB)
  id UNSIGNED INT PRIMARY
  status VARCHAR 10 INDEXED
  category VARCHAR 20 INDEXED
  content LONGTEXT
  + other fields

 Table B: (stem word index...instead of indexing the exact word, I just
keep
 the stem, so 'car' and 'cars' are both stored as 'car'. Table
currently has
 about 180,000 rows and is 9 MB)
  id UNSIGNED INT PRIMARY
  stem_word VARCHAR 30 INDEXED

 Table C: (full text index...currently about 4.5 million rows and 186
MB)
  id UNSIGNED INT PRIMARY
  stem_word_id (references id in table B) UNSIGNED INT INDEXED
  content_id (references id in table A) UNSIGNED INT INDEXED


 Here's how I perform the search right now.

 The user enters keywords. I turn those words into a list of unique
stems. I
 then search for the stem IDs from Table B using the following query:

 SELECT id FROM B WHERE stem_word IN ('truck','piano','move');

 Using the IDs from that query (say 10, 20, 30), I run the following
query:

 SELECT C.content_id, COUNT(C.content_id), A.category FROM A, C WHERE
 C.content_id=A.id AND A.status='Active' AND (C.stem_word_id IN
(10,20,30))
 GROUP BY C.content_id HAVING Count(C.content_id)=3;

 I have recently also tried this query, which is a little cleaner
without the
 count/having stuff, but it seems about the same speed-wise:

 SELECT DISTINCT C0.content_id, A.category FROM A, C C0, C C1, C C2
WHERE
 C0.content_id=A.id AND A.status='Active' AND C0.stem_word_id=10 AND
 C1.stem_word_id=20 AND C0.content_id=C1.content_id AND C2.stem_word=30
AND
 C0.content_id=C1.content_id;

 When running the EXPLAIN on both queries, both are doing 'using where;
using
 temporary; using filesort' on table C. I'm not sure how to avoid that.

 This system has 512MB and I'm basically using the my-large.cnf file
as-is.
 Running mytop shows that the key efficiency is 100%, and (cache?)
ratio
 around 36%. All my tables are MyISAM right now. I tried switching to
InnoDB
 but it was much slower and I figured there were enough variables to
 troubleshoot already without playing around with the InnoDB
parameters.

 So my questions:

 1. Is there anything blatantly wrong with my queries?
 2. Should I have designed my index table differently?
 3. Any parameter in my.cnf I should modify to be different from the
 my-large.cnf settings?
 4. Any web resources with instructions for building customized full
text
 indexing...not using built-in stuff?
 5. Since the content field of table A is only used for display (since
the
 words have been indexed), I was considering compressing the text in
that
 field so save DB disk space. Is that worth the effort?

 Any input is appreciated. Thanks for your help.

 Mike Boone
 (reply to the list or contact me directly at:
 http://boonedocks.net/mailmike.php3)


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Optimizer Troubles

2003-11-18 Thread Matt W
Hi Rob,

Since you're using 4.0.16, sounds like you are experiencing its
optimizer bug. From the ChangeLog for 4.0.17 (not released yet):

* Fixed optimizer bug, introduced in 4.0.16, when REF access plan was
preferred to more efficient RANGE on another column.


So hopefully the problem only exists in 4.0.16. Until you can use
another version, I guess you'll have to use USE INDEX. :-/


Hope that helps.


Matt


- Original Message -
From: Rob Brackett
Sent: Tuesday, November 18, 2003 4:33 PM
Subject: Optimizer Troubles


 I've got a table with two indices -- one is a datetime with massive
 cardinality, and another is a varchar with only 9 distinct values.
The
 optimizer seems to choose the varchar over the datetime in all cases
 unless I specifically say USE INDEX().  Is there some way to make
MySQL
 smarter in this case?  I tried analyzing the table, that didn't
work.
 Maybe this is a case where I have to say that's why USE INDEX()
 exists.  Strange thing is I didn't have this problem in 3.23 (running
 4.0.16 now).  Thanks for your help.

 -Rob-

 P.S.  Here are some details:

 SELECT count(*) FROM table1 WHERE date1 > NOW() - INTERVAL 1 HOUR AND
 status='status1';

 1 row in set (1 min 14.18 sec)

 SELECT count(*) FROM table1 USE INDEX(date1) WHERE date1 > NOW() -
 INTERVAL 1 HOUR AND status='status1';

 1 row in set (0.00 sec)

 EXPLAIN SELECT count(*) FROM table1 WHERE date1 > NOW() - INTERVAL 1
 HOUR AND status='status1';

 Table: table1
 Type: ref
 Possible_keys: date1,status
 Key: status
 Key_len: 17
 Ref: const
 Rows: 4548428
 Extra: Using where

 EXPLAIN SELECT count(*) FROM table1 USE INDEX(date1) WHERE date1 > NOW()
 - INTERVAL 1 HOUR AND status='status1';

 Table: table1
 Type: range
 Possible_keys: date1
 Key: date1
 Key_len: 9
 Ref: NULL
 Rows: 16105
 Extra: Using where

 show keys from table1;

 Table: table1
 Non_unique: 0
 Key_name: PRIMARY
 Seq_in_index: 1
 Column_name: id
 Collation: A
 Cardinality: 7911940
 Sub_part: NULL
 Packed: NULL
 Null: NO
 Index_type: BTREE
 Comment:

 Table: table1
 Non_unique: 1
 Key_name: date1
 Seq_in_index: 1
 Column_name: date1
 Collation: A
 Cardinality: 3955970
 Sub_part: NULL
 Packed: NULL
 Null: YES
 Index_type: BTREE
 Comment:

 Table: table1
 Non_unique: 1
 Key_name: status
 Seq_in_index: 1
 Column_name: status
 Collation: A
 Cardinality: 9
 Sub_part: NULL
 Packed: NULL
 Null: YES
 Index_type: BTREE
 Comment:


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Please help DB Error: unknown error

2003-11-18 Thread Matt W
Hi Thai,

I think you're just joining way too many tables! LOL


Matt


- Original Message -
From: Thai Thanh Ha
Sent: Sunday, November 16, 2003 8:10 PM
Subject: Please help DB Error: unknown error


 Hi all,
 I have a problem with my query on mySQL 4.0.
 DB Error: unknown error
 I don't know what is the problem. Please help me!!! Thanks in advance.
 Regards,
 Thai
 SELECT t.UserID, t.Country, t.Zip FROM tblUser t, tblUserProfile
 t1,tblUserProfile t2,tblUserProfile t3,tblUserProfile
t4,tblUserProfile
 t5,tblUserProfile t6,tblUserProfile t7,tblUserProfile
t8,tblUserProfile
 t9,tblUserProfile t10,tblUserProfile t11,tblUserProfile
t12,tblUserProfile
 t13,tblUserProfile t14,tblUserProfile t15,tblUserProfile
t16,tblUserProfile
 t17,tblUserProfile t18,tblUserProfile t19,tblUserProfile
t20,tblUserProfile
 t21,tblUserProfile t22,tblUserProfile t23,tblUserProfile
t24,tblUserProfile
 t25,tblUserProfile t26,tblUserProfile t27,tblUserProfile
t28,tblUserProfile
 t29,tblUserProfile t30,tblUserProfile t31,tblUserProfile
t32,tblUserProfile
 t33,tblUserProfile t34,tblUserProfile t35,tblUserProfile
t36,tblUserProfile
 t37,tblUserProfile t38,tblUserProfile t39,tblUserProfile
t40,tblUserProfile
 t41,tblUserProfile t42,tblUserProfile t43,tblUserProfile
t44,tblUserProfile
 t45,tblUserProfile t46,tblUserProfile t47,tblUserProfile
t48,tblUserProfile
 t49,tblUserProfile t50,tblUserProfile t51,tblUserProfile
t52,tblUserProfile
 t53,tblUserProfile t54,tblUserProfile t55,tblUserProfile
t56,tblUserProfile
 t57,tblUserProfile t58,tblUserProfile t59,tblUserProfile
t60,tblUserProfile
 t61,tblUserProfile t62,tblUserProfile t63,tblUserProfile
t64,tblUserProfile
 t65,tblUserProfile t66,tblUserProfile t67,tblUserProfile
t68,tblUserProfile
 t69,tblUserProfile t70,tblUserProfile t71,tblUserProfile
t72,tblUserProfile
 t73,tblUserProfile t74,tblUserProfile t75,tblUserProfile
t76,tblUserProfile
 t77,tblUserProfile t78,tblUserProfile t79,tblUserProfile
t80,tblUserProfile
 t81,tblUserProfile t82,tblUserProfile t83,tblUserProfile
t84,tblUserProfile
 t85,tblUserProfile t86,tblUserProfile t87,tblUserProfile
t88,tblUserProfile
 t89,tblUserProfile t90,tblUserProfile t91,tblUserProfile
t92,tblUserProfile
 t93,tblUserProfile t94,tblUserProfile t95,tblUserProfile
t96,tblUserProfile
 t97,tblUserProfile t98,tblUserProfile t99,tblUserProfile
t100,tblUserProfile
 t101,tblUserProfile t102,tblUserProfile t103,tblUserProfile
 t104,tblUserProfile t105,tblUserProfile t106,tblUserProfile
 t107,tblUserProfile t108,tblUserProfile t109,tblUserProfile
 t110,tblUserProfile t111,tblUserProfile t112,tblUserProfile
 t113,tblUserProfile t114,tblUserProfile t115,tblUserProfile
 t116,tblUserProfile t117,tblUserProfile t118,tblUserProfile
 t119,tblUserProfile t120,tblUserProfile t121,tblUserProfile
 t122,tblUserProfile t123,tblUserProfile t124,tblUserProfile
 t125,tblUserProfile t126,tblUserProfile t127,tblUserProfile
 t128,tblUserProfile t129,tblUserProfile t130,tblUserProfile
 t131,tblUserProfile t132,tblUserProfile t133,tblUserProfile
 t134,tblUserProfile t135,tblUserProfile t136,tblUserProfile
 t137,tblUserProfile t138,tblUserProfile t139,tblUserProfile
 t140,tblUserProfile t141,tblUserProfile t142,tblUserProfile
 t143,tblUserProfile t144,tblUserProfile t145,tblUserProfile
 t146,tblUserProfile t147,tblUserProfile t148,tblUserProfile
 t149,tblUserProfile t150,tblUserProfile t151,tblUserProfile
 t152,tblUserProfile t153,tblUserProfile t154,tblUserProfile
 t155,tblUserProfile t156,tblUserProfile t157,tblUserProfile
 t158,tblUserProfile t159,tblUserProfile t160,tblUserProfile t161,
 tblMatchProfile mp WHERE t.ShowProfile = 1 AND t.LoginHandle <>
 'administrator' AND t.AccountType <> 2 AND t.AccountType <> 3 AND
t.Gender =
 1 AND (Year(Now()) - Year(t.BirthDate)) BETWEEN 18 AND 80 AND t.UserID
=
 t1.UserID AND t1.FieldID = 78 AND t1.FieldValue = 8 AND
t1.UserID=t2.UserID
 AND t2.FieldID = 78 AND t2.FieldValue = 6 AND t1.UserID=t3.UserID AND
 t3.FieldID = 78 AND t3.FieldValue = 5 AND t1.UserID=t4.UserID AND
t4.FieldID
 = 78 AND t4.FieldValue = 3 AND t1.UserID=t5.UserID AND t5.FieldID = 78
AND
 t5.FieldValue = 4 AND t1.UserID=t6.UserID AND t6.FieldID = 78 AND
 t6.FieldValue = 7 AND t1.UserID=t7.UserID AND t7.FieldID = 78 AND
 t7.FieldValue = 1 AND t1.UserID=t8.UserID AND t8.FieldID = 78 AND
 t8.FieldValue = 9 AND t1.UserID=t9.UserID AND t9.FieldID = 78 AND
 t9.FieldValue = 2 AND t1.UserID=t10.UserID AND t10.FieldID = 79 AND
 t10.FieldValue = 3 AND t1.UserID=t11.UserID AND t11.FieldID = 79 AND
 t11.FieldValue = 1 AND t1.UserID=t12.UserID AND t12.FieldID = 79 AND
 t12.FieldValue = 2 AND t1.UserID=t13.UserID AND t13.FieldID = 79 AND
 t13.FieldValue = 5 AND t1.UserID=t14.UserID AND t14.FieldID = 79 AND
 t14.FieldValue = 6 AND t1.UserID=t15.UserID AND t15.FieldID = 79 AND
 t15.FieldValue = 4 AND t1.UserID=t16.UserID AND t16.FieldID = 131 AND
 t16.FieldValue = 4 AND t1.UserID=t17.UserID AND t17.FieldID = 131 AND
 t17.FieldValue = 2 AND t1.UserID=t18.UserID AND t18.FieldID = 131 
