See:
http://downloads.mysql.com/archives.php
Thank you. Nice link to have around.
Key 3 is the FTS key. The others are a UNIQUE KEY (#1) and a KEY(#2).
Do you have the same values for the full-text parameters (ft_min_word_len,
for example)?
Not at first. I had noticed that not long after I sent
I have replaced one server with another, and the new one has everything new
(RHEL 3, newest updates) and MySQL 4.0.23 (old one was RH9 and MySQL
4.0.18).
We now get table corruptions constantly (it only takes a minute before
several tables get marked as crashed). I'd like to revert to the 4.0.18
For production systems, I would never let the mysql optimizer guess a query
plan when there are joins of big tables and you know exactly how it should
behave. Once you think a query is finished, you should optimize it yourself.
Use STRAIGHT_JOIN and USE INDEX as found here in the manual:
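A hedged sketch of both hints (table, column, and index names here are hypothetical, not from the thread):

```sql
-- STRAIGHT_JOIN forces the join order exactly as written (posts first),
-- and USE INDEX nudges the optimizer toward a specific index.
SELECT STRAIGHT_JOIN p.post_id, u.name
FROM posts p USE INDEX (idx_forum_date)
JOIN users u ON u.user_id = p.user_id
WHERE p.forum_id = 7
ORDER BY p.post_date DESC
LIMIT 50;
```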
Thanks. I passed this on and he found what was lost. I guess since the data
directory was owned by mysql, he could not find the databases when doing a
MacOS file search. I impressed upon him to use a separate datadir as we do
with our servers, thus bypassing this whole thing.
Thanks again!
Installing MySQL 4.1.7 (upgrading from 4.1.3) on MacOS X erased the contents
of /usr/local/mysql/data -- the privs and data of the previous installation.
FYI
Luckily (and unfortunately) we have a backup of that database from last
week. (The guy that did it here in the office is still in a bit of
Thank you very much for your bug report!
And sorry if I doubted your report at the beginning; I hadn't thought
of the rpm script.
No problem. I sometimes get bug reports that I know are impossible! Yet they
weren't. This one I would have barely noticed if it had not knocked the
slaves all
We start mysql with 'service mysql start' (we install from the RPM for
linux).
I've never seen mysql create binlog files under the name root before, and
after reverting to an old version, it doesn't again. It created a big mess
with all the slaves stuck at the end of an older binlog and not
Hmm, I don't see any changes in ft-related files since 4.0.18 that could
cause it (there were bugfixes, but they affect only *searching* - that
is MATCH - and not *updating*).
Can you create a test case?
Well, I put up a file in the secret folder a few days ago as referenced in a
bug
We had some servers that were upgraded from 4.0.17/18 to 4.0.20 and had
several problems thereafter:
1. Tables with FTS indices became corrupted, with queries on them causing
segfaults on the servers.
2. Binlog files were getting created with ownership of root, not mysql. Then
MySQL complains
Since going from 4.0.18 to 4.0.20 (or 4.0.19) I now receive these warnings
on startup:
040520 14:55:21 mysqld started
040520 14:55:21 Warning: Asked for 196608 thread stack, but got 126976
/usr/sbin/mysqld: ready for connections.
Version: '4.0.20-standard' socket: '/tmp/mysql.sock' port: 3306
I've seen a quirk in MySQL's behavior over the years when dealing with MAX().
In a query such as this:
select max(somecol) from sometbl where id=# and otherthing=#
(index is on id, but not on otherthing)
We see the query run just fine (0.x seconds to run) almost all of the time.
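If the occasional slow run comes from the optimizer having only the index on id to work with, one hedged fix (assuming you can add an index; names are hypothetical) is a composite index covering both WHERE columns plus the aggregated column:

```sql
-- With (id, otherthing, somecol) indexed, MySQL can check both
-- conditions and find the maximum from the index alone.
ALTER TABLE sometbl ADD INDEX idx_id_other_some (id, otherthing, somecol);

SELECT MAX(somecol) FROM sometbl WHERE id = 42 AND otherthing = 7;
```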
Here is the background: Anyone that is running a huge system like MARC
that has millions of uncompressed blob records in huge tables, needs to be
able to migrate, in real-time and without down-time, to compressed blobs.
Therefore, we need a way to know if a given field is compressed or not.
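One way to sketch this, assuming MySQL 4.1's COMPRESS()/UNCOMPRESS() functions and a hypothetical is_compressed flag column (neither is from the original post):

```sql
-- Track compression state per row so compressed and uncompressed
-- rows can coexist during a gradual, no-downtime migration.
ALTER TABLE blobs ADD COLUMN is_compressed TINYINT NOT NULL DEFAULT 0;

-- Compress a row in place whenever the application touches it:
UPDATE blobs SET body = COMPRESS(body), is_compressed = 1
WHERE blob_id = 42 AND is_compressed = 0;

-- Read transparently regardless of state:
SELECT IF(is_compressed, UNCOMPRESS(body), body) AS body
FROM blobs WHERE blob_id = 42;
```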
I saw something like this as well. Using 4.1.2 made it go away. Try doing a
bk pull of the dev version of 4.1.2 and give it a go.
-steve--
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
I am wary of something so 'do it yourself'. Have you looked at Red Hat's
clustering solution?
http://www.redhat.com/software/rha/cluster/
http://www.redhat.com/software/rha/cluster/manager/
I don't think it has any issue with InnoDB, key buffers, etc.
I believe this solution works best for
Thanks for the additional information. When 4.1.2 comes out, I'll give it a
test and return with some stats on real world result times (for my data set
at least).
-steve-
You did an insert this way:
mysql> insert into geom values(GeomFromText('POINT(1,1)'));
and expected results this way:
mysql> select AsText(g) from geom;
+------------+
| AsText(g)  |
+------------+
| Point(1 1) |
+------------+
1 row in set (0.00 sec)
The formatting of the POINT
Does MySQL 4.1.1 have the two-level index system integrated into it for
full-text searches?
Thanks. :)
-steve-
downgrade back a version and have it still work OK?
Sincerely,
Steven Roussey
http://Network54.com/
. And uglier...
Sincerely,
Steven Roussey
http://Network54.com/
Use a transaction:
BEGIN;
UPDATE ...;
UPDATE ...;
...
UPDATE ...;
COMMIT;
This way you will only have one sync to disk at every commit instead of
one at every update.
This won't help -- I'm not doing a batch process. Each update is coming
from a different connection...
--steve-
No, it turns out this is not the key. With mysql_connect() I'm
actually
failing MORE often than with mysql_pconnect - so far it hasn't stayed
up 15 minutes without error. (Fortunately, I have a cron job checking
on it and restarting.)
After the failed connection attempt, there will be an
I have a question about InnoDB and how it would handle updates on the
order of about 3,000-5,000 a second. The UPDATEs update a single record
on a primary key. In MySQL, it does a table lock thus serializing the
updates. There are a few selects, though on a couple of orders of
magnitude less
Thanks for replying. Your posts that I've found when searching for
FULLTEXT information have had great ideas. :-) Searching millions of
posts efficiently and effectively isn't easy. :-( Heh.
FULLTEXT does not scale very well once the files get bigger than your
RAM.
The redesign of the index
Lots of stuff
STEMMING! (controlled more finely than server level I hope),
multi-byte
character set support, proximity operators. Anything to get it closer
to
Verity's full-text functionality. ;-)
Yes, all these things would be nice... :)
And the FULLTEXT index shouldn't always be chosen
Here's the CREATEs, somewhat edited to remove parts not relevant
to this discussion, to save space:
I never actually looked at your JOIN statement more than a quick
glimpse, but I will (though not just right now). Before I do, can you
try this (I still don't have data or I'd play with it
All the indexes were single indexes, partly because I haven't
yet made the effort to understand composite index. I guess it's
time ;-).
Oh.
There are better places to start than this list. ;) The manual can be a
great starting place, and several people on this list have written books
about
Hmmm, just in case you can't change the table layout...
Run this through MySQL. First I get rid of the other index I made, then
add chained indexes so there is no need for data file lookup. Also, one
direction of the query table join chain was not always using the indexes
for the where. One
After looking over your results, I would keep the dir1 index at least on
the first and last table.
But since this data is read only, why not reformulate the data for the
queries you are going to make? This is the opposite of normalizing, and
will require more disk space, and is not flexible, but
Executing just the search on the word table, with no joins to the
table with the dates, is still slow:
Then it is not worth while to focus on anything else until you fix that.
Are the contents of this field always in lower case?
If so, then change the column to a binary type. The explain says:
how to separate each stop word in the list
A different word on each line.
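For reference, a my.cnf sketch (the path is hypothetical); note that existing FULLTEXT indexes have to be rebuilt after changing this:

```ini
[mysqld]
ft_stopword_file = /etc/mysql/stopwords.txt

# /etc/mysql/stopwords.txt contains one word per line, e.g.:
#   the
#   and
#   with
```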
-steve-
No, the contents can be of mixed case. Where does that leave things?
**Index the length of the entire column.** It then should not need to
have to do the filesort. Actually the binary option would not have
really helped. The explain should say 'Using Index'. Get back to me on
this and tell me
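In concrete terms (table and column names hypothetical), the idea is a covering index over the column's full length, so that EXPLAIN shows 'Using index' instead of a filesort plus data-file lookups:

```sql
-- 255 here stands in for the column's full declared length.
ALTER TABLE words ADD INDEX idx_word (word(255));
```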
GOD! OK, sorry, I wasn't quite expecting this:
Wow!
:)
But what's the explanation for this huge improvement? Again, I
was always told the opposite, and the Manual itself says: ...
Yes, and it is true (usually). But your EXPLAIN showed a filesort and
that is bad. What happens is that if
It looks like Igor committed it to the 4.1 tree on the 2nd of this
month:
I'd assume that this change is necessary but not sufficient for the
MySQL table type table locking issue...
I know, I know, there is InnoDB for that, but there are reasons not to
use it despite this particular wonderful
ORing on two different fields is what I have been asking about :).
This is not optimized, and I don't think it is set to be optimized until
5.1 (as per someone else's comment).
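The classic workaround in this era (hypothetical table, assuming MySQL 4.0+ for UNION) is to split the OR so each branch can use its own index:

```sql
-- Instead of:  SELECT * FROM t WHERE a = 1 OR b = 2;
-- which cannot use both the index on a and the index on b:
SELECT * FROM t WHERE a = 1
UNION
SELECT * FROM t WHERE b = 2;
```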
Using a composite index was suggested
This is bad information. It works for AND, not for OR.
You have two
So does anyone else have any ideas what is going on here? Shall I
report this as a bug?
Did you post how you setup the servers to load the different my.cnf
files? Hopefully you don't have one at a default location.
Otherwise, it sounds like the config information is not properly set --
either
MyISAM performance is limited right now by a global lock in the key
cache. However, I believe there is work going on to fix that in the
4.1 tree.
Really? I thought it was going to be fixed in the 5.1 tree, which will
be years away from production quality. 4.1 would be really cool, but it
After testing a lot of different configurations (which was quite a
headache), I came up with the following. First of all, for both speed
and reliability, you will want SCSI. The list of reasons are quite long
for SCSI, and as you are doing research on the subject, it is an obvious
choice and I
If you're using a non-persistent connection, PHP will close the
connection when the script terminates, and the MySQL server will roll
back implicitly. For a persistent connection, the connection remains
open and you may not see the rollback behavior you expect.
I thought this was
Just a couple of quick notes:
1. While I use PHP CLI for a lot of things (can we say cron?), it is not
a sufficient replacement for triggers. What happens when someone is
using the Mysql command prompt to alter data? Or using a non-PHP
application?
2. While I agree that having application code
What sort of throughput are you seeing in that setup?
God, I can't remember anymore. I can run a test again though. If you
have one you want me to run, just send it. We don't have other people's
money to spend, so all our disks are U160 18GB 15K IBM. They were less
than $100 each when we got
2 x 2.8 GHZ Xeon
4 GB of RAM
5 15K SCSI Drives
ICP SCSI RAID controller card with 1 GB of RAM on it.
I just bought 30 of these boxes to build out my mysql farm for close to
400-600 queries a second with 60 connections a second of mix read /
writes.
What kind of queries are you doing? Our
A lot of table scans due to bitmasked column values.
Such that the above query will not utilize a key.
That statement gave me a cold shiver up my spine.
You could try an inverted index or match-cache technique, or
denormalization. These types of techniques are very app specific, but can
reduce
Just a note on this subject. We have a field that uses 0 to mean
something special too. It was a bad idea that is on my TODO list to fix
some day. (The corresponding table used 0 to mean something special, and
then joined to the table with the autoindex. The fix is to use NULL in
that other table
In http://www.mysql.com/doc/en/News-4.0.6.html
* mysql_change_user() will now reset the connection to the state of a
fresh connect (Ie, ROLLBACK any active transaction, close all temporary
tables, reset all user variables etc..)
So it is in there, starting with version MySQL 4.0.6.
-steve-
Quick question: Are the binlog and relaylog files the same format?
Initial tests seem to indicate that they are the same. Can I use
mysqlbinlog -o Relay_Log_Pos Relay_Log_File | mysql
to get the slave more up to date (without having the slave SQL thread
running)? I tried the above but the
Rick,
I am able to restore from logs that had binary data (even though the
output looked real strange and messed up the terminal window). I did
have a problem once when I tried filtering data between mysqlbinlog and
mysql. Be careful if you do that.
What version of mysql are you using?
I have
Hi,
mysqlbinlog -j Relay_Log_Pos Relay_Log_File | mysql
works fine. I used -o instead of -j before. So I answered my last
question. When doing this:
mysqlbinlog -j Relay_Log_Pos Relay_Log_File | more
I see that it had advanced to the query after the one with the problem
in the trace file. In
Hi,
And fixed.
Sorry for the waste of time. Only 4 days before I was set to replace the
disk the database was on, and it is going bad. :(
-steve-
-
Before posting, please check:
http://www.mysql.com/manual.php
An update. I'm now running the debug version on the slave. I could not trace
out 'info' since it wrote way too much to the trace file.
What I did find that was unique when the table crashed is this:
handle_slave_sql: query: insert into forums_posts_new_3 (
w_search: error: Got errno: 0 from
I'm glad you found the problem! Sorry my suggestion did not work. I'm still
confused on why you have quotes around the field names in the order by part
of the query, though.
All the best!
-steve-
Below is a trace (--debug=d,enter,exit,info,error,query,general,where:
O,/tmp/mysqld.trace) of the slave thread. This is the best I can do as
far as a bug report. No other queries were running and the slave I/O
thread was idle (I firewalled its connection to the master/rest of the
world).
Without
SELECT * FROM EventList ORDER BY 'EventDate', 'EventOrder' LIMIT 50;
I'm surprised you happened to get anything in order. Maybe the message got
simplified by the list manager, but did you really mean to order by a
constant string?
Why not:
SELECT * FROM EventList ORDER BY `EventDate`,
Hi all,
I have a problem with replication that, while easily repeatable for me,
I cannot come up with a way for others to repeat without all our tables
and binlogs (tens of gigabytes). So I'm simply going to
describe things here and see if anyone else has experienced anything
similar or
V4.0.9
How can I use the mysqld.sym file via gdb? It doesn't like the format.
Copying the stacktrace into another file where I have to edit out a
bunch of junk from gdb for resolve_stack_dump is a bit slow.
It seems that two processes have hung for a while. All the slots have
filled up even
to denormalization didn't make any sense to me. What
level of normal form are you expecting?
Sincerely,
Steven Roussey
http://Network54.com/
I guess the reason for this is that I have some blob fields which are
all used. (Each record consists of approx. 600 KB...)
There it is. MySQL's MyISAM will get the whole record even if it only
needs a part. You can try InnoDB as it does it differently.
[At some point I may try and add
I'm using temporary tables for this but there is a problem.
Temporary tables are visible through the entire connection, so in the
future one browser window can interact with (can display) results from
another browser window. Does anyone have a suggestion how to solve this?
You could do something
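One hypothetical approach (not necessarily what was about to be suggested here): replace the TEMPORARY table with a permanent table keyed by a per-window token that the application generates:

```sql
CREATE TABLE search_results (
  window_token CHAR(32) NOT NULL,   -- generated per browser window
  row_rank     INT      NOT NULL,
  item_id      INT      NOT NULL,
  PRIMARY KEY (window_token, row_rank)
);

-- Each window reads only its own rows:
SELECT item_id FROM search_results
WHERE window_token = 'abc123' ORDER BY row_rank;

-- Clean up when the window is done (or via a periodic sweep):
DELETE FROM search_results WHERE window_token = 'abc123';
```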
!
It'll be sorta like a vmstat that watches the output of SHOW STATUS,
mostly the Com_* counters.
Poor man's version:
watch mysqladmin extended-status
Sincerely,
Steven Roussey
http://Network54.com/
of having it work with several languages on a record basis, rather
than a table basis...
Now if only I had a paying job, I could focus on it and get it done
quicker...
Sincerely,
Steven Roussey
http://Network54.com/
if the issue lies
with the version change or just coincident with it. Can anyone confirm this
behavior?
Sincerely,
Steven Roussey
http://Network54.com/
I use ext3 and have a qps of anywhere from 2800-8000 and use the
defaults with no problems. Have you tried:
iostat -k 1
to look at your disk access? What kind of disks are they anyhow? IDE or
SCSI? RAIDed? In what fashion?
Lastly, you said that this is a script that is running, right? The
-
Yesterday happened to be one of the busiest days for us ever
on our MySQL backed web site. For the entire day MySQL was
hit with up to 1200 queries/second, and many queries were
being delayed at least 2-15 seconds.
-
I know how you feel. We were hitting 7700
Hi all!
I wanted to thank the MySQL team for making such a great product! We
moved from 3.23 to 4.0.x a couple of months ago and everything works
great. Just upgraded to 4.0.6 and glad to see it work out of the box
without a rev 4.0.6a. Those glibc issues were such a pain!
4.0.5a and 4.0.6 have
you described
happens.
Sincerely,
Steven Roussey
http://Network54.com/
0 processes running is beyond me...)
Sincerely,
Steven Roussey
http://Network54.com/
Does InnoDB support ALTER TABLE ... ORDER BY ...? If it weren't for this
command, we would never get the continuous great performance we get from
MySQL. And it keeps us from ever really considering InnoDB. :(
Sincerely,
Steven Roussey
http://Network54.com
There seems to be nothing in the Manual about a lot of things. For
example, the utilities mysqldumpslow, and mysqlcheck, etc.
Sincerely,
Steven Roussey
http://Network54.com/
OK, it seems to be working. The load is not spiraling out of control. :)
Sincerely,
Steven Roussey
http://Network54.com/
-Original Message-
From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
Before I make a 4.0.5a release of the Linux binaries (and finally
announce
4.0.5), could
is that the client is seeing slow queries, but the MySQL server
is not.
Will Gigabit Ethernet alleviate this problem?
TIA!
Sincerely,
Steven Roussey
http://Network54.com/
Comment out the body of the _restore_connection_defaults in php_mysql.c
file in PHP. Recompile, etc. Or don't use persistent connections. Should
be fixed in PHP 4.3.
See http://bugs.php.net/?id=19529 for more info.
Sincerely,
Steven Roussey
http://Network54.com
it and our load
went from 1 to 145 in 20 seconds...
Sincerely,
Steven Roussey
http://Network54.com/
there...
I'll check what our bandwidth utilization is. We don't have a problem
yet.
Sincerely,
Steven Roussey
http://Network54.com/
are
partitionable, then some thinking about that upfront will do you a world
of good later. Depends on your application, though.
7) I'd also appreciate any input from people who have used
official mysql support before.
We have used their support and it was excellent.
Sincerely,
Steven Roussey
http
.
Such a relief! Thanks for all the work to resolve this!
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
, user, passwd, );
#endif
return 0;
}
And change the two copies of this:
if (connect_timeout != -1)
to
if (connect_timeout >= 0)
My 2 cents for the day.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
see where
this would be part of the need.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: PHP Bug Database [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 07, 2002 5:03 am
To: [EMAIL PROTECTED]
Subject: Bug #19529 [Com]: Occational Commands out of sync errors
If MYSQL_OPT_CONNECT_TIMEOUT is set before mysql_connect() or
mysql_real_connect() and the value is set to zero, what is the expected
behavior? (Reason: PHP now does this as the default.)
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
Since updating to 4.2.3, we have been getting intermittent errors of
Commands out of sync. Anyone else see this?
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]]
Hi,
Same problem for me, although it was already here with 4.2.0 for me
(well, it seems to be also a high-QPS problem...). The problem seems to
disappear with an apache
with 4.0.3 in that it works fine for a few
queries, but when I let it go at a normal 3000 queries per second it
choked up and died (much like 3.23.51 -- by the way, which 3.23.x are
you using now?).
Thanks!
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
That optimization is for fields without an index AFAIK.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
there was a web accessible version of the source so I could
quickly go look at it (like PHP's lxr.php.net)...
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: Jocelyn Fournier [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, August 27, 2002 11:48 pm
To: [EMAIL PROTECTED
It did not occur to me until after I sent the last email that the
binlog does not log every query. For our site, it does not even log
every database. So I'm going to look a bit harder at the other log
file...
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
I have two database names that I would like binlog to ignore, how do I
do that?
binlog-ignore-db=db1
works OK for db1, but
binlog-ignore-db=db1 db2
binlog-ignore-db=db1,db2
do not work.
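For what it's worth, the option takes a single database per directive, so repeating the line should do it:

```ini
[mysqld]
binlog-ignore-db = db1
binlog-ignore-db = db2
```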
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
Also,
OS is Red Hat Linux 7.3
glibc is 2.2.5
Kernel is 2.4.18-10smp
Two Athlon MPs and 1.5Gb RAM.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: Lenz Grimmer [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 28, 2002 12:49 am
To: [EMAIL PROTECTED
Lastly, we use the .tar.gz file of the linux binary made by MySQL AB.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
OS is Red Hat Linux 7.3
glibc is 2.2.5
Kernel is 2.4.18-10smp
Two Athlon MPs and 1.5Gb RAM.
the sym file...
I guess I rushed to download too quickly...
I'll write back with more info soon.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
I can't look up a MySQL 4.0.2 crash since the mysqld.sym.gz is empty!
Can someone at MySQL email me the file, please. Pretty please. Thanks!
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
Is there a way to not have mysql put fulltext searches in the slow query
log?
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
Just a note: I tried MySQL 4.0.2 and it works fine. Seems to be only
3.23.51 built by MySQL itself that has the issue. Releases before, and
now a release after (albeit a 4.0.x version) work fine.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
I have MySQL 3.23.47 running on our server
Just an update: I installed a new fresh version of RedHat 7.3 (smp
Athlon) and a new copy of MySQL 3.23.51, but the problem remains.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
read_chan
This is obviously different. 3.23.47 is in tcp_data_wait. 3.23.51 is
usually doing nothing (!) or in suspend. Odd.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: Michael Bacarella [mailto:[EMAIL PROTECTED]]
Sent: Saturday, June 29, 2002 11:20 pm
else?
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
compile with gcc3.1. Note the other guy
that had the same problem that went away after he compiled it himself:
http://marc.theaimsgroup.com/?l=mysql&m=102537522606976&w=2
In that case, I doubt he has the altered glibc compiled in. Could
changes there have this effect?
Sincerely,
Steven Roussey
I tried 'skip-name-resolve' but it had no impact. :( So it may have
nothing to do with name resolution.
Here are the results in file RUN-mysql-Linux_2.4.16_0.13smp_i686:
I'm going to run the tests on .47 next to see if there is any
difference.
Sincerely,
Steven Roussey
http://Network54.com
2098.00 436.22 126.65 562.87
2667247
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message-
From: Michael Widenius [mailto:[EMAIL PROTECTED]]
Sent: Monday, June 24, 2002 4:24 am
To: [EMAIL PROTECTED]
Cc: Steven Roussey; [EMAIL PROTECTED
I used the mysql builds myself. Funny thing is that I use your tool
'mytop', which is very cool by the way, to watch things and it pauses
for about 5-8 seconds when refreshing with .51 and is almost instant
with .47
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
-Original Message
causes
catastrophic problems by making queries last several times longer,
which makes the number of concurrent queries jump exponentially. This is
a bad thing. And sadly makes 3.23.51 unusable.
Does anyone else note these types of issues?
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
was run):
[Steven Roussey]
Yes, I retract my corollary. MySQL can not use indexes on an OR clause
if there is no common prefix to the same index. It can have base level
OR and use an index but only if all the clauses in the OR use the same
index (specifically, some prefix of the index). My bad. Always