The query SELECT * FROM XYZ WHERE VAL IN ('BLUE','RED','GREEN'); will run
faster if the field is an ENUM
On Fri, Nov 16, 2012 at 12:36 AM, Anupam Karmarkar
sb_akarmar...@yahoo.com wrote:
Hi All,
Consider a scenario: I have a table XYZ which contains the following values:
BLUE
RED
GREEN
NULL
It depends on the distribution of the 4 'values' in that field.
If the cardinality is poor, then INDEX(VAL) won't be used, and they will all do
a table scan.
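A minimal sketch of the schema under discussion (the id column, type choice, and index name are assumptions for illustration, not from the thread):

```sql
CREATE TABLE XYZ (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  VAL ENUM('BLUE','RED','GREEN') DEFAULT NULL,
  INDEX idx_val (VAL)
);

-- With only four distinct values, the optimizer will often judge
-- idx_val unselective and fall back to a full table scan anyway:
SELECT * FROM XYZ WHERE VAL IN ('BLUE','RED','GREEN');
```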
-Original Message-
From: Anupam Karmarkar [mailto:sb_akarmar...@yahoo.com]
Sent: Friday, November 16, 2012 12:36 AM
To:
Your outer query, select cpe_mac, max(r3_dt) from rad_r3cap, is doing a full
table scan; you might want to check on this and use a WHERE condition to
use an indexed column
On Fri, Sep 23, 2011 at 12:14 AM, supr_star suprstar1...@yahoo.com wrote:
I have a table with 24 million rows, I need to
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <=
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?
Andy
On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps bphe...@gls.com wrote:
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not sure how I could
archive/cache the necessary data that would be any more efficient than simply
using the database directly.
On 09/08/2011 02:16 PM,
I don't think I saw any query timings in the emails (maybe I missed them).
What version of MySQL are you currently using?
What does the explain look like when your remove the limit 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your
If you're running version 5.1+ you may wanna take a look at table partitioning
options you may have.
On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not
Andy,
The queries take minutes to run. MySQL is 5.1.54 and it's running on Ubuntu
server 11.04. Unfortunately the machine only has 2GB of RAM but no other major
daemons are running on the machine. We are running RAID 1 (mirroring) with 1TB
drives. The tables in question here are all
Correct me if I'm wrong. You're wanting to get all records that have an
open_date or a close_date between two times.
If that's correct, you might be able to get an index_merge by doing a query
like:
WHERE ((starting time) <= open_dt <= (ending time)) OR ((starting
time) <= close_dt <= (ending time))
Mihail,
I have considered this but have not yet determined how best to go about
partitioning the table. I don't think partitioning by dst_address or
src_address would help because most of the queries do not filter on IP address
(except very specific queries where the end-user is searching
Thanks for the idea Derek, however given the following query my EXPLAIN output
is identical:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
Partitioning isn't a bad idea for this however I'm still thinking about your
dataset size and possible hardware limitations. It's not likely going to fit
into relevant buffers/memory so you're going to be on disk more than you
want. You're probably creating temporary tables like crazy and I would
How about:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
Andrew,
Generally there is only 1 user performing the complicated SELECT query at a time, however
the background process that fills the table is constantly doing a fast SELECT (0.3
seconds) and a subsequent UPDATE. Basically whenever a connection is closed on the
firewall, the bg process
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast (0.0007 seconds).
Question, if a record's open_dt is between the range AND the close_dt is
between the range as well, will the UNION output the record
From the manual: The default behavior for UNION is that duplicate rows are
removed from the result.
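A sketch of the UNION shape being discussed: a row whose open_dt and close_dt both fall inside the range matches both branches, but plain UNION (unlike UNION ALL) emits it once. The table name `connections` is a stand-in, since only the alias sc appears in the thread:

```sql
-- Each branch can use its own single-column index; UNION deduplicates.
(SELECT open_dt, close_dt, protocol FROM connections
 WHERE open_dt  BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53')
UNION
(SELECT open_dt, close_dt, protocol FROM connections
 WHERE close_dt BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53')
ORDER BY open_dt DESC
LIMIT 10;
```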
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast
Ah I see. Well thanks for your assistance!
-Brandon
On 09/08/2011 05:21 PM, Mihail Manolov wrote:
From the manual: The default behavior for UNION is that duplicate rows are removed
from the result.
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified
On Aug 30, 2011 6:46 PM, Brandon Phelps wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
12:36:53')
In that case your logic here
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
There are a few things going on, but mainly it is the ORs that are killing
you.
As you require OR to examine two distinct columns, both of equal relevance
to the query, MySQL is left with no choice but to do a full table scan on
what might be (at a guess) a very large table. No amount of
Thanks Singer, this took my query down to 0.0007, perfect! I wasn't
aware a single index of multiple columns would work when one of the
columns was in the WHERE clause and the other in the ORDER BY clause.
Learn something new every day I guess!
On 08/10/2011 02:03 PM, Singer X.J. Wang
On 8/10/2011 1:01 PM, Brandon Phelps wrote:
Hello all,
I am using the query below and variations of it to query a database
with a TON of records. Currently the database has around 11 million
records but it grows every day and should cap out at around 150 million.
I am curious if there is
From: Brandon Phelps bphe...@gls.com
I am curious if there is any way I can better optimize the below query, as
currently it takes this query around 10 seconds to run but I am sure this
will get slower and slower as the database grows.
You need an index on `close_dt`.
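What adding that index might look like, as a sketch (`connections` is a placeholder for the real table behind the alias sc):

```sql
ALTER TABLE connections ADD INDEX idx_close_dt (close_dt);

-- Confirm the optimizer can actually use it:
EXPLAIN SELECT open_dt, close_dt
FROM connections
WHERE close_dt <= '2011-08-30 12:36:53';
```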
SELECT
Hi Ciaran,
So I think there's a couple things going on:
1. The explain plan for your slow query looks wrong, as if MySQL is
confused. It's possible your index statistics are incorrect. Try ANALYZE
TABLE on listings and addresses.
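As a sketch, that advice could be applied like this (the join column is an assumption for illustration):

```sql
-- Rebuild index statistics so the optimizer sees accurate cardinality:
ANALYZE TABLE listings;
ANALYZE TABLE addresses;

-- STRAIGHT_JOIN forces MySQL to join the tables in the order written,
-- overriding a bad plan:
SELECT STRAIGHT_JOIN l.*
FROM listings l
JOIN addresses a ON a.listing_id = l.id;
```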
I think a sure way to fix it is to add STRAIGHT_JOIN to
Hi Gavin,
Thanks very much, I'll implement as many of your suggestions as
possible. The varchar(255)'s are inexcusable and I feel suitably ashamed :)
The queries were generated by ActiveRecord (an ORM library for Ruby),
although even if I had written them myself they would probably not be much
The index hint is not in production code. I was trying to force it to use
the index even when using the OR clause; meant to take that out before I
sent the email.
The table structure is:
CREATE TABLE `customer` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`ssn` varchar(32) DEFAULT NULL,
Try a union instead of an or condition.
http://dev.mysql.com/doc/refman/5.0/en/union.html
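Using the table layout quoted elsewhere in the thread, the OR-to-UNION rewrite might look like this sketch (the literal values are placeholders):

```sql
-- Original shape: OR across two tables' indexed columns often forces a scan.
SELECT c.id
FROM customer c
JOIN customer_id ci ON ci.customer_id = c.id
WHERE c.ssn = '123456789' OR ci.id_num = '987654';

-- Rewrite: each branch can use its own index; UNION removes duplicates.
SELECT c.id FROM customer c
WHERE c.ssn = '123456789'
UNION
SELECT ci.customer_id FROM customer_id ci
WHERE ci.id_num = '987654';
```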
Johnny Withers wrote:
I have the following tables:
Customer: id,ssn
Customer_Id: id,customer_id,id_num
The customer table holds customers along with their SSN and the customer_id
table holds
Do you have an index on id_num? What sort of explain output do you
get when you don't use a query hint? Your USE INDEX hint may be
causing MySQL to ignore a better strategy.
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only in 5.0+ - on older versions of MySQL you may find a union
more efficient.
And in newer
On Tue, Jan 13, 2009 at 7:07 PM, Baron Schwartz ba...@xaprb.com wrote:
If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization. See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only in 5.0+ - on older
First, you might want to move the WHERE...t3.int_a = some integer
condition into the join condition for t3.
You're not using anything from t4, so I'm not sure why you have that
table in your query.
You can suggest or force mysql to use an index if it's using the wrong
one:
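The two hint forms might look like this sketch (table, index, and literal are placeholders):

```sql
-- USE INDEX suggests an index; the optimizer may still ignore it.
SELECT * FROM t3 USE INDEX (idx_int_a)   WHERE t3.int_a = 42;

-- FORCE INDEX makes a table scan look very expensive, so the index
-- is used whenever it can satisfy the query.
SELECT * FROM t3 FORCE INDEX (idx_int_a) WHERE t3.int_a = 42;
```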
- What I'm trying to do:
Bit hard to explain. I've got a table consisting of ip addresses
(ipv4_src), destination addresses (ipv4_dst), and port numbers
(port_dst) and some other irrelevant columns. Ultimately my goal is to
find a linear order in a subset of ports. For example, host A connects
to
Joris Kinable schrieb:
Optimize query
I've got one query, which I would like to improve a lot since it takes
very long (24 hours) to execute. Here is the idea:
1. Take the columns ipv4_src, ipv4_dst, port_dst (other columns in this
table are not mentioned for clarity) and remove all duplicate
tuples.
[EMAIL PROTECTED] wrote:
Yes, a ranged query should respond faster than a negation. In some cases
you can seriously improve query performance for a negation query if you
split it into two range queries unioned together. Here is a pseudo
example:
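The pseudo example above might be reconstructed as the following sketch (table and column names are assumptions):

```sql
-- Negation: cannot be satisfied by a single range scan.
SELECT * FROM t WHERE t.type <> 0;

-- Equivalent pair of range scans; UNION ALL is safe here because the
-- two ranges cannot overlap, and it skips the deduplication pass.
SELECT * FROM t WHERE t.type < 0
UNION ALL
SELECT * FROM t WHERE t.type > 0;
```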
This query should be slow due to the table
Robert DiFalco [EMAIL PROTECTED] wrote on 03/11/2006 12:43:43 PM:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
Shawn,
Any performance gains for specifying type > 0 than type <> 0?
R.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, March 13, 2006 6:37 AM
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Query Optimization Question
Robert
To: Robert DiFalco
Cc: mysql@lists.mysql.com
Subject: Re: Query Optimization Question
Robert DiFalco [EMAIL PROTECTED] wrote on 03/11/2006 12:43:43
PM:
In a previous database engine I was using, an IN was more optimal than
a <>. So, for example:
SELECT * FROM table WHERE
Subject: RE: Query Optimization Question
Yes, a ranged query should respond faster than a negation. In some cases
you can seriously improve query performance for a negation query if you
split it into two range queries unioned together. Here is a pseudo
example:
This query should be slow due
Robert DiFalco wrote:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
I've
Robert DiFalco wrote:
In a previous database engine I was using, an IN was more optimal than a
<>. So, for example:
SELECT * FROM table WHERE table.type IN (1,2,3);
Where the possible values of type are 0-3, was appreciably faster than:
SELECT * FROM table WHERE table.type <> 0;
IN
Hi Tripp, all!
Tripp Bishop wrote:
[[...]]
Why would adding extra fields to the query slow the
query down so much? In either case it looks like it's
using the same index.
If a query is to return only fields contained in the index, it can be
executed as an index-only query, and the true
It might be a good idea if you could post the results of an EXPLAIN SELECT ...
for a fast query and a slow query along with their actual SQL statements.
James Harvard
At 11:37 am -0800 12/1/06, Tripp Bishop wrote:
I've got a question regarding optimizing a query.
--
MySQL General Mailing List
Jim, here's an example query:
SELECT COUNT(listing.listing_id) AS listing_count FROM
listing_city_xref INNER JOIN listing
ON listing.listing_id = listing_city_xref.listing_id
AND listing.group_id = listing_city_xref.group_id
WHERE listing_city_xref.group_id =2
AND listing_city_xref.city_id IN
Gerald Taylor wrote:
Query optimization question
I am selecting from a single table but it has a lot of rows and it
has a very involved calculation. What I really want to do is
FIRST restrict the number of rows so that the big calculation is only
performed on the ones that are within 3
When you don't have subselects, you have two options: temporary tables or
JOINed queries. In your case, I think the temporary table is the better way
to go.
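A sketch of the temporary-table approach (table, column, and variable names are assumptions, and the SQRT/POW expression is only a stand-in for the real involved calculation):

```sql
-- Step 1: cheap range filter into a temporary table, so the expensive
-- expression never touches the excluded rows.
CREATE TEMPORARY TABLE candidates AS
SELECT * FROM big_table
WHERE zip_code BETWEEN @target - 3 AND @target + 3;

-- Step 2: run the expensive expression only on the survivors.
SELECT c.*,
       SQRT(POW(c.x - @px, 2) + POW(c.y - @py, 2)) AS score
FROM candidates c
ORDER BY score;
```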
I would also eliminate the ABS() check so that I can compare values
directly against the index. I know the math is correct your way but
I'm trying to sort out a query that identifies images that are not in the
story table (structures below). Peter Brawley has kindly pointed me in the
right direction with the sql structure (which does work), but I'm getting
benchmarks of 5+ seconds on the test data, whereas the live site has
At 21:55 -0300 3/7/04, Volnei Galbino wrote:
Hi,
Does anybody know where I can find information about query
optimization in MySQL? Of which the techniques that are used?
Regards,
Volnei Galbino
Yes, there's a chapter on optimization in the MySQL Reference Manual.
Chuck Gadd wrote:
I've got a query that I can't seem to get optimized, so I'm
hoping someone here can spot something I'm missing!
Table has three columns:
CoordID int unsigned,
Zip_Lo char(9),
Zip_Hi char(9)
Table has 3 million records
indexes:
acg_lo (Zip_Lo)
acg_hi (Zip_Hi)
acg_combined
Without seeing the data, I am assuming that you are going over the 30%
threshold with your <= / >= WHERE clauses. What sort of
criteria are you asking the database engine to search for?
Original Message
On 2/25/04, 9:44:02 PM, [EMAIL PROTECTED] wrote regarding Re: Query
Maybe I'm wrong here, someone correct me: if it's just ints you are gonna
use, set the field types to BIGINT; it may search faster. You are doing a
character search; to get there quicker in a text-search scenario I'd
suggest MySQL 4 and full-text searching (MATCH ... AGAINST)
I've got a query that I
Mysql 4.0.1+ supports searches IN BOOLEAN MODE
So if mysql 4 is an option, I think that would work
for you.
http://www.mysql.com/doc/en/Fulltext_Search.html
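In boolean mode the query shape might look like this sketch (table and column names are assumptions; in that era FULLTEXT indexes required MyISAM tables):

```sql
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);

-- '+' marks words that must be present:
SELECT id, title FROM articles
WHERE MATCH (body) AGAINST ('+catcher +rye' IN BOOLEAN MODE);
```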
olinux
--- Andy Ingham [EMAIL PROTECTED] wrote:
Folks --
We have built a table with bibliographic information
that contains a
Actually, I'm not convinced it would, because I want to be able to
find a string that is *NOT fully contained* within any given field.
In other words, I want a search for
The catcher in%
to match the row:
+-+--+
| article |
That is not valid SQL.
A table can't be LIKE something.
I think you omitted the field name.
I also don't see why you need a join, unless you have duplicate ids.
Gabe wrote:
First off, please excuse me if this is the wrong list to post this sort of question
to (and please direct me to the
I'm sorry, I guess I should make sure the query syntax is correct when asking a
question... Here it is with the correct
field names:
SELECT main.*
FROM kw_table, kw_table AS kw_table1, main
WHERE kw_table.keyword LIKE 'first word%' AND kw_table1.keyword LIKE '2nd word%' AND
main.id =
Hi, I did some more research on my problem. (Sorry for not doing it before
I posted the other message), and I think perhaps my question could be
summarized into the following.
Can I use an index to speed up a select max() or select min()?
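The short answer from the optimizer's side, as a sketch (table and column names are hypothetical): with a B-tree index on the column, MySQL can answer MIN() and MAX() from the ends of the index without scanning rows.

```sql
CREATE INDEX idx_val ON t (val);

SELECT MIN(val) FROM t;  -- read from the first entry of idx_val
SELECT MAX(val) FROM t;  -- read from the last entry of idx_val
```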
I read what the manual has to say about a query like
In the last episode (Jul 24), Dave Dutcher said:
I was wondering if somebody could give me any suggestions on how to optimize
a query I am working on.
This is my table:
CREATE TABLE testdata (
Begin char(9) NOT NULL default '',
End char(9) NOT NULL default '',
UNIQUE KEY
Pada Mon, 8 Jul 2002 16:07:04 +0530
Arul [EMAIL PROTECTED] menulis :
Hi All
The Query below took around 175 Secs to return 22 Rows. Any way this query
can be optimized?
try to use group by instead of distinct, i usually found group by is faster ;-)
anyway ... performance of query is
, July 08, 2002 4:25 PM
Subject: Re: Query Optimization
Pada Mon, 8 Jul 2002 16:07:04 +0530
Arul [EMAIL PROTECTED] menulis :
Hi All
The Query below took around 175 Secs to return 22 Rows. Any way this
query can be optimized?
try to use group by instead of distinct i usually found
On Monday, 8. July 2002 12:37, Arul wrote:
Hi All
The Query below took around 175 Secs to return 22 Rows. Any way this query
can be optimized?
For analyzing your problem, you should send your table definition and an
output of the EXPLAIN statement.
Regards Georg
On 8 Jul 2002, at 16:07, Arul [EMAIL PROTECTED] wrote:
FROM
User_Type_Details UTD,
User_Type_Details B , Users U ,TempGeneralSearch1 T1,Users
A LEFT JOIN Company C on (U.CompanyID = C.CompanyID) LEFT
JOIN Expert_Info EI on (U.UserID =
Jeff,
Let's do some math here first. Your table has 47 Million rows. You
have a primary key on an int, and indexes across 22 chars and 10 chars.
You have 512MB of key buffer.
Now, assuming the indexes just have information on relevant fields
(they're actually bigger than this since they have
On Thu, Aug 23, 2001 at 11:11:38AM -0700, Steven Roussey wrote:
mysql explain select * from dominf.domip order by ip desc;
'ORDER BY something DESC' does not use an index. It must
be ascending. See the manual.
Also, I forgot, if you do this often and it needs to be in DESC
order,
mysql explain select * from dominf.domip order by ip desc;
'ORDER BY something DESC' does not use an index. It must
be ascending. See the manual.
Also, I forgot, if you do this often and it needs to be in DESC order, you
can always alter the app to create and use an extra column 'ip_desc
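A sketch of that extra-column workaround (column types and names are assumptions): store a reversed copy of the value, index it, and an ascending index scan then yields the rows in descending ip order.

```sql
ALTER TABLE domip ADD COLUMN ip_desc INT UNSIGNED,
                  ADD INDEX idx_ip_desc (ip_desc);

-- Subtracting from the maximum 32-bit value reverses the sort order:
UPDATE domip SET ip_desc = 4294967295 - ip;

-- Ascending scan on ip_desc = descending order on ip:
SELECT * FROM domip ORDER BY ip_desc;
```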
Peter Zaitsev wrote:
Hello mysql,
I was trying to dump a very huge table (5GB, 45 mil rows) to a text dump in
sorted order (according to primary key).
The problem is MySQL somehow does not want to scan the table by
primary key to produce sorted output row by row, but prefers to use
Rafal Jank wrote:
Peter Zaitsev wrote:
Hello mysql,
I was trying to dump a very huge table (5GB, 45 mil rows) to a text dump in
sorted order (according to primary key).
The problem is MySQL somehow does not want to scan the table by
primary key to produce sorted output row by
mysql explain select * from dominf.domip order by ip desc;
'ORDER BY something DESC' does not use an index. It must be ascending. See
the manual.
Sincerely,
Steven Roussey
http://Network54.com/?pp=e
Hello
On Sat, 17 Mar 2001, Bård Farstad wrote:
I have a query in my program which is taking too long when I get data
in the database. I'm currently testing with ~40.000 articles in the
system. [...] Is it possible to optimize this query further?
Have you played with EXPLAIN SELECT and
Hi,
Without a where clause in your initial query, you will never get to use an
index.
How many rows are there in sessobjs? From the start of the sample output, it
looks like there are several thousand, so asking a complex question may,
indeed, take a long time.
You could create a result table,
Shouldn't the first query have parenthesis? As in:
DELETE FROM test WHERE userID=X AND (testID<20 OR
testID>80);
Even if it works the other way, parenthesis make it more clear what
you're trying to accomplish.
-Angela
Michael Griffith wrote:
When using mySQL I've learned to avoid OR
Shouldn't the first query have parenthesis? As in:
DELETE FROM test WHERE userID=X AND (testID<20 OR
testID>80);
Even if it works the other way, parenthesis make it more clear what
you're trying to accomplish.
-Angela
Actually, yes. Sorry about the poor example. My point is that
In the last episode (Feb 01), Michael Griffith said:
CREATE TABLE test(
userID int, # (non-unique)
testID int, # (non-unique)
PRIMARY key(testid,userid)
);
Suppose this table is populated with 1,000,000 rows. Then do this:
DELETE FROM test WHERE userID= AND