First, you might want to move the WHERE condition t3.int_a = some integer
into the join condition for t3.
You're not using anything from t4, so I'm not sure why you have that
table in your query.
You can suggest or force MySQL to use an index if it's using the wrong
one.
I have this query:
SELECT DISTINCT t1.string FROM t1
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a
LEFT JOIN t4 ON t1.int_b=t4.int_b
WHERE
t1.string != '' AND
t2.string IS NULL AND
t3.int_a = some integer
ORDER BY
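A sketch of those two suggestions applied to the query above (the literal 42 stands in for the unspecified integer, and the index name idx_int_a is hypothetical):

```sql
SELECT DISTINCT t1.string
FROM t1
LEFT JOIN t2 ON t1.string = t2.string
-- the t3.int_a condition moved from WHERE into the join condition,
-- with a hint telling MySQL which index to prefer for the join
LEFT JOIN t3 FORCE INDEX (idx_int_a)
       ON t1.int_a = t3.int_a AND t3.int_a = 42
WHERE t1.string != ''
  AND t2.string IS NULL;
```

Note that with a LEFT JOIN, filtering t3 in the ON clause keeps t1 rows that have no matching t3 row, whereas the same test in WHERE discards them.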
I have the following horrible SQL. I need one result that has all the
data in one row. I am currently using 3 subqueries and figure there must
be a better way...
SELECT 'FS_DEV',
ifnull(a.severity, 0) AS aseverity,
I have a table in my database (currently) that grows to be huge (and I
need to keep the data). I'm in a redesign phase and I'm trying to do it
right. So here are the relevant details:
The table has several keys involved:
mysql> desc data__ProcessedDataFrames;
On 6/27/07, Dave G [EMAIL PROTECTED] wrote:
Queries on this table when it gets large are slow as molasses. I'm
thinking about making a new table for anything with a different test_id.
Any opinions as to whether this is good or bad?
Hi Dave G.,
We need to know how:
a) How large is the table
- Original Message -
From: Dave G [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Wednesday, June 27, 2007 11:32 AM
Subject: optimization help
I have a table in my database (currently) that grows to be huge (and I
need to keep the data). I'm
I think I can do that:
I don't have any other indexes, just the keys.
mysql> show create table data__ProcessedDataFrames;
On 6/27/07, Dave G [EMAIL PROTECTED] wrote:
select payload_time, HEX(processed_data) from data__ProcessedDataFrames
where test_id=18 AND top_level_product_name='DataProduct' AND
payload_time > 11808.74704 AND payload_time < 1180564096.24967;
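For a query shaped like that, a composite index with the equality columns first and the range column last usually helps (index name hypothetical):

```sql
ALTER TABLE data__ProcessedDataFrames
  ADD INDEX idx_test_product_time (test_id, top_level_product_name, payload_time);
```

MySQL can then seek directly to test_id=18 and the given product name, and scan only the payload_time range.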
What I'm concerned about is how much data I will
That's quite a query. You may not be able to optimize it well with
those nested selects. You may want to think about changing your query
around a little, perhaps joining pieces of data using whatever
programming language you're using on the front end. You have MySQL
doing a lot of work and
I have the following SQL that works for what I want to see, but is
terribly slow due to the subquery. Any suggestions on how to get the same
result faster? I have tried GROUP BY and can't seem to get the correct
results that way.
Also, is there a product that can help optimize SQL and indexing?
Bryan,
A 'Not Exists' query
(http://www.artfulsoftware.com/infotree/queries.php#41) is usually
faster when coded as an exclusion join, e.g. for max-some-value per key:
left join on the key variable and left.value < right.value, where
right.key is null. So you would need something like ...
SELECT
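The exclusion-join pattern Peter describes looks roughly like this for a greatest-timestamp-per-id query (table and column names hypothetical):

```sql
-- Any row that has a "later" partner for the same id finds a match on
-- the b side; the rows with no match (b.id IS NULL) are the per-id maxima.
SELECT a.id, a.ts, a.val
FROM readings AS a
LEFT JOIN readings AS b
  ON a.id = b.id
 AND a.ts < b.ts
WHERE b.id IS NULL;
```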
From: Peter Brawley [mailto:[EMAIL PROTECTED]]
Sent: Saturday, March 03, 2007 1:40 PM
To: Bryan Cantwell
Cc: mysql@lists.mysql.com
Subject: Re: Need sql optimization help
Hello.
Have a look here:
http://lists.mysql.com/mysql/194596
http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html
If you feel uncomfortable with 10G ibdata size, you may want to
switch to per-file tablespace:
http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
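A minimal my.cnf sketch for that switch (note: tables created before the change stay inside ibdata1 until they are rebuilt, e.g. with ALTER TABLE ... ENGINE=InnoDB):

```ini
[mysqld]
# each new InnoDB table gets its own .ibd file instead of growing ibdata1
innodb_file_per_table
```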
I currently have a customer who we switched over to InnoDB from MyISAM. It's
a Dell server with 4GB of memory and RHEL4 64-bit. It's a fairly big
database; the size of the MyISAM folders (before we converted to InnoDB)
was about 2-3 gigs.
Questions:
1. The ibdata1 file size is 10GB. Does
Shaun Adams wrote:
1. The ibdata1 file size is 10GB. Does that sound right? Should this file
be this big?
That sounds right. InnoDB seems to incur large space overheads, but with
the cost of disk space nowadays...
2. Once a week, I have to perform HUGE insert imports into the
:~ -Original Message-
:~ From: Shaun Adams [mailto:[EMAIL PROTECTED]
:~ Sent: Tuesday, February 07, 2006 4:32 PM
:~ To: mysql@lists.mysql.com
:~ Subject: INNODB Questions and Optimization help
:~
:~ Questions:
:~
:~ 1. The ibdata1 file size is 10GB. Does that sound right? Should
Hello.
I'm not sure if it is suitable for your case, but sometimes it is better
to import the data into a temporary table on the server and extract values
from the fields of that table.
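A sketch of that staging-table approach (the file path, table, and column names are all hypothetical):

```sql
-- 1. Load the whole text file into a flat staging table in one pass.
CREATE TEMPORARY TABLE staging_import (
  part    VARCHAR(64),
  vendor  VARCHAR(64),
  contact VARCHAR(64),
  mpn     VARCHAR(64)
);
LOAD DATA LOCAL INFILE '/tmp/inventory.txt'
  INTO TABLE staging_import
  FIELDS TERMINATED BY '\t';

-- 2. Fan the columns out to the normalized tables with INSERT ... SELECT,
--    one set-based statement per target table instead of row-by-row inserts.
INSERT INTO vendors (name)
  SELECT DISTINCT vendor FROM staging_import;
```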
Scott Klarenbach wrote:
I am importing records from a text file into my DB. Each record in the text
file
I am importing records from a text file into my DB. Each record in the text
file corresponds to six tables in the DB due to normalization.
So for example, an inventory record containing a part, vendor,
contactPerson, mpn etc...in ONE line of a text file, requires 6 inserts into
the DB.
Further
Hi!
Daniel Kasak wrote:
[EMAIL PROTECTED] wrote:
Views differ from tables in that they cannot be indexed.
I've just started experimenting with derived tables under 4.1.14, and I
had a hunch this was so. Is there any plan to include index support for
views / derived tables?
An index is a
[EMAIL PROTECTED] wrote:
Views differ from tables in that they cannot be indexed.
I've just started experimenting with derived tables under 4.1.14, and I
had a hunch this was so. Is there any plan to include index support for
views / derived tables?
--
Daniel Kasak
IT Developer
NUS
Thanks a lot Shawn.
I didn't realize that views don't take advantage of indexing. This is the
cause of my major performance hits. I'm basically using views as a form of
DB abstraction over the tables. So, many of my views pull all records from
all tables they join, and it is up to the user to
Hello,
I'm new to views and am discovering massive performance hits in the views
I've created once the records start to climb above 20,000 or so. Does
anyone know of a great primer/tutorial site for optimizing views in MySQL,
or even generally? What are the best practices etc...? I find when I
Scott Klarenbach [EMAIL PROTECTED] wrote on 01/06/2006 08:13:10
PM:
Hello,
I'm new to views and am discovering massive performance hits in the
views
I've created once the records start to climb above 20,000 or so. Does
anyone know of a great primer/tutorial site for optimizing views in
Hi I need some optimization help.
I currently have this query:
SELECT COUNT(*) AS count, SUM(p.amt) AS total, p.batch, m.username, m.id,
       p.method, DATE_FORMAT(p.occur, '%M %d, %Y') AS dateBought
FROM pay p
LEFT JOIN members m ON m.id = p.mid
WHERE p.method != 'none' AND p.method != ''
GROUP BY p.batch
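One common first step for a GROUP BY like that is an index on the grouping column, so MySQL can avoid a temporary table and filesort (index name hypothetical):

```sql
ALTER TABLE pay ADD INDEX idx_batch (batch);
```

Comparing EXPLAIN output before and after should show whether "Using temporary; Using filesort" goes away.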
I asked for help with a version of this query a few months
ago, and subsequently made some improvements to it, and also
fooled around with other parts of my app that were in need of
tuning. I've since done some more extensive benchmarking and
realized that this query really is somewhat slow. Even
Chuck Gadd wrote:
I've got a query that I can't seem to get optimized, so I'm
hoping someone here can spot something I'm missing!
Table has three columns:
CoordID int unsigned,
Zip_Lo char(9),
Zip_Hi char(9)
Table has 3 million records
indexes:
acg_lo (Zip_Lo)
acg_hi (Zip_Hi)
acg_combined
optimization help:
Maybe I'm wrong here, someone correct me: if it's just ints you are going
to use, set the field types to BIGINT; it may search faster. You are doing
a character search; to get there quicker in a text-search scenario I'd
suggest MySQL 4 and full-text searching with MATCH ... AGAINST.
I've got
I've got a query that I can't seem to get optimized, so I'm
hoping someone here can spot something I'm missing!
Table has three columns:
CoordID int unsigned,
Zip_Lo char(9),
Zip_Hi char(9)
Table has 3 million records
indexes:
acg_lo (Zip_Lo)
acg_hi (Zip_Hi)
acg_combined (Zip_Lo, Zip_Hi)
Hi all -
I have a speed problem that I don't understand. I've been pretty active
with DB's for a few years, but I'm no expert, so let me know if I'm
missing the obvious. I have Paul DuBois' MySQL book (New Riders
edition) and Descartes and Bunce's Programming DBI book on my desk, so
feel free
I think... you don't have an index on the Incident field itself, just
on (Date, Incident, Type, Task) which means that it concatenates those
fields and orders the result - thus this may be virtually useless if
you're looking for a specific incident within a large date range. Since
your query
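The fix that explanation points at is a separate index on Incident, so lookups on it no longer depend on the (Date, Incident, Type, Task) prefix order (table name hypothetical):

```sql
ALTER TABLE incidents ADD INDEX idx_incident (Incident);
```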
On Jan 12, 2004, at 08:09 AM, Douglas Sims wrote:
I think... you don't have an index on the Incident field itself,
just on (Date, Incident, Type, Task) which means that it concatenates
those fields and orders the result - thus this may be virtually
useless if you're looking for a specific
Hi,
I have a perl script that loops through and executes 2 queries 50
times. I need to make sure that I have done all I can to make these
queries and the indexing on the table as efficient as possible.
Would someone do me the gargantuan favor of taking a peek at the info
below and offer any