I did the following test:
My program still uses MySQL 5.7 libmysqlclient.so, but I connect now to a
5.6.16 server.
And the SQL interrupt works fine... so I suspect there is a MySQL server issue
in 5.7.
Seb
On 07/12/2016 01:01 PM, Sebastien FLAESCH wrote:
About:
> B) For some reason, the program does not want to exit() - (must investigate)
In fact we use prepared statements with a sequence of mysql_stmt_init,
mysql_stmt_prepare, mysql_stmt_execute, mysql_stmt_fetch, ... and
mysql_stmt_close.
After the statement was interrupted, we try to free
You need to GROUP BY those fields NOT in the aggregate function. Like:
SELECT f.id,f.name,sum(p.price)
FROM fruit f
left join purchase p on f.id = p.fruit
where p.price is not null
group by f.id,f.name;
1, 'Apples', 2
2, 'Grapes', 6.5
4, 'Kiwis', 4
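The GROUP BY fix above can be checked in a quick runnable sketch (SQLite via Python here, with table and row values taken from the sample output in the thread rather than the poster's real schema; ORDER BY added only to make the result order stable):

```python
import sqlite3

# In-memory stand-in for the fruit/purchase tables from the thread.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fruit (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE purchase (fruit INTEGER, price REAL);
INSERT INTO fruit VALUES (1,'Apples'),(2,'Grapes'),(4,'Kiwis');
INSERT INTO purchase VALUES (1,2),(2,6.5),(4,4);
""")

# Every non-aggregated column in the SELECT list also appears in GROUP BY.
rows = conn.execute("""
SELECT f.id, f.name, SUM(p.price)
FROM fruit f
LEFT JOIN purchase p ON f.id = p.fruit
WHERE p.price IS NOT NULL
GROUP BY f.id, f.name
ORDER BY f.id
""").fetchall()
print(rows)  # [(1, 'Apples', 2.0), (2, 'Grapes', 6.5), (4, 'Kiwis', 4.0)]
```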
On 2015-10-23 04:15, Don Wieland wrote:
Hi
On 10/22/2015 11:48 AM, Don Wieland wrote:
On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
Which release of MySQL are you using?
Version 5.5.45-cll
How many rows do you get if you remove the GROUP_CONCAT operator? We don't need
to see the results.
I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
of i.fk...? It is the actual value you are selecting as well as being on
the primary table in the query.
On Thu, Oct 22, 2015, 5:18 PM Don Wieland wrote:
> Hi gang,
>
> I have a query:
>
> SELECT
>
> On Oct 22, 2015, at 2:41 PM, Michael Dykman wrote:
>
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.
Yeah I tried that - actually the
One more guess:
Try explicitly aliasing the fields of interest and using those aliases
exclusively throughout the rest of the expression.
SELECT
p.pk_ProductID as pid,
p.Description as dsc,
SUM(i.Quantity) as totl
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON pid =
> On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
>
> Which release of MySQL are you using?
Version 5.5.45-cll
> How many rows do you get if you remove the GROUP_CONCAT operator? We don't
> need to see the results. (sometimes it is a good idea to look at the raw,
- Original Message -
> From: "Shawn Green" <shawn.l.gr...@oracle.com>
> Subject: Re: Query optimizer-miss with unqualified expressions, bug or
> feature?
>
> On a more serious note, indexes with limited cardinality are less useful
> than those with
On 2015-10-20 12:54 PM, Don Wieland wrote:
Hi all,
Trying to get a query working:
SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use two starts: one on 'a > 0' and one on 'a < 0', taking a union
of the result? Which might make a significant difference to something?
Ben.
On 2015-10-19 14:19, Roy Lyseng
Hi Ben,
On 19.10.15 15.10, Ben Clewett wrote:
I have noticed that an unqualified boolean expression cannot be optimized by
MySQL to use an index in 5.6.24.
For example:
CREATE TABLE t (
i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
a BOOLEAN NOT NULL,
KEY a (a)
) ENGINE=InnoDB;
This
Hi Shawn,
On 19.10.15 22.33, shawn l.green wrote:
On 10/19/2015 3:48 PM, Roy Lyseng wrote:
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use
two starts: one on 'a > 0' and
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and then use
two starts: one on 'a > 0' and one on 'a < 0', taking a union of the result?
Which might make a significant result to
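The hypothetical rewrite discussed in this thread - splitting an unqualified boolean test into two indexable range scans and taking the union - can be sketched on the example table. This uses SQLite via Python, so it only demonstrates that the results are equivalent, not how MySQL's optimizer would plan it:

```python
import sqlite3

# Example table from the thread, recreated in SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (i INTEGER PRIMARY KEY AUTOINCREMENT, a BOOLEAN NOT NULL);
CREATE INDEX a_idx ON t(a);
INSERT INTO t(a) VALUES (1),(0),(1),(0),(1);
""")

unqualified = conn.execute("SELECT i FROM t WHERE a ORDER BY i").fetchall()

# The union of the two ranges 'a > 0' and 'a < 0' covers every true value
# without ever testing the unqualified expression directly.
as_union = conn.execute("""
SELECT i FROM t WHERE a > 0
UNION
SELECT i FROM t WHERE a < 0
ORDER BY i
""").fetchall()
print(unqualified == as_union)  # True
```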
On 2014-10-08 1:38 PM, Jan Steinman wrote:
I often use CASE WHEN ... to pivot tables. For example, splitting sales data by
year:
SELECT
s_product.name AS `Product`,
SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS
`2007`,
SUM(CASE WHEN
2014/10/08 11:38 -0700, Jan Steinman
However, this pattern will often result in numerous empty columns -- empties
that would not be there had the table not been pivoted.
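The CASE WHEN pivot pattern, including the empty cells mentioned above, can be seen in a small runnable sketch (SQLite via Python; table and column names are invented stand-ins, and `strftime` plays the role of MySQL's `YEAR()`):

```python
import sqlite3

# Toy sales data: 'Corn' has no 2008 rows, so its 2008 cell pivots to NULL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (product TEXT, Date TEXT, Total REAL);
INSERT INTO sales VALUES
 ('Beets','2007-05-01',10),('Beets','2008-05-01',15),
 ('Corn','2007-06-01',20);
""")

rows = conn.execute("""
SELECT product,
       SUM(CASE WHEN strftime('%Y', Date)='2007' THEN Total ELSE NULL END) AS "2007",
       SUM(CASE WHEN strftime('%Y', Date)='2008' THEN Total ELSE NULL END) AS "2008"
FROM sales
GROUP BY product
ORDER BY product
""").fetchall()
print(rows)  # [('Beets', 10.0, 15.0), ('Corn', 20.0, None)]
```

The `None` in Corn's 2008 column is exactly the kind of empty cell the follow-up message describes.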
2014/10/08 16:42 -0500, Peter Brawley
MySQL stored procedures are less incomplete, and can do it, but they're
awkward.
Good to know. Keep up the good work, cheers!!
--
*Wagner Bianchi, MySQL Database Specialist*
Mobile: +55.31.8654.9510
E-mail: m...@wagnerbianchi.com
Twitter: @wagnerbianchijr
2014-09-06 3:01 GMT-03:00 Ajay Garg ajaygargn...@gmail.com:
Hi Wagner.
That is what I did as the last
Hi Martin.
Thanks for the reply.
As I had mentioned, we have been running both the instances for the last
6 years or so, and the records are inserted/deleted on both the
instances.
So, we did a show table status like 'XX' \G; on both the
instances, and following are the outputs (here XX is the
Date: Sun, 7 Sep 2014 23:06:09 +0530
Subject: Re: Query on some MySQL-internals
From: ajaygargn...@gmail.com
To: mgai...@hotmail.com
CC: mysql@lists.mysql.com
Hi Martin.
Thanks for the reply.
As I had mentioned, we are running both
Hi Wagner.
That is what I did as the last resort, and that is only what solved the issue.
Thanks.
On Fri, Sep 5, 2014 at 1:52 AM, wagnerbianchi.com m...@wagnerbianchi.com
wrote:
You can try these steps:
1-) Stop slave and write down the replication coordinates getting that in
MySQL's
Date: Sat, 6 Sep 2014 14:26:22 +0530
Subject: Query on some MySQL-internals
From: ajaygargn...@gmail.com
To: mysql@lists.mysql.com
Hi all.
We are facing a very strange scenario.
We have two mysql-instances running on the same machine, and they had
been running functionally fine
Hi all.
Unfortunately, I have run into the logs, as described at
http://bugs.mysql.com/bug.php?id=71495
Unfortunately, the issue does not go away, even after reverting back
to slave-parallel-workers=0 in my.cnf, and restarting the mysql
instance.
Any quick idea, as to how we may get the
You can try these steps:
1-) Stop slave and write down the replication coordinates getting that in
MySQL's error log (*very important step*);
2-) Issue the `reset slave` command on MySQL Slave;
3-) Issue the CHANGE MASTER TO considering the replication coordinates
you've just written down on step
Ping !! :)
On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg ajaygargn...@gmail.com wrote:
Hi all.
We have replication set-up, where we cater to HUUGEE amounts of data.
Since quite some time, we have been facing issues wherein the slave
lags behind master quite a lot.
So, yesterday we were able
Thanks Akshay for the reply.
On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
akshay.suryavansh...@gmail.com wrote:
Hello Ajay,
I tried testing slave-parallel-workers a few months ago; what I can surely
tell you is that it's still under development, and at that time it needed some
critical bug
Am 14.07.2014 12:48, schrieb Satendra:
Hi there, I'm struggling to find the total time taken by a database query
on disk. As I understand it, when a database query starts executing it spends
some time inside the database engine and some time seeking the result from
disk (if that is not in
Satendra,
Google 'show profile' as it may give you all the information that you need.
There is a lot more details in the performance_schema if you want to dig
into it, but it can be quite difficult to get out. Here is one place to
start if you want to pursue that angle:
Hi Satendra,
On Jul 14, 2014, at 3:48 AM, Satendra stdra...@gmail.com wrote:
Hi there, I'm struggling to find the total time taken by a database query
on disk. As I understand it, when a database query starts executing it spends
some time inside the database engine and some time seeking the result
Hello Richard,
the count(*) for each week of 2013 so that I end up with:
http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html
Ilya.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina gatorre...@gmail.com wrote:
Hello All,
Happy Friday! I know how to do the following query:
select count(*) from sales where WEEK(sale_date)=15 AND
YEAR(sale_date)=2013;
But can someone tell me how I can do a query that will give me:
the count(*)
Perfect! Thank you Larry et al.
Have a great weekend.
2013/4/19 Larry Martell larry.mart...@gmail.com
On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina gatorre...@gmail.com
wrote:
Hello All,
Happy Friday! I know how to do the following query:
select count(*) from sales where
On 3/31/2013 7:32 AM, william drescher wrote:
I have a table, schedule:
CREATE TABLE `schedule` (
`schedule_id` mediumint(9) NOT NULL AUTO_INCREMENT,
`provider` varchar(15) NOT NULL,
`apptTime` datetime NOT NULL,
`location` varchar(10) NOT NULL,
`duration` smallint(5) unsigned NOT
of course, Group By
bill
Don't use DOUBLE (or FLOAT) for monetary values. Use DECIMAL(...). Two
decimal places ( ,2) suffices for most countries, but not all.
DOUBLE values have been rounded when converted from decimal to binary.
Multiply creates another rounding. The end result may look the same when
converted
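The double-rounding problem described above is easy to see in any language with binary doubles; here is a minimal sketch using Python's `float` (a binary double, like MySQL's DOUBLE) against exact `Decimal` arithmetic:

```python
from decimal import Decimal

# Binary doubles cannot represent most decimal prices exactly, so
# arithmetic drifts; Decimal keeps exact cents, like SQL DECIMAL(...,2).
price_f = 0.1 + 0.2                       # double arithmetic
price_d = Decimal("0.1") + Decimal("0.2")  # exact decimal arithmetic

print(price_f == 0.3)             # False: the double is 0.30000000000000004
print(price_d == Decimal("0.3"))  # True
```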
Hi
To me it looks like quantity is being multiplied by the price and then added to
total.
Try something like this
SELECT total,(quantity*price) as QP from sales where total != (quantity*price) AND salesid=122
On 2012-12-29, at 7:25 AM, Trimurthy wrote:
hi,
i am working with mysql 5.1.36 and i
Hi,
A subquery with an IN clause is not a good idea. If you want to tune this
query, try adding indexes on the tables accessed in the inner query
credits. A composite index on (success,promoter_id) would be sufficient,
then the optimizer will use this index for the where clause and as a
covering
-Original Message-
From: Cabbar Duzayak [mailto:cab...@gmail.com]
Sent: Thursday, November 22, 2012 12:45 AM
To: mysql@lists.mysql.com
Subject: Re: Query Plan Analyzer
Is this a joke?
On Thu, Nov 22, 2012 at 10:20 AM, Zhigang Zhang
zzgang2...@gmail.comwrote:
By experience
By experience!
-Original Message-
From: Cabbar Duzayak [mailto:cab...@gmail.com]
Sent: Thursday, November 22, 2012 3:13 PM
To: mysql@lists.mysql.com
Subject: Query Plan Analyzer
Hi All,
Is there a way of looking at how mysql builds the query plan and executes
it for a given query?
- Original Message -
From: Cabbar Duzayak cab...@gmail.com
Is there a way of looking at how mysql builds the query plan and
executes it for a given query? EXPLAIN is definitely a useful tool, but it is
not exact (shows approximations as far as I can see), furthermore I want
SELECT * FROM XYZ WHERE VAL IN ('BLUE','RED','GREEN'); the query will be
faster if the field is an ENUM
On Fri, Nov 16, 2012 at 12:36 AM, Anupam Karmarkar
sb_akarmar...@yahoo.comwrote:
Hi All,
Consider a scenario, I have table XYZ which contains value follow
BLUE
RED
GREEN
NULL
It depends on the distribution of the 4 'values' in that field.
If the cardinality is poor, then INDEX(VAL) won't be used, and they will all do
a table scan.
-Original Message-
From: Anupam Karmarkar [mailto:sb_akarmar...@yahoo.com]
Sent: Friday, November 16, 2012 12:36 AM
To:
3:48 PM
To: mysql@lists.mysql.com
Subject: Re: query tuning
Rick,
Thanks! Here is the create table info. Sorry for the size, but you can skip
down from all the fields to see the indexes.
CREATE TABLE `exams` (
`zAddDate` date DEFAULT NULL,
`zModDate` date DEFAULT NULL,
`IdPatient
of key_buffer_size? It should generally be 20% of _available_
RAM (for MyISAM usage).
-Original Message-
From: James W. McNeely [mailto:jmcne...@nwrads.com]
Sent: Wednesday, November 14, 2012 3:48 PM
To: mysql@lists.mysql.com
Subject: Re: query tuning
Rick,
Thanks! Here is the create table info. Sorry
Can't help without the SHOW CREATE TABLEs.
Perhaps e needs one of these:
INDEX(zzk)
INDEX(ProcModeCode, dateexam) -- in that order
(I can't predict which index it would use.)
Are IdAppt the same datatype and collation in each table?
-Original Message-
From: James W. McNeely
Rick,
Thanks! Here is the create table info. Sorry for the size, but you can skip
down from all the fields to see the indexes.
CREATE TABLE `exams` (
`zAddDate` date DEFAULT NULL,
`zModDate` date DEFAULT NULL,
`IdPatient` varchar(32) DEFAULT NULL,
`zModBy` varchar(255) DEFAULT NULL,
Message-
From: James W. McNeely [mailto:jmcne...@nwrads.com]
Sent: Wednesday, November 14, 2012 3:48 PM
To: mysql@lists.mysql.com
Subject: Re: query tuning
Rick,
Thanks! Here is the create table info. Sorry for the size, but you can skip
down from all the fields to see the indexes
Hi Rick,
Thanks for the advise. I have now set my query-cache to zero.
(I take your point about query cache too large. I understand that a
smaller cache size, and the use of the SQL_NO_CACHE and SQL_CACHE
directives can be used to control which queries are cached. Therefore
trying to get
As for the crash, I don't know. Instead, I recommend either shrinking the size
(if you use a big QC) or turning it off. This would make the issue go away.
ANY modification to a particular table leads to ALL entries in the Query cache
being purged. For that reason, we (Yahoo) almost never use
I think this will get you there:
SELECT LEAD, COUNT(*) FROM ORDERS GROUP BY LEAD;
It'll give you something more like:
| LEAD | COUNT(*) |
| F | 44 |
| S | 122 |
| R | 32 |
2012/07/31 15:02 -0700, Haluk Karamete
So, in a case like this
shop.orders.32442
shop.orders.82000
shop.orders.34442
It would be the record whose source_recid is shop.orders.82000. Why? Cause
82000 happens to be the largest integer.
Now, if they are always 5-digit-long integers, try ending
Subject: Re: query help
On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete
halukkaram...@gmail.com wrote:
I've got a text field called source_recid. It stores half string half
number like strings in it.
Example
shop.orders.32442
the syntax is DATABASENAME.TABLENAME.RECID
My
Hi,
Use LIMIT 1 to limit the output to a single record.
Regards,
Vikas Shukla
On Wed, Aug 1, 2012 at 3:56 AM, Paul Halliday paul.halli...@gmail.comwrote:
On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete halukkaram...@gmail.com
wrote:
I've got a text field called source_recid. It
On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete halukkaram...@gmail.com wrote:
I've got a text field called source_recid. It stores half string half
number like strings in it.
Example
shop.orders.32442
the syntax is DATABASENAME.TABLENAME.RECID
My goal is to scan this col and find out
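The stated goal - pulling the numeric RECID out of the `DATABASENAME.TABLENAME.RECID` strings and finding the largest - can be sketched in plain Python (sample values taken from the thread; comparing numerically avoids string-comparison quirks like '9' > '10'):

```python
# Sample source_recid values from the thread.
recids = ["shop.orders.32442", "shop.orders.82000", "shop.orders.34442"]

def recid(value: str) -> int:
    # The RECID is everything after the last dot, compared as an integer.
    return int(value.rsplit(".", 1)[1])

largest = max(recids, key=recid)
print(largest)  # shop.orders.82000
```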
Can you show the EXPLAIN plan for your query?
On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera darekmaci...@gmail.comwrote:
Hello,
I have table:
mysql> DESCRIBE books;
| id  | int(255) | NO | PRI | NULL | auto_increment |
| idu
You are using a function (LOWER), which will not make use of the unique key
index on ksd.
MySQL does not support function-based indexes, hence your query is doing a
FULL TABLE scan and taking more time.
On Tue, Jul 10, 2012 at 4:46 PM, Darek Maciera darekmaci...@gmail.comwrote:
2012/7/10 Ananda
2012/05/28 08:03 -0700, Don Wieland
Any assistance would be appreciated. Thanks!
Maybe something like this:
SELECT usie.client_id, first_name, last_name, COUNT(anie.client_id)
FROM
(SELECT client_id, first_name, last_name, time_start
FROM tl_appt
JOIN tl_rooms USING(room_id)
JOIN tl_clients
Don,
It is a little hard for me to comment on the query because I don't
have the context I need. When I tackle problems like this I ignore the
query and ask what are you trying to DO? The queries you sent look
like tunnel vision to me; there is probably a better way to do what
you're trying to
On 2012-05-21 11:17 AM, Don Wieland wrote:
I have got this query that returns the correct rows I want to display,
BUT the columns dr_ther_qty, dr_not_ther_qty, and dr_all_ther_qty are
not calculating correctly:
--- START QUERY ONE ---
SELECT q1.* FROM
(SELECT apt.user_id, apt.client_id,
For (1), ponder:
Group-by trick example: Find the most populous city in each state:
SELECT state, city, population, COUNT(*) AS num_cities
FROM
( SELECT state, city, population
FROM us
ORDER BY state, population DESC ) p
GROUP BY state
ORDER BY
On 2012-05-17 9:37 AM, Don Wieland wrote:
Hi folks,
I am trying to compile a query that does statistics on appointments
based on specific criteria. Here is my starting query:
SELECT
u.user_id,
c.client_id,
c.first_name,
c.last_name,
a.time_start AS stime,
Have you tried to set city = null (i.e. without the quotes)? David.
-Original Message-
From: Richard Reina [mailto:gatorre...@gmail.com]
Sent: Friday, March 09, 2012 4:24 PM
To: mysql@lists.mysql.com
Subject: query problem with null
When I do the following query:
SELECT * FROM
Ahhh... Thank you, that was exactly what the problem was. I will fix the
code that is setting the value of these new records to 'NULL'.
Thank you.
2012/3/9 David Lerer dle...@us.univision.com
Have you tried to set city = null (i.e. without the quotes)? David.
-Original
- Original Message -
From: David Lerer dle...@us.univision.com
Have you tried to set city = null (i.e. without the quotes)?
Spot on, I'd think.
A NULL value is not a string with 'NULL' in it - that's only what it looks like
in query results :-) An empty string ('') is to strings
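The distinction above - SQL NULL versus the four-character string 'NULL' - shows up directly in a quick sketch (SQLite via Python; the table and column names are invented for illustration):

```python
import sqlite3

# One row with a real NULL city, one with the literal string 'NULL'.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, city TEXT)")
conn.execute("INSERT INTO people VALUES ('a', NULL)")    # real NULL
conn.execute("INSERT INTO people VALUES ('b', 'NULL')")  # the string

is_null = conn.execute("SELECT name FROM people WHERE city IS NULL").fetchall()
looks_null = conn.execute("SELECT name FROM people WHERE city = 'NULL'").fetchall()
print(is_null, looks_null)  # [('a',)] [('b',)]
```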
http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull
On Wed, Feb 29, 2012 at 13:15, Don Wieland d...@pointmade.net wrote:
Little help...
In my MySQL query editor, I am trying to return a value of 0 when there are
no related rows from this query:
(select
On 2/29/2012 1:15 PM, Don Wieland wrote:
Little help...
In my MySQL query editor, I am trying to return a value of 0 when
there are no related rows from this query:
(select if(count(ip.payment_amount) IS NOT NULL,
count(ip.payment_amount) , 0) FROM tl_trans_pmt_items ip WHERE
t.transaction_id
From: Arthur Fuller fuller.art...@gmail.com
You should still lose the pointless WHERE 1.
Besides being pointless, is there any harm in having a superfluous WHERE 1
beyond the few microseconds it takes to parse it and throw it away?
Just curious, because phpMyAdmin includes it in its query
You should still lose the pointless WHERE 1.
Arthur
On Sun, Dec 4, 2011 at 1:38 AM, Jan Steinman j...@bytesmiths.com wrote:
DOH! Brain unfroze, and I realized I needed an aggregate:
SELECT
COUNT(lib.Dewey) AS Have,
ddn.Dewey AS DDN,
ddn.Classification AS Classification
FROM
Well, of that which you showed you _need_ only this, not really different from
that which you wrote:
SELECT
COUNT(lib.Dewey) AS Have,
ddn.Dewey AS DDN,
ddn.Classification
FROM s_library_dewey ddn
LEFT OUTER JOIN s_library lib ON ddn.Dewey = FLOOR(lib.Dewey)
GROUP BY ddn.Dewey
As for
Second attempt, using a join, returns just one row for Dewey 000 with the
COUNT being about half the volumes in the library, which isn't right...
I thought a LEFT OUTER JOIN would have returned a record for every record in
s_library_dewey, but it only returns the first.
Brain freeze again...
DOH! Brain unfroze, and I realized I needed an aggregate:
SELECT
COUNT(lib.Dewey) AS Have,
ddn.Dewey AS DDN,
ddn.Classification AS Classification
FROM s_library_dewey ddn
LEFT OUTER JOIN s_library lib ON ddn.Dewey = FLOOR(lib.Dewey)
WHERE 1
GROUP BY ddn.Dewey
... although if there
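The reason the aggregate above works is that COUNT(lib.Dewey) counts only non-NULL values, so Dewey classes with no matching library rows report 0 instead of disappearing. A runnable sketch (SQLite via Python, with tiny stand-in tables rather than the real library data, and CAST standing in for FLOOR):

```python
import sqlite3

# Dewey class 500 has no books; 501 has two.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE s_library_dewey (Dewey INTEGER, Classification TEXT);
CREATE TABLE s_library (Dewey REAL);
INSERT INTO s_library_dewey VALUES (500,'Science'),(501,'Theory');
INSERT INTO s_library VALUES (501.2),(501.9);
""")

rows = conn.execute("""
SELECT COUNT(lib.Dewey) AS Have, ddn.Dewey AS DDN
FROM s_library_dewey ddn
LEFT OUTER JOIN s_library lib ON ddn.Dewey = CAST(lib.Dewey AS INTEGER)
GROUP BY ddn.Dewey
ORDER BY ddn.Dewey
""").fetchall()
print(rows)  # [(0, 500), (2, 501)]
```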
2011/09/27 05:32 -0700, supr_star
Here's a simple query:
select name,status,min(dt),max(dt),count(*) from mytable group by name
I need an efficient way to get this data, along with the status of the
row with the max(dt). 'status' is not in the group by, so I get
apparently random statuses.
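One common answer to this greatest-per-group question - a sketch only, not necessarily the best plan for 24 million rows - is to join the grouped aggregates back to the table so the status comes from the row matching max(dt). Column and table names come from the post; the data is invented (SQLite via Python):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mytable (name TEXT, status TEXT, dt TEXT);
INSERT INTO mytable VALUES
 ('a','old','2011-01-01'),('a','new','2011-06-01'),
 ('b','only','2011-03-01');
""")

# Join the per-name aggregates back on dt = max(dt) to pick up the
# status of the newest row, not a random one.
rows = conn.execute("""
SELECT m.name, m.status, g.mindt, g.maxdt, g.cnt
FROM mytable m
JOIN (SELECT name, MIN(dt) AS mindt, MAX(dt) AS maxdt, COUNT(*) AS cnt
      FROM mytable GROUP BY name) g
  ON g.name = m.name AND g.maxdt = m.dt
ORDER BY m.name
""").fetchall()
print(rows)
# [('a', 'new', '2011-01-01', '2011-06-01', 2), ('b', 'only', '2011-03-01', '2011-03-01', 1)]
```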
Your outer query, select cpe_mac,max(r3_dt) from rad_r3cap, is doing a full
table scan; you might want to check on this and add a WHERE condition on an
indexed column.
On Fri, Sep 23, 2011 at 12:14 AM, supr_star suprstar1...@yahoo.com wrote:
I have a table with 24 million rows, I need to
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <=
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?
Andy
On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps bphe...@gls.com wrote:
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not sure how I could
archive/cache the necessary data that would be any more efficient than simply
using the database directly.
On 09/08/2011 02:16 PM,
I don't think I saw any query timings in the emails (maybe I missed them).
What version of MySQL are you currently using?
What does the explain look like when your remove the limit 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your
If you're running version 5.1+ you may wanna take a look at table partitioning
options you may have.
On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not
Andy,
The queries take minutes to run. MySQL is 5.1.54 and it's running on Ubuntu
server 11.04. Unfortunately the machine only has 2GB of RAM but no other major
daemons are running on the machine. We are running RAID 1 (mirroring) with 1TB
drives. The tables in question here are all
Correct me if I'm wrong. You're wanting to get all records that have an
open_date or a close_date between two times.
If that's correct, you might be able to get an index_merge by doing a query
like:
WHERE ((starting time) <= open_dt <= (ending time)) OR ((starting
time) <= close_dt <= (ending time))
Mihail,
I have considered this but have not yet determined how best to go about
partitioning the table. I don't think partitioning by dst_address or
src_address would help because most of the queries do not filter on IP address
(except very specific queries where the end-user is searching
Thanks for the idea Derek, however given the following query my EXPLAIN output
is identical:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
Partitioning isn't a bad idea for this however I'm still thinking about your
dataset size and possible hardware limitations. It's not likely going to fit
into relevant buffers/memory so you're going to be on disk more then you
want. You're probably creating temporary tables like crazy and I would
How about:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
Andrew,
Generally there is only 1 user performing the complicated SELECT query at a time, however
the background process that fills the table is constantly doing a fast SELECT (0.3
seconds) and a subsequent UPDATE. Basically whenever a connection is closed on the
firewall, the bg process
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast (0.0007 seconds).
Question, if a record's open_dt is between the range AND the close_dt is
between the range as well, will the UNION output the record
From the manual: The default behavior for UNION is that duplicate rows are
removed from the result.
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast
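Mihail's answer from the manual - UNION removes duplicate rows, UNION ALL keeps them - is easy to confirm with a toy version of the rewritten query (SQLite via Python; one connection row falls inside both the open_dt and close_dt ranges, so it matches both halves):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conns (id INTEGER, open_dt TEXT, close_dt TEXT)")
conn.execute("INSERT INTO conns VALUES (1, '2011-08-30 10:00:00', '2011-08-30 11:00:00')")

halves = """
SELECT id FROM conns WHERE open_dt BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53'
{op}
SELECT id FROM conns WHERE close_dt BETWEEN '2011-08-30 00:00:00' AND '2011-08-30 12:36:53'
"""
deduped = conn.execute(halves.format(op="UNION")).fetchall()      # duplicates removed
kept = conn.execute(halves.format(op="UNION ALL")).fetchall()     # duplicates kept
print(len(deduped), len(kept))  # 1 2
```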
Ah I see. Well thanks for your assistance!
-Brandon
On 09/08/2011 05:21 PM, Mihail Manolov wrote:
From the manual: The default behavior for UNION is that duplicate rows are removed
from the result.
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified
On Aug 30, 2011 6:46 PM, Brandon Phelps wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
12:36:53')
In that case your logic here
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
...
WHERE
(open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
There are a few things going on, but mainly it is the ORs that are killing
you.
As you require OR to examine two distinct columns, both of equal relevance
to the query, MySQL is left with no choice but to do a full table scan on
what might be (at a guess) a very large table. No amount of
Thanks Singer, this took my query down to 0.0007, perfect! I wasn't
aware a single index of multiple columns would work when one of the
columns was in the WHERE clause and the other in the ORDER BY clause.
Learn something new every day I guess!
On 08/10/2011 02:03 PM, Singer X.J. Wang
On 8/10/2011 1:01 PM, Brandon Phelps wrote:
Hello all,
I am using the query below and variations of it to query a database
with a TON of records. Currently the database has around 11 million
records but it grows every day and should cap out at around 150 million.
I am curious if there is