I did the following test:
My program still uses MySQL 5.7 libmysqlclient.so, but I connect now to a
5.6.16 server.
And the SQL interrupt works fine... so I suspect there is a MySQL server issue
in 5.7.
Seb
On 07/12/2016 01:01 PM, Sebastien FLAESCH wrote:
About:
> B) For some reason, the program does not want to exit() - (must investigate)
In fact we use prepared statements with a sequence of mysql_stmt_init,
mysql_stmt_prepare, mysql_stmt_execute, mysql_stmt_fetch, ... and
mysql_stmt_close.
After the statement was interrupted, we try to free th
You need to GROUP BY those fields NOT in the aggregate function. Like:
SELECT f.id,f.name,sum(p.price)
FROM fruit f
left join purchase p on f.id = p.fruit
where p.price is not null
group by f.id,f.name;
1, 'Apples', 2
2, 'Grapes', 6.5
4, 'Kiwis', 4
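A runnable sketch of the rule above (every selected column outside the aggregate goes into GROUP BY), using SQLite through Python's sqlite3 as a stand-in for MySQL; the tables and rows are invented to mirror the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fruit (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE purchase (fruit INTEGER, price REAL);
    INSERT INTO fruit VALUES (1,'Apples'),(2,'Grapes'),(4,'Kiwis');
    INSERT INTO purchase VALUES (1,2),(2,6.5),(4,4);
""")
# Every selected column that is not inside an aggregate appears in GROUP BY.
rows = conn.execute("""
    SELECT f.id, f.name, SUM(p.price)
    FROM fruit f
    LEFT JOIN purchase p ON f.id = p.fruit
    WHERE p.price IS NOT NULL
    GROUP BY f.id, f.name
""").fetchall()
print(rows)  # [(1, 'Apples', 2.0), (2, 'Grapes', 6.5), (4, 'Kiwis', 4.0)]
```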
On 2015-10-23 04:15, Don Wieland wrote:
Hi ga
One more guess:
Try explicitly aliasing the fields of interest and using those aliases
exclusively throughout the rest of the expression.
SELECT
p.pk_ProductID as pid,
p.Description as dsc,
SUM(i.Quantity) as totl
FROM invoice_invoicelines_Product p
JOIN invoice_InvoiceLines i ON pid = i.fk_Prod
> On Oct 22, 2015, at 2:41 PM, Michael Dykman wrote:
>
> I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
> of i.fk...? It is the actual value you are selecting as well as being on
> the primary table in the query.
Yeah I tried that - actually the SUM I need is on the
I'm not at a terminal but have you tried grouping by p.pk_ProductID instead
of i.fk...? It is the actual value you are selecting as well as being on
the primary table in the query.
On Thu, Oct 22, 2015, 5:18 PM Don Wieland wrote:
> Hi gang,
>
> I have a query:
>
> SELECT
> p.pk_ProductID,
> p.De
On 10/22/2015 11:48 AM, Don Wieland wrote:
On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
Which release of MySQL are you using?
Version 5.5.45-cll
How many rows do you get if you remove the GROUP_CONCAT operator? We don't need
to see the results. (sometimes it is a good idea to look
> On Oct 20, 2015, at 1:24 PM, shawn l.green wrote:
>
> Which release of MySQL are you using?
Version 5.5.45-cll
> How many rows do you get if you remove the GROUP_CONCAT operator? We don't
> need to see the results. (sometimes it is a good idea to look at the raw,
> unprocessed results)
>
On 10/20/2015 1:54 PM, Don Wieland wrote:
Hi all,
Trying to get a query working:
SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan
On 2015-10-20 12:54 PM, Don Wieland wrote:
Hi all,
Trying to get a query working:
SELECT
ht.*,
CONCAT(o.first_name, " ", o.last_name) AS orphan,
GROUP_CONCAT(DISTINCT hti.rec_code ORDER BY hti.rec_code ASC SEPARATOR ", ") AS
alloc
FROM hiv_transactions ht
LEFT JOIN tk_orphans o ON ht.orphan_
- Original Message -
> From: "Shawn Green"
> Subject: Re: Query optimizer-miss with unqualified expressions, bug or
> feature?
>
> On a more serious note, indexes with limited cardinality are less useful
> than those with excellent cardinality. Cardinali
Hi Shawn,
On 19.10.15 22.33, shawn l.green wrote:
On 10/19/2015 3:48 PM, Roy Lyseng wrote:
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use
two starts: one on 'a > 0' and
On 10/19/2015 3:48 PM, Roy Lyseng wrote:
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use
two starts: one on 'a > 0' and one on 'a < 0', taking a union of the
result?
Which
Hi Ben,
On 19.10.15 16.07, Ben Clewett wrote:
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and then use
two starts: one on 'a > 0' and one on 'a < 0', taking a union of the result?
Which might make a significant result to something
Hi Roy,
Thanks for the clear explanation.
I guess (hypothetically) the optimizer could see if it has a key, and
then use two starts: one on 'a > 0' and one on 'a < 0', taking a union
of the result? Which might make a significant result to something?
Ben.
On 2015-10-19 14:19, Roy Lyseng wr
Hi Ben,
On 19.10.15 15.10, Ben Clewett wrote:
I have noticed that an unqualified boolean expression cannot be optimized by
MySQL to use an index in 5.6.24.
For example:
CREATE TABLE t (
i INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
a BOOLEAN NOT NULL,
KEY a (a)
) ENGINE=InnoDB;
This wi
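The hypothetical two-range rewrite discussed above can at least be checked for correctness (if not for plan cost) with a small sketch; SQLite via Python's sqlite3 stands in for MySQL, and the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (i INTEGER PRIMARY KEY, a INTEGER NOT NULL);
    CREATE INDEX a_idx ON t(a);
    INSERT INTO t (a) VALUES (1),(0),(1),(0),(1);
""")
# The unqualified boolean form the optimizer struggles with:
unqualified = conn.execute("SELECT i FROM t WHERE a ORDER BY i").fetchall()
# The rewrite from the thread: two index range starts, unioned.
rewritten = conn.execute("""
    SELECT i FROM t WHERE a > 0
    UNION
    SELECT i FROM t WHERE a < 0
    ORDER BY i
""").fetchall()
print(unqualified == rewritten)  # True
```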
2014/10/08 11:38 -0700, Jan Steinman
However, this pattern will often result in numerous empty columns -- empties
that would not be there had the table not been pivoted.
2014/10/08 16:42 -0500, Peter Brawley
MySQL stored procedures are less incomplete, and can do it,
On 2014-10-08 1:38 PM, Jan Steinman wrote:
I often use CASE WHEN ... to pivot tables. For example, splitting sales data by
year:
SELECT
s_product.name AS `Product`,
SUM(CASE WHEN YEAR(sales.Date)='2007' THEN sales.Total ELSE NULL END) AS
`2007`,
SUM(CASE WHEN YEAR(sales.Date)
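A minimal, runnable version of the CASE WHEN pivot, using Python's sqlite3 (SQLite's strftime('%Y', ...) substitutes for MySQL's YEAR(); table and rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (product TEXT, Date TEXT, Total REAL);
    INSERT INTO sales VALUES
        ('Beans','2007-03-01',10),('Beans','2008-07-15',20),
        ('Corn','2007-05-02',5),('Corn','2008-01-20',7.5);
""")
# Pivot rows into per-year columns: each SUM(CASE WHEN ...) only counts
# rows from its own year, everything else contributes NULL.
rows = conn.execute("""
    SELECT product,
           SUM(CASE WHEN strftime('%Y', Date)='2007' THEN Total END) AS "2007",
           SUM(CASE WHEN strftime('%Y', Date)='2008' THEN Total END) AS "2008"
    FROM sales
    GROUP BY product
    ORDER BY product
""").fetchall()
print(rows)  # [('Beans', 10.0, 20.0), ('Corn', 5.0, 7.5)]
```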
It's good to know. Keep up with good work, cheers!!
--
*Wagner Bianchi, MySQL Database Specialist*
Mobile: +55.31.8654.9510
E-mail: m...@wagnerbianchi.com
Twitter: @wagnerbianchijr
2014-09-06 3:01 GMT-03:00 Ajay Garg :
> Hi Wagner.
>
> That is what I did as the last resort, and that is "only
> Date: Sun, 7 Sep 2014 23:06:09 +0530
> Subject: Re: Query on some MySQL-internals
> From: ajaygargn...@gmail.com
> To: mgai...@hotmail.com
> CC: mysql@lists.mysql.com
>
> Hi Martin.
>
> Thanks for the reply.
>
Hi Martin.
Thanks for the reply.
As I had mentioned, we are running both the instances since last 6
years or so, and the records are inserted/deleted on both the
instances.
So, we did a "show table status like 'XX' \G;" on both the
instances, and following are the outputs (here "XX" is
> Date: Sat, 6 Sep 2014 14:26:22 +0530
> Subject: Query on some MySQL-internals
> From: ajaygargn...@gmail.com
> To: mysql@lists.mysql.com
>
> Hi all.
>
>
> We are facing a very strange scenario.
>
> We have two mysql-instances running on the same machine, and they had
> been running functi
Hi Wagner.
That is what I did as the last resort, and that is "only" what solved the issue.
Thanks.
On Fri, Sep 5, 2014 at 1:52 AM, wagnerbianchi.com
wrote:
> You can try these steps:
>
> 1-) Stop slave and write down the replication coordinates getting that in
> MySQL's error log (*very impo
You can try these steps:
1-) Stop slave and write down the replication coordinates getting that in
MySQL's error log (*very important step*);
2-) Issue the `reset slave` command on MySQL Slave;
3-) Issue the CHANGE MASTER TO considering the replication coordinates
you've just written down on step
Hi all.
Unfortunately, I have run into the logs, as described at
http://bugs.mysql.com/bug.php?id=71495
Unfortunately, the issue does not go away, even after reverting back
to "slave-parallel-workers=0" in "my.cnf", and restarting the mysql
instance.
Any quick idea, as to how we may get the mys
Thanks Akshay for the reply.
On Tue, Sep 2, 2014 at 12:25 PM, Akshay Suryavanshi
wrote:
> Hello Ajay,
>
> I tried testing the slave-parallel-workers few months ago, what I can surely
> tell you its still under development, and at that time needed some critical
> bug fixing.
>
> It is helpful in s
Ping !! :)
On Mon, Sep 1, 2014 at 8:27 AM, Ajay Garg wrote:
> Hi all.
>
>
> We have replication set-up, where we cater to HUUGEE amounts of data.
> Since quite some time, we have been facing issues wherein the slave
> lags behind master quite a lot.
>
>
> So, yesterday we were able to setup paral
Hi Satendra,
On 7/14/2014 5:48 AM, Satendra wrote:
Hi there, I'm struggling to find the total time taken by a database query
on the disk? As I understand when a database query start execution it takes
some time inside the database engine & some time to seek the result from
disk (if that is not
Hi Satendra,
On Jul 14, 2014, at 3:48 AM, Satendra wrote:
> Hi there, I'm struggling to find the total time taken by a database query
> on the disk? As I understand when a database query start execution it takes
> some time inside the database engine & some time to seek the result from
> disk (i
Satendra,
Google "show profile" as it may give you all the information that you need.
There is a lot more details in the performance_schema if you want to dig
into it, but it can be quite difficult to get out. Here is one place to
start if you want to pursue that angle:
http://www.markleith.co.uk
Am 14.07.2014 12:48, schrieb Satendra:
> Hi there, I'm struggling to find the total time taken by a database query
> on the disk? As I understand when a database query start execution it takes
> some time inside the database engine & some time to seek the result from
> disk (if that is not in cac
Perfect! Thank you Larry et all.
Have a great weekend.
2013/4/19 Larry Martell
> On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina
> wrote:
> > Hello All,
> >
> > Happy Friday! I know how to do the following query:
> >
> >>select count(*) from sales where WEEK(sale_date)=15 AND
> > YEAR(sale_dat
On Fri, Apr 19, 2013 at 8:24 AM, Richard Reina wrote:
> Hello All,
>
> Happy Friday! I know how to do the following query:
>
>>select count(*) from sales where WEEK(sale_date)=15 AND
> YEAR(sale_date)=2013;
>
> But can someone tell me I can do a query that will give me:
>
> the count(*) for each w
Hello Richard,
>the count(*) for each week of 2013 so that I end up with:
http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html
Ilya.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
of course, "Group By"
bill
On 3/31/2013 7:32 AM, william drescher wrote:
I have a table, schedule:
CREATE TABLE `schedule` (
`schedule_id` mediumint(9) NOT NULL AUTO_INCREMENT,
`provider` varchar(15) NOT NULL,
`apptTime` datetime NOT NULL,
`location` varchar(10) NOT NULL,
`duration` smallint(5) unsigned NOT
Don't use DOUBLE (or FLOAT) for monetary values. Use DECIMAL(...). Two
decimal places ( ,2) suffices for most countries, but not all.
DOUBLE values have been rounded when converted from decimal to binary.
Multiply creates another rounding. The end result may look the same when
converted fro
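The rounding drift described above is easy to reproduce with Python's float and decimal types (a rough stand-in for MySQL's DOUBLE vs DECIMAL):

```python
from decimal import Decimal

# Binary floats cannot represent most decimal fractions exactly,
# so repeated arithmetic drifts; decimal arithmetic stays exact.
as_float = 0.1 + 0.1 + 0.1           # accumulates binary rounding error
as_decimal = Decimal("0.1") * 3      # exact decimal arithmetic
print(as_float == 0.3)               # False
print(as_decimal == Decimal("0.3"))  # True
```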
Hi
To me it looks like quantity is being multiplied by the price and then added to
total.
Try something like this
SELECT total, (quantity*price) AS QP FROM sales WHERE total != (quantity*price) AND salesid=122
On 2012-12-29, at 7:25 AM, Trimurthy wrote:
> hi,
> i am working with mysql 5.1.36 and i wro
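The total-vs-quantity*price consistency check suggested above, sketched with Python's sqlite3 (table and rows invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (salesid INTEGER, quantity INTEGER, price REAL, total REAL)"
)
conn.executemany("INSERT INTO sales VALUES (?,?,?,?)",
                 [(122, 2, 5.0, 10.0), (123, 3, 2.0, 7.0)])
# Rows whose stored total disagrees with quantity * price.
bad = conn.execute(
    "SELECT salesid FROM sales WHERE total <> quantity * price"
).fetchall()
print(bad)  # [(123,)]
```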
Hi,
A subquery with IN clause is not a good idea. If you want to tune this
query, try adding indexes on the tables accessed in the inner query
"credits". A composite index on (success,promoter_id) would be sufficient,
then the optimizer will use this index for the where clause and as a
covering in
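A sketch of the composite-index suggestion, using SQLite's EXPLAIN QUERY PLAN through Python's sqlite3 as a rough stand-in for MySQL's EXPLAIN; the table and index names here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE credits (promoter_id INTEGER, success INTEGER, amount REAL);
    CREATE INDEX idx_success_promoter ON credits (success, promoter_id);
""")
# Both referenced columns live in the composite index, so the query
# can be satisfied from the index alone (a covering index).
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT promoter_id FROM credits WHERE success = 1
""").fetchall()
print(plan)
```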
ore detailed than MySQL's. (But
then, it is doing some tricky things.)
> -Original Message-
> From: Cabbar Duzayak [mailto:cab...@gmail.com]
> Sent: Thursday, November 22, 2012 12:45 AM
> To: mysql@lists.mysql.com
> Subject: Re: Query Plan Analyzer
>
> Is
- Original Message -
> From: "Cabbar Duzayak"
>
> Is there a way of looking at how mysql builds the query plan and
> executes it for a given query? EXPLAIN is definitely a useful tool, but it is
> not exact (shows approximations as far as I can see), furthermore I want
> something like ho
Is this a joke?
On Thu, Nov 22, 2012 at 10:20 AM, Zhigang Zhang wrote:
> By experience!
>
> -Original Message-
> From: Cabbar Duzayak [mailto:cab...@gmail.com]
> Sent: Thursday, November 22, 2012 3:13 PM
> To: mysql@lists.mysql.com
> Subject: Query Plan Analyzer
>
> Hi All,
>
> Is there a
By experience!
-Original Message-
From: Cabbar Duzayak [mailto:cab...@gmail.com]
Sent: Thursday, November 22, 2012 3:13 PM
To: mysql@lists.mysql.com
Subject: Query Plan Analyzer
Hi All,
Is there a way of looking at how mysql builds the query plan and executes
it for a given query? EXPLA
It depends on the distribution of the 4 'values' in that field.
If the cardinality is poor, then INDEX(VAL) won't be used, and they will all do
a table scan.
> -Original Message-
> From: Anupam Karmarkar [mailto:sb_akarmar...@yahoo.com]
> Sent: Friday, November 16, 2012 12:36 AM
> To: mys
SELECT * FROM XYX WHERE VAL IN ('BLUE','RED','GREEN'); query will make it
faster, if the field is ENUM
On Fri, Nov 16, 2012 at 12:36 AM, Anupam Karmarkar
wrote:
> Hi All,
>
> Consider a scenario, I have table XYZ which contains value follow
> BLUE
> RED
> GREEN
> NULL
>
> following are queries w
That should decline significantly.
What's the value of key_buffer_size? It should generally be 20% of _available_
RAM (for MyISAM usage).
-Original Message-
From: James W. McNeely [mailto:jmcne...@nwrads.com]
Sent: Wednesday, November 14, 2012 3:48 PM
To: mysql@lists.mysql.com
Subject: Re
> Note the huge estimated number of rows. That should decline significantly.
>
> What's the value of key_buffer_size? It should generally be 20% of
> _available_ RAM (for MyISAM usage).
>
>> -Original Message-
>> From: James W. McNeely [mail
What's the value of key_buffer_size? It should generally be 20% of _available_
RAM (for MyISAM usage).
> -Original Message-
> From: James W. McNeely [mailto:jmcne...@nwrads.com]
> Sent: Wednesday, November 14, 2012 3:48 PM
> To: mysql@lists.mysql.com
> Subject: Re: query
Rick,
Thanks! Here is the create table info. Sorry for the size, but you can skip
down from all the fields to see the indexes.
CREATE TABLE `exams` (
`zAddDate` date DEFAULT NULL,
`zModDate` date DEFAULT NULL,
`IdPatient` varchar(32) DEFAULT NULL,
`zModBy` varchar(255) DEFAULT NULL,
`N
Can't help without the SHOW CREATE TABLEs.
Perhaps e needs one of these:
INDEX(zzk)
INDEX(ProcModeCode, dateexam) -- in that order
(I can't predict which index it would use.)
Are IdAppt the same datatype and collation in each table?
> -Original Message-
> From: James W. McNeely [mailto:
Hi Rick,
Thanks for the advise. I have now set my query-cache to zero.
(I take your point about query cache too large. I understand that a
smaller cache size, and the use of the SQL_NO_CACHE and SQL_CACHE
directives can be used to control which queries are cached. Therefore
trying to get t
As for the crash, I don't know. Instead, I recommend either shrinking the size
(if you use a big QC) or turning it off. This would make the issue go away.
ANY modification to a particular table leads to ALL entries in the Query cache
being purged. For that reason, we (Yahoo) almost never use
9 AM
> To: 'Richard Reina'; mysql@lists.mysql.com
> Subject: RE: query help
>
> I think this will get you there:
>
> SELECT LEAD, COUNT(*) FROM ORDERS GROUP BY LEAD;
>
> It'll give you something more like:
>
> | LEAD | COUNT(*) |
> | F | 44
I think this will get you there:
SELECT LEAD, COUNT(*) FROM ORDERS GROUP BY LEAD;
It'll give you something more like:
| LEAD | COUNT(*) |
| F | 44 |
| S | 122 |
| R | 32 |
-Original Message-
From: Richard Reina [mai
2012/07/31 15:02 -0700, Haluk Karamete
So, in a case like this
shop.orders.32442
shop.orders.82000
shop.orders.34442
It would be the record whose source_recid is shop.orders.82000. Why? Cause
82000 happens to be the largest integer.
Now, if they are always 5-digit-long integer
Hi,
Use LIMIT 1 to limit the number of output to single record.
Regards,
Vikas Shukla
On Wed, Aug 1, 2012 at 3:56 AM, Paul Halliday wrote:
> On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete
> wrote:
> > I've got a text field called source_recid. It stores half string half
> > number like str
Might need some type coercion:
SELECT SUBSTRING(recid, 13) AS numbers FROM table
ORDER BY SUBSTRING(recid, 13)+0 DESC
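The coercion trick above, sketched with Python's sqlite3 (SQLite's substr corresponds to MySQL's SUBSTRING; the sample recids come from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE recs (recid TEXT)")
conn.executemany("INSERT INTO recs VALUES (?)",
                 [("shop.orders.32442",),
                  ("shop.orders.82000",),
                  ("shop.orders.34442",)])
# substr(recid, 13) is still TEXT; adding 0 coerces it to a number so
# the sort is numeric rather than lexicographic.
top = conn.execute("""
    SELECT recid FROM recs
    ORDER BY substr(recid, 13) + 0 DESC
    LIMIT 1
""").fetchone()
print(top)  # ('shop.orders.82000',)
```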
> -Original Message-
> From: Paul Halliday [mailto:paul.halli...@gmail.com]
> Sent: Tuesday, July 31, 2012 3:27 PM
> To: Haluk Karamete
> Cc: MySQ
On Tue, Jul 31, 2012 at 7:02 PM, Haluk Karamete wrote:
> I've got a text field called source_recid. It stores half string half
> number like strings in it.
>
> Example
>
> shop.orders.32442
>
> the syntax is DATABASENAME.TABLENAME.RECID
>
> My goal is to scan this col and find out the biggest RECI
you are using a function-LOWER, which will not make use of the unique key
index on ksd.
Mysql does not support function based index, hence your query is doing a
FULL TABLE scan and taking more time.
On Tue, Jul 10, 2012 at 4:46 PM, Darek Maciera wrote:
> 2012/7/10 Ananda Kumar :
> > can u show th
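One common workaround for the missing function-based index is to store and index the pre-lowered value; a sketch in Python's sqlite3, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE books (id INTEGER PRIMARY KEY, ksd TEXT UNIQUE, ksd_lower TEXT);
    CREATE INDEX idx_ksd_lower ON books (ksd_lower);
    INSERT INTO books (ksd, ksd_lower) VALUES ('ABC-123', lower('ABC-123'));
""")
# WHERE lower(ksd) = ... cannot use the index on ksd; searching the
# pre-lowered, indexed column can.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM books WHERE ksd_lower = 'abc-123'"
).fetchall()
row = conn.execute(
    "SELECT id FROM books WHERE ksd_lower = 'abc-123'"
).fetchone()
print(row)  # (1,)
```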
can u show the explain plan for your query
On Tue, Jul 10, 2012 at 2:59 PM, Darek Maciera wrote:
> Hello,
>
> I have table:
>
> mysql> DESCRIBE books;
>
> |id |int(255) | NO | PRI |
> NULL | auto_increment |
> | idu
2012/05/28 08:03 -0700, Don Wieland
Any assistance would be appreciated. Thanks!
Maybe something like this:
SELECT usie.client_id, first_name, last_name, COUNT(anie.client_id)
FROM
(SELECT client_id, first_name, last_name, time_start
FROM tl_appt
JOIN tl_rooms USING(room_id)
Don,
It is a little hard for me to comment on the query because I don't
have the context I need. When I tackle problems like this I ignore the
query and ask "what are you trying to DO?" The queries you sent look
like tunnel vision to me; there is probably a better way to do what
you're trying to d
On 2012-05-21 11:17 AM, Don Wieland wrote:
I have got this query that returns the correct rows I want to display,
BUT the columns dr_ther_qty, dr_not_ther_qty, and dr_all_ther_qty are
not calculating correctly:
--- START QUERY ONE ---
SELECT q1.* FROM
(SELECT apt.user_id, apt.client_id, c.l
On 2012-05-17 9:37 AM, Don Wieland wrote:
Hi folks,
I am trying to compile a query that does statistics on appointments
based on specific criteria. Here is my starting query:
SELECT
u.user_id,
c.client_id,
c.first_name,
c.last_name,
a.time_start AS stime,
FROM_UNIXTIME
For (1), ponder:
Group-by trick example: Find the most populous city in each state:
SELECT state, city, population, COUNT(*) AS num_cities
FROM
( SELECT state, city, population
FROM us
ORDER BY state, population DESC ) p
GROUP BY state
ORDER BY stat
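The GROUP BY trick above relies on MySQL's non-standard handling of unaggregated columns; a portable alternative (assuming window-function support, e.g. SQLite >= 3.25 via Python's sqlite3; data invented) ranks rows within each group:

```python
import sqlite3  # window functions need SQLite >= 3.25

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE us (state TEXT, city TEXT, population INTEGER);
    INSERT INTO us VALUES
        ('CA','Los Angeles',3800000),('CA','San Diego',1400000),
        ('TX','Houston',2300000),('TX','Dallas',1300000);
""")
# Rank cities within each state by population; keep only rank 1.
rows = conn.execute("""
    SELECT state, city, population FROM (
        SELECT state, city, population,
               ROW_NUMBER() OVER (PARTITION BY state
                                  ORDER BY population DESC) AS rn
        FROM us
    ) WHERE rn = 1
    ORDER BY state
""").fetchall()
print(rows)  # [('CA', 'Los Angeles', 3800000), ('TX', 'Houston', 2300000)]
```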
- Original Message -
> From: "David Lerer"
>
> Have you tried to set city = null (i.e. without the quotes)?
Spot on, I'd think.
NULL values are not a string with "NULL" in it - that's only what it looks like
in query results :-) An empty string ('') is to strings what 0 (zero) is for
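The distinction is easy to demonstrate with Python's sqlite3 (invented table; the comparison semantics match MySQL here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE towns (name TEXT, city TEXT)")
conn.executemany("INSERT INTO towns VALUES (?, ?)",
                 [("a", None), ("b", "NULL"), ("c", "Chicago")])
# city = 'NULL' matches only the four-character string;
# real NULLs can only be found with IS NULL.
as_string = conn.execute("SELECT name FROM towns WHERE city = 'NULL'").fetchall()
as_null = conn.execute("SELECT name FROM towns WHERE city IS NULL").fetchall()
print(as_string, as_null)  # [('b',)] [('a',)]
```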
Ahhh... Thank you, that was exactly what the problem was. I will fix the
code that is setting the value of these new records to 'NULL'.
Thank you.
2012/3/9 David Lerer
> Have you tried to set city = null (i.e. without the quotes)? David.
>
>
>
> -Original Message-
> From: Richard
Have you tried to set city = null (i.e. without the quotes)? David.
-Original Message-
From: Richard Reina [mailto:gatorre...@gmail.com]
Sent: Friday, March 09, 2012 4:24 PM
To: mysql@lists.mysql.com
Subject: query problem with null
When I do the following query:
SELECT * FROM
On 2/29/2012 1:15 PM, Don Wieland wrote:
Little help...
In my mySQL query editor, I am trying to return a value of 0 when
there is no related rows from this query:
(select if(count(ip.payment_amount) IS NOT NULL,
count(ip.payment_amount) , 0) FROM tl_trans_pmt_items ip WHERE
t.transaction_id =
http://dev.mysql.com/doc/refman/5.0/en/control-flow-functions.html#function_ifnull
On Wed, Feb 29, 2012 at 13:15, Don Wieland wrote:
> Little help...
>
> In my mySQL query editor, I am trying to return a value of 0 when there is
> no related rows from this query:
>
> (select if(count(ip.payment_
> From: Arthur Fuller
>
> You should still lose the pointless WHERE 1.
Besides being "pointless," is there any harm in having a superfluous "WHERE 1"
beyond the few microseconds it takes to parse it and throw it away?
Just curious, because phpMyAdmin includes it in its query template, and I of
Well, of that which you showed you _need_ only this, not really different from
that which you wrote:
SELECT
COUNT(lib.Dewey) AS Have,
ddn.Dewey AS DDN,
ddn.Classification
FROM s_library_dewey ddn
LEFT OUTER JOIN s_library lib ON ddn.Dewey = FLOOR(lib.Dewey)
GROUP BY ddn.Dewey
As for
You should still lose the pointless WHERE 1.
Arthur
On Sun, Dec 4, 2011 at 1:38 AM, Jan Steinman wrote:
> DOH! Brain unfroze, and I realized I needed an aggregate:
>
> SELECT
>COUNT(lib.Dewey) AS Have,
>ddn.Dewey AS DDN,
>ddn.Classification AS Classification
> FROM s_library_dewey dd
DOH! Brain unfroze, and I realized I needed an aggregate:
SELECT
COUNT(lib.Dewey) AS Have,
ddn.Dewey AS DDN,
ddn.Classification AS Classification
FROM s_library_dewey ddn
LEFT OUTER JOIN s_library lib ON ddn.Dewey = FLOOR(lib.Dewey)
WHERE 1
GROUP BY ddn.Dewey
... although if there are
On 12/3/2011 9:35 PM, Jan Steinman wrote:
Second attempt, using a join, returns just one row for Dewey "000" with the
COUNT being about half the volumes in the library, which isn't right...
I thought a LEFT OUTER JOIN would have returned a record for every record in
s_library_dewey, but it onl
Second attempt, using a join, returns just one row for Dewey "000" with the
COUNT being about half the volumes in the library, which isn't right...
I thought a LEFT OUTER JOIN would have returned a record for every record in
s_library_dewey, but it only returns the first.
Brain freeze again...
2011/09/27 05:32 -0700, supr_star
Here's a simple query: select name,status,min(dt), max(dt),count(*) from
mytable group by name I need an efficient way to get this data, along with the
status of the row with the max(dt). 'status' is not in the group by, so I get
apparently random sta
Your outer query "select cpe_mac,max(r3_dt) from rad_r3cap", is doing a full
table scan, you might want to check on this and use a "WHERE" condition to
use indexed column
On Fri, Sep 23, 2011 at 12:14 AM, supr_star wrote:
>
>
> I have a table with 24 million rows, I need to figure out how to op
Ah I see. Well thanks for your assistance!
-Brandon
On 09/08/2011 05:21 PM, Mihail Manolov wrote:
From the manual: "The default behavior for UNION is that duplicate rows are removed
from the result."
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much! I modified yo
From the manual: "The default behavior for UNION is that duplicate rows are
removed from the result."
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
> Mihail,
>
> Thanks so much! I modified your example to include the proper ORDER BY and
> LIMIT clauses and this, so far, is running super
Mihail,
Thanks so much! I modified your example to include the proper ORDER BY and
LIMIT clauses and this, so far, is running super fast (0.0007 seconds).
Question, if a record's open_dt is between the range AND the close_dt is
between the range as well, will the UNION output the record twic
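To the question above: a row matching both branches appears once under UNION, twice under UNION ALL. A sketch with Python's sqlite3 (invented table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conns (id INTEGER, open_dt TEXT, close_dt TEXT)")
conn.execute("INSERT INTO conns VALUES (1, '2011-09-01', '2011-09-02')")
# Row 1 matches both branches; plain UNION de-duplicates,
# UNION ALL keeps both copies.
union = conn.execute("""
    SELECT id FROM conns WHERE open_dt >= '2011-09-01'
    UNION
    SELECT id FROM conns WHERE close_dt <= '2011-09-30'
""").fetchall()
union_all = conn.execute("""
    SELECT id FROM conns WHERE open_dt >= '2011-09-01'
    UNION ALL
    SELECT id FROM conns WHERE close_dt <= '2011-09-30'
""").fetchall()
print(len(union), len(union_all))  # 1 2
```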
Andrew,
Generally there is only 1 user performing the complicated SELECT query at a time, however
the background process that fills the table is constantly doing a fast SELECT (0.3
seconds) and a subsequent UPDATE. Basically whenever a connection is closed on the
firewall, the bg process
How about:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.dst_port,
sc.sent,
sc.rcvd,
spm.desc AS src_port_desc,
Partitioning isn't a bad idea for this however I'm still thinking about your
dataset size and possible hardware limitations. It's not likely going to fit
into relevant buffers/memory so you're going to be on disk more then you
want. You're probably creating temporary tables like crazy and I would b
Thanks for the idea Derek, however given the following query my EXPLAIN output
is identical:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA(sc.src_address) AS src_address,
sc.src_port,
INET_NTOA(sc.dst_address) AS dst_address,
sc.ds
Mihail,
I have considered this but have not yet determined how best to go about
partitioning the table. I don't think partitioning by dst_address or
src_address would help because most of the queries do not filter on IP address
(except very specific queries where the end-user is searching the
Correct me if I'm wrong. You're wanting to get all records that have an
open_date or a close_date between two times.
If that's correct, you might be able to get an index_merge by doing a query
like:
WHERE ((starting time)<=open_dt<= (ending time)) OR ((starting
time)<=close_dt<=(ending time))
Andy,
The queries take minutes to run. MySQL is 5.1.54 and it's running on Ubuntu
server 11.04. Unfortunately the machine only has 2GB of RAM but no other major
daemons are running on the machine. We are running RAID 1 (mirroring) with 1TB
drives. The tables in question here are all MyISAM
If you're running version 5.1+ you may wanna take a look at table partitioning
options you may have.
On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:
> Thanks for the reply Andy. Unfortunately the users will be selecting varying
> date ranges and new data is constantly coming in, so I am not
I don't think I saw any query timings in the emails (maybe I missed them).
What version of MySQL are you currently using?
What does the explain look like when your remove the limit 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your machine?
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not sure how I could
archive/cache the necessary data that would be any more efficient than simply
using the database directly.
On 09/08/2011 02:16 PM, Andre
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?
Andy
On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps wrote:
> On 09/01/2011 01:32 PM, Brandon Phelps wrote:
>
>> On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
>>
>>> On 9
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > ...
> > WHERE
> > (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
> > AND (open
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > ...
> > WHERE
> > (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
> > AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '20
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > ...
> > WHERE
> > (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30
00:00:00')
> > AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30
12:36:53')
> In that case your logic
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > SELECT
> >sc.open_dt,
> >sc.close_dt,
> >sc.protocol,
> >INET_NTOA( sc.src_address ) AS src_address,
> >sc.src_port,
> >INET_NTOA( sc.dst_address ) AS dst_address,
> >sc.dst_port,
> >
On Aug 30, 2011 6:46 PM, "Brandon Phelps" wrote:
> SELECT
>sc.open_dt,
>sc.close_dt,
>sc.protocol,
>INET_NTOA( sc.src_address ) AS src_address,
>sc.src_port,
>INET_NTOA( sc.dst_address ) AS dst_address,
>sc.dst_port,
>sc.sent,
>
There are a few things gonig on, but mainly it is the ORs that are killing
you.
As your require OR to examine two distinct columns, both of equal relevance
to the query, MySQL: is left with no choice but to do a full table scan on
what might be (at a guess) a very larger table. No amount of inde
> From: Brandon Phelps
>
> I am curious if there is any way I can better optimize the below query, as
> currently it takes this query around 10 seconds to run but I am sure this
> will get slower and slower as the database grows.
You need an index on `close_dt`.
> SELECT
> open_dt,
>
On 8/10/2011 1:01 PM, Brandon Phelps wrote:
Hello all,
I am using the query below and variations of it to query a database
with a TON of records. Currently the database has around 11 million
records but it grows every day and should cap out at around 150 million.
I am curious if there is an