On 11/5/2011 12:32 PM, Roger CPL wrote:
I installed MySQL using the .dmg package. I can use System Preferences to
start and stop the program. When I open a terminal window, which I think is the
way to use the program, any MySQL command I enter is not recognized.
Thanks
Roger
You will need to
Tim,
Just a reminder, as I am not sure if it is documented or not: after you get
MySQL up and running via the DMG package, be sure to install the System
Preferences pane (it didn't use to install by default; not sure if it does now),
which should be one of the icons you get when the DMG first opens.
practically dead. I think you will have more luck with
http://mxcl.github.com/homebrew/
I have not built MySQL with it on Lion, though.
regards,
Vladislav
On Tue, Oct 11, 2011 at 3:56 PM, Brandon Phelps wrote:
Is there any reason why you are using ports and not the native 64-bit DMG from
mysql.com?
http://www.mysql.com/downloads/mysql/#downloads
I run the latest version (5.5.15) on my MacBook running Lion and the install
goes without a hitch.
Brandon
On 10/10/2011 07:34 PM, Tim Johnson wrote:
I f
That query looks fine. What error are you getting if you execute the query
from the CLI? Also, is it possible that the s_id or owed columns are no longer
numeric data types? If those columns are now a character type, then you
would need to put the values in quotes.
-Brandon
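For illustration, assuming s_id and owed had in fact become character columns (the table name below is hypothetical), the literals would then need quoting:

-- Hypothetical table; s_id and owed treated as character types here.
UPDATE payments SET owed = '10.50' WHERE s_id = '42';
SELECT owed FROM payments WHERE s_id = '42';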
On 10/10/20
field_b alone, however, that first index is of no use to you.
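To make the column-order point concrete, here is a small sketch (table and column names are hypothetical): an index on (field_a, field_b) can serve queries that filter on field_a alone or on both columns, but not queries that filter on field_b alone.

-- Hypothetical table for illustration only.
CREATE TABLE t (
    id      INT PRIMARY KEY,
    field_a INT,
    field_b INT,
    KEY idx_a_b (field_a, field_b)
);

SELECT * FROM t WHERE field_a = 1;                  -- can use idx_a_b (leftmost prefix)
SELECT * FROM t WHERE field_a = 1 AND field_b = 2;  -- can use idx_a_b on both columns
SELECT * FROM t WHERE field_b = 2;                  -- cannot use idx_a_b; would need an
                                                    -- index that starts with field_b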
On Fri, Oct 7, 2011 at 10:49 AM, Brandon Phelps wrote:
This thread has sparked my interest. What is the difference between an
index on (field_a, field_b) and an index on (field_b, field_a)?
On 10/06/2011 07:43 PM, Nuno Tavares wrote:
Neil, whenever you see multiple fields you'd like to index, you should
consider, at least:
* The frequency of each
-statement.html
On Sep 21, 2011, at 2:23 PM, Brandon Phelps wrote:
Hello all,
I would like to create a stored procedure that does the following:
1. Accepts 4 values as parameters
2. SELECTS 1 record (LIMIT 1) from a table where the 4 parameters match fields
in that table
a. If a record was returned then UPDATE the table
b. If a record was not returned then INSERT a new record
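A minimal sketch of such a procedure, assuming a hypothetical table my_table(id, col1, col2, col3, col4, hits) rather than Brandon's actual schema:

DELIMITER //

CREATE PROCEDURE upsert_record(IN p1 INT, IN p2 INT, IN p3 INT, IN p4 INT)
BEGIN
    DECLARE v_id INT DEFAULT NULL;

    -- 2. Look for one record matching all four parameters.
    SELECT id INTO v_id
      FROM my_table
     WHERE col1 = p1 AND col2 = p2 AND col3 = p3 AND col4 = p4
     LIMIT 1;

    IF v_id IS NOT NULL THEN
        -- a. A record was returned: update it.
        UPDATE my_table SET hits = hits + 1 WHERE id = v_id;
    ELSE
        -- b. No record was returned: insert a new one.
        INSERT INTO my_table (col1, col2, col3, col4, hits)
        VALUES (p1, p2, p3, p4, 1);
    END IF;
END //

DELIMITER ;

When a unique key covers the four columns, INSERT ... ON DUPLICATE KEY UPDATE expresses the same update-or-insert logic in a single statement.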
Personally I don't use any quotes for the numeric types, and single quotes for
everything else, i.e.:
UPDATE mytable SET int_field = 5 WHERE id = 3;
SELECT id FROM mytable WHERE int_field = 5;
UPDATE mytable SET varchar_field = 'Test' WHERE id = 3;
SELECT id FROM mytable WHERE varchar_field = 'Test';
Ah I see. Well thanks for your assistance!
-Brandon
On 09/08/2011 05:21 PM, Mihail Manolov wrote:
From the manual: "The default behavior for UNION is that duplicate rows are removed
from the result."
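A small illustration of that behaviour, using hypothetical tables t1 and t2:

SELECT id FROM t1 UNION SELECT id FROM t2;      -- duplicate rows appear only once
SELECT id FROM t1 UNION ALL SELECT id FROM t2;  -- duplicates are kept, and the
                                                -- deduplication cost is avoided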
On Sep 8, 2011, at 4:50 PM, Brandon Phelps wrote:
Mihail,
Thanks so much!
dpm.desc AS dst_port_desc
FROM sonicwall_connections AS sc
LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
WHERE close_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58'
On Sep 8, 2
and how far back are the users querying? How many users are concurrently
performing queries on the 32M-record table?
On Thu, Sep 8, 2011 at 8:04 PM, Brandon Phelps wrote:
Mihail,
I have considered this but have not yet determined how best to go about
partitioning the table. I don't think partition
dt' and the other on 'close_dt')
http://dev.mysql.com/doc/refman/5.1/en/index-merge-optimization.html
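A hedged sketch of that setup (the table and column names are taken from the query quoted elsewhere in the thread; the indexes themselves are assumptions): with one index per datetime column, the optimizer can combine them for the OR condition instead of scanning the whole table.

ALTER TABLE sonicwall_connections
    ADD INDEX idx_open_dt (open_dt),
    ADD INDEX idx_close_dt (close_dt);

EXPLAIN SELECT *
  FROM sonicwall_connections
 WHERE open_dt >= '2011-08-30 00:00:00'
    OR close_dt >= '2011-08-30 00:00:00';
-- For selective ranges, the plan may show type=index_merge with
-- Extra: Using sort_union(idx_open_dt, idx_close_dt).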
Regards,
Derek
On Sep 8, 2011, at 2:50 PM, Brandon Phelps wrote:
Andy,
The queries take minutes to run. MySQL is 5.1.54 and it's running on Ubuntu
server 11.04. Unfortunately
wanna take a look at table partitioning
options you may have.
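For reference, a sketch of what RANGE partitioning on the close date could look like. The column names follow the query quoted later in the thread, but the types and partition boundaries are assumptions, and MySQL requires the partitioning column to be part of every unique key, hence the composite primary key.

CREATE TABLE sonicwall_connections_part (
    id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
    open_dt     DATETIME NOT NULL,
    close_dt    DATETIME NOT NULL,
    protocol    VARCHAR(10),
    src_address INT UNSIGNED,
    src_port    SMALLINT UNSIGNED,
    dst_address INT UNSIGNED,
    dst_port    SMALLINT UNSIGNED,
    sent        BIGINT UNSIGNED,
    PRIMARY KEY (id, close_dt)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(close_dt)) (
    PARTITION p2011_08 VALUES LESS THAN (TO_DAYS('2011-09-01')),
    PARTITION p2011_09 VALUES LESS THAN (TO_DAYS('2011-10-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
);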
On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting varying
date ranges and new data is constantly coming in, so I am not sure how I could
archive/cache the necessary data
plain look like when you remove the LIMIT 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your machine?
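For reference, a few statements that help answer those questions from the MySQL side (the table name is the one used elsewhere in the thread; sensible values depend entirely on the workload and hardware):

SHOW TABLE STATUS LIKE 'sonicwall_connections';   -- Engine column shows MyISAM or InnoDB
SHOW VARIABLES LIKE 'key_buffer_size';            -- main MyISAM index cache
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';    -- main InnoDB cache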
On Thu, Sep 8, 2011 at 7:27 PM, Brandon Phelps wrote:
Thanks for the reply Andy. Unfortunately the users will be selecting
varying date ranges
, Andrew Moore wrote:
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?
Andy
On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps wrote:
On 09/01/2011 01:32 PM, Brandon Phelps wrote:
On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:
On 9/1/2011 09:42, Brandon Phelps wrote:
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > ...
> > WHERE
> > (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
> >
On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
> > SELECT
> >   sc.open_dt,
> >   sc.close_dt,
> >   sc.protocol,
> >   INET_NTOA( sc.src_address ) AS src_address,
> >   sc.src_port,
> >   INET_NTOA( sc.dst_address ) AS dst_address,
> >   sc.dst_port,
> >
Hello,
I have the following query I'd like to optimize a bit:
SELECT
sc.open_dt,
sc.close_dt,
sc.protocol,
INET_NTOA( sc.src_address ) AS src_address,
sc.src_port,
INET_NTOA( sc.dst_address ) AS dst_address,
sc.dst_port,
sc.sent,
Wang wrote:
Try an index on (dst_port, close_dt)
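A hedged sketch of that suggestion, using the table name from elsewhere in the thread and an illustrative WHERE clause (the original query is truncated here): the equality on dst_port uses the first index column and the range on close_dt the second, so both predicates are resolved from the index.

ALTER TABLE sonicwall_connections
    ADD INDEX idx_dst_port_close_dt (dst_port, close_dt);

SELECT open_dt, close_dt, protocol
  FROM sonicwall_connections
 WHERE dst_port = 80
   AND close_dt BETWEEN '2011-09-07 13:18:58' AND '2011-09-08 13:18:58';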
On Wed, Aug 10, 2011 at 14:01, Brandon Phelps <bphe...@gls.com> wrote:
Hello all,
I am using the query below and variations of it to query a database with
a TON of records. Currently the database has around 11 million records
but it grows every day and should cap out at around 150 million.
I am curious if there is any way I can better optimize the below query,