. But one question that ran through my mind as I was reading this:
How do you do your backups?
I use mylvmbackup on a slave in a master-slave replication setup, but I would
love to know how you do your backups and what you use.
Hope you come right with your problem.
Thanks
Brent
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| Uptime                    | 18492 |
| Uptime_since_flush_status | 18492 |
+---------------------------+-------+
> What is Apache's MaxClients?
MaxClients 1000
Thank you again.
Brent
>
>
> On 3/27/12 6
uffer_size=8384512
max_connections=41
The original /etc/mysql/my.cnf is untouched.
The only other services we have running are DRBD (active/active) and Apache;
nothing is hitting Apache, as this is the standby node.
And there is no load, nor anything else consuming resources.
Thanks
Brent
On 27/03/
a, I
will buy you a meal, let alone a beer :)
Kindest Regards
Brent Clark
*** glibc detected *** /usr/sbin/mysqld: double free or corruption (!prev):
0x7f5e38003b60 ***
=== Backtrace: =
/lib/libc.so.6(+0x71bd6)[0x7f5e4575dbd6]
/lib/libc.so.6(cfree+0x6c)[0x7f5e4
[ERROR] 1 client is using or
hasn't closed the table properly
Would anyone know what is causing this, and how I can go about fixing it?
Do you think the problem could be on node 1?
Kind Regards
Brent Clark
Hiya
I was wondering: is there any way MySQL can be monitored for when you need to
run mysqladmin flush-hosts?
Kind Regards
Brent Clark
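One way to watch for this (a sketch, not a full monitoring setup): the blocks come from max_connect_errors, so you can poll the Aborted_connects counter and compare it against that limit.

```sql
-- Hosts get blocked after max_connect_errors failed connects.
SHOW GLOBAL VARIABLES LIKE 'max_connect_errors';

-- A rising Aborted_connects counter is an early warning that some
-- host is racking up failed connections toward that limit.
SHOW GLOBAL STATUS LIKE 'Aborted_connects';

-- Clearing the per-host counters is equivalent to mysqladmin flush-hosts:
FLUSH HOSTS;
```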
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
ative solution. I could reply and write better queries, but
somehow I don't think the client would be too happy with that.
Kind Regards
Brent Clark
Hiya
Thank you so much for replying. I really appreciate it.
I know the answer (well, I think I do :) ), but I'm still going to ask.
What is the risk if I do "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'
IDENTIFIED BY 'mysql' WITH GRANT OPTION;"?
To satisf
ils are:
mysql> select user, host from user WHERE user = 'root';
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | localhost |
+------+-----------+
If someone can share their opinion, thoughts or share the same concerns
it would be appreciated.
Kind Regards
Brent Clark
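For comparison, a narrower grant avoids exposing root from every host; the account name and address range here are hypothetical placeholders, not anything from the thread:

```sql
-- Instead of 'root'@'%', restrict the account to the hosts that need it.
-- 'appuser' and 192.168.1.% are made-up examples.
GRANT ALL PRIVILEGES ON *.* TO 'appuser'@'192.168.1.%'
    IDENTIFIED BY 'a-strong-password';
FLUSH PRIVILEGES;
```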
Hiya
I think I read somewhere that MySQL 5.5 defaults to InnoDB as its
storage engine.
The question I would like to ask is: for those still running MySQL 5.0 /
5.1, have any of you set the default storage engine to InnoDB?
Regards
Brent Clark
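For what it's worth, on 5.0/5.1 this can be set in my.cnf (a sketch; it only affects tables created after the change):

```ini
# /etc/mysql/my.cnf -- make InnoDB the default engine for new tables
[mysqld]
default-storage-engine = InnoDB
```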
_files
Value: 300
1 row in set (0.00 sec)
Could the other option to look at be 'open_files_limit'?
The database is a mixture of InnoDB and MyISAM.
I really need to get replication working; if someone could help me
understand this issue, it would be appreciated.
Regards
Brent Cla
we have opted for mysqldump and reimport.
Don't get me wrong: we know it's slower and may not be bulletproof, but
we are not seeing missing data, and we are reimporting as an extra measure.
Brent
P.s. The one cool thing is that we have been able to add
'innodb_file_per_table'.
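As a sketch of what that option does, for anyone following along (it only applies to tables created or reimported after it is enabled):

```ini
# my.cnf: give each InnoDB table its own .ibd file instead of
# growing the shared ibdata1 tablespace, which never shrinks.
[mysqld]
innodb_file_per_table
```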
Sorry
Would you be so kind as to explain your thinking?
How would upgrading MySQL fix the issue?
Regards
Brent Clark
On 20/04/2011 06:23, Suresh Kuna wrote:
Install the latest version of mysql on top of the current version and
start the database.
On Tue, Apr 19, 2011 at 9:34 PM, Brent
Thanks for replying
5.1.55
On 19/04/2011 13:55, Suresh Kuna wrote:
What is the version of MYSQL you are using currently ?
On Tue, Apr 19, 2011 at 3:46 PM, Brent Clark
mailto:brentgclarkl...@gmail.com>> wrote:
I've added
innodb_force_recovery=4
Still no go.
Original Message
Subject:Mysql issue / crashing
Date: Tue, 19 Apr 2011 12:15:30 +0200
From: Brent Clark
To: mysql@lists.mysql.com
Hiya
I'm getting the following.
I ran myisamchk --silent --force */*.MYI
But still I get the following.
I can't see how I can bring MySQL up.
# mysqld
110419 12:13:22 [Warning] 'for replication startup options' is
deprecated and will be removed in a future release. Please use ''CHANGE
MASTER''
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | Contact | ALL  | NULL          | NULL | NULL    | NULL | 344710 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)
Thanks
Brent
Is there a rollback procedure, or can I just change the binlog format back
to STATEMENT?
If anyone can help me understand this, it would be greatly appreciated.
Kind Regards
Brent Clark
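For reference, switching the format back is a single variable change (a sketch; it only affects sessions started afterwards, and on 5.1 it can also be set per session):

```sql
-- Revert replication logging to statement-based format.
SET GLOBAL binlog_format = 'STATEMENT';

-- Verify the change took effect:
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
```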
ted 5.4GB database file, and I need to
import it as quickly as possible.
When I initially tested it, it ran for 170m40.935s.
Generally I just scp the files (learnt this from mylvmbackup), but the
problem is that the source DB is MySQL 5.1 and the target is MySQL 5.0.
Thanks
Brent
Lenny box?
If anyone could help / answer, it would be appreciated.
Kind Regards
Brent Clark
monitoring)
I use mylvmbackup. Works like a bomb.
HTH
Brent
But the real reason is that the first clients machine is under heavy
load and we are trying to see what can be improved.
Kind Regards
Brent Clark
then the forums was up
to date again.
Has anyone seen this type of issue with MySQL? I promise you that both
slaves are up and synced; I actually monitor them with Nagios.
If anyone could share some thoughts on what MySQL is doing, or anything
else, it would be appreciated.
Kind Regards
B
--database newdb /tmp/dump
My question is: is my logic and understanding correct, and would it be
OK to do it like this?
Kind Regards
Brent Clark
share some thoughts or opinions on this. Or what's the correct
practice?
Kind Regards
Brent Clark
s do you use to help with debugging and testing?
Kind Regards
Brent Clark
BLE?
Kind Regards
Brent Clark
On 11/03/2010 16:52, Krishna Chandra Prajapati wrote:
Hi Brent
You can visit the below link.
http://www.mysqlperformanceblog.com/2009/06/09/mysql-proxy-urgh-performance-and-scalability/
Well, that's disappointing. Sigh.
So what are we supposed to use for load balancing MySQL?
Thank you for
ance under load.
If anyone can share their failures, successes, or even just thoughts and
opinions on mysql-proxy (or SQL load balancing in general), I would be most
grateful.
Kind Regards
Brent Clark
issue if you are actually
replaying those logs (ie. either for replication or for media recovery).
I think I was reading the MySQL 5.1 manual - so maybe this is different
with 5.0?
Kind Regards
Brent Clark
On 15/01/2010 11:54, Krishna Chandra Prajapati wrote:
Hi Brent,
I believe you missed log_slave_updates component on 2 and 1. You have
to enable binary logs + log_slave_updates on 2 and 1.
http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_log-slave-updates
licate from one, but I was hoping to make
it off 2.
If anyone could help me understand this, it would be greatly appreciated.
Kind Regards
Brent
On 17/12/2009 17:46, mos wrote:
"Load Data ..." is still going to be much faster.
Mike
Hiya
If you using on Linux and using LVM, look at mylvmbackup.
HTH
Brent Clark
ize passes a certain
level, which is based on your RAM and InnoDB settings.
MyISAM performance is usually fairly steady as the size of the table
increases.
--
Brent Baisley
pagination, and make sure what you are
ordering by is unique. This is simple to do, just add the unique ID
field as the last order by field. Then you will always get
"consecutive" rows.
Brent
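The tie-breaking trick described above looks like this (a sketch; the table and column names are made up):

```sql
-- Without the id tie-breaker, rows sharing the same created_at value
-- can swap positions between pages. Adding the unique id as the last
-- ORDER BY column makes the ordering deterministic, so pagination
-- always returns consecutive rows.
SELECT id, title, created_at
FROM articles
ORDER BY created_at DESC, id DESC
LIMIT 40, 20;   -- page 3, 20 rows per page
```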
On Nov 23, 2009, at 12:09 PM, Miguel Cardenas wrote:
Hello list :)
I am dev
since
it's unlikely all of them will be running a query at the same time.
Unless your queries are really slow.
I'm using a server with 2GB RAM that gets 300K hits a day, adds 250K
records per day to the databases, with one table having almost 50
million records.
Brent Baisley
On Fri, Sep
error checking, etc. That way
you don't have to rewrite the same lines every time you want to run a
query.
Brent Baisley
On Fri, Sep 4, 2009 at 6:51 AM, AndrewJames wrote:
> is there a better way (hopefully simpler) to code this?
>
> i want to get the user id of the logged in user to use i
tegories b on a.cat_id = b.cat_id where
a.user_id=1 and a.acc_id=3 order by a.tran_date ASC
Brent Baisley
On Thu, Sep 3, 2009 at 1:56 PM, John
Daisley wrote:
> Hi,
>
> Hoping someone can help me with this little issue! It seems really
> simple but my brain is refusing to work.
>
MySQL database common files
ii  mysql-server-5.0  5.0.51a-24+lenny1      MySQL database server binaries
ii  php5-mysql        5.2.6.dfsg.1-1+lenny3  MySQL module for php5
Kind Regards
Brent Clark
MySQL doesn't support timezones (I think Postgres does). I usually
just store dates as Greenwich Mean Time and store the time zone hours
offset in a separate field.
Brent Baisley
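A sketch of that layout (table and column names are made up, not from the thread):

```sql
CREATE TABLE events (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    happened_at DATETIME NOT NULL,   -- always stored as GMT/UTC
    tz_offset   TINYINT  NOT NULL    -- hours offset from GMT, e.g. -5
);

-- Reconstruct the local time at query time:
SELECT id, happened_at + INTERVAL tz_offset HOUR AS local_time
FROM events;
```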
On Tue, Jul 28, 2009 at 7:59 AM, Manoj Singh wrote:
> Hi All,
>
> Is it possible to store the timezone
ame is the current record, sometimes it's a "parent" record,
you need to conditional check which type of "record" it is and built
the sort value.
SELECT tablename.*,
IF(tablename.head_id IS NULL,
CONCAT(tablename.name, tablename.member_id),
CONCAT(heads.name, table
.
SELECT * FROM orders USE INDEX (index_a) WHERE ...
Brent Baisley
On Tue, Jul 21, 2009 at 5:52 AM, Morten wrote:
>
> Hi, I have a table "orders" with the columns
>
> item_id INT FK items(id)
> customer_id INT FK customers(id)
> status_id TINYINT -- Be
t rid of all the dups.
But I agree, that is the best way to remove duplicates in place
provided the table is not too large.
Brent Baisley
On Tue, Jul 14, 2009 at 11:52 AM, Marcus
Bointon wrote:
> You can combine the two queries you have in option 3 (you'll need to change
> field names
of activity (inserts and deletes).
Brent Baisley
On Fri, Jun 26, 2009 at 11:25 AM, fa so wrote:
> I have a website where my database is continuously growing. And I started
> being worried about performance.
> I have a couple of questions, and I would appreciate it very much if you can
>
+
cos(radians(39.0788994))*cos(radians(latitude))*cos(radians(-77.1227036-longitude*60*1.1515
< 5) AS zips
ON custzip=zip
Often times that simple change speeds things up considerably in MySQL.
An explain should show it has a DERIVED TABLE if I recall correctly.
Brent Baisley
On Thu, Jun 18, 2009
AT(start), ',', 4 ), ',',
-1 ) start4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 4 ), ',', -1 ) end4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 5 ), ',',
-1 ) start5,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CO
her than just one. Your inserts you should
absolutely use bulk inserts. Just build up a list of values that need
to be saved and when you hit 100 (or some other batch size), bulk
insert into the database and bulk write to the file.
Brent Baisley
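A multi-row INSERT of the kind described (a sketch; the table and columns are made up):

```sql
-- One statement per batch instead of one round-trip per row.
INSERT INTO log_entries (logged_at, message) VALUES
    ('2009-05-06 12:00:01', 'first'),
    ('2009-05-06 12:00:02', 'second'),
    ('2009-05-06 12:00:03', 'third');
-- ...accumulate ~100 value tuples in the application, then execute.
```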
On Wed, May 6, 2009 at 12:02 PM, Andrew Carlson wrote:
h
ones are in a different state. One of those may be the culprit.
Brent Baisley
On Mon, Apr 20, 2009 at 10:28 AM, living liquid | Christian Meisinger
wrote:
> Hi there.
>
> I've a small table with my daily banner hits.
> 1. version was with myisam but with a lot of concurrent que
like your issue has been resolved.
Interesting, your temp1 attached file shows mysql switched from using
the org_date index to the organization index.
Brent Baisley
2009/3/12 Carl :
> Brent,
>
> After a delay while I was busy killing alligators, I did as you suggested
> (added a
If you have an auto increment column, order it by that value. That
field will have the order the records were imported in.
Brent Baisley
On Mar 6, 2009, at 9:10 PM, revDAVE wrote:
Hi folks,
I'm trying to use MySQL Migration Toolkit 1.1 with MS SQL server 2005
http://dev.mysql.com/dow
ikely just locking the table.
Is that how many records you want to return? That seems like a lot.
Maybe reworking your query may help. Heck, post the sizeable query.
You've been spending weeks on it.
Brent Baisley
On Tue, Mar 3, 2009 at 10:53 AM, Carl wrote:
> I have been wrestling with
Be careful with using InnoDB with large tables. Performance drops
quickly and quite a bit once the size exceeds your RAM capabilities.
On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote:
Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave on th
have everything you need.
Brent Baisley
On Mon, Feb 23, 2009 at 6:58 AM, Andy Smith wrote:
> What RAID level to use, whether to use SCSI or SATA etc are all pretty much
> "how long is a piece of string?" questions. If you have a really high end
> hardware array RAID 5 may b
ord (i.e. triplicates), this will not clean them all
out. You can keep running the query to delete the remaining duplicates.
Test the query first to make sure it's working properly. Just switch
"DELETE tableNAME" with "SELECT fieldName(s)".
Brent Baisley
';
$result4 = mysql_query($sql4, $db);
That example does not sanitize the data before inserting.
Brent
On Mon, Feb 23, 2009 at 10:25 AM, PJ wrote:
> I think this is a tough one... and way above my head:
> PLEASE READ ALL OF THE ABOVE TO UNDERSTAND WHAT I AM TRYING TO DO.
> Having a bit o
d AND quotation.id IN (107037, 304650,
508795, 712723, 1054653))
JOIN part ON ( part.id = quotation.part_id )
That may or may not help, check if the explain changes.
Brent Baisley
On Mon, Jan 26, 2009 at 6:16 AM, Jesse Sheidlower wrote:
>
> I have an app that joins results from a MySQL que
sure 100% of their uselessness.
Cheers
Claudio Nanni
Thanks for your reply and feedback.
Regards
Brent Clark
Hiya
I just inherited a project, and before I get started, Id like to do a
little clean up.
There are a * load of unused and empty tables in the db. My question
is: does this in any way affect the performance of MySQL, and
if so, how?
Kind Regards
Brent Clark
The ratings field would be NULL. You could also add a count in your
query to tell how many ratings there were. If count is 0, you know
there are no ratings.
SELECT count(ratings.rating_id) AS rate_count, ...
Brent Baisley
On Mon, Dec 22, 2008 at 12:39 PM, Brian Dunning wrote:
> If I did
ou'll get an arbitrary supplier ID out of those with the
minimum price. This is because there is no unique value to join on.
Hope that points you in the right direction.
Brent Baisley
Alias the table you are joining on so you can join it twice, one
for each condition.
select m.code, m.parent,
if( m.parent > 0, t.data, t1.data ) AS data
from main m
left join tdata t
on t.code = m.parent and t.country='dk'
left join tdata t1
on t1.code=m.code and t1.country='dk'
That's because RAND() is a decimal (0.37689672). Try score*RAND().
Brent Baisley
On Nov 30, 2008, at 2:03 AM, sangprabv wrote:
Hi,
Thanks for the reply. I have tried it, but I don't see RAND() working;
the list is always the same. TIA
Willy
-Original Message-
n Fri, Nov 21, 2008 at 2:12 PM, Andre Matos <[EMAIL PROTECTED]> wrote:
> Sounds interesting, but does the MERGER support complex SELECT statements
> and LEFT JOIN?
>
> Andre
>
>
> On 21-Nov-08, at 1:45 PM, Brent Baisley wrote:
>
>> On Fri, Nov 21, 2008 at 12:44 PM,
; FROM db5
> WHERE TaskDoneOn IS NOT NULL
>
>
> Today I have 5, but tomorrow I can have 50 and I don't want to forget any
> database.
>
> Thanks for any help.
>
> Andre
Create a MERGE table that is all those tables combined. Then you just
need to do 1 select as i
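A MERGE definition of that shape (a sketch; the underlying tables must be identically structured MyISAM tables, and the names here are stand-ins for the per-database tables in the thread):

```sql
-- db1_tasks..db3_tasks stand in for the actual underlying tables.
CREATE TABLE all_tasks (
    id         INT NOT NULL,
    TaskDoneOn DATETIME
) ENGINE=MERGE UNION=(db1_tasks, db2_tasks, db3_tasks) INSERT_METHOD=LAST;

-- One query now covers every underlying table:
SELECT * FROM all_tasks WHERE TaskDoneOn IS NOT NULL;
```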
MERGE list while also removing the oldest table from the MERGE
list. You still have all the data, but you've removed it from normal
use with virtually no overhead.
Brent Baisley
On Nov 17, 2008, at 9:53 PM, Micah Stevens wrote:
I don't think this is indicative of a design iss
't be too hard to add another subquery (i.e.
LEFT JOIN on the dup select WHERE table.uniqueId IS NULL) to that to
filter so you can delete all duplicates in 1 shot. This has always
been something I had to do very infrequently, so I never bothered
taking it further.
Hope that help!
Br
On Mon, Nov 17, 2008 at 7:56 PM, sangprabv <[EMAIL PROTECTED]> wrote:
> Hi,
> I just want to know what causes table/db overhead. My tables always get
> overhead problems, and I must run an OPTIMIZE query every morning. Is
> there any other solution? TIA.
>
>
> Willy
What is hap
t will speed
up searches, the speed improvement likely won't be noticeable for the
searches you listed.
Make sure query cache is enabled. That will help a lot since the
result of the search will be cached until the table changes.
Brent Baisley
Why are you creating a subquery/derived table?
Just change your limit to 1,2
ORDER BY updates.AcctSessionTime DESC LIMIT 1,2
Like you did in the outer query.
Brent
On Fri, Oct 17, 2008 at 5:12 AM, Ian Christian <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm trying to work o
Both times seem a bit long, even if your database has millions of rows.
Can you post an EXPLAIN of your query? That they are in different
databases should have minimal effect on your query.
Brent
On Oct 3, 2008, at 12:14 PM, mos wrote:
I have two indexed MyISAM tables, each in a separate
ds where a.Type = "Signature
Based Return".
Brent
On Sep 23, 2008, at 9:29 PM, kalin m wrote:
hi all...
can somebody explain why a conditional count like this one doesn't
work:
count(if(a.Type = "Signature Based Return", a.amount,'')) group
by.
pconnect.
Brent Baisley
On Fri, Sep 19, 2008 at 3:51 PM, Jaime Fuentes <[EMAIL PROTECTED]> wrote:
> You have to use mysql 64bits on S.O. 64bits
>
> --Mensaje original--
> De: Martin Gainty
> Para: Kinney, Gail
> Para: 'mysql@lists.mysql.com'
> Enviado: 19
omes in, which will
probably be often in this case.
Think of fields as labels for your data. If you create a table with 2
fields (label, value), you can have unlimited "fields" and add new
ones at will because they are just new records with a different label.
Depending on your data, you ma
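The label/value idea sketched as a table (hypothetical names; this is the classic entity-attribute-value layout):

```sql
-- Each "field" becomes a row; adding a new field needs no ALTER TABLE.
CREATE TABLE item_attributes (
    item_id INT          NOT NULL,
    label   VARCHAR(64)  NOT NULL,   -- the "field name"
    value   VARCHAR(255),
    PRIMARY KEY (item_id, label)
);

INSERT INTO item_attributes VALUES
    (1, 'color',  'red'),
    (1, 'weight', '12kg');
```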
size for varchar. Text is like varchar, but with a
"fixed" max size of 65,535 characters.
Brent Baisley
On Tue, Sep 9, 2008 at 8:24 AM, Krishna Chandra Prajapati
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> I would like to know the difference between char, varchar and text.
>
MySQL has no idea how you are presenting the data (html, rtf, etc.),
so it couldn't highlight the words for you. It shouldn't really be that
tricky using grep and PHP.
Brent
On Sep 8, 2008, at 10:58 AM, Fco. Mario Barcala Rodríguez wrote:
Hi all:
I was reading documentation and searching
RGE tables if they
are applicable.
Hope that helps or points you in the right direction.
Brent Baisley
On Sep 4, 2008, at 4:26 PM, Josh Miller wrote:
Good afternoon,
I have recently converted a large table from MyISAM to InnoDB and am
experiencing severe performance issues because of i
lumns should probably be records with a column indicating what
type of data it is.
Brent Baisley
On Sep 4, 2008, at 5:11 AM, drflxms wrote:
Dear MySQL specialists,
this is a MySQL-newbie question: I want to create a view of a table,
where all NULL-values are substituted by 0. Therefore I
NCAT(DATA, @cntr) FROM tableA ORDER
BY LOC
That should add a sequential number to LOC and DATA that will reset to
0 whenever the value of LOC changes. Some of the IFs in there are just
to suppress output of variable assignment.
Hope that helps
Brent Baisley
On Sep 5, 2008, at 5:44 P
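A complete version of the user-variable counter described above (a sketch; tableA, LOC, and DATA are the names from the thread, and the technique relies on MySQL evaluating the variables row by row in select order):

```sql
SET @cntr := -1, @prev := NULL;

-- @cntr restarts at 0 each time the value of LOC changes.
SELECT LOC,
       @cntr := IF(@prev = LOC, @cntr + 1, 0) AS seq,
       @prev := LOC                           AS prev_loc,
       CONCAT(DATA, @cntr)                    AS numbered
FROM tableA
ORDER BY LOC;
```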
tiple queries.
Divide and conquer, it will scale better.
Brent Baisley
On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt <[EMAIL PROTECTED]> wrote:
> Greetings List,
>
> We have a medium-large size database application which we are trying to
> optimize and I have a few questions.
>
> S
It might be because you are comparing user_id to a string, when
the field type is a decimal. Drop the quotes around the user_id search
value and see if that works.
Brent Baisley
On Sep 1, 2008, at 3:59 PM, Krishna Chandra Prajapati wrote:
Hi,
In the query below explain gives
Just re-enable the
InnoDB stuff and you should be alright. You can leave your default
engine as MYISAM and if you like, you can use ALTER TABLE to convert
your InnoDB tables to MYISAM.
--
Brent Baisley
On Aug 15, 2008, at 1:01 AM, [EMAIL PROTECTED] wrote:
Hello mysql,
As I have previ
Just do a left join with the delete query.
DELETE feed_tag FROM feed_tag LEFT JOIN feed ON
feed_tag.feed_id=feed.id WHERE feed.id IS NULL
That should do it. You can change "DELETE feed_tag" to "SELECT" and
test it first.
--
Brent Baisley
On Aug 13, 2008, at 4:5
XPLAIN will actually be helpful.
Hope that helps.
Brent Baisley
On Aug 11, 2008, at 8:26 AM, Jonathan Terhorst wrote:
I have this query:
SELECT DISTINCT t1.string FROM t1
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a
LEFT JOIN t4 ON t1.int_b=t4.in
y delete from multiple
tables in the same query.
Hope that helps.
Brent Baisley
On Wed, Aug 6, 2008 at 4:31 AM, Magnus Smith
<[EMAIL PROTECTED]> wrote:
> I have the following two tables
>
> ACCOUNTACTION
> +---+--+--+-+-+--
you start back up.
--
Brent Baisley
On Tue, Aug 12, 2008 at 8:29 AM, <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I try to generate a unique id for each row in a Mysql-InnoDB Table. Because
> of many deletes I can't use an auto_increment column.
> After a Mysql rest
speeds.
One odd thing to check is if the "old" drive supports command queueing
and the new one does not. I assume they are both SATA drives. All SCSI
drives support command queueing and it can make a huge difference
depending on access patterns.
Brent
On Mon, Jul 21, 2008 at 8:42 AM, Ph
AS mx_cust_full
on cust_full.name=mx_cust_full.name AND
cust_full.acq_date=mx_cust_full.mx_acq_date
The name+acq_date is going to be your unique string to join on. You're
finding out the max, then finding out which record is associated with
the max.
Brent Baisley
I write code.
On Jun 20, 20
count(DISTINCT posts.post_id) as counted
That will count the number of unique posts. I don't know what your
unique field name is for the posts table.
Brent Baisley
Systems Architect
On Sat, May 3, 2008 at 9:00 PM, Patrick Aljord <[EMAIL PROTECTED]> wrote:
> hey all,
>
> I have
e, since the inner query returns all
records, then a limit is imposed.
Brent Baisley
Systems Architect
On Apr 26, 2008, at 7:22 AM, j's mysql general wrote:
Hi Guys,
Firstly, this is the only time I have ever encountered this problem
and
searching archives or google shed no luck since y
d the slave becomes the master.
Very simple in theory, a bit more complicated in practice.
Brent Baisley
Systems Architect
On Apr 23, 2008, at 2:28 PM, Paul Choi wrote:
Does anyone have experience with upgrading large databases (~500GB
each)
from MySQL 4.1 to 5.0? The tables are in InnoDB forma
Just because a database is set up as a slave, that doesn't mean you
can't use it like a typical database. You can insert, delete, update,
etc. just like any other DB. Something or someone is likely adding
records directly to the slave, which is then generating its own auto-
in
Does the plus query return more than 50% of the records? If so, MySQL
won't return anything, since the result set isn't considered relevant.
Brent Baisley
Systems Architect
On Apr 11, 2008, at 8:08 AM, Barry wrote:
I am confused ( nothing new there), what I thought was a simple
search
just the opposite.
An outer join doesn't filter the table, it just finds any matching
content if it's present. Anything without matching content has a
"null" where normal matched content would be.
Brent Baisley
PHP, MySQL, Linux, Mac
I write code
On Mar 1, 2008, at 4
When you establish a connection, it's a connection to a server, not a
specific DB. You can set a default db so that you don't always have
to specify the db name you are working with. So to answer your
question, no, a new connection is not established.
Brent
On Jan 19, 2008, a
Sounds like you should create a MERGE table that links all the
underlying tables together. Then you just query the merge table and
MySQL handles which tables it needs to pull data from. You also then
don't need to query for the tables.
On Jan 9, 2008, at 9:12 AM, Cx Cx wrote:
Hi List,
I
1, or 2 as I recall.
The Falcon engine (MySQL 6) actually has a very good caching
mechanism, but that's not officially released yet.
Brent
On Jan 8, 2008, at 11:20 AM, Thomas Raso wrote:
Hi all,
how mysql manage the query_cache TimeToLive (live) and how can I
change it ?
Thanks
.
Brent
On Jan 8, 2008, at 12:24 PM, Wes Hegge wrote:
I am attempting to left join several tables. This was working at
one time
but has seemed to stop just recently. I have checked and installed
the
lastest version of mysql via Debian Etch apt-get. I am running
version
5.0.32.
I have
't seem that much, but MySQL optimizes it
very differently.
Brent
On Jan 4, 2008, at 5:47 PM, Eben wrote:
Hi,
I have a query that has to run on a full text indexed table with
many millions of records. I'm trying to figure out some
optimizations for it. Here's the gen
NG cnt>1
) AS t1
LEFT JOIN
addressbook AS t2
ON t1.email1=t2.email AND t1.id1!=t2.id
ORDER BY email1
I haven't tested it, but that query should work and give you the
output you want. I don't recall if it works in v4.0, but v4.1 and
above should work fine.
Brent
On Dec 12, 2007
JOIN. That information should be in the JOIN.
Try making that change. I believe I explained what is happening and
why. Although I may be wrong, please post if it doesn't work.
Regardless, you want to see the movements table listed first in your
EXPLAIN.
Brent
On Nov 9, 2007, at