Why doesn't MySQL support gzip encode/decode functions?

2007-06-12 Thread Kevin Burton

Why does MySQL support only zlib for COMPRESS/UNCOMPRESS, and not gzip?

For network applications zlib is a lot less compatible than gzip.

For example I could send gzip'd content directly from the database within a
larger gzip'd stream.
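For what it's worth, the two formats share the same DEFLATE payload and differ only in framing: zlib (RFC 1950) uses a two-byte header and an Adler-32 trailer, while gzip (RFC 1952) wraps the same compressed bytes in magic bytes, optional metadata, and a CRC-32 trailer. A quick Python sketch of the difference:

```python
import gzip
import zlib

payload = b"hello " * 100

# zlib framing (RFC 1950): 2-byte header + DEFLATE data + Adler-32 trailer.
# This is what MySQL's COMPRESS()/UNCOMPRESS() speak.
zlib_blob = zlib.compress(payload)

# gzip framing (RFC 1952): magic bytes 0x1f 0x8b, flags/mtime fields,
# the same DEFLATE data, then a CRC-32 trailer.
gzip_blob = gzip.compress(payload)

assert zlib_blob[:1] == b"\x78"      # typical zlib header byte
assert gzip_blob[:2] == b"\x1f\x8b"  # gzip magic

# Both containers decode back to the same bytes, but neither tool
# accepts the other's framing as-is.
assert zlib.decompress(zlib_blob) == payload
assert gzip.decompress(gzip_blob) == payload
```

So a gzip-aware UNCOMPRESS would indeed let database output drop straight into a gzip'd HTTP stream; with zlib framing you have to re-wrap it first.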

Kevin

--
Founder/CEO Tailrank.com
Location: San Francisco, CA
AIM/YIM: sfburtonator
Skype: burtonator
Blog: feedblog.org
Cell: 415-637-8078


BUG: MySQL 5.1.19 with UNCOMPRESS on *VALID* data fails when generated by 3rd party zlib.

2007-06-11 Thread Kevin Burton

OK I think I've found a bug with MySQL's compression support. :-/

I'm using two Java zlib implementations.  One is jzlib 1.0.7 and the other
is java.util.zip.DeflaterOutputStream.  Both of these are referenced by the
zlib project as being compatible.

I can compress/uncompress locally WITHOUT a problem.

When I store the data in the DB, the value is stored correctly in a BLOB:
the MD5 hash of the stored value matches the hash of my local byte array,
which for all practical purposes means they're the same.

The only problem is that UNCOMPRESS won't work...

It returns null and I get:


mysql> SHOW WARNINGS;
+-------+------+----------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                              |
+-------+------+----------------------------------------------------------------------------------------------------------------------+
| Error | 1256 | Uncompressed data size too large; the maximum size is 1047552 (probably, length of uncompressed data was corrupted) |
+-------+------+----------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Sure enough:

mysql> SELECT UNCOMPRESSED_LENGTH(BODY) FROM FOO;
+---------------------------+
| UNCOMPRESSED_LENGTH(BODY) |
+---------------------------+
|                 147577464 |
+---------------------------+
1 row in set (0.00 sec)

..

I've tested this on 5.1.19 and 4.1.21 with the same symptoms.

Anyone have any advice here?  Did you guys make any changes to the zlib
implementation you're using?

I'm willing to file this as a bug if necessary.
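For anyone hitting this later: MySQL's COMPRESS() prepends the uncompressed length as a four-byte little-endian integer ahead of the zlib stream, and UNCOMPRESS() reads those four bytes back. A bare zlib stream from a third-party library has no such prefix, so its first bytes get interpreted as a (huge, bogus) length, consistent with the 147577464 above. A Python sketch of the framing:

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    # MySQL's COMPRESS() output: 4-byte little-endian uncompressed
    # length, followed by the raw zlib stream.
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    length = struct.unpack("<I", blob[:4])[0]
    out = zlib.decompress(blob[4:])
    assert len(out) == length
    return out

body = b"some row body " * 50
assert mysql_uncompress(mysql_compress(body)) == body

# A bare zlib stream (what a third-party compressor produces) has no
# length prefix, so the first four bytes of the zlib header/data get
# read as the length -- a huge bogus number, which trips the size
# check seen in the warning.
bare = zlib.compress(body)
bogus_length = struct.unpack("<I", bare[:4])[0]
assert bogus_length != len(body)
```

So the fix on the client side is to prepend that length word (and strip it again when reading) rather than handing UNCOMPRESS a raw zlib stream.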

Kevin



Potential problems running MyISAM on ram drives?

2007-04-10 Thread Kevin Burton

Hey.

We have the need to have some tables stored in memory for performance
reasons.

We were thinking about just using MEMORY tables, but this is non-ideal since
they use a fixed row size.

Using MyISAM would be much better since it supports variable length rows.

Backups would be handled by just using mysqlhotcopy and snapshotting the
tables to disk either on the master or the slave.

Has anyone done this and have they had any problems?

Kevin



Re: what journal options should I use on linux?

2007-03-09 Thread Kevin Burton

Just use XFS. It's a solved problem.

Kevin

On 3/8/07, Christopher A. Kantarjiev <[EMAIL PROTECTED]> wrote:


I'm setting up mysql on linux for the first time (have been using OpenBSD
and
NetBSD with UFS until now). The default file system is ext3fs, and I don't
mind
that, but it seems really silly to use a journaled file system for the
database
data - doubling my writes.

In particular, I have a couple of use cases where I spend a week or so
creating
a 17GB data (table) file and its 15GB index file, and then do sparse
queries out
of it. I need as much write speed as I can get. I certainly don't want to
have
every data block written twice, once to the journal and once to the file,
along
with the extra seeks.

What do people with this sort of large problem use on Linux?

Thanks,
chris


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Ability to escape varbinary data when sent to the console?

2007-03-06 Thread Kevin Burton

We need to store binary data from time to time in MySQL.  To date I've just
base64-encoded the data to avoid having it corrupt the console on SELECT *.

Is there any way to have the mysql command line client automatically do this
for me?  Is there any work around?

base64 adds about 30% data bloat that I'd like to avoid.
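The ~30% figure comes from base64 emitting four output bytes for every three input bytes; hex, the other easy console-safe option, doubles the data. A quick check:

```python
import base64
import binascii
import os

raw = os.urandom(3000)  # arbitrary binary blob, length divisible by 3

b64 = base64.b64encode(raw)
hexed = binascii.hexlify(raw)

# base64: 4 output bytes per 3 input bytes -> ~33% bloat (no padding here).
assert len(b64) == 4 * len(raw) // 3
# hex is worse still: exactly 2x.
assert len(hexed) == 2 * len(raw)
```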

Kevin



Re: Mysql on linux - choosing the right filesystem

2007-02-25 Thread Kevin Burton

On 2/24/07, Jean-Sebastien Pilon <[EMAIL PROTECTED]> wrote:


Hello,

I would like to get some of your input on file systems to use with
mysql. Should I use a journaling filesystem ? Should I choose a
different one based on what I store (log files, myisam dbs, innodb
datafiles, etc ) ? Is there any file system tweaks you recommend ?

TIA



Real BITs which use 1 bit in 5.1?

2007-02-17 Thread Kevin Burton

A little birdie:

http://forge.mysql.com/wiki/Top10SQLPerformanceTips

notes..

"In 5.1 BOOL/BIT NOT NULL type is 1 bit, in previous versions it's 1 byte."

Is this true?

I didn't see a note in the manual..

I assume it would be here

http://dev.mysql.com/doc/refman/5.1/en/storage-requirements.html




Script to perform unattended slave sync with master...

2007-02-16 Thread Kevin Burton

Has anyone built a script to add a new slave into a MySQL replication
setup which can operate (for the most part) unattended?

The set of operations is pretty straightforward, but right now it's
mostly a manual process which ends up taking a LONG time.

The script would need to:

* connect to a master or a slave
* FLUSH TABLES WITH READ LOCK
* record master replication position
* take a snapshot of the MyISAM files via mysqlhotcopy, or simply cp the
files into a temp directory
* UNLOCK TABLES
* SCP the files to the target slave
* update replication positions on this box
* setup correct permissions
* startup replication
* assert that the box is functioning correctly

The transfer could be done unattended with SSH and ssh-agent.  The CPU
would be the bottleneck on gigabit ethernet but since it's unattended
it shouldn't matter as much.

One could even set up rsync with authentication if crypto was really
the bottleneck.

Thoughts?

I don't want to have to write anything because I'm amazingly lazy ;)

Kevin




BUG? SHOW SLAVE STATUS blocks on disk full?

2007-02-12 Thread Kevin Burton

Hm. 4.1.21 seems to have a 'feature' where SHOW SLAVE STATUS blocks
when the disk is full.  Thoughts?

Kevin




Re: Merge tables and ON DUPLICATE KEY UPDATE

2007-02-12 Thread Kevin Burton

I thought about that.  I was actually going to use merge tables AND
partitioning to split the underlying MyISAM tables across two disks and
then partition on top.

It's POSSIBLE to use partitioning the way I want, but I'm going to
have to grok it a bit more.

Thanks though.

Kevin

On 2/12/07, Jay Pipes <[EMAIL PROTECTED]> wrote:

Kevin Burton wrote:
> I want to use a merge table so that I can direct all new INSERTs to a
> new merge table and migrate old data off the system by having a
> continually sliding window of underlying MyISAM tables.
>
> The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and the
> value isn't in the leading table where all INSERTs go, a *new* row
> will be created.
>
> Is there any way around this problem?

What about using partitioning in MySQL 5.1+?  Would this work?

http://dev.mysql.com/doc/refman/5.1/en/partitioning.html

Cheers,

Jay







Merge tables and ON DUPLICATE KEY UPDATE

2007-02-12 Thread Kevin Burton

I want to use a merge table so that I can direct all new INSERTs to a
new merge table and migrate old data off the system by having a
continually sliding window of underlying MyISAM tables.

The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and the
value isn't in the leading table where all INSERTs go, a *new* row
will be created.

Is there any way around this problem?

Kevin




A 'Simple' Protocol for Manual MySQL Slave Promotion to Master

2007-02-07 Thread Kevin Burton

Hey.

I should have posted this here earlier, but it just dawned on me that
you guys could have some good feedback:

"We've been working on the design of a protocol which would enable
promotion of a slave to a master in a MySQL replication cluster.

Right now, if a MySQL master fails, most people just deal with a
temporary outage. They bring the box back up, run REPAIR TABLEs if
necessary, and generally take a few hours of downtime.

Google, Flickr, and Friendster have protocols in place for handling
master failure but for the most part these are undocumented.

One solution would be to use a system like DRBD to get a synchronous
copy of the data into a backup DB. This would work of course but would
require more hardware and a custom kernel.

You could also use a second master in multi-master replication but
this would require more hardware as well and complicates matters now
that you're using multi-master replication which has a few technical
issues.

A simpler approach is to just take a slave and promote it to the
master. If this were possible you'd be able to start writing to the
new master almost immediately after the old master fails. You'd lose a
few transactions but if you have any critical code that depends on
data insertion you can have it assert that it reached at least one
slave before moving forward."
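One piece of such a protocol is easy to sketch: picking the promotion candidate by comparing each surviving slave's replication coordinates. This is an illustration of the idea, not the protocol from the post; the slave names and binlog file names are hypothetical (zero-padded binlog names sort lexically, so a plain tuple comparison works):

```python
def most_caught_up(slaves):
    """Given each slave's last-applied master coordinates as a
    (master_log_file, position) tuple, pick the slave that has applied
    the most of the dead master's binlog. Zero-padded binlog file names
    compare correctly as strings, so tuple ordering does the work."""
    return max(slaves, key=lambda name: slaves[name])

slaves = {
    "slave1": ("binlog.000042", 1045),
    "slave2": ("binlog.000042", 2200),  # furthest along -> promote this one
    "slave3": ("binlog.000041", 9000),
}
assert most_caught_up(slaves) == "slave2"
```

The remaining slaves would then be re-pointed at the winner, which is the hard, racy part the full protocol has to handle.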

.

http://www.feedblog.org/2007/02/a_simple_protoc_1.html




Monitoring for corrupt tables and transiently failing master INSERTs

2007-02-05 Thread Kevin Burton

We're trying to write a monitoring process for our master so that if a
table is corrupt it will raise flags which can then trigger
operations.

We can do the basic stuff such as asserting that the port is open and
that we can ping the machine but I want to test if any
INSERT/UPDATE/DELETEs are failing on the master due to table
corruption.

For example, if you have a functioning DB and then deliberately
corrupt the tables (for testing of course) I'd want SOME way to detect
that INSERTs were failing on this table.

There's currently no way to detect this, I believe.  SHOW STATUS
doesn't help, nor does SHOW TABLE STATUS.

Any pointers?




ETA for 5.1.13 ?

2006-11-19 Thread Kevin Burton

What's the ETA for 5.1.13? There are a few critical bugs with NDB that are
fixed in this rev that I'd like to play with.

I'm hoping it's right around the corner :)

Kevin



Re: Announce: LBPool 1.0 beta1 (Load Balancing JDBC Connection Pool)

2006-07-19 Thread Kevin Burton

There was a thread before about this... this is much better than
Connector/J's load balancing.

You can take machines out of production, add them back in, it's MySQL slave
aware, etc.

On 7/19/06, Christopher G. Stach II <[EMAIL PROTECTED]> wrote:


Kevin Burton wrote:
> Hey Gang.
>
> I wanted to get this out on the list and facilitate some feedback.
>
> http://www.feedblog.org/2006/07/announce_lbpool.html
>

What does this have over MySQL Connector/J's load balancing?

--
Christopher G. Stach II







Announce: LBPool 1.0 beta1 (Load Balancing JDBC Connection Pool)

2006-07-18 Thread Kevin Burton

Hey Gang.

I wanted to get this out on the list and facilitate some feedback.

http://www.feedblog.org/2006/07/announce_lbpool.html

I CC'd both lists because this might be of interest to the larger MySQL
community as the techniques I used here could be implemented in other
languages.

==

The lbpool project provides a load balancing JDBC driver for use with DB
connection pools. It wraps a normal JDBC driver providing reconnect
semantics in the event of additional hardware availability, partial system
failure, or uneven load distribution. It also evenly distributes all new
connections among slave DB servers in a given pool. Each time connect() is
called it will attempt to use the best server with the least system load.

The biggest scalability issue with large applications that are mostly READ
bound is the number of transactions per second that the disks in your
cluster can handle. You can generally solve this in two ways.

  1. Buy bigger and faster disks with expensive RAID controllers.
  2. Buy CHEAP hardware on CHEAP disks but lots of machines.

We prefer the cheap hardware approach and lbpool allows you to do this.

Even if you *did* manage to use cheap hardware most load balancing hardware
is expensive, requires a redundant balancer (if it were to fail), and seldom
has native support for MySQL.

The lbpool driver addresses all these needs.

The original solution was designed for use within MySQL replication
clusters. This generally involves a master server handling all writes with a
series of slaves which handle all reads. In this situation we could have
hundreds of slaves and lbpool would load balance queries among the boxes. If
you need more read performance just buy more boxes.

If any of them fail it won't hurt your application because lbpool will
simply block for a few seconds and move your queries over to a new
production server.

While currently designed for MySQL, this could easily be updated to support
PostgreSQL or any other DB that supports replication.




Insert speed on table with 7M rows but small row size.

2006-04-07 Thread Kevin Burton
I have a fairly small table WRT the data size.  It's about 300MB of
data.  Right now it has about 6M rows.


The schema is pretty simple.  It has one 64-bit ID column.  Basically
it's for checking the existence of an object in our DB and is designed
to work very fast.


When the table was FIRST created I could do inserts very fast.  Now
that it's grown in size, inserts are taking a LOT longer.  Sometimes 60
seconds to 2-3 minutes.


I've migrated to using bulk inserts of 1k rows or more but in  
production this might be a bit difficult.
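The batching itself is straightforward to sketch; each chunk would become one multi-row INSERT (the `executemany` call in the comment is illustrative of how a DB-API driver would consume the chunks, not code from this setup):

```python
def batches(rows, size=1000):
    """Yield rows in fixed-size chunks so each INSERT carries many
    VALUES tuples instead of one -- fewer round trips and fewer
    per-statement index flushes."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # trailing partial chunk

# Each chunk would be fed to something like:
#   cursor.executemany("INSERT INTO ids (id) VALUES (%s)", chunk)
chunks = list(batches(range(2500), size=1000))
assert [len(c) for c in chunks] == [1000, 1000, 500]
```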


Is there anyway I can tune MySQL to improve this operation?

Kevin

Kevin A. Burton, Location - San Francisco, CA
  AIM/YIM - sfburtonator,  Web - http://www.feedblog.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04






Using Network Block Device on Linux to build HUGE/cheap memory-base MySQL boxes.

2006-02-19 Thread Kevin Burton
I was talking to a friend tonight about how they use NBD to run a  
single system image in memory.


NBD (Network Block Device) allows one Linux box to export a block
device and another box to mount it.  For the memory component they just
use a RAM disk.


More info here:

http://www.ussg.iu.edu/hypermail/linux/kernel/9704.3/0492.html

Basically they just buy cheap 1U boxes with 4-8 gigs and then mount
them... this way they allow the process to allocate as much memory as
it wants and it will then start swapping, but instead of using disk it
starts using the remote memory.  Since gigabit ethernet is now FASTER
than most disk installs in terms of throughput, this would seem like a
win/win.


Here's the idea I had though.

MySQL (except for MySQL cluster) doesn't scale if you need to run an  
image across 2 boxes.  For example you can't currently take two boxes  
and run your dataset on BOTH boxes at the same time for double  
scalability.


What if you booted a MySQL install and told it to use NBD-mounted
memory?  Theoretically you could build MUCH cheaper and MUCH faster
clusters.  Your DB writes would still go to the local (RAID)
filesystem, but your InnoDB buffer pool and other buffers would be
running out of swap and into your network memory subsystem.


This would allow you to have a HUGE buffer for MySQL.  Buffer your  
whole damn database in MEMORY.


The main downside I can see is fault tolerance if the ethernet port
was pulled.  The box would fail.  Of course at this point it's a bit
like pulling a SCSI cable out.


If this turns out to be a good way to scale MySQL someone could just  
pay to have NBD enhanced to support fault tolerance with mirror nodes.


Thoughts?

Kevin




MySQL table growth monitoring via RRDtool and histographs?

2005-11-01 Thread Kevin Burton

Hey.

I'm looking for a decent tool which uses crontab to monitor the row
COUNT of tables within MySQL.  I'd also like to monitor other queries
as well.  Ideally it would use RRDtool to log the data and PHP to
draw the UI.


Ganglia and Cacti seem to do similar tasks (if you stretch them) but
they really fall down fast.


Does anyone have any other suggestions?





Re: Will SELECT COUNT(*) FROM TABLE ever be cheap on INNODB?

2005-11-01 Thread Kevin Burton


Are you sure? Finding a single record using an index may be O(logN),
but wouldn't reading all of the index be O(N)?



Yeah.. you're right.  It would be O(N)... I was thinking this as I  
hit the "send" button :)


Kevin




Will SELECT COUNT(*) FROM TABLE ever be cheap on INNODB?

2005-11-01 Thread Kevin Burton
MyISAM has a cool feature where it keeps track of the internal row  
count so that


SELECT COUNT(*) FROM FOO executes in constant time.  Usually 1ms or so.

The same query on INNODB is O(logN) since it uses the btree to  
satisfy the query.


I believe that MyISAM just increments an internal count so that every  
insert/delete changes the count.


Are there plans to add this to INNODB?  I was relying on this for my  
database monitoring app and noticed that it was killing my  
performance (I forgot about this problem...)


Would be really nice to have.

Kevin




Optimal index for date range query with order by using index for sort???

2005-10-24 Thread Kevin Burton


OK.

I need help with the following query:

SELECT * FROM PRODUCT WHERE DATE > ? ORDER BY PRICE;

Basically find products created since a given date and order by prices.

I could put an index of DATE, PRICE but it will have to resort to a  
filesort since DATE isn't a constant value.


I was thinking of using a DAY column so that I can just find values  
in the last day.


Then I could rewrite it as:

SELECT * FROM PRODUCT WHERE DAY = ? ORDER BY PRICE;

and place an index on DAY, PRICE at which point I'd be able to order  
by the index.


Which would work really well.

The problem is that at midnight there would be no results since DAY  
isn't really a floating window.


I could use DAY IN { ? ? } and then ORDER BY PRICE but it would have  
to use a filesort again.


Is there any interesting way people have solved this problem in the  
past?


Kevin





Memory tables and INNODB have different query plans for GROUP BY with btree?

2005-10-03 Thread Kevin Burton
I was benchmarking a few of my queries tonight and I noticed that two  
queries had different query plans based on table type.


Here's the "broken" query:

mysql> EXPLAIN SELECT * FROM FOO_LINK_MEMORY_TEST GROUP BY TARGET_NODE_ID\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: FOO_LINK_MEMORY_TEST
         type: index
possible_keys: NULL
          key: TEST
      key_len: 18
          ref: NULL
         rows: 1000
        Extra:
1 row in set (0.00 sec)

Note no index is used.


mysql> EXPLAIN SELECT * FROM FOO_LINK_INNODB_TEST GROUP BY TARGET_NODE_ID\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: FOO_LINK_INNODB_TEST
         type: index
possible_keys: NULL
          key: TARGET_NODE_ID
      key_len: 9
          ref: NULL
         rows: 1011
        Extra:
1 row in set (0.00 sec)

...

and here it uses TARGET_NODE_ID.  The only difference is that I
created an InnoDB table and inserted the rows from the memory table
into the InnoDB table.


I'm trying to follow the instructions here:

http://dev.mysql.com/doc/mysql/en/loose-index-scan.html

To get decent GROUP BY performance.  Is this a bug?  Is there a  
workaround?






Re: RAID stripe size recommendations

2005-09-29 Thread Kevin Burton


On Sep 28, 2005, at 5:05 PM, Atle Veka wrote:


I am planning on running some tests on a SATA server with a 3ware 9000
series RAID card to see if there's a stripe size that performs  
better than




This might be able to help you out:

http://hashmysql.org/index.php?title=Opteron_HOWTO

These are difficult questions.  You also should figure out what the
block size of your filesystem is.  I think ideally it should be
N*stripe_size, where N is the number of disks you have.  This way you
can read one block as a set of N IOs in parallel across your disks.
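The arithmetic behind that heuristic (a rule of thumb, not gospel) is just the full-stripe width: the amount of data that touches every data disk exactly once.

```python
def full_stripe_bytes(n_data_disks, stripe_unit_kib):
    # A "full stripe" spans every data disk once; I/O sized and aligned
    # to this avoids read-modify-write cycles on parity RAID and lets
    # one logical read fan out as N parallel disk reads.
    return n_data_disks * stripe_unit_kib * 1024

# e.g. 4 data disks with a 16 KiB stripe unit -> 64 KiB full stripe
assert full_stripe_bytes(4, 16) == 65536
```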


Also note that SATA is probably not what you want if you need decent  
IO.  SCSI will still give you a win.


Let us know what you find out...

Check the archives too.  I think there was some commentary about
using a 16k stripe and seeing a significant performance boost.  Also
some RAID controllers don't allow you to change the stripe size.


Moral of the story is that disk sucks... Disk is the new tape...

Kevin




Re: foreign keys in MyISAM?

2005-09-23 Thread Kevin Burton


On Sep 23, 2005, at 12:27 PM, Jacek Becla wrote:


Hi,

The documentation says "At a later stage, foreign key constraints
will be implemented for MyISAM tables as well". Does anybody know
what is the timescale?



I'm not sure there is a timescale.. I think it might be pretty open  
ended.  You could check out the changelog for MySQL 5 if you're  
really interested.  I wish the MySQL guys would start a blog or  
something so you could figure out what's going on with their  
development.


I'd like MyISAM to just keep the metadata and not enforce the FKs.   
That would be pretty nice.  Same with INNODB.


Kevin




ETA and progress of full-text indexes on INNODB?

2005-09-12 Thread Kevin Burton
Anyone know the ETA of having full-text index support on INNODB?

Kevin



Benchmark of MyISAM vs Innodb vs Innod without FKs?!

2005-09-12 Thread Kevin Burton
Here's something I've been thinking about.

I want to use INNODB without FKs. I don't need or want referential integrity 
in my app (due to a schema and performance issue).

Basically I just create FKs in my OR layer and my app enforces the rules. 
The column is still an _ID column so I visually know a FK when I see one but 
INNODB doesn't have to do any runtime checks on insert.

My question is whether InnoDB will be faster without them, and if so, by how
much. If it's close to the speed of MyISAM then I'll be a happy camper.

Thoughts?

Kevin



Re: LARGE operation stuck. What now?

2005-09-06 Thread Kevin Burton
INNODB I assume?

Replicated environment?

What version of mysql?

See KILL in the SQL manual... if you do a SHOW PROCESSLIST you can get the
thread ID and you might be able to kill it.

I believe that it's safe to do a KILL on a DELETE, but any decision you make
here is your own...

That's a LOT of data...

Also... if the kill works you could still delete in the future, but put a
LIMIT on the DELETE clause. This way you can determine how long your
deletes will take.

Kevin

On 9/6/05, Joseph Cochran <[EMAIL PROTECTED]> wrote:
> 
> Here's the situation. I have a table of about 900 million rows, consisting
> of a bigint and an int in each row. There's an index on the bigint. The
> table is referenced in a SELECT in other DB operations roughly 15 or 20
> times per day.
> 
> We're under tight deadlines and some operations on the table were getting
> too slow, so two weeks ago I initiated a DELETE from the table that should
> remove about 600 million of the rows, based on values in the int column.
> 
> It's still running.
> 
> In the interim, it appears that the transaction has finished (doing a 
> select
> will yield rows that should be there while rows that should not are indeed
> excluded), but the transaction is still "updating" in the processlist and
> hasn't returned to the prompt that issued it. Further, MySQL appears to be
> thrashing the disk, running about 250 transactions per second of about 
> 20-25
> KB per transaction. The end result of this is that any DB operation we run
> that's small enough to be in memory is unaffected, but anything that 
> starts
> swapping to the disk is slow.
> 
> We have changed our workflow and I've changed the code so that the 
> affected
> table isn't hurting us directly anymore, but now it's two weeks on and 
> very
> soon our deadline will have passed and I'll need to resolve this if the
> transaction doesn't finish.
> 
> At this point I don't care what I should or should not have done, or what
> went wrong to cause this sort of hang. I need to know what my options are
> when we have time to work on the server without impacting other people's
> deadlines, which means I need a few educated guesses on what's going on 
> with
> the DB. What will happen if I stop the transaction from my client? If I 
> kill
> the process? What will happen if we reboot the server? What the heck is
> MySQL DOING that's thrashing the disk anyway? Why is it not using a higher
> throughput to the disk?
> 
> We're okay for now, but we're kind of without a net in that backups take 
> 16
> hours to run because of the slow disk access. So we have something, it's
> just not very optimal.
> 
> But at some point in the next few weeks I'm going to have to take some 
> sort
> of action if this thing doesn't commit, and I don't want it to be offline
> for several days undoing whatever it's done.
> 
> Any advice would be most appreciated.
> 
> -- Joe
> 
> 




Re: Why would a UNION be 100x slower than the same SELECT...

2005-06-27 Thread Kevin Burton

Kevin Burton wrote:



Any idea what's going on and how I could fix this?


This seems like a bug in the SQL parser.  The LIMIT is only ignored in this one 
situation.

If I just add a:

UNION
(SELECT * FROM FOO LIMIT 0)

to the query, it will work correctly.

This might be an acceptable workaround.

Kevin

--  



Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com. 
See irc.freenode.net #rojo if you want to chat.


Rojo is Hiring! - http://www.rojonetworks.com/JobsAtRojo.html

  Kevin A. Burton, Location - San Francisco, CA
 AIM/YIM - sfburtonator,  Web - http://peerfear.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why would a UNION be 100x slower than the same SELECT...

2005-06-27 Thread Kevin Burton

Kevin Burton wrote:



( SELECT * FROM FOO WHERE FOO.LAST_UPDATED
< 1119898418779 AND FOO.FEED_ID = 1 ORDER BY FOO.LAST_UPDATED DESC LIMIT
10 ) ORDER BY LAST_UPDATED DESC LIMIT 10


OK.  I *totally* just figured it out!

WOW.

So... the LIMIT in the first SELECT is *totally* ignored and the entire 
SQL expression is evaluated, which is then given to the union. Evil I 
say! Pure evil!


I was able to figure this out because Handler_read_next was being 
incremented to the same value as the total number of rows in this 
expression.


Cool, now at least I know why it's screwing up.

Kevin




Why would a UNION be 100x slower than the same SELECT...

2005-06-27 Thread Kevin Burton

Here's a big problem I'm having.

If I have a query like:

SELECT * FROM FOO WHERE FOO.LAST_UPDATED < 1119898418779 AND
FOO.FEED_ID = 1 ORDER BY FOO.LAST_UPDATED DESC LIMIT 10

it only takes about 10ms or so to execute.

but... if I rewrite it to wrap it in a union like so:

( SELECT * FROM FOO WHERE FOO.LAST_UPDATED
< 1119898418779 AND FOO.FEED_ID = 1 ORDER BY FOO.LAST_UPDATED DESC LIMIT
10 ) ORDER BY LAST_UPDATED DESC LIMIT 10

then it's 100x slower and takes about 1000ms.

No tmp disk tables were created (or at least that's what SHOW STATUS is 
telling me).


Any idea what's going on and how I could fix this?

Kevin




Possible to DoS a slave by using multiple connections on the master!.

2005-06-21 Thread Kevin Burton
Not sure if this is a known issue or not.. but I haven't seen it 
documented anywhere.


Anyway. My past thinking was that you should always use as many 
connections as you have tables (at least with MyISAM).  This way, in the 
worst-case scenario, you could have locks open on all tables instead of 
one lock on one table holding back all the others.


This is a BAD idea if you're trying to reach max qps with INSERTs.

What happens is that the disk controller is able to use command queueing 
(and other optimizations) to optimize IO on the master, since technically 
you have multiple INSERTs happening at once (one per table).  Then on the 
slave, since there's only one thread replaying the DML, it backs up 
because the disk controller isn't able to optimize the IO.


We were actually running a master with RAID5 and a slave with RAID0 and 
the slave still couldn't keep up.  The problem was only fixed when we 
told our client to only use one connection. 

While this is a temporary fix, it limits the scalability of MySQL, as I 
could easily see a LOT more QPS going through these boxes.

It might be possible to place an intelligent scheduler in front to batch 
up INSERTs and use FK relationships to allow non-dependent SQL to pass 
forward.  You could also have one thread per slave per connection on the 
master.  Then in the binary log you could flag the thread ID that 
performed the modification on the master and use the same thread on the 
slave.


The downside being that you'd need more resources on SLAVE boxes.

Seems like a wiki page in the making




Re: Will multiple MASTER threaded writes break replication?

2005-06-21 Thread Kevin Burton

Atle Veka wrote:


On Mon, 20 Jun 2005, Kevin Burton wrote:

 


We're noticing a problem where if we were to write to the master with
multiple threads that our slave DB will fall behind.

Note that we're trying to perform as many inserts as humanly possible
and the load on the master is 1.
   



Out of curiosity, how many queries are we talking and what sort of
complexity level? I've had replication setups do 600 (simple) updates/s
and slaving was current most of the time and never more than 1 second
behind.
 

Mostly INSERTs... We're running about 300 qps at full speed, and doing 
SELECTs on slaves will cause it to fall behind.

Reducing the connection count allows it to NOT fall behind, but then I 
lose throughput.  I'm not happy with either situation.


Kevin




Re: Will multiple MASTER threaded writes break replication?

2005-06-20 Thread Kevin Burton

Kevin Burton wrote:

We're noticing a problem where if we were to write to the master with 
multiple threads that our slave DB will fall behind.


BTW, I should clarify: when I say "break" I really mean that the slave 
replication will fall WAY behind because it can't replay transactions as 
fast as the master.  So if your slave is 10k seconds behind, it's 
essentially "broken".


Kevin





Will multiple MASTER threaded writes break replication?

2005-06-20 Thread Kevin Burton
We're noticing a problem where if we were to write to the master with 
multiple threads that our slave DB will fall behind.


Note that we're trying to perform as many inserts as humanly possible 
and the load on the master is 1.


My theory is that the master, since it can write to multiple tables, is 
faster due to the IO controller being able to more efficiently command 
queue and buffer IO.


Since replication is only one thread, it's not able to benefit from these 
optimizations and hence is prone to falling behind.


Kevin




Re: idempotent inserts? Allowing INSERT to fail faster than SELECT/INSERT

2005-06-14 Thread Kevin Burton

Simon Garner wrote:

I'm not entirely clear what you're talking about, but you could also 
have a look at INSERT IGNORE..., or INSERT... ON DUPLICATE KEY UPDATE, 
or REPLACE INTO...:



The problem is that I do NOT want it to update. 

Also.. REPLACE causes the row to be DELETED and INSERTED again which is 
really ugly.


I just want to take one SELECT and INSERT pair and reduce it to one 
INSERT which should be 2x faster :)


Kevin




idempotent inserts? Allowing INSERT to fail faster than SELECT/INSERT

2005-06-14 Thread Kevin Burton

I've been thinking about this for a while now.

If you have an app that can compute a unique key (hashcode) and you have 
a unique index, it should be possible to just do an INSERT, instead of 
first doing a SELECT to see whether the record exists and then an INSERT.


This should be 2x faster than the SELECT/INSERT combo right?
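A minimal sketch of the idea, using Python's sqlite3 as a stand-in for MySQL (table and column names are hypothetical); in MySQL you'd catch the duplicate-key error instead, or use INSERT IGNORE as suggested elsewhere in the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (hashcode TEXT UNIQUE, body TEXT)")

def idempotent_insert(conn, hashcode, body):
    """One INSERT instead of SELECT-then-INSERT; duplicates fail fast."""
    try:
        conn.execute("INSERT INTO foo (hashcode, body) VALUES (?, ?)",
                     (hashcode, body))
        return True          # new row inserted
    except sqlite3.IntegrityError:
        return False         # unique key already present; nothing to do

assert idempotent_insert(conn, "abc123", "first") is True
assert idempotent_insert(conn, "abc123", "dupe") is False   # one round trip, not two
assert conn.execute("SELECT COUNT(*) FROM foo").fetchone()[0] == 1
```

The savings come from the duplicate case costing one round trip instead of two; the unique index does the existence check for you.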




Re: Seriously.. When are we going to get subqueries?!

2005-06-09 Thread Kevin Burton

Jochem van Dieten wrote:


Also, let's not mistake the means for the goal. Using indexes is just
a way to solve it and there may be other fixes. The goal is to improve
performance.

 


no.. using indexes is THE way to fix it :)

I don't want a subquery scanning all 700 million rows in my table where 
an index would reduce that to... 10...


Kevin




Re: Seriously.. When are we going to get subqueries?!

2005-06-08 Thread Kevin Burton

Greg Whalin wrote:

Granted, Kevin's tone was a bit harsh, but his sentiments should be 
encouraged (frustration w/ a lack of feature).  The concept that 
people should be happy with what they get for a free product only 
serves to keep the quality of free products below what they could be.


It was 1/2 frustration and 1/2 humor.. Then again I have a strange sense 
of humor.


I just think that the MySQL developers are moving really fast and that 
their priorities are really wrong on this one.

Again, I think that if they're going to wait so long to fix this issue 
they should remove the feature.  It's just going to shoot people in the 
foot, and claiming that you support subqueries isn't actually correct 
since they're not usable.


The performance of sub-queries in MySQL when used in an IN() clause is 
embarrassingly slow.  They are, in most cases, unusable because they do 
not use indexes correctly.  This is a legitimate complaint, and one 
that I personally hope MySQL looks into and repairs.  I echo Kevin's 
thoughts, and I rarely even have a use for sub-queries (given I got 
used to them not being available).



Thanks...

Kevin




Re: Seriously.. When are we going to get subqueries?!

2005-06-08 Thread Kevin Burton

Jeff Smelser wrote:

That's funny... looks like it will be added to 5.1. Dunno why they think fixing 
it is adding a feature...


 


WOW!  That's just insane! This seriously has to be fixed in 5.0 or sooner...

The thing is that MySQL has both promised this feature and is claiming 
that 5.0 is now a REAL database; not having this matters much more to me 
than triggers, foreign keys, and all the other bells and whistles in 5.0 
that I'll never use.. :-)


Kevin




Re: Seriously.. When are we going to get subqueries?!

2005-06-07 Thread Kevin Burton

DBA wrote:

- Original Message - 
From: "Kevin Burton" <[EMAIL PROTECTED]>

To: <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, June 07, 2005 6:11 PM
Subject: Re: Seriously.. When are we going to get subqueries?!


 


Greg Whalin wrote:

   

They do use indexes if you use them to build derived tables and are 
pretty fast.  The only case where I see them not using indexes when I 
think they should is when you use a sub-query for an IN() clause.
 

I'm sorry.. yes.. They're not using indexes within IN clauses, which 
for me is 99.9% of the time.

Maybe I'm wrong though and this is a rare case, but I don't think so...

Kevin
   



I use subqueries and I thought that it was using the index? I might be wrong? 
This is MySQL on 4.1.11-nt. Here is the explain plan:

 

Nope... it's a bit confusing, but no... it's not.  The subquery ITSELF is 
using indexes, but the parent query, which is using your ptt_trans_06 
table, is not...

Notice the Type: ALL and Key: NULL.

Luckily, though, you only have 21k rows so it's not that big of a deal.

Kevin




Re: Seriously.. When are we going to get subqueries?!

2005-06-07 Thread Kevin Burton

Greg Whalin wrote:

They do use indexes if you use them to build derived tables and are 
pretty fast.  The only case where I see them not using indexes when I 
think they should is when you use a sub-query for an IN() clause.


I'm sorry.. yes.. They're not using indexes within IN clauses, which 
for me is 99.9% of the time.

Maybe I'm wrong though and this is a rare case, but I don't think so...
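For what it's worth, the usual workaround in that era was to rewrite the IN(subquery) as a JOIN, which the optimizer can satisfy with indexes. A sketch with sqlite3 standing in for MySQL (table and column names are hypothetical), showing that the rewrite returns the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, feed_id INTEGER);
CREATE TABLE hot_feeds (feed_id INTEGER PRIMARY KEY);
INSERT INTO posts VALUES (1, 10), (2, 20), (3, 10);
INSERT INTO hot_feeds VALUES (10);
""")

# The subquery form MySQL 4.x executed poorly (re-evaluated per row):
in_form = conn.execute(
    "SELECT id FROM posts "
    "WHERE feed_id IN (SELECT feed_id FROM hot_feeds)").fetchall()

# The equivalent JOIN, which a 4.x-era optimizer handled with indexes
# (equivalent here because hot_feeds.feed_id is unique):
join_form = conn.execute(
    "SELECT p.id FROM posts p "
    "JOIN hot_feeds h ON p.feed_id = h.feed_id").fetchall()

assert sorted(in_form) == sorted(join_form) == [(1,), (3,)]
```

Note the JOIN rewrite is only row-for-row equivalent when the inner table's join key is unique; otherwise you need DISTINCT.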

Kevin




Seriously.. When are we going to get subqueries?!

2005-06-07 Thread Kevin Burton

OK...

Subqueries in 4.1 are totally broken.  They don't use indexes.  They're 
evil.  We're told we have subqueries but there's no way anyone on earth 
could use them.  To make matters worse, a lot of developers are TRICKED 
into using them, assuming that MySQL will do the right thing, but it's a 
HUGE performance hit.


So...

1.  When will subqueries that actually use indexes be implemented?  
We've been promised this feature since 4.0; it was one of the biggest 
feature wins of 4.1.

2.  If they won't be in 5.0, could you please abandon a feature for 5.0 
and concentrate on subqueries?

3.  If they won't be in 5.0, could you at least be honest and remove this 
feature, since in the best case it's useless and in the worst case it's 
dangerous (god forbid someone should ship code that uses this)?


Not trying to be obnoxious here but I really want this feature and the 
current implementation is very.. evil.


:)

Kevin




Re: Design of a Client-side MySQL Java Load Balancer

2005-06-02 Thread Kevin Burton

[EMAIL PROTECTED] wrote:


Hi,
I think that client load-balancers are more dispatchers than real load 
balancers.

Load balancing on the database side takes account of the number of 
connections, but also node load, so it is more realistic.  But this issue 
is difficult.

 

No... you're making assumptions.  With the two-phase protocol I 
developed the nodes cooperate and distribute load and connections.  They 
also handle failover.


Simply put I can do a better job than hardware balancers because I 
already KNOW what MySQL can do.  Most load balancers are dumb.



Even for Oracle with 9i RAC and 10g RAC, load balancing is not completely 
controlled.

You speak about load balancing and also introduce the notion of failover, 
which is not a load balancing concept.  Failover is difficult because 
controlling it implies that every node must have the image of every 
transaction beforehand.

 


Image?

Failover isn't a load balancing concept?  Not according to our hardware 
vendor :)



> With cache fusion, Oracle RAC gives a solution, but assumes failover 
> only for SELECT statements. All DML statements are lost if a node is 
> lost.

The DML situation here is a tough one.  For SELECTS I have no problem 
with failover.  For DML I would have no problem unless you're in a 
transaction. 


We don't use transactions and I think they're evil anyway.

Kevin




Re: Design of a Client-side MySQL Java Load Balancer

2005-06-01 Thread Kevin Burton

[EMAIL PROTECTED] wrote:



Wouldn't it make better sense to build on the NDB protocol and keep 
the native messaging infrastructure than it would be to build a 
similar wrapper from scratch?  I mean to use the NDB communications on 
top of regular MySQL 


Biting off an NDB migration would be a LOT harder than implementing 
slave load balancing.  NDB shows promise, but I just don't think it's 
there yet...


Kevin




Design of a Client-side MySQL Java Load Balancer

2005-06-01 Thread Kevin Burton

I'd love to get some feedback here:

MySQL currently falls down by not providing a solution to transparent 
MySQL load balancing.  There are some hardware solutions, but these are 
expensive and difficult to configure.  Also, none of them provide any 
information about the current state of your MySQL configuration.  For 
example, they can't handle transparent query failover if a MySQL box 
fails.  They also can't disconnect and reconnect to another host if the 
load grows too high.

To that end I think it makes a lot of sense to have a MySQL client-side 
load balancer.

This area is difficult to implement.  There are a lot of design issues.  
Also, the issues WRT distributed connection management start to make the 
problem difficult.

The other day I had a bit of an epiphany on this topic.




http://peerfear.typepad.com/blog/2005/06/design_of_a_cli.html




GCC 4.0, mysql and Opteron?

2005-05-24 Thread Kevin Burton
I'm curious what people here think of compiling MySQL with gcc 4.0, 
especially on Opteron.


I've heard that the way to go with Opteron is to use gcc-3.4, but that 
it's a little unstable.


Of course it might be too early to find out if gcc 4.0 is better than 3.4...

Kevin




Re: Problems with x86_64 mysql-standard-4.1.12

2005-05-23 Thread Kevin Burton

Pete Harlan wrote:


Hi,
and then it never comes back, presumably from the "auto_increment"
test.  If I run the auto_increment test alone (i.e., "./mysql-test-run
auto_increment"), it fails in this same way.  When it's hung, mysqld
isn't using any CPU.

 

Also, CPU isn't the only thing you should be watching.  Run iostat -k 1 
and vmstat 1 to see what kind of IO rates you're running at.  Maybe your 
IO is just really slow.

It's semi-normal for your MySQL box to be slowed down by disk...

Kevin




Re: Problems with x86_64 mysql-standard-4.1.12

2005-05-23 Thread Kevin Burton

Pete Harlan wrote:


In addition to failing the tests, I deployed the server on Machine 1
for a while and it failed quickly, with a simple insert hanging up and
"kill " being unable to kill it.  (The thread's state was
"Killed", but it didn't go away and continued to block other threads
from accessing the (MyISAM) table.)

Any help would be appreciated, and please let me know if I can provide
further information.
 


See the Opteron HOWTO:

http://hashmysql.org/index.php?title=Opteron_HOWTO

Also.. are you running NPTL or Linux Threads?   If you have the 
libc6-i686 package installed you have NPTL (not sure if the mysql binary 
needs support for this or not).


I'd also highly recommend installing a glibc > 2.3.2, which is what ships 
on Debian.  glibc-2.3.5 is in experimental and it's what we're running.


Kevin




Hashmysql.org 0wn3d?

2005-05-20 Thread Kevin Burton
Was hashmysql.org hacked?
The wiki is gone and now all I get is:
"Stupidity is a crime against humanity."
Which is redundant btw...
Kevin


Re: Replication with failover

2005-05-18 Thread Kevin Burton
Gleb Paharenko wrote:
Hello.
I don't remember solutions with keepalived, but this issue is
discussed in the list from time to time. Search in archives at:
 http://lists.mysql.com/mysql
 

Someone should create a wiki page on this subject... its a commonly 
asked question...

Kevin


Re: Opteron HOWTO - #mysql Freenode

2005-05-17 Thread Kevin Burton
Richard Dale wrote:
Over the last week I added in lots of comments pasted in from various
places.  I'd appreciate those running with Opteron and MySQL to have a close
look at the WIKI and make any amendments/suggestions.
http://hashmysql.org/index.php?title=Opteron_HOWTO
My Opteron server will be here shortly and I'll do some performance testing
on RAID5 (8 disks) versus RAID 10 (8 disks, 2 channels)
 

Awesome!  Thats great news...
I reconfigured one of our slaves from RAID5 to RAID1 and set up the 
controller as write-back with stripe caching.  I'm getting about 75% 
additional performance.  I wanted to put the controller on additional 
channels, but it turns out that I need a stupid proprietary cable from 
Dell to accomplish this.  I'm not sure what benefit I'd see from using 
two channels then.

My gut says that in our config we won't see any performance increase, 
since I'm not maxing out the IO on each channel (but I might be wrong).

I'd love to see your numbers here.


Preventing slaves from falling behind masters...

2005-05-11 Thread Kevin Burton
If you're running in a master/slave environment and your application is 
using the slave too often, replication can fall behind, which can then 
confuse your application.

This can happen if the IO performance of both the master and slaves is 
equivalent and you're performing INSERT/UPDATE/DELETE with a load of 1 
on the master.  Then when the transactions move to the slave, all the IO 
is used up and any additional SELECTs will just cause the slave to fall 
behind.

Has anyone else seen this?  One way I was thinking of solving this is to 
use RAID5 on our master and then RAID0 on the slaves so that the master 
is a hard bottleneck. Then the slaves have no problem running 
transactions via replication and have load available to run SELECTS.

Any other ideas?


Re: SATA vs SCSI

2005-05-11 Thread Kevin Burton
Dathan Pattishall wrote:
Forget using drives altogether for heavy-hit applications.
Build data that can fit on a RAM drive (8GB), then you're able to do 20K
 

Not everyone can run in this config...  We have way more data than we 
can casually store in memory.  It would just be cost prohibitive.

Memory tables in some situations can be a good thing though but mostly 
when they're overview tables.

Also.. if you have a high cache hit rate you can effectively have memory 
tables (in theory at least).  I just haven't seen anywhere near 20k qps.

Kevin


SATA vs SCSI

2005-05-11 Thread Kevin Burton
We're kicking around using SATA drives in a software RAID0 config.

The price diff is significant.  You can also get SATA drives in 10k RPM 
form now.

Kevin


Re: Opteron HOWTO?!

2005-05-11 Thread Kevin Burton
Dathan Pattishall wrote:
We do about 70K qps at peak for about 1 Billion Queries per day (only on
30 servers BOOYA). So, it's pretty stable.
 

Also... based on my math.. this yields ~ 2300 qps per MySQL box...  
which is pretty good.
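As a sketch, the arithmetic above can be checked directly (the 70K peak and 1B/day figures come from the thread; the helper names are hypothetical):

```python
# Back-of-the-envelope check on the numbers quoted above.
def qps_per_box(peak_qps: int, boxes: int) -> float:
    """Peak queries/sec divided evenly across servers."""
    return peak_qps / boxes

def avg_qps_from_daily(queries_per_day: int) -> float:
    """Average queries/sec implied by a per-day total."""
    return queries_per_day / 86_400  # seconds in a day

peak_per_box = qps_per_box(70_000, 30)          # ~2333 qps per box at peak
avg_total = avg_qps_from_daily(1_000_000_000)   # ~11.6K qps average overall
print(round(peak_per_box), round(avg_total))
```

Note the gap between the ~11.6K average and the 70K peak: peak load is roughly 6x the daily average.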

Kevin


Re: Opteron HOWTO?!

2005-05-11 Thread Kevin Burton
Dathan Pattishall wrote:
Are you using NPTL?
   

No, that sucks; we use the other one.  Can't make a static build with NPTL.
What type of performance boost are you getting from running a static build?
Kevin


Re: Can long_query_time be milliseconds based?

2005-05-09 Thread Kevin Burton
Mark Matthews wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Kevin Burton wrote:
 

It seems strange that long_query_time is seconds based.  I'm trying to 
get most of our queries down to sub second time.

1 second is WAY too long.  I'd ideally like 500ms.
Can you specify .5 for long_query_time?  Doesn't seem to be working the 
way I'd hoped...

Kevin
   

Kevin,
(you probably already know this, but I'm posting it anyways).
Since you're using JDBC, you can log the slow queries on the client,
which will also log the location in your code where they happened.
The two properties you want are:
logSlowQueries=true
slowQueryThresholdMillis=n (where n is the number of milliseconds that
should pass to trigger a slow query being logged).
 

Actually.. we're still stuck on 3.0.12... long story.  I want to move to 
4.1.x ASAP.

We do have our own code that does this in our OM layer though.  It 
measures slow query times on top of the JDBC driver.  The issue is that 
some queries are measuring in at 700ms -> 1700ms, which isn't good.  I'm 
trying to debug whether this is a Java layer issue or a MySQL layer 
issue.  Having MySQL then log slow queries on TOP would be nice...

The cool thing with the 4.1.x logSlowQueries is that you *could* use 
this with log4j to have slow queries logged to a dedicated log file.
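Since we're stuck on the 3.0.12 driver, a minimal client-side version of the same idea can be sketched. This assumes a `run_query` callable that executes the statement; all names here are hypothetical, not Connector/J API:

```python
import time

def timed_query(run_query, sql, threshold_ms=500):
    """Run sql via run_query, returning (result, elapsed_ms).

    Prints a warning when the call exceeds threshold_ms -- the same
    idea as Connector/J's slowQueryThresholdMillis, done by hand.
    """
    start = time.monotonic()
    result = run_query(sql)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > threshold_ms:
        print(f"SLOW ({elapsed_ms:.0f} ms): {sql}")
    return result, elapsed_ms
```

A log4j-style appender could then pick up the warnings and route them to a dedicated log file.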

Kevin


Can long_query_time be milliseconds based?

2005-05-09 Thread Kevin Burton
It seems strange that long_query_time is seconds based.  I'm trying to 
get most of our queries down to sub second time.

1 second is WAY too long.  I'd ideally like 500ms.
Can you specify .5 for long_query_time?  Doesn't seem to be working the 
way I'd hoped...

Kevin


Re: MySQL not using optimum disk throughput.

2005-05-09 Thread Kevin Burton
Greg Whalin wrote:
Curious, were you seeing deadlocks in Suns JVM w/ Tomcat?
Never with Tomcat but we might have a different number of threads.  But 
it *was* with Java...

We were forced to run Tomcat w/ NPTL off due to deadlocks under glibc 
2.3.2+NPTL.  
Yup... that's the problem we had.  But we have too many threads, so 
Linuxthreads fell down.

It sounds like upgrading your glibc would fix this issue.
Under FC2, the JVM runs fine w/ NPTL, though glibc is now 2.3.3.  
I think this particular bug was fixed in 2.3.3 but there are other 
interesting bugs fixed in 2.3.4 so we went that route.  That and Debian 
has no 2.3.3 build.

We have had no NPTL issues w/ the x86 version of mysql, but the x86-64 
version deadlocks almost immediately (w/ 2.3.2).

Yeah... these should be related.  I mean, it's a race condition, so the 
processor or scheduler might affect it.

So... this might be another Opteron issue that we've solved :)
I'd be interested in finding out if the switch fixes this issue..
Kevin


Opteron HOWTO - #mysql Freenode

2005-05-09 Thread Kevin Burton
It's pretty filled out now.  If you have anything to add please feel free.
http://hashmysql.org/index.php?title=Opteron_HOWTO


Re: MySQL not using optimum disk throughput.

2005-05-09 Thread Kevin Burton
Greg Whalin wrote:
We are currently running 2.3.2 (Fedora Core 1) on our Opterons.  When 
we were still running linux 2.6, we were on 2.3.3 (Fedora Core 2).
Yeah... we were being bitten by 2.3.2's NPTL implementation for MONTHs 
before I heard a rumor that the Internet Archive moved to 2.3.4. 

This literally solved all my problems so I'd recommend upgrading to 
2.3.4 if you notice this type of stuff again.

Kevin


Re: MySQL not using optimum disk throughput.

2005-05-09 Thread Kevin Burton
Greg Whalin wrote:
I suspect this is an OS issue.  Our Opterons were completing large 
data update queries approx 2-3 times slower than our Xeons when running 
under 2.6.  After a switch to 2.4, the Opterons are faster than the 
Xeons.  I mentioned NPTL being shut off (LD_ASSUME_KERNEL=2.4.19 in the 
init script).  When we left NPTL running, we saw almost instant 
deadlocks just watching replication catch up (no other site traffic 
directed to the machine).  This is in 2.4 btw, so this is the 
backported NPTL kernel from Fedora.  I somewhat suspect NPTL being a 
problem in 2.6 as well due to impressions I get from sifting through 
MySQL's bug tracking system. The IO scheduler was also an obvious 
culprit.
Another point I wanted to note.
What version of glibc were you running?  We were running Debian with 
glibc 2.3.2 (libc6-i686-2.3.2) and were running into deadlocks with 
another piece of code.

2.3.2 has a number of known issues and we had to migrate to an 
experimental 2.3.4 build.  I've been considering moving our databases to 
2.3.4, but they haven't been having any problems.

It might be that Opteron is raising these issues more than Xeon.
FYI...


Re: Opteron HOWTO?!

2005-05-09 Thread Kevin Burton
Greg Whalin wrote:
I am all in favor of this idea.  Currently, this info is scattered all 
over the web, and finding it can be time consuming (even w/ Google).  
I see lots of people jumping the same hurdles, so a central location 
for this info seems it would greatly benefit the community.

Great!  I created a wiki node for this issue.
http://hashmysql.org/index.php?title=Opteron_HOWTO
Please help fill it out guys.  I promise that while we're making the 
migration that I'll make all my notes public and on this wiki node.

Kevin


Opteron HOWTO?!

2005-05-09 Thread Kevin Burton
So... it sounds like a lot of people here (Dathan and Greg) have had 
problems deploying MySQL on Opteron in a production environment. 

I was wondering if we could start an Opteron HOWTO somewhere (mysql 
wiki?) which could illustrate the minefields they've had to walk to 
hopefully solidify MySQL on this platform and to prevent others from 
having the same bad experiences.

We're considering an Opteron migration as well and as the 2G memory 
barrier fast approaches I'm expecting more MySQL users to migrate as well. 

Maybe this should be an x86-64 FAQ though, since Intel is coming out 
with more EM64T stuff...

Kevin


Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
Harrison Fisk wrote:
There isn't really any way to "use" concurrent INSERT.  It happens 
automatically if possible.  However there are a few things you can do 
to help it along, such as OPTIMIZE after you DELETE large portions of 
the table.  Also it does have to enabled in LOAD DATA INFILE 
manually.  However, regular INSERTs and SELECTs should do this 
automatically.
Well... in order to "use" concurrent insert you need to avoid tables 
with deleted data.   So the developer just needs to make sure they're 
always in that situation.

Why do you think this is your bottleneck?  Have you measured anything 
quantitatively?  What is your table_locks_immediate vs. your 
table_locks_waited?  What is your rate of INSERTs?
This is a good idea actually.  I took a look at these values and only 
3.6% of our queries wait for locks.

Of course the problem might be that these locks take up a lot of time in 
critical places.  It would be nice to see these values broken down into 
seconds waited, but this is probably too much to ask for just yet ;)
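A sketch of that 3.6% calculation from the SHOW GLOBAL STATUS counters (the counter values below are illustrative, not measured):

```python
# Fraction of table lock requests that had to wait, from the
# Table_locks_immediate / Table_locks_waited status counters.
def lock_wait_ratio(immediate: int, waited: int) -> float:
    total = immediate + waited
    return waited / total if total else 0.0

status = {"Table_locks_immediate": 964_000, "Table_locks_waited": 36_000}
ratio = lock_wait_ratio(status["Table_locks_immediate"],
                        status["Table_locks_waited"])
print(f"{ratio:.1%} of lock requests waited")  # 3.6% with these numbers
```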

Kevin


Re: Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
Harrison Fisk wrote:
aren't loaded into the query cache, they are loaded into the key cache 
(key_buffer_size).
Yes... you busted me! :)  I meant to say key cache though. 

Now assuming that you have the query cache actually being used (the 
cache of the actual statement), then normally the SELECT won't wait 
for any locks.  So someone can have a WRITE lock on the table, and 
your SELECT will still run.  If you have a query cache miss, then it 
will need to acquire the READ lock like a normal SELECT.

Yes... I realize.  The issue, though, is that only a small percentage of 
our queries actually use the query cache. 

Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory.
In this config will the SELECTs block for the INSERTs?  I guess they 
would!

Yes.  If MySQL has to actually read the table or indexes then it will 
set locks to do so, as appropriate to the storage engine in use.  Keep 
in mind, if everything is coming from cache (key_buffer + disk 
buffer), the lock will generally be very quick as there isn't any disk 
i/o to block on.

Of course... I realize.  But what if it's blocked by 5-10 INSERTs?  Then 
it's going to have to WAIT for these INSERTs to complete even though it 
can resolve the query without waiting for the table :-/

This type of scenario would yield dramatic performance improvements 
by migrating to InnoDB... would it not?
Either that or there's the option of using MyISAM with no DELETEd 
rows (since it can then support concurrent insert.)

Concurrent insert sounds like it would work well with your above 
mythical application (since you didn't mention any DELETEs).  The only 
locking conflict you might have would be that your INSERT's would lock 
other INSERT's while it is occuring.
Yes... that's my current thinking.  That our INSERTs are blocking SELECTs 
even if they can complete without hitting disk.  Now we need to find out 
if we can use the concurrent insert feature of MyISAM without migrating 
to InnoDB.  It's harder to migrate to InnoDB right now.

Kevin


Will myisam lock if the query can be resolved from disk cache/query cache?

2005-05-08 Thread Kevin Burton
OK.
Lets take a mythical application.  The app is spending about 50% of its 
time inserting into table FOO.  The other 50% of the time its spent 
doing SELECT against the table.

The SELECTs can use an index which is already fully loaded into the query 
cache. Not only THAT but it doesn't need to read the disk because the 
filesystem buffer has the blocks in memory. 

In this config will the SELECTs block for the INSERTs?  I guess they would!
This type of scenario would yield dramatic performance improvements by 
migrating to InnoDB... would it not? 

Either that or there's the option of using MyISAM with no DELETEd rows 
(since it can then support concurrent insert.)

Kevin


Re: MySQL not using optimum disk throughput.

2005-05-07 Thread Kevin Burton
Atle Veka wrote:
On Fri, 6 May 2005, Kevin Burton wrote:
 

For the record... on a loaded system what type of IO do you guys see?
Anywhere near full disk capacity?  I'm curious to see what type of IO
people are seeing on a production/loaded mysql box.
   

Mostly Linux in this thread so far, so I figured I'd throw some FreeBSD in
the mix. Our latest build which so far has worked out great, is MySQL
4.0.24 with linuxthreads on FreeBSD 4.10-R.
 

It looks like you're saying here that a single disk is FASTER than your 
RAID 10 setup.

Correct? 

Which is interesting.  I'm wondering if this is a RAID config issue.  It 
just seems to make a LOT more sense that RAID 1 or 10 would be faster 
than a single disk.

Kevin


Client-side subqueries MUCH faster than server-side queries... DESIGN flaw!

2005-05-07 Thread Kevin Burton
So I think we all need to admit that using IN clauses with subqueries on 
MySQL 4.1.x is evil.  Pure evil.

I attached the blog post I made on the subject a while back.  (my blog 
is offline)

If you KNOW ahead of time that your subquery returns only a few 
rows, then just rewriting the query to become

SELECT * FROM FOO WHERE ID IN (?, ?, ? ... )
is MUCH faster.
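A minimal sketch of doing that rewrite client-side, assuming the inner query's IDs have already been fetched (all names here are hypothetical):

```python
# Run the inner query first, then splice its IDs into the outer query
# as bind placeholders -- the manual version of the inlining described
# above.  outer_template uses {in_list} where the placeholders go.
def inline_in_query(outer_template: str, ids):
    ids = list(ids)
    if not ids:
        return None, []  # "IN ()" is invalid SQL, so skip the query entirely
    placeholders = ", ".join("?" for _ in ids)
    return outer_template.format(in_list=placeholders), ids

sql, params = inline_in_query("SELECT * FROM FOO WHERE ID IN ({in_list})",
                              [1628011, 1628012])
print(sql)   # SELECT * FROM FOO WHERE ID IN (?, ?)
```

Using bind parameters rather than splicing literal values keeps the rewrite safe against injection.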
So here's a constructive suggestion for fixing this.
Why not add a new feature called SQL_INLINE_SUBQUERY which tells MySQL 
that it should first run the dependent subquery, get back the results, 
then replace them inline.

SELECT SQL_INLINE_SUBQUERY * FROM FOO WHERE ID IN ( SELECT ID FROM BAR)
... and if BAR only had say ... 100 rows... it would be inlined instead 
of resorting to a full table scan.

Either that or (god forbid) fix this problem by having a smarter 
optimizer which can use a join.

Kevin
As far as I can tell MySQL subqueries in 4.1.x releases are totally 
broken with IN clauses.  The major reason is that they don't use *ANY* 
indexes and resort to full table scans.

Lets take two queries:
{{{

mysql> EXPLAIN
   SELECT * FROM FEED, ARTICLE WHERE ARTICLE.ID = 1628011 AND 
FEED.ID = ARTICLE.ID
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: FEED
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: const
 rows: 1
Extra:
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: ARTICLE
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: const
 rows: 1
Extra:
2 rows in set (0.00 sec)


}}}
Which is *great*.  The join is using both of the PRIMARY indexes on 
the columns
and only references one row.

Can't get any better than that!
Now lets rewrite the SELECT to use a subquery:
{{{
mysql> EXPLAIN
   SELECT * FROM FEED WHERE ID IN
  (SELECT ID FROM ARTICLE WHERE ID = 1628011)
*** 1. row ***
   id: 1
  select_type: PRIMARY
table: FEED
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 2316698
Extra: Using where
*** 2. row ***
   id: 2


  select_type: DEPENDENT SUBQUERY
table: ARTICLE
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: const
 rows: 1
Extra: Using index
2 rows in set (0.00 sec)
}}}
And here's where the fun begins.  The FEED table won't use *ANY* 
index!  It
really can't get ANY worse than that.

So either this is a bug in both 4.1.10 and 4.1.7 or the optimizer is 
just plain
broken.

Note that using FORCE INDEX doesn't work at all.


Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Kevin Burton
Greg Whalin wrote:
What drives are you using?  For SCSI RAID, you definitly want deadline 
scheduler.  That said, even after the switch to deadline, we saw our 
Opteron's running way slow (compared to older slower Xeons).  Whatever 
the problem is, we fought it for quite a while (though difficult to 
test too much w/ production dbs) and ended up rolling back to 2.4.
Ugh... I don't want to roll back to 2.4... 2.6 has so many nice features 
we depend on.  We're using SCSI RAID5 on Xeon of course.

I think its time to rule out some things.  I'm going to migrate to 
RAID1... just to verify... then try reviewing our kernel options.. maybe 
disabling NPTL... maybe try another filesystem...

Not fun.
For the record... on a loaded system what type of IO do you guys see?  
Anywhere near full disk capacity?  I'm curious to see what type of IO 
people are seeing on a production/loaded mysql box.

Kevin


Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Kevin Burton
Kevin Burton wrote:
Greg Whalin wrote:
Deadline was much faster.  Using sysbench:
test:
sysbench --num-threads=16 --test=fileio --file-total-size=20G 
--file-test-mode=rndrw run

So... FYI.  I rebooted with elevator=deadline as a kernel param.
db2:~# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
(which I assume means I'm now running deadline.  Is there any other way 
to find out?)
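A sketch, assuming the /sys line format shown above: the active scheduler is the bracketed entry, so it can be parsed directly:

```python
# The kernel marks the active I/O scheduler with brackets in
# /sys/block/<dev>/queue/scheduler; extract the bracketed name.
def active_scheduler(line: str) -> str:
    start = line.index("[") + 1
    return line[start:line.index("]")]

print(active_scheduler("noop anticipatory [deadline] cfq"))  # deadline
```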

And no performance diff.  Note that your benchmarks only show about a 20M 
difference.  We're about 60x too slow for these drives so I'm not 
sure what could be going on here :-/

Kevin


Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Kevin Burton
Greg Whalin wrote:
Deadline was much faster.  Using sysbench:
test:
sysbench --num-threads=16 --test=fileio --file-total-size=20G 
--file-test-mode=rndrw run

Wow... what version of sysbench are you running?  It's giving me strange 
errors:

sysbench v0.3.4:  multi-threaded system evaluation benchmark
Running the test with following options:
Number of threads: 16
Extra file open flags: 0
128 files, 160Mb each
20Gb total file size
Block size 16Kb
Number of random requests for random IO: 1
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
FATAL: Failed to read file! file: 90 pos: 14761984 errno = 0 (Success)
FATAL: Failed to read file! file: 103 pos: 161398784 errno = 0 (Success)
FATAL: Failed to read file! file: 75 pos: 79413248 errno = 0 (Success)
FATAL: Failed to read file! file: 79 pos: 67207168 errno = 0 (Success)
FATAL: Failed to read file! file: 108 pos: 64028672 errno = 0 (Success)
FATAL: Failed to read file! file: 53 pos: 96157696 errno = 0 (Success)
FATAL: Failed to read file! file: 88 pos: 137068544 errno = 0 (Success)



Re: MySQL not using optimum disk throughput.

2005-05-06 Thread Kevin Burton
Greg Whalin wrote:
We have seen the exact same thing here.  We used the deadline 
scheduler and saw an immediate improvement.  However, we still saw 
much worse performance on our Opteron's (compared to our older Xeon 
boxes).  We ended up rolling back to Fedora Core 1 
2.4.22-1.2199.nptlsmp kernel and shut down NPTL and now our Opteron's 
are much much faster than our Xeons.

Sweet... I'm going to take a look at that!
Two votes for the deadline scheduler.  I'm an NPTL fan, but I'm not sure 
our DB boxes need it as they don't use THAT many threads.
The thing I find strange about this is that our experience (@ Meetup) 
seems to match that of Friendsters (I know of a few other high traffic 
sites that have mentioned similar issues), in that Mysql on Opteron 
and Linux 2.6 is not a good solution.  Yet, Mysql recommends exactly 
this config and in fact, does not seem to even support (via support 
contract) a 2.4 solution for Opteron + Mysql.

Wow... what's the consensus on Opteron here then?  It seems to be a clear winner 
since you can give the mysql process more memory for caching.
Is it an OS issue, since few of the distributions seem to support Opteron (well)?


MySQL not using optimum disk throughput.

2005-05-06 Thread Kevin Burton
We have a few DBs which aren't using disk IO to optimum capacity.
They're running at a load of 1.5 or so with a high backlog of pending 
queries.

When I do iostat I'm not noticing much IO :
Device:  rrqm/s  wrqm/s    r/s    w/s   rsec/s   wsec/s   rkB/s   wkB/s  avgrq-sz  avgqu-sz   await  svctm  %util
sda        0.00   13.73 128.43 252.94  1027.45  1695.10  513.73  847.55      7.14     90.13  285.00   2.53  96.57

...
This is only seeing about 500k -> 1M per second throughput.
When I run bonnie++ on these drives they're showing 20M->40M throughput.
Which is really strange.
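As a sketch of that estimate: iostat reports sectors/sec, and at 512 bytes per sector the sample above works out to roughly 1.4 MB/s combined, versus 20-40 MB/s from bonnie++:

```python
# Convert iostat's per-second sector counts into MB/s.
# A sector is 512 bytes on these drives.
SECTOR_BYTES = 512

def iostat_throughput(rsec_s: float, wsec_s: float) -> float:
    """Combined read+write throughput in MB/s."""
    return (rsec_s + wsec_s) * SECTOR_BYTES / 1e6

mb_s = iostat_throughput(1027.45, 1695.10)  # rsec/s and wsec/s from above
print(f"{mb_s:.2f} MB/s")
```

That matches the rkB/s + wkB/s columns (~1361 KB/s), so the device really is moving far less data than the drives can sustain.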
Most of our queries are single INSERTS/DELETES.  I could probably 
rewrite these
to become batch operations but I think I'd still end up seeing the above 
iostat
results but with higher throughput.

... so I'd like to get to the bottom of this before moving forward.
I ran OPTIMIZE TABLE on all tables but nothing changed. 

The boxes aren't paging.
They're running on a RAID5 disk on XFS.
Could it be that the disks are having to do a number of HEAD seeks since we
have large tables?