Why doesn't MySQL support gzip for COMPRESS/UNCOMPRESS, and only zlib?
For network applications zlib is a lot less compatible than gzip.
For example I could send gzip'd content directly from the database within a
larger gzip'd stream.
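For anyone curious about the actual difference: both formats wrap the same DEFLATE stream. zlib (RFC 1950) uses a 2-byte header and an Adler-32 trailer, while gzip (RFC 1952) uses a 10-byte header starting with the magic bytes 0x1f 0x8b and a CRC-32 trailer; MySQL's COMPRESS() additionally prefixes the zlib stream with a 4-byte uncompressed length. A quick sketch with Python's stdlib (for illustration only, not MySQL internals):

```python
import gzip
import zlib

data = b"hello, compression" * 10

# zlib format (RFC 1950): 2-byte header, Adler-32 trailer.
zlib_blob = zlib.compress(data)

# gzip format (RFC 1952): 10-byte header, CRC-32 trailer --
# the framing HTTP and gzip(1) use.
gzip_blob = gzip.compress(data)

print(zlib_blob[:1])   # b'x' -- 0x78, the zlib CMF byte at default settings
print(gzip_blob[:2])   # b'\x1f\x8b' -- the gzip magic bytes
```

So a zlib blob can't be dropped into a gzip'd stream as-is; the header and checksum framing differ even though the compressed payload is the same.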
Kevin
--
Founder/CEO Tailrank.com
Location: San Francisco
OK I think I've found a bug with MySQL's compression support. :-/
I'm using two Java zlib implementations. One is jzlib 1.0.7 and the other
is java.util.zip.DeflaterOutputStream. Both of these are referenced by the
zlib implementation as being compatible.
I can compress/uncompress locally WITHOUT
Hey.
We have the need to have some tables stored in memory for performance
reasons.
We were thinking about just using MEMORY tables but this is non-ideal since
it uses a fixed row size.
Using MyISAM would be much better since it supports variable length rows.
Backups would be handled by just u
Just use XFS. It's a solved problem.
Kevin
On 3/8/07, Christopher A. Kantarjiev <[EMAIL PROTECTED]> wrote:
I'm setting up mysql on linux for the first time (have been using OpenBSD
and
NetBSD with UFS until now). The default file system is ext3fs, and I don't
mind
that, but it seem
We need to store binary data from time to time in MySQL. To date I've just
base64 encoded the data to avoid having it corrupt the console on SELECT *
Is there any way to have the mysql command line client automatically do this
for me? Is there any work around?
base64 is about 30% data bloat
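The figure is about right: base64 emits 4 output characters for every 3 input bytes, so the overhead is 4/3, roughly 33% before any line wrapping. A quick check:

```python
import base64

raw = bytes(range(256)) * 4          # 1024 bytes of arbitrary binary data
encoded = base64.b64encode(raw)

# 1024 bytes -> ceil(1024/3) = 342 groups of 4 chars = 1368 bytes (~34% bloat)
print(len(raw), len(encoded))        # 1024 1368
```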
On 2/24/07, Jean-Sebastien Pilon <[EMAIL PROTECTED]> wrote:
Hello,
I would like to get some of your input on file systems to use with
mysql. Should I use a journaling filesystem ? Should I choose a
different one based on what I store (log files, myisam dbs, innodb
datafiles, etc ) ? Is there an
A little birdie:
http://forge.mysql.com/wiki/Top10SQLPerformanceTips
notes..
"In 5.1 BOOL/BIT NOT NULL type is 1 bit, in previous versions it's 1 byte."
Is this true?
I didn't see a note in the manual..
I assume it would be here
http://dev.mysql.com/doc/refman/5.1/en/storage-req
Has anyone built a script to add a new slave into a MySQL replication
setup which can operate (for the most part) unattended?
The set of operations is pretty straightforward but right now it's
mostly a manual step which ends up taking a LONG time.
The script would need to:
* connect to a maste
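For illustration, the dump step such a script would wrap might be built like this (a hedged sketch: the host/user values are placeholders, though --master-data, which embeds the CHANGE MASTER TO coordinates in the dump, and --single-transaction, which takes a consistent InnoDB snapshot without locking, are standard mysqldump options):

```python
def build_dump_command(master_host, user, databases):
    """Build the mysqldump argv for seeding a new slave (sketch only)."""
    return [
        "mysqldump",
        f"--host={master_host}",
        f"--user={user}",
        "--master-data=1",        # embed binlog coordinates in the dump
        "--single-transaction",   # consistent snapshot, no table locks (InnoDB)
        "--databases", *databases,
    ]

print(build_dump_command("master.example.com", "repl", ["app"]))
```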
Hm. Running 4.1.21, there seems to be a 'feature' where SHOW SLAVE
STATUS blocks when the disk is full. Thoughts?
Kevin
--
MySQL General Mailing List
For list
On 2/12/07, Jay Pipes <[EMAIL PROTECTED]> wrote:
Kevin Burton wrote:
> I want to use a merge table so that I can direct all new INSERTs to a
> new merge table and migrate old data off the system by having a
> continually sliding window of underlying MyISAM tables.
>
> The
I want to use a merge table so that I can direct all new INSERTs to a
new merge table and migrate old data off the system by having a
continually sliding window of underlying MyISAM tables.
The problem is that if I do INSERT ... ON DUPLICATE KEY UPDATE and
that value isn't in the leading table wh
Hey.
I should have posted this here earlier but it just dawned on me that
you guys could have some good feedback:
"We've been working on the design of a protocol which would enable
promotion of a slave to a master in a MySQL replication cluster.
Right now, if a MySQL master fails, most people j
We're trying to write a monitoring process for our master so that if a
table is corrupt it will raise flags which can then trigger
operations.
We can do the basic stuff such as asserting that the port is open and
that we can ping the machine but I want to test if any
INSERT/UPDATE/DELETEs are fai
What's the ETA for 5.1.13? There are a few critical bugs with NDB that are
fixed in this rev that I'd like to play with.
I'm hoping it's right around the corner :)
Kevin
There was a thread before about this... this is much better than
Connector/J's load balancing.
You can take machines out of production, add them back in, it's MySQL
slave aware, etc.
On 7/19/06, Christopher G. Stach II <[EMAIL PROTECTED]> wrote:
Kevin Burton wrote:
> Hey Ga
Hey Gang.
I wanted to get this out on the list and facilitate some feedback.
http://www.feedblog.org/2006/07/announce_lbpool.html
I CC'd both lists because this might be of interest to the larger MySQL
community as the techniques I used here could be implemented in other
languages.
==
I have a fairly small table WRT the data size. It's about 300M of
data. Right now it has about 6M rows.
The schema is pretty simple. It has one 64-bit ID column. Basically
it's for checking the existence of an object in our DB and is designed
to work very fast.
Once the table was FIRST cr
I was talking to a friend tonight about how they use NBD to run a
single system image in memory.
NBD (Network Block Device) allows one Linux box to export a block
device and lets you mount it from another machine. For the
memory component they just use a ram disk.
More info here:
ht
Hey.
I'm looking for a decent tool which uses crontab to monitor the
COUNT of tables within MySQL. I'd also like to monitor other queries
as well. Ideally it would use RRDtool to log the data and PHP to
draw the UI.
Ganglia and Cacti seem to do similar tasks (if you stretch them) but
Are you sure? Finding a single record using an index may be O(logN),
but wouldn't reading all of the index be O(N)?
Yeah.. you're right. It would be O(N)... I was thinking this as I
hit the "send" button :)
Kevin
Kevin A. Burton, Location - San Francisco, CA
AIM/YIM - sfburtonator,
MyISAM has a cool feature where it keeps track of the internal row
count so that
SELECT COUNT(*) FROM FOO executes in constant time. Usually 1ms or so.
The same query on INNODB is O(logN) since it uses the btree to
satisfy the query.
I believe that MyISAM just increments an internal count
OK.
I need help with the following query:
SELECT * FROM PRODUCT WHERE DATE > ? ORDER BY PRICE;
Basically find products created since a given date and order by prices.
I could put an index on (DATE, PRICE) but it will still resort to a
filesort since DATE isn't a constant value.
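The reason the composite index can't avoid the sort can be shown in a few lines (a sketch with made-up rows, not MySQL's executor): a (DATE, PRICE) index is ordered by DATE first, so any DATE range yields rows in DATE order, and a separate sort by PRICE is unavoidable.

```python
# A composite (DATE, PRICE) index is ordered by DATE first, PRICE second.
# With DATE > ? (a range, not a constant), the matching slice of the
# index comes back in DATE order -- not PRICE order.
index = sorted([
    (10, 500), (10, 100), (11, 300), (12, 200), (12, 900), (13, 50),
])

matches = [row for row in index if row[0] > 10]   # DATE > 10
prices = [price for _, price in matches]

print(prices)          # [300, 200, 900, 50] -- not in price order
print(sorted(prices))  # [50, 200, 300, 900] -- the "filesort" step
```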
I was thin
I was benchmarking a few of my queries tonight and I noticed that two
queries had different query plans based on table type.
Here's the "broken" query:
mysql> EXPLAIN SELECT * FROM FOO_LINK_MEMORY_TEST GROUP BY
TARGET_NODE_ID\G
*** 1. row ***
On Sep 28, 2005, at 5:05 PM, Atle Veka wrote:
I am planning on running some tests on a SATA server with a 3ware 9000
series RAID card to see if there's a stripe size that performs
better than
This might be able to help you out:
http://hashmysql.org/index.php?title=Opteron_HOWTO
These ar
On Sep 23, 2005, at 12:27 PM, Jacek Becla wrote:
Hi,
The documentation says "At a later stage, foreign key constraints
will be implemented for MyISAM tables as well". Does anybody know
what is the timescale?
I'm not sure there is a timescale.. I think it might be pretty open
ended. You c
Anyone know the ETA of having full-text index support on INNODB?
Kevin
--
Kevin A. Burton, Location - San Francisco, CA
AIM/YIM - sfburtonator, Web - http://www.feedblog.org/
GPG fingerprint: 5FB2 F3E2 760E 70A8 6174 D393 E84D 8D04 99F1 4412
Here's some thing I've been thinking about.
I want to use INNODB without FKs. I don't need or want referential integrity
in my app (due to a schema and performance issue).
Basically I just create FKs in my OR layer and my app enforces the rules.
The column is still an _ID column so I visually k
INNODB I assume?
Replicated environment?
What version of mysql?
See KILL in the SQL manual. If you do a SHOW PROCESSLIST you can get the
thread ID and you might be able to kill it.
I believe that it's safe to do a KILL on a DELETE but any decision you make
here is your own...
That's a LOT of data
Kevin Burton wrote:
Any idea whats going on and how I could fix this?
This seems like a bug in the SQL parser. The LIMIT is only ignored in this one
situation.
If I just add:
UNION
(SELECT * FROM FOO LIMIT 0)
to the query, it will work correctly.
This might be an acceptable workaround
Kevin Burton wrote:
( SELECT * FROM FOO WHERE FOO.LAST_UPDATED
< 1119898418779 AND FOO.FEED_ID = 1 ORDER BY FOO.LAST_UPDATED DESC LIMIT
10 ) ORDER BY LAST_UPDATED DESC LIMIT 10
OK. I *totally* just figured it out!
WOW.
so.. the LIMIT in the first SELECT is *totally* ignored and the ent
Here's a big problem I'm having.
If I have a query like:
SELECT * FROM FOO WHERE FOO.LAST_UPDATED < 1119898418779 AND
FOO.FEED_ID = 1 ORDER BY FOO.LAST_UPDATED DESC LIMIT 10
it only takes about 10ms or so to execute.
but... if I rewrite it to wrap it in a union like so:
( SELECT * FROM FOO WH
Not sure if this is a known issue or not.. but I haven't seen it
documented anywhere.
Anyway. My past thinking was that you should always use as many
connections as you have tables (at least with myisam). This way in the
worst case scenario you could have locks open on all tables instead of
Atle Veka wrote:
On Mon, 20 Jun 2005, Kevin Burton wrote:
We're noticing a problem where if we were to write to the master with
multiple threads that our slave DB will fall behind.
Note that we're trying to perform as many inserts as humanly possible
and the load on the m
Kevin Burton wrote:
We're noticing a problem where if we were to write to the master with
multiple threads that our slave DB will fall behind.
BTW.. I should clarify.. when I said "break" I really meant that
the slave replication will fall WAY behind because
We're noticing a problem where if we were to write to the master with
multiple threads that our slave DB will fall behind.
Note that we're trying to perform as many inserts as humanly possible
and the load on the master is 1.
My theory is that the master, since it can write to multiple tables
Simon Garner wrote:
I'm not entirely clear what you're talking about, but you could also
have a look at INSERT IGNORE..., or INSERT... ON DUPLICATE KEY UPDATE,
or REPLACE INTO...:
The problem is that I do NOT want it to update.
Also.. REPLACE causes the row to be DELETED and INSERTED again
I've been thinking about this for a while now.
If you have an app that can compute a unique key (hashcode) and you have
a unique index, it should be possible to just do an INSERT instead of a
SELECT first to check that the record doesn't exist, followed by an INSERT.
This should be 2x faster than the
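A sketch of the idea using Python's stdlib sqlite3 for illustration (sqlite spells it INSERT OR IGNORE; the MySQL equivalents are INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE). With a unique index, one statement replaces the SELECT-then-INSERT round trip:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE obj (hashcode INTEGER PRIMARY KEY, payload TEXT)")

def store(hashcode, payload):
    # One statement: inserts if absent, silently no-ops if present.
    cur = conn.execute(
        "INSERT OR IGNORE INTO obj (hashcode, payload) VALUES (?, ?)",
        (hashcode, payload),
    )
    return cur.rowcount == 1   # True only when a row was actually added

print(store(42, "first"))   # True  -- new row
print(store(42, "dupe"))    # False -- duplicate key, ignored
```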
Jochem van Dieten wrote:
Also, let's not mistake the means for the goal. Using indexes is just
a way to solve it and there may be other fixes. The goal is to improve
performance.
no.. using indexes is THE way to fix it :)
I don't want a subquery scanning all 700 million rows in my table wh
Greg Whalin wrote:
Granted, Kevin's tone was a bit harsh, but his sentiments should be
encouraged (frustration w/ a lack of feature). The concept that
people should be happy with what they get for a free product only
serves to keep the quality of free products below what they could be.
It w
Jeff Smelser wrote:
That's funny.. looks like it will be added to 5.1.. Dunno why they think fixing
it is adding a feature..
WOW! That's just insane! This seriously has to be fixed in 5.0 or sooner...
The thing is that MySQL has both promised this feature and is claiming
that 5.0 is now
DBA wrote:
- Original Message -
From: "Kevin Burton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc:
Sent: Tuesday, June 07, 2005 6:11 PM
Subject: Re: Seriously.. When are we going to get subqueries?!
Greg Whalin wrote:
They do use indexes if you use
Greg Whalin wrote:
They do use indexes if you use them to build derived tables and are
pretty fast. The only case where I see them not using indexes when I
think they should is when you use a sub-query for an IN() clause.
I'm sorry.. yes.. They're not using indexes when within IN clauses whi
OK...
Subqueries in 4.1 are totally broken. They don't use indexes. They're
evil. We're told we have subqueries but there's no way anyone on earth
could use them. To make matters worse a lot of developers are TRICKED
into using them and assume that MySQL would do the right thing but it's a
[EMAIL PROTECTED] wrote:
Hi,
I think that client load-balancers are more dispatchers than real load balancers.
Load balancing on the database side takes into account the number of connections,
but also node load, so it is more realistic. But this issue is difficult.
No... you're making assumptions. With t
[EMAIL PROTECTED] wrote:
Wouldn't it make better sense to build on the NDB protocol and keep
the native messaging infrastructure than it would be to build a
similar wrapper from scratch? I mean to use the NDB communications on
top of regular MySQL
Biting off an NDB migration would be a L
I'd love to get some feedback here:
MySQL currently falls down by not providing a solution to transparent
MySQL load
balancing. There are some hardware solutions but these are expensive and
difficult to configure. Also none of them provide any information
about the
current state of your MySQL
I'm curious what people here think of compiling mysql with gcc 4.0...
Especially on Opteron.
I've heard that the way to go with Opteron is to use gcc-3.4 but that
it's a little unstable.
Of course it might be too early to find out if gcc 4.0 is better than 3.4...
Kevin
Pete Harlan wrote:
Hi,
and then it never comes back, presumably from the "auto_increment"
test. If I run the auto_increment test alone (i.e., "./mysql-test-run
auto_increment"), it fails in this same way. When it's hung, mysqld
isn't using any CPU.
Also.. CPU isn't the only thing you shoul
Pete Harlan wrote:
In addition to failing the tests, I deployed the server on Machine 1
for a while and it failed quickly, with a simple insert hanging up and
"kill " being unable to kill it. (The thread's state was
"Killed", but it didn't go away and continued to block other threads
from acces
Was hashmysql.org hacked?
The wiki is gone and now all I get is:
"Stupidity is a crime against humanity."
Which is redundant btw...
Kevin
--
Use Rojo (RSS/Atom aggregator)! - visit http://rojo.com.
See irc.freenode.net #rojo if you want to chat.
Rojo is Hiring! - http://www.rojonetworks.com/JobsA
Gleb Paharenko wrote:
Hello.
I don't remember solutions with keepalived, but this issue is
discussed in the list from time to time. Search in archives at:
http://lists.mysql.com/mysql
Someone should create a wiki page on this subject... it's a commonly
asked question...
Kevin
Richard Dale wrote:
Over the last week I added in lots of comments pasted in from various
places. I'd appreciate those running with Opteron and MySQL to have a close
look at the WIKI and make any amendments/suggestions.
http://hashmysql.org/index.php?title=Opteron_HOWTO
My Opteron server will be h
If you're running in a master/slave environment and your application
is using the slave too often, replication can fall behind, which can
then confuse your application.
This can happen if the IO performance of both the master and slaves is
equivalent and you're performing INSERT/UPDATE/D
Dathan Pattishall wrote:
Forget using drives all together for heavy hit applications.
Build data that can fit on a ram drive (8GB), then you're able to do 20K
Not everyone can run in this config... We have way more data than we
can casually store in memory. It would just be cost prohibitive.
Mem
We're kicking around using SATA drives in a software RAID0 config.
The price diff is significant. You can also get SATA drives in 10k RPM
form now.
Kevin
Dathan Pattishall wrote:
We do about 70K qps at peak for about 1 Billion Queries per day (only on
30 servers BOOYA). So, it's pretty stable.
Also... based on my math.. this yields ~ 2300 qps per MySQL box...
which is pretty good.
Kevin
Dathan Pattishall wrote:
Are you using NPTL?
No, that sucks, we use the other one. Can't make a static build with NPTL.
What type of performance boost are you getting from running a static build?
Kevin
Mark Matthews wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Kevin Burton wrote:
It seems strange that long_query_time is seconds based. I'm trying to
get most of our queries down to sub second time.
1 second is WAY too long. I'd ideally like 500ms.
Can you spec
It seems strange that long_query_time is seconds based. I'm trying to
get most of our queries down to sub second time.
1 second is WAY too long. I'd ideally like 500ms.
Can you specify .5 for long_query_time? Doesn't seem to be working the
way I'd hoped...
Kevin
Greg Whalin wrote:
Curious, were you seeing deadlocks in Sun's JVM w/ Tomcat?
Never with Tomcat but we might have a different number of threads. But
it *was* with Java...
We were forced to run Tomcat w/ NPTL off due to deadlocks under glibc
2.3.2+NPTL.
Yup.. thats the problem we had. But we h
It's pretty filled out now. If you have anything to add please feel free.
http://hashmysql.org/index.php?title=Opteron_HOWTO
Greg Whalin wrote:
We are currently running 2.3.2 (Fedora Core 1) on our Opterons. When
we were still running linux 2.6, we were on 2.3.3 (Fedora Core 2).
Yeah... we were being bitten by 2.3.2's NPTL implementation for MONTHs
before I heard a rumor that the Internet Archive moved to 2.3.4.
Thi
Greg Whalin wrote:
I suspect this is an OS issue. Our Opterons were completing large
data update queries approx 2-3 times slower than our Xeons when running
under 2.6. After a switch to 2.4, the Opterons are faster than the
Xeons. I mentioned NPTL being shut off (LD_ASSUME_KERNEL=2.4.19 in
init
Greg Whalin wrote:
I am all in favor of this idea. Currently, this info is scattered all
over the web, and finding it can be time consuming (even w/ Google).
I see lots of people jumping the same hurdles, so a central location
for this info seems it would greatly benefit the community.
Great!
So... it sounds like a lot of people here (Dathan and Greg) have had
problems deploying MySQL on Opteron in a production environment.
I was wondering if we could start an Opteron HOWTO somewhere (mysql
wiki?) which could illustrate the minefields they've had to walk to
hopefully solidify MySQL
Harrison Fisk wrote:
There isn't really any way to "use" concurrent INSERT. It happens
automatically if possible. However there are a few things you can do
to help it along, such as OPTIMIZE after you DELETE large portions of
the table. Also it does have to be enabled in LOAD DATA INFILE
manually
Harrison Fisk wrote:
aren't loaded into the query cache, they are loaded into the key cache
(key_buffer_size).
Yes... you busted me ! :). I meant to say key cache though.
Now assuming that you have the query cache actually being used (the
cache of the actual statement), then normally the SELECT
OK.
Lets take a mythical application. The app is spending about 50% of its
time inserting into table FOO. The other 50% of the time its spent
doing SELECT against the table.
The SELECTs can use an index which is already fully loaded into the query
cache. Not only THAT but it doesn't need to re
Atle Veka wrote:
On Fri, 6 May 2005, Kevin Burton wrote:
For the record... on a loaded system what type of IO do you guys see?
Anywhere near full disk capacity? I'm curious to see what type of IO
people are seeing on a production/loaded mysql box.
Mostly Linux in this thread so far,
So I think we all need to admit that using IN clauses with subqueries on
MySQL 4.1.x is evil. Pure evil.
I attached the blog post I made on the subject a while back. (my blog
is offline)
If you KNOW ahead of time that your subquery involves only a few
columns, then just rewriting the query t
Greg Whalin wrote:
What drives are you using? For SCSI RAID, you definitely want the deadline
scheduler. That said, even after the switch to deadline, we saw our
Opterons running way slow (compared to older slower Xeons). Whatever
the problem is, we fought it for quite a while (though difficult t
Kevin Burton wrote:
Greg Whalin wrote:
Deadline was much faster. Using sysbench:
test:
sysbench --num-threads=16 --test=fileio --file-total-size=20G
--file-test-mode=rndrw run
So... FYI. I rebooted with elevator=deadline as a kernel param.
db2:~# cat /sys/block/sda/queue/scheduler
noop
Greg Whalin wrote:
Deadline was much faster. Using sysbench:
test:
sysbench --num-threads=16 --test=fileio --file-total-size=20G
--file-test-mode=rndrw run
Wow... what version of sysbench are you running? It's giving me strange
errors
sysbench v0.3.4: multi-threaded system evaluation
Greg Whalin wrote:
We have seen the exact same thing here. We used the deadline
scheduler and saw an immediate improvement. However, we still saw
much worse performance on our Opterons (compared to our older Xeon
boxes). We ended up rolling back to Fedora Core 1
2.4.22-1.2199.nptlsmp kernel
We have a few DBs which aren't using disk IO to optimum capacity.
They're running at a load of 1.5 or so with a high backlog of pending
queries.
When I do iostat I'm not noticing much IO:
Device:  rrqm/s  wrqm/s  r/s  w/s  rsec/s  wsec/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  svctm