Re: Performance Benchmarks

2004-02-18 Thread Adam Goldstein
What about a baseline of benchmarks for common, recent hardware using
the sql-bench tools?

I can't find anything like that. It would be nice to know how my
setup/server compares to other servers of the same or similar
ability.
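For what it's worth, the sql-bench suite ships in the MySQL source tree, so producing a comparable baseline is mostly a matter of standardizing the invocation. A minimal sketch (the directory, bench user and password are assumptions; it only prints the command so it can be reviewed before running):

```shell
# Assemble a reproducible sql-bench invocation; BENCH_DIR and the
# credentials are hypothetical and site-specific.
BENCH_DIR="/usr/local/mysql/sql-bench"
OUT_DIR="results-$(hostname)-$(date +%Y%m%d)"
CMD="perl run-all-tests --server=mysql --user=bench --password=secret --log --dir=$OUT_DIR"
# Print rather than execute, so the command can be checked first:
echo "cd $BENCH_DIR && $CMD"
```

Logging each host's results to a hostname-stamped directory makes the cross-machine comparison the poster is after straightforward.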

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Feb 16, 2004, at 1:52 AM, Chris Nolan wrote:

Ahh, the infamous JDBC benchmark. :-)

There's been much commentary on this in the past. There are some things
to consider:
1. The native APIs for each DB will be faster.
2. The DB, App Server and Web Server were all on one box.
3. I'm not sure if the MySQL JDBC driver supported the NamedPipeFactory
class at the time those benchmarks were taken. If it didn't, then what
you are seeing is an upper-bound imposed by I/O restrictions.
4. In MySQL 4.1.1, InnoDB is able to further benefit from the query
cache, as it can now use it outside of autocommit mode.
5. This was done on Windows - scalability may be different on different
operating systems (Linux 2.6 and FreeBSD 5.2 are likely to be much
better).
6. Interestingly, MySQL's performance dropped by two thirds when the
query cache was disabled - this sounds a bit weird, as none of the
other databases has this sort of mechanism in place, yet they all
performed at better than a third of MySQL's (cached) level.
7. The JDBC driver has improved in performance since this test, across
the board.
8. MySQL 4.1.1 adds vastly improved FULLTEXT capabilities as well as
nested queries. These two additions may be of great benefit to some
applications (and basing development on MySQL 4.1.x may result in being
able to ship around the time 4.1 is declared production ready depending
on the development time involved).
I'm not sure if all aspects of this benchmark have been discussed.
Heikki has said that the performance of MySQL shouldn't have changed
between 4.0.0 and 4.0.16 (4.0.16 was the current version when I asked).
The fact that turning off the query cache caused such a large
performance drop, and that MySQL scaled so closely to Oracle even though
the two engines work so differently internally, points to the limit in
this case being the JDBC element of the test.

As I've said in other threads, I can't wait for MySQL AB to release
their new benchmarks - it will hopefully give us a simple, definitive
source for comparison across architectures, operating systems and 
access
methods.

Regards,

Chris

On Mon, 2004-02-16 at 15:08, Daniel Kasak wrote:
Rodrigo Galindez wrote:
List,
  Is there a site where I can see performance benchmarks of MySQL
vs. MS SQL Server 2000? We plan to migrate a database that's now
running under SQL Server 2000 to MySQL because of online hosting
costs, but our boss is not so sure about doing that; he feels safe
working under SQL Server 2000.
  Sorry if this is a newbie question; this is my first time working
with MySQL.
   Cheers,
http://www.eweek.com/article2/0,4149,293,00.asp
Click on the links for graphs.
--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au
__
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Connect to MySQL via PHP

2004-02-12 Thread Adam Goldstein
Apache2 and php is a buggy combination? Not that I have seen.

Or are you referring to trying to use the Apache worker (multithreaded)
MPM with php... I believe that is still a bit buggy, though no rpms or
packages seem to install it that way anyway. I'm hoping to get the
worker MPM working for me, too, as I have been having to use thttpd for
serving images recently.

I'm not sure how horked up RHES3 is, but standard RH 7-9, Mandrake
7.2-9.2, and Debian 3+ (among others) seem to have no problem whatsoever
connecting php to mysql out of the box, provided you have a user account
on the mysql server to connect with and supply it in your my.cnf,
php.ini or mysql_connect statements.

Keep in mind that RH and MDK broke the portions of php up into separate
packages, such as php-imap, php-mysql, php-cli, php-ldap, etc. You
will need to install the php-mysql package if mysql_connect is not
being recognized. Also note that MDK has further separated the php.ini
file into separate include files, similar to how they broke up
apache's module configs... the structure is /etc/php.ini plus
/etc/php/#_phpmodulename.ini, loaded in order of their number.
This is good to keep in mind, as the mysql settings are no longer
included in the primary php.ini file but in 34_mysql.ini.
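The numeric prefixes make the load order easy to verify. A self-contained illustration (the file names here are examples, not the exact set MDK ships):

```shell
# Simulate the /etc/php/#_module.ini layout in a temp dir; zero-padded
# numeric prefixes sort lexically, which matches the numeric load order.
dir=$(mktemp -d)
touch "$dir/05_zlib.ini" "$dir/10_core.ini" "$dir/34_mysql.ini"
order=$(cd "$dir" && ls *.ini)
echo "$order"
rm -rf "$dir"
```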

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Feb 10, 2004, at 10:34 PM, Don Read wrote:

On 11-Feb-2004 Eric W. Holzapfel wrote:
Hello Listers,

I have a problem with my PHP/Red Hat setup, and a possible problem
with my MySQL setup.
I have Apache (2.0) and PHP (4.3.2) installed on a Red Hat 3.0 ES
system.
Apache 2.0 + PHP is a known buggy combination.

I have MySql installed on a Slackware linux box.
Server on a remote host? Gotcha.

I want to be able to use Apache/PHP to connect to the mysql database
on
the slackware box.
I think that PHP is set up ok, which may be a lie, because PHP says
it does not recognize commands like mysql_pconnect and
mysql_connect.
No, wait, lemme guess. Like 'function not defined' ?
(eat your heart out, Miss Cleo)
Also if I try to connect to the database using something like this:
 mysql://user,[EMAIL PROTECTED] demodb this fails and the or die
getMessage() returns
DB: no such database.  (I am trying to use the Pear DB here)
Do I need to have mysql installed on the red hat machine?
You'll need to compile in the client libs at least.

I can connect to the slackware linux box from a Windows machine using
ODBC.
So your server is working and accepting remote connections.
This is a good thing!
Any ideas on what I have not done, or what I have done wrong?

You haven't configured the Redhat PHP install to build in the MySQL
client libraries.
To verify this, try this script:
---
<?php
phpinfo();
?>
---
Look for '--with-mysql' in the Configure Command section.
Also look for a MySQL Support section.
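What that check amounts to, in shell terms (if the php CLI binary is installed, `php -i | grep -- --with-mysql` does the same thing; the sample configure line below is an assumption for illustration):

```shell
# Grep-style check of phpinfo()'s Configure Command line; the sample
# line is hypothetical.
configure_cmd="'./configure' '--with-apxs2' '--with-mysql=shared,/usr'"
case "$configure_cmd" in
  *--with-mysql*) result="MySQL support compiled in" ;;
  *)              result="MySQL support missing" ;;
esac
echo "$result"
```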
Regards,
--
Don Read   [EMAIL PROTECTED]
-- It's always darkest before the dawn. So if you are going to 
   steal the neighbor's newspaper, that's the time to do it.



Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-02-04 Thread Adam Goldstein
On Jan 31, 2004, at 1:09 AM, Adam Goldstein wrote:

On Jan 30, 2004, at 10:25 AM, Bruce Dembecki wrote:

On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful  
stuff:
So.. My tips for you:

1) Consider a switch to InnoDB, the performance hit was dramatic,  
and it's
about SO much more than transactions (which we still don't do)!

Consider it switched! as soon as I find the way to do so :)
Are there any changes necessary to his code/queries to use innodb?
No changes needed to code/queries, except probably the daily table
optimize/repair can go away... As far as how to do it...
SNIP SNIP

Our switch to InnoDB was fairly smooth, but one table cannot be
converted due to a FULLTEXT index. I see InnoDB has only one drawback
:)
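For anyone else making the same switch, the per-table conversion is a single statement. A sketch (the table name is hypothetical; note that on 4.0.x servers the clause is spelled TYPE rather than ENGINE, and FULLTEXT-indexed tables have to stay MyISAM):

```shell
# Build the conversion statement for one hypothetical table; piping it
# to the mysql client (commented out) would perform the rewrite.
SQL_CONVERT="ALTER TABLE archive_items TYPE = InnoDB;"
echo "$SQL_CONVERT"
# echo "$SQL_CONVERT" | mysql -u root -p mydb
```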

However, the results so far are very worth it. We are still having some  
overloads, but, they are certainly not mysql's fault. Apache/php is  
taking up too much load and memory at a certain point, but the G5  
doesn't break a sweat.

We still have some configuring to do, as we started with 6 x 2G ibdata
files, which mysteriously are only 1G on disk. my.cnf settings below.
We also kept some ram in the MyISAM portion of the config for the one
remaining (large/important) MyISAM table. Do the settings look kosher?

One test of the speed difference has so far registered up to a 5-10X
speed increase. The numbers also depend on time of day and on filesystem
deletes of multiple files per item. There is a backlog of perhaps
another million items left to archive that this is working on, so we'll
have this script as a working test for a few more days, as we can only
run it during low-load hours.

Before Innodb:
 START (07:00:00) 
Done. 2279 archived.
 STOP  (07:50:07): 3005.91sec 
 START (20:00:00) 
Done. 5603 archived.
 STOP  (20:50:16): 3015.15sec 
 START (22:00:00) 
Done. 7265 archived.
 STOP  (22:50:04): 3002.85sec 
After Innodb:
 START (18:00:00) 
Done. 16092 archived.
 STOP  (18:50:03): 3002.25sec 
 START (19:00:00) 
Done. 19683 archived.
 STOP  (19:50:03): 3002.38sec 
 START (22:00:00) 
Done. 25370 archived.
 STOP  (22:50:04): 3003.6sec 
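Since each window ran roughly 3000 seconds, items archived per second is the like-for-like number; comparing the best window on each side:

```shell
# Archive throughput before and after the InnoDB switch, from the best
# window in each log above (7265 and 25370 items in ~3000 s).
speedup=$(awk 'BEGIN {
  before = 7265 / 3002.85
  after  = 25370 / 3003.60
  printf "%.1f", after / before }')
echo "speedup: ${speedup}x"
```

Other pairings of windows give larger ratios (e.g. the slowest "before" window against the fastest "after" window), which is presumably where the 5-10X figure comes from.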
Under a simultaneous-user/high-load situation, would you suggest
running with pconnects in php/mysql,
and with persistent connections in apache? We have been seeing 300-400
outbound mysql connections from the main app server (via netstat -n
-t | grep -c :3306, which includes mostly TIME_WAIT), and 300-450 apache
processes / outbound *:80 connections on the primary app server (we are
researching/pricing 2-4 frontend 1U servers now... roughly
2G ram / 2GHz+ / gigabit boxes, either P4, Athlon, Athlon64 or Xserves).

We are still getting some odd results in stats, such as the same high
'change db' and 'connection' rates.
relevant(?) innodb status;

Per second averages calculated from the last 53 seconds
(now, during low hours. I am not sure how many of these stats would
change during high use hours, I will check tomorrow.)
5266530 OS file reads, 2492377 OS file writes, 448439 OS fsyncs
34.68 reads/s, 18790 avg bytes/read, 14.81 writes/s, 1.74 fsyncs/s
Ibuf for space 0: size 1, free list len 249, seg size 251,
970319 inserts, 970319 merged recs, 189753 merges
Hash table size 4980539, used cells 2737132, node heap has 3893  
buffer(s)
9649.16 hash searches/s, 1424.65 non-hash searches/s

Total memory allocated 1654471880; in additional pool allocated 2385280
Buffer pool size   76800
Free buffers   124
Database pages 72783
Modified db pages  9798
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 8644115, created 38059, written 2814095
39.87 reads/s, 0.21 creates/s, 17.83 writes/s
Buffer pool hit rate 1000 / 1000
Number of rows inserted 2353929, updated 1543175, deleted 1191888, read  
547022884
6.98 inserts/s, 0.47 updates/s, 5.94 deletes/s, 17275.54 reads/s

relevant status;

| Aborted_clients  | 3088   |
| Aborted_connects | 1  |
| Bytes_received   | 2788318966 |
| Bytes_sent   | 1674966066 |
| Com_change_db| 5245603|
| Com_delete   | 1091654|
| Com_insert   | 1933786|
| Com_insert_select| 440592 |
| Com_lock_tables  | 82167  |
| Com_select   | 5133100|
| Com_unlock_tables| 82172  |
| Com_update   | 1525300|
| Connections  | 788173 |
| Created_tmp_disk_tables  | 350|
| Created_tmp_tables   | 96399  |
| Created_tmp_files| 27 |
| Flush_commands   | 1  |
| Handler_commit   | 82157  |
| Handler_delete   | 0  |
| Handler_read_first   | 38191  |
| Handler_read_key | 1081224301 |
| Handler_read_next| 3683264158 |
| Handler_read_rnd | 70681449   |
| Handler_read_rnd_next| 1174208910 |
| Handler_rollback | 729518 |
| Handler_update   | 55200716   |
| Handler_write| 70961992   |

Re: InnoDB Backups

2004-01-30 Thread Adam Goldstein
The hot backup/dump tools use the mysql server to create a live backup
while the server is running. On MyISAM tables, I think they are locked
during the entire process... InnoDB may be different.

You can back up the DB files directly, but the mysql server MUST be
shut down to do so, which is likely not what you want. Since lots of
information may be sitting in the buffers when you copy the files,
along with file-close checks and such, you would only get partial data
backups, which would be far less effective to restore from. When the
mysql server is shut down, those buffers are all flushed to disk, and
any remaining FS/OS buffers are honored by the copy. So, yes, it works
fine if the files are copied while the server is off.

I am about to switch to InnoDB myself, and I am simply going to have to
buy the InnoDB Hot Backup tool to make full backups. However, do not
forget that even those are out of date the moment the backup is done ;)
 Replication is your best friend, next to your dog of course.

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 30, 2004, at 11:50 AM, Mauro Marcellino wrote:

By open file tool I mean software that works concurrently with a backup
suite such as veritas that would backup any open files (such as MySQL
binaries) that would normally be skipped.
What do you mean by inconsistent?  What does InnoDB Hot backup do
differently than an open file agent?
So the only two ways to do an online backup of InnoDB tables is InnoDB 
Hot
backup or mysqldump?

Thanks,

Mauro
- Original Message -
From: Heikki Tuuri [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, January 30, 2004 4:22 AM
Subject: Re: InnoDB Backups

Mauro,

- Original Message -
From: Mauro Marcellino [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Thursday, January 29, 2004 10:44 PM
Subject: InnoDB Backups

I have made a couple of other postings related to this but I guess my
question is:
Can I backup InnoDB tables (binary files) using an open file agent?
what do you mean by an open file agent?

You cannot just copy the ibdata files and ib_logfiles as is, because 
they
would be inconsistent. A commercial tool to make an online (= hot) 
binary
backup is InnoDB Hot Backup. A free way to make an online backup is 
to use
mysqldump.
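A sketch of the mysqldump route Heikki mentions (the database name and backup path are hypothetical; --single-transaction, available in the 4.0 series, gives a consistent InnoDB snapshot without locking the tables):

```shell
# Print the dump command rather than running it; adjust names to taste.
DB=mydb
DUMP_CMD="mysqldump --single-transaction -u root -p $DB"
echo "$DUMP_CMD > /backup/$DB-$(date +%Y%m%d).sql"
```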

If yes... and I am using automatic COMMITs, my backup will be current?

If I am not using automatic COMMITs, then my backup will contain data
up to the last COMMIT (in other words, data since the last COMMIT will
not be included in the backup). Is this true?

I appreciate any guidance...Thanks Much!

Mauro
Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM
tables

Order MySQL technical support from https://order.mysql.com/





Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-01-30 Thread Adam Goldstein
-neef.net.. a 
simple, php/rrd poller that generates nice graphs in web page formats. 
My client has created a page load time graph very recently which I will 
be correlating to our other graphs shortly to see if i can find any 
points to look at.

(Only odd/important settings included, all on app servers)
php.ini: output_buffering = Off
zlib.output_compression = Off
max_execution_time = 120
max_input_time = 90
memory_limit = 40M (was 16M then 24M... )
register_globals = On
default_socket_timeout = 60
sql.safe_mode = Off
mysql.allow_persistent = On
mysql.max_persistent = -1 (was 230, apache max clients was set to 230 
once, changed now)
mysql.max_links = 1200  (not -1, to prevent runaway)
mysql.connect_timeout = 60 (try higher... or -1 ? )
mysql.trace_mode = Off
session.save_handler = files
session.use_cookies = 1
session.auto_start = 0

From apache2 conf:
Timeout 160
KeepAlive On
MaxKeepAliveRequests 0
KeepAliveTimeout 10
ListenBacklog 150
SendBufferSize 1024
StartServers40
MinSpareServers 40
MaxSpareServers 100
ServerLimit 512
MaxClients  450
MaxRequestsPerChild 1000
From sysctl:
net.ipv4.tcp_ecn = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_keepalive_time = 2000
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
vm.bdflush = 50 1000 64 256 1000 3000 60 20 0
net.ipv4.tcp_wmem = 4096 16384 262143
vm.max-readahead = 512
vm.min-readahead = 10
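One back-of-envelope check on the settings above (worst case, since memory_limit is a per-request ceiling rather than typical usage): MaxClients 450 combined with a 40M PHP memory_limit allows far more memory than most boxes hold, which fits the "Apache/php is taking up too much load and memory" symptom.

```shell
# Worst-case PHP memory if every apache child hit memory_limit at once.
worst=$(awk 'BEGIN { printf "%d", 450 * 40 }')
echo "${worst} MB"   # 18000 MB, i.e. ~17.6 GB
```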
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-28 Thread Adam Goldstein
RAID 5 is just as common as any other RAID level in software, and on my
other boxes it does not present any problem at all... I have seen
excellent tests with RAID 5 in software, and many contend that software
RAID 5 on a high-powered system is faster than hardware RAID 5 using
the same disks -- I haven't seen proof of this, however. I have seen
the CPUs used in many hardware RAID 5 cards and they are surprisingly
slow (avg 33MHz).

The record sizes for our database are completely random, and therefore 
would likely require a multitude of disk reads, which would then be 
likely to need waits on spindles, etc (I am not aware of anyone syncing 
spindles anymore, or if it would have any effect if we did).

We are almost ready to switch to Gbit ethernet; however, I am unsure it
will help either... according to my graphs, internal traffic (to/from
the mysql/G5 server) only averages ~1.3Mbs / 1.0Mbs, with peaks of
5.7Mbs / 5.0Mbs (I dunno if the graph below will make it through the
list...). This graph is from the Apache/php server.





--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 28, 2004, at 11:33 AM, Brent Baisley wrote:

The split setup may be faster because you don't have contention for 
resources. Depending on how much data is being moved over the network 
connection, making it Gb ethernet may speed things up more.

In a RAID, ideally the stripe size would match the record size in your 
database. So one record equals one read. Stripe sizes that are too 
small require multiple reads per record, stripe sizes that are too 
large require extraneous data to be read.  Read ahead often doesn't 
work that well with databases since the access is totally random. 
Unless you are accessing the database in the same order the records 
were written.
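Brent's stripe-size point as arithmetic (the numbers are hypothetical, just to show the trade-off):

```shell
# A 16 KB average record on a 4 KB stripe needs four stripe reads; on a
# 64 KB stripe one read fetches the record plus 48 KB of waste.
line=$(awk 'BEGIN { record = 16384
  printf "4K stripe: %d reads; 64K stripe: %d KB wasted",
         record / 4096, (65536 - record) / 1024 }')
echo "$line"
```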

Did you have a software based RAID 5 setup on the Linux box? I never 
heard of implementing RAID 5 in software. I'm not sure what the CPU 
overhead would be on that, especially with 8 disks. So what exactly is 
your current setup (computers, disks, ram, software, database 
locations, etc)?



On Jan 27, 2004, at 10:48 PM, Adam Goldstein wrote:

I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the 
load was even higher... The explanation for this could be that at 
high IO rates the data is not 100% synced across the spindles, and 
therefore smaller files (ie files smaller than the chunk size on each 
physical disk) must wait to be passed under the heads on all the 
disks... While larger chunk sizes may help this, I'm not sure.  A 
large ram buffer and read ahead on a dedicated raid system is more 
likely to work in that case, but, that would require either yet 
another fileserver (fairly expensive), or a hw dedicated Raid server 
(much more expensive), like the Xraid, which did not produce any real 
difference in the mysql bench results previously posted here. In 
fact, going by those simple benchmarks alone, my box already beat the 
Xserve/Xraid system in most of the tests.

Of course, the validity or relativity of those tests to a real world, 
heavily used server may be in question. :) I also am having trouble 
finding relative bench data to other good power systems (ie. I would 
like to see how this stacks up against an 8+G dual/quad xeon or 
sparc, etc)

I will ensure his nightly optimize/repair scripts feature the flush.

But, none of this yet explains why testing from the linux box using 
the remote G5/mysql server (over only 100Mbit switch) gives better 
results than testing directly on the server.

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Memory Problems on a G5/OSX/MySql4.0.17

2004-01-28 Thread Adam Goldstein
On Jan 28, 2004, at 12:01 PM, Bruce Dembecki wrote this wonderful stuff:

I don't think there would be any benefit to using InnoDB, at least not
from a transaction point of view
For the longest time I was reading the books and listening to the 
experts
and all I was hearing is InnoDB is great because it handles 
transactions.
Having little interest in transactions per se I pretty much started 
tuning
things out whenever people mentioned InnoDB.

One day when talking to some MySQL AB folks they asked why I wasn't 
using
InnoDB... I kind of looked at them blankly and replied that I don't 
need
transactions, and they looked back as if I was mad.

Turns out InnoDB is far better at handling large databases than
MyISAM; we
had a massive (and I do mean massive) increase in performance just by
switching to InnoDB. Uses a little more disk space, but it's worth it, 
and
with a 5GByte database and a G5 server you have room to spare, even if 
you
only got the smaller disks.

InnoDB is a major thing for us now, everything is InnoDB. If an 
Engineer
complains something they have done is running slowly it usually turns 
out to
be they made some new thing and didn't make the table InnoDB. The fix 
is
easy and quick. I also suspect that you could do away with that nightly
table repair that ties up the machine for hours at a time if you were 
using
InnoDB.

We have 4 G5 towers serving MySQL for us, all DP2GHz machines with 
4GBytes
of RAM. If your data is changing rapidly, as it appears from your 
samples
most pages include some sort of insert, you will have limited benefit 
from
the Query cache - every time a table receives any type of change to
its
data, any queries in the query cache that use that table are dumped. In
February we are adding to the mix with 2 G5 XServes... These are for 
new
projects, the current servers are handling their loads fine.

On the Disk side we got the dual 250GBytes and mirrored them for 
redundancy,
speed isn't an issue as far as we can tell.

We chose to replace our old database servers with G5s. The old 
machines were
quad processor Sun boxes, and one was an 8 CPU machine. The G5s left 
them
all for dead in terms of performance, although I'd prefer a couple of 
extra
processors, something inside me still feels better knowing that when a
process goes AWOL it's not holding up 50% of the server's resources. 
The
Application servers are still typically Sun, although new ones won't 
be.

We average about 140 Queries per second per machine (of course the load
isn't that well distributed... but it gives you an idea), and typical 
high
points are about 400 - 500 qps on any given machine without stressing 
the
machines (replication catch up can see 1500 - 2000 queries per second, 
but
that's not so common and of course is mostly inserts).

Before we did the upgrade to 4.0.17 during last Friday's maintenance 
window
we were over 1.5 billion queries total for the 28 days the machines 
had been
up.

So.. My tips for you:

1) Consider a switch to InnoDB, the performance hit was dramatic, and 
it's
about SO much more than transactions (which we still don't do)!

Consider it switched! as soon as I find the way to do so :)
Are there any changes necessary to his code/queries to use innodb?
2) Drop the query cache to something more practical, a gigabyte is 
fine if
your data is static, if it's not it's way too much. We use 128MBytes 
and
typically have about a 30% hit rate on the Query cache and the busiest
server is showing 80MBytes unused memory in the query cache and a 41% 
hit
rate, and our databases take about 40G of disk space. Remember, having
a big
query cache doesn't help if it's mostly sitting unused (in fact, if
ours are
still sitting with 80M free in a week, I'll drop all of them to 64MBytes).

we have an average of ~15-20%, with times sustaining 30+%

3) Give lots of memory to InnoDB, I'll share my settings below.

Thank You!

4) Take most of the non-InnoDB memory settings and drop them down real
low;
InnoDB does well on its own, and if you convert all tables you don't
need to
leave much in the way of resources for MyISAM.

ok

5) Turn on and use the slow query log (and if need be change the time 
needed
to qualify as a slow query, the default 10 seconds is a lifetime). You 
may
not code the queries yourself, but you can identify the queries that 
are
causing problems and from there you can advise the client on changes 
to the
database structure (indexes etc) or at least tell him exactly what the
problem queries are.

The slow log has helped us a lot in the past... with the current slow
log settings, only
about 0.1% are slow queries: 3K out of 4 million in the past 18 hours.
Currently the time appears to be set at 2 (from show variables:
slow_launch_time   2).
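For reference, the slow log is enabled in my.cnf roughly like this (the log path is an example). Note that the threshold variable is long_query_time; slow_launch_time, quoted above, actually measures thread-creation time, not query time.

```
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 2
```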

6) Go get MyTOP from Jeremy Zawodny at
http://jeremy.zawodny.com/mysql/mytop/ - personally I like version 1.3 
but
that may just be what I am used to... You may not be able to control 
the
coding part but you can at least monitor the 

Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-27 Thread Adam Goldstein
I have managed to get what looks like 2G for the process, but it does
not want to do a key_buffer of that size.

I gave it a key_buffer of 768M and a query cache of 1024M, and it seems
happier... though not noticeably faster.

[mysqld]
key_buffer   = 768M
max_allowed_packet = 8M
table_cache   = 512
sort_buffer_size = 2M
read_buffer_size   = 2M
myisam_sort_buffer_size = 512M
thread_cache = 8
thread_concurrency = 8
max_connections = 1000
skip-name-resolve
skip-bdb
skip-innodb
skip-locking
ft_min_word_len= 2
join_buffer_size = 3M
query_cache_size=1024M
bulk_insert_buffer_size=256M
tmp_table_size =128M
sort_buffer =8M
read_rnd_buffer_size=8M
record_buffer=32M
open_files_limit=15000
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
Benchmarks are just plain weird. Here is from the linux server to the  
G5:

alter-table: Total time: 11 wallclock secs ( 0.03 usr  0.02 sys +  0.00  
cusr  0.00 csys =  0.05 CPU)
ATIS: Failed  
(output/ATIS-mysql_fast-Linux_2.4.22_21mdkenterprise_i686-cmp-mysql- 
fink-64)
big-tables: Total time: 15 wallclock secs ( 4.31 usr  2.79 sys +  0.00  
cusr  0.00 csys =  7.10 CPU)
connect: Total time: 324 wallclock secs (46.64 usr 30.27 sys +  0.00  
cusr  0.00 csys = 76.91 CPU)
create: Total time: 105 wallclock secs ( 2.04 usr  1.10 sys +  0.00  
cusr  0.00 csys =  3.14 CPU)
insert: Total time: 1237 wallclock secs (295.16 usr 73.22 sys +  0.00  
cusr  0.00 csys = 368.38 CPU)
select: Total time: 134 wallclock secs (32.39 usr  6.77 sys +  0.00  
cusr  0.00 csys = 39.16 CPU)
wisconsin: Failed  
(output/wisconsin-mysql_fast-Linux_2.4.22_21mdkenterprise_i686-cmp- 
mysql-fink-64)

and here is on the G5 locally:

alter-table: Total time: 38 wallclock secs ( 0.07 usr  0.05 sys +  0.00  
cusr  0.00 csys =  0.12 CPU)
ATIS: Total time: 20 wallclock secs ( 7.90 usr  7.77 sys +  0.00 cusr   
0.00 csys = 15.67 CPU)
big-tables: Total time: 27 wallclock secs ( 6.49 usr 16.10 sys +  0.00  
cusr  0.00 csys = 22.59 CPU)
connect: Total time: 167 wallclock secs (39.79 usr 52.78 sys +  0.00  
cusr  0.00 csys = 92.57 CPU)
create: Total time: 106 wallclock secs ( 6.12 usr  2.94 sys +  0.00  
cusr  0.00 csys =  9.06 CPU)
insert: Total time: 1257 wallclock secs (388.48 usr 311.51 sys +  0.00  
cusr  0.00 csys = 699.99 CPU)
select: Total time: 132 wallclock secs (40.22 usr 27.92 sys +  0.00  
cusr  0.00 csys = 68.14 CPU)
wisconsin: Total time:  5 wallclock secs ( 1.89 usr  1.65 sys +  0.00  
cusr  0.00 csys =  3.54 CPU)

Some of the strangeness is due to it being a live server, tested during
low-use hours. How accurate are these benchmarks, and do they represent
the overall strength of the mysql server to handle large loads? I can't
get a good idea as to how many queries/sec it should be able to handle,
considering I can't tell how complex the queries are. All I can say is
the site serves 12mil pages/month (~100mil hits/mo), 80% concentrated
into 13H of the day, with perhaps 40% in just 4-5 hours... About 1
million hits per month to the heaviest sql page, broken up into the
above portions.
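Roughing out the numbers quoted above (each page may issue several queries, so this is a floor on page rate, not a qps estimate):

```shell
# 12M pages/month, with ~40% of a day's traffic in a 4-5 hour window.
rate=$(awk 'BEGIN {
  pages_day = 12e6 / 30
  printf "%.1f", (pages_day * 0.40) / (4.5 * 3600) }')
echo "${rate} pages/s in the peak window"
```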

Of course, that is also not including the amount of people potentially  
trying to access the site during this time, which by eyeball estimates  
on the graphs suggest easily 25-150% more, plus the amount more that  
would come if the site could handle them and they were happy. (We all  
know web users are a fickle bunch, and will drop a slow loading site  
like a hot potato.)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 12:05 PM, Brad Eacker wrote:

Adam,
 Off the wall question, but is White Wolf Networks related in
any way to White Wolf Publishing?
You may be hitting an OSX limit. While you can install more than 2GB
on a system, I don't think any one process is allowed to allocated
more than 2GB of RAM to itself. It's not a 64-bit OS yet. You should
be able to search the Apple website for this limit.
 Could you take a look at the ulimit man page to see if it will
allow greater than a signed 32 bit value (2G).  If it does not then
there is still a 32 bit limitation on process size due to this basic
constraint.
 Brad Eacker ([EMAIL PROTECTED])




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-27 Thread Adam Goldstein
I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the 
load was even higher... The explanation for this could be that at high 
IO rates the data is not 100% synced across the spindles, and therefore 
smaller files (ie files smaller than the chunk size on each physical 
disk) must wait to be passed under the heads on all the disks... While 
larger chunk sizes may help this, I'm not sure.  A large ram buffer and 
read ahead on a dedicated raid system is more likely to work in that 
case, but, that would require either yet another fileserver (fairly 
expensive), or a hw dedicated Raid server (much more expensive), like 
the Xraid, which did not produce any real difference in the mysql bench 
results previously posted here. In fact, going by those simple 
benchmarks alone, my box already beat the Xserve/Xraid system in most 
of the tests.

Of course, the validity or relativity of those tests to a real world, 
heavily used server may be in question. :) I also am having trouble 
finding relative bench data to other good power systems (ie. I would 
like to see how this stacks up against an 8+G dual/quad xeon or sparc, 
etc)

I will ensure his nightly optimize/repair scripts feature the flush.

But, none of this yet explains why testing from the linux box using the 
remote G5/mysql server (over only 100Mbit switch) gives better results 
than testing directly on the server.

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 27, 2004, at 9:45 AM, Brent Baisley wrote:

I don't think there would be any benefit to using InnoDB, at least not 
from a transaction support view.

After your nightly optimize/repair are you also doing a flush? That 
may help.

I haven't seen any direct comparisons between HFS+ and file systems 
supported by Linux. I would believe that Linux would be faster since 
Linux tends to be geared towards performance first rather than 
usability. But you shouldn't rely on disk caching only. The disks 
still need to be read in order to fill the cache, so you want to get 
the best disk performance you can. Based on your other email, it looks 
like you are using individual disks for storing your data. While I 
understand what you were trying to do by separating your data onto 
different disks, you would get far better performance by combining 
your disks in a RAID, even a software RAID.
If you are using software based RAID, you would need to choose between 
mirroring or striping. Both will give you better read speeds, 
mirroring will slow down writes. If you are striping, the more drives 
you use the better performance you'll get, although I wouldn't put 
more than 4 drives on a single SCSI card.
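The scaling argument above can be sketched with back-of-the-envelope numbers (the per-drive and bus figures below are assumptions for illustration, not measurements from this thread):

```python
# Illustrative sketch: striped reads scale with drive count until the
# shared SCSI bus saturates. Both figures below are assumed values.
PER_DRIVE_MB_S = 40   # assumed sustained read rate of one 10K SCSI disk
BUS_LIMIT_MB_S = 160  # U160 SCSI bus ceiling

def striped_read_mb_s(n_drives):
    # Reads are spread across all stripe members, capped by the bus.
    return min(n_drives * PER_DRIVE_MB_S, BUS_LIMIT_MB_S)

for n in (1, 2, 4, 6):
    print(n, striped_read_mb_s(n))  # 4 drives already saturate the bus
```

Under these assumed numbers, a fifth or sixth drive on the same card adds nothing, which matches the "no more than 4 drives on a single SCSI card" advice.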
I think you can use Apple's RAID software for your SCSI disk, but 
SoftRAID (softraid.com) would give you more options. Moving to RAID 
should improve things across the board and will give the best bang for 
your buck (SoftRAID is $129). Personally, I think you should always 
use some form of RAID on all servers.

On Jan 26, 2004, at 5:41 PM, Adam Goldstein wrote:

I have added these settings to my newer my.cnf, including replacing 
the key_buffer=1600M with this 768M... It was a touch late today to 
see if it has a big effect during the heavy load period (~3am to 4pm 
EST, site has mostly european users)

I did not have any of these settings explicitly set in my latest 
my.cnf trials, except key_buffer, and I omitted the innodb ones, as 
we are not (currently) using innodb... would there be any benefit? 
Transactions are not a priority, so says my client, so he does not 
use them.

I see the query_cache_size is rather large here, but I am unsure what 
the default size would be. I do not know, yet, how large I can/should 
make either setting, but, it does appear to work without 
malloc/memory errors appearing in the log. Note: while it bitched in 
the logs about the malloc setting, the server did not crash, but, 
kept running. Obviously with an undetermined amount of cache. I 
cannot seem to find any good way to know how much ram 
(cache/buffer/other) mysql uses, as the top output from osx is not 
very appealing... not that linux top tells me much more either.  On 
average, on the old system (all on one box) mysql was said to be 
using about 350MB avg in top... except after the nightly 
optimize/repair script which left it using 1.2G of ram for hours, and 
making all queries rather slow.

Also- a more G5 specific question: as MySql is supposed to gain much 
from the OS disk caching, how does OSX/HFS+ compare to other *nIX 
filesystems... such as Linux 2.4 w/reiserfs?

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577




Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
I cannot seem to allocate any large amounts of memory to Mysql on our 
system...

Can anyone suggest any settings/changes/etc. to get this running to the 
best of its ability?

Dual 2Ghz G5, 4G ram, OSX 10.3.2, 73G-10Krpm Sata Raptor drives

Using both the 'Complete' MySQL 4.0.15 and the standard binary package 
4.0.17, I cannot seem to get the daemon to accept a large 
key_buffer setting, such as 1900M or 2600M (though 1600M seemed to work 
on the first startup).
I get this error:

Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:40:16  mysqld started
*** malloc: vm_allocate(size=2597892096) failed (error code=3)
*** malloc[14345]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:43:00  mysqld started
*** malloc: vm_allocate(size=1984614400) failed (error code=3)
*** malloc[14378]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
This is a dedicated mysql backend server using MyISAM tables 
(currently) and we need it to run a fairly heavy load.

This is only the relevant conf data:

[mysqld]
skip-locking
key_buffer = 1990M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 3M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
thread_concurrency = 8
max_connections = 1200
skip-name-resolve
skip-bdb
skip-innodb
ft_min_word_len = 2
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
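A commonly cited rough formula for mysqld's worst-case footprint is key_buffer + query cache + max_connections x (per-thread buffers). Plugging in the values above (an estimate only, not how mysqld actually accounts memory) shows why these settings are risky on a 32-bit process:

```python
# Rough worst-case memory estimate from the my.cnf values above.
# This is a commonly cited approximation, not an exact accounting.
key_buffer_mb = 1990
query_cache_mb = 32
per_thread_mb = 2 + 2 + 3   # sort_buffer + read_buffer + join_buffer
max_connections = 1200

# Worst case: every connection allocates its per-thread buffers at once.
worst_case_mb = key_buffer_mb + query_cache_mb + per_thread_mb * max_connections
print(worst_case_mb)  # 10422 MB - far beyond any 32-bit process limit
```

Even if the per-thread worst case never materializes, the 1990M key_buffer alone leaves almost no headroom under a ~2GB per-process ceiling.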
These are the largest tables in the db (other clipped):
Raptor: ../data root# ls -lhS /Library/MySQL/var/db1/
total 9177432
-rw-rw  1 mysql  mysql975M 21 Jan 20:30 axxx_search.MYD
-rw-rw  1 mysql  mysql619M 21 Jan 20:30 axxx_search.MYI
-rw-rw  1 mysql  mysql571M 21 Jan 20:05 
td_visitor_archive.MYD
-rw-rw  1 mysql  mysql492M 21 Jan 20:37 message.MYD
-rw-rw  1 mysql  mysql435M 21 Jan 20:30 axxx_description.MYD
-rw-rw  1 mysql  mysql412M 21 Jan 20:37 enxxx.MYD
-rw-rw  1 mysql  mysql336M 21 Jan 20:37 enxxx.MYI
-rw-rw  1 mysql  mysql200M 23 Dec 09:05 
axxx_title_images.MYD
-rw-rw  1 mysql  mysql 97M 21 Jan 20:06 rating.MYD
-rw-rw  1 mysql  mysql 81M 21 Jan 20:06 rating.MYI
-rw-rw  1 mysql  mysql 49M 21 Jan 20:24 bxx.MYI
-rw-rw  1 mysql  mysql 28M 21 Jan 20:24 bxx.MYD
...clip...

These are our best benchmarks:
alter-table: Total time:  6 wallclock secs ( 0.03 usr  0.04 sys +  0.00 
cusr  0.00 csys =  0.07 CPU)
ATIS: Total time: 21 wallclock secs (17.20 usr  3.37 sys +  0.00 cusr  
0.00 csys = 20.57 CPU)
big-tables: Total time: 15 wallclock secs ( 4.30 usr  3.60 sys +  0.00 
cusr  0.00 csys =  7.90 CPU)
connect: Total time:  4 wallclock secs ( 0.61 usr  0.29 sys +  0.00 
cusr  0.00 csys =  0.90 CPU)
create: Total time: 98 wallclock secs (11.63 usr  3.02 sys +  0.00 cusr 
 0.00 csys = 14.65 CPU)
insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys +  0.00 
cusr  0.00 csys = 391.22 CPU)
select: Total time: 122 wallclock secs (33.21 usr  7.03 sys +  0.00 
cusr  0.00 csys = 40.24 CPU)
wisconsin: Total time:  8 wallclock secs ( 5.00 usr  0.49 sys +  0.00 
cusr  0.00 csys =  5.49 CPU)
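These sql-bench summary lines can be compared across machines with a small parser (a convenience sketch; the line format is taken from the output above):

```python
import re

# Parse sql-bench summary lines like:
#   insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys + ...)
LINE = re.compile(r"^(\w[\w-]*): Total time:\s+(\d+) wallclock secs")

def parse_results(text):
    """Return {test_name: wallclock_seconds} for one sql-bench run."""
    return {m.group(1): int(m.group(2))
            for m in (LINE.match(line) for line in text.splitlines()) if m}

sample = """connect: Total time:  4 wallclock secs ( 0.61 usr  0.29 sys)
insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys)"""
print(parse_results(sample))  # {'connect': 4, 'insert': 1405}
```

Running this over two machines' outputs and diffing the dicts gives the kind of baseline comparison the thread is asking about.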
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net



Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though I am unsure whether a 
combination of other memory settings is adding up to more than 2GB in 
total)

Even at 1.6G, which seems to work (though, -not- why we got 4G of 
expensive ram), does anyone have any advice for optimizing the 
settings?  Or are they pretty optimized as it is?  (according to 
benchmarks, anyways)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 10:13 AM, Brent Baisley wrote:

You may be hitting an OSX limit. While you can install more than 2GB 
on a system, I don't think any one process is allowed to allocate 
more than 2GB of RAM to itself. It's not a 64-bit OS yet. You should 
be able to search the Apple website for this limit.
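For what it's worth, the vm_allocate sizes in the error log can be converted to GiB and held up against a roughly 2 GiB per-process ceiling (a quick sanity check, not an exact model of the OSX address space):

```python
# vm_allocate sizes copied from the mysqld error log earlier in the thread.
failed = [2597892096, 1984614400]
GIB = 2**30
for size in failed:
    print(f"{size} bytes = {size / GIB:.2f} GiB")
# The first request alone exceeds 2 GiB; the second is close enough that
# mysqld's other allocations could push the process past the limit.
```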

On Jan 26, 2004, at 6:10 AM, Adam Goldstein wrote:

I cannot seem to allocate any large amounts of memory to Mysql on our 
system...

Can anyone suggest any settings/changes/etc. to get this running to 
the best of its ability?

Dual 2Ghz G5, 4G ram, OSX 10.3.2, 73G-10Krpm Sata Raptor drives

Using both the 'Complete' MySQL 4.0.15 and the standard binary package 
4.0.17, I cannot seem to get the daemon to accept a large 
key_buffer setting, such as 1900M or 2600M (though 1600M seemed to 
work on the first startup).
I get this error:

Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:40:16  mysqld started
*** malloc: vm_allocate(size=2597892096) failed (error code=3)
*** malloc[14345]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:43:00  mysqld started
*** malloc: vm_allocate(size=1984614400) failed (error code=3)
*** malloc[14378]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
This is a dedicated mysql backend server using MyISAM tables 
(currently) and we need it to run a fairly heavy load.

This is only the relevant conf data:

[mysqld]
skip-locking
key_buffer = 1990M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 3M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
thread_concurrency = 8
max_connections = 1200
skip-name-resolve
skip-bdb
skip-innodb
ft_min_word_len = 2
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
These are the largest tables in the db (other clipped):
Raptor: ../data root# ls -lhS /Library/MySQL/var/db1/
total 9177432
-rw-rw  1 mysql  mysql975M 21 Jan 20:30 axxx_search.MYD
-rw-rw  1 mysql  mysql619M 21 Jan 20:30 axxx_search.MYI
-rw-rw  1 mysql  mysql571M 21 Jan 20:05 
td_visitor_archive.MYD
-rw-rw  1 mysql  mysql492M 21 Jan 20:37 message.MYD
-rw-rw  1 mysql  mysql435M 21 Jan 20:30 
axxx_description.MYD
-rw-rw  1 mysql  mysql412M 21 Jan 20:37 enxxx.MYD
-rw-rw  1 mysql  mysql336M 21 Jan 20:37 enxxx.MYI
-rw-rw  1 mysql  mysql200M 23 Dec 09:05 
axxx_title_images.MYD
-rw-rw  1 mysql  mysql 97M 21 Jan 20:06 rating.MYD
-rw-rw  1 mysql  mysql 81M 21 Jan 20:06 rating.MYI
-rw-rw  1 mysql  mysql 49M 21 Jan 20:24 bxx.MYI
-rw-rw  1 mysql  mysql 28M 21 Jan 20:24 bxx.MYD
...clip...

These are our best benchmarks:
alter-table: Total time:  6 wallclock secs ( 0.03 usr  0.04 sys +  
0.00 cusr  0.00 csys =  0.07 CPU)
ATIS: Total time: 21 wallclock secs (17.20 usr  3.37 sys +  0.00 cusr 
 0.00 csys = 20.57 CPU)
big-tables: Total time: 15 wallclock secs ( 4.30 usr  3.60 sys +  
0.00 cusr  0.00 csys =  7.90 CPU)
connect: Total time:  4 wallclock secs ( 0.61 usr  0.29 sys +  0.00 
cusr  0.00 csys =  0.90 CPU)
create: Total time: 98 wallclock secs (11.63 usr  3.02 sys +  0.00 
cusr  0.00 csys = 14.65 CPU)
insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys +  0.00 
cusr  0.00 csys = 391.22 CPU)
select: Total time: 122 wallclock secs (33.21 usr  7.03 sys +  0.00 
cusr  0.00 csys = 40.24 CPU)
wisconsin: Total time:  8 wallclock secs ( 5.00 usr  0.49 sys +  0.00 
cusr  0.00 csys =  5.49 CPU)
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
It is hard to determine the exact causes (many theories, no good 
solutions).
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net

On Jan 26, 2004, at 11:49 AM, Gabriel Ricard wrote:

2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 
10.3 increased this to 4GB per-process. I've gotten MySQL running with 
3GB of RAM on the G5 previously.

This is an excerpt from a prior email to the list from back in October 
when I was first testing MySQL on the G5:

 query_cache_size=1024M
 bulk_insert_buffer_size=256M
 tmp_table_size=128M
 sort_buffer=8M
 read_rnd_buffer_size=8M
 key_buffer=768M
 record_buffer=32M
 myisam_sort_buffer_size=512M
 innodb_buffer_pool_size=1024M
 innodb_additional_mem_pool_size=32M
 However, for some reason, when I swapped the values key_buffer and 
query_cache_size to try and give
 key_buffer 1GB, it failed. I swapped the values back and it worked 
fine... odd.

- Gabriel

On Jan 26, 2004, at 11:16 AM, Brent Baisley wrote:

Yes, MySQL is capable of using more than 2GB, but it still must obey 
the limits of the underlying OS. This means file sizes, memory 
allocation and whatever else. Have you heard of anybody allocating 
more than 2GB using OSX? I've heard of quite a bit more using Linux or 
other Unix flavors, but not OSX.

As for optimizing settings, you need to profile your workload. You 
may actually run into an I/O, CPU, or network bottleneck before you hit a 
memory bottleneck. You need to run things and find where the 
bottleneck is to optimize performance.

On Jan 26, 2004, at 11:09 AM, Adam Goldstein wrote:

Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though I am unsure whether a 
combination of other memory settings is adding up to more than 
2GB in total)

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577





Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
The primary server (Dual Athlon) has several U160 scsi disks, 10K and 
15K rpm... Approximately half the full size images are on one 73G U160, 
the other half on another (about 120G of large images alone being 
stored... I am trying to get him to abandon/archive old/unused images). 
  The system/logs run on a 36G 10K, Mysql used to run on another 36G 
15k, and /home (with thumbnails and php files) is on another 36G 10K... 
There is also a 250G U133 drive for archives/backups. 
Apache2.0.47/PHP4.3.4

We are going to upgrade the rest of the 10Krpm drives to 15Krpm, but, 
that does not (yet) help the G5... it is a full tower unit at the 
moment, though we are now looking at replacing it with a G5 Xserve. The 
desktop unit can only contain 2xSATA drives internally, and we do not 
have an external raid/scsi/FC system to use on it.. yet. My thought 
when setting this up was to use more RAM cache than disk for the DB. 
The entire DB is about 5.5GB total, currently, and resides on its own 
partition on its own disk.

The G5 is using std. HFS+ on all disks, but the Athlon/linux server is 
using reiserfs on most disks.

I will relay the HEAP/EXPLAIN info to my client, as I do not work on 
that portion of the system... He does the code, I keep the systems 
up/running.  We are trying to implement load balancing and, eventually, 
failover redundancy... The initial thought was the G5 and Dual Athlon 
being cooperative/redundant machines but, it is looking like we 
will need several frontends and the G5/D.Athlon be backends...

All of this needs to be done on the tightest budget & in the shortest time 
possible... we are looking at adding 3-5 1U frontend machines, but only 
if we can make sure the G5/D.Athlon boxes can handle it. Obviously 
there need to be some larger changes, but we want to avoid throwing 
hardware & money at it without reason.

We also have a second 'frontend' machine temporarily being used, a Dual 
PIII/850 w/2G ram and 4xscsi drives. It seems strangely unable to 
handle much user load at all. Initially I tried simple DNS load 
balancing, but that was quickly discarded for subdomain/site-topic 
separation. It can handle only about 20% of the main server's user load, 
it seems. (php files reside locally on it, all images are served via the 
main server/thttpd, some dynamic includes are done via NFS mount to the 
main server).

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 4:39 PM, Brent Baisley wrote:

Have you tried reworking your queries a bit? I try to avoid using IN 
as much as possible. What does EXPLAIN say about how the long queries 
are executed? If I have to match something against a lot of values, I 
select the values into a HEAP table and then do a join, especially if 
you are going to be reusing the values within the current session.
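The IN-list-to-temp-table idea can be sketched as follows. This uses SQLite (via Python's stdlib) purely as a stand-in so the example runs anywhere; in MySQL 4.x the temporary table would be declared TYPE=HEAP to keep it in memory, and the table/column names below are simplified from the queries in this thread:

```python
import sqlite3

# SQLite stand-in for the MySQL pattern: load the filter values into a
# temporary table once, then join, instead of a long IN (...) list.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enchere (id_enchere INTEGER, id_categorie INTEGER)")
conn.executemany("INSERT INTO enchere VALUES (?, ?)",
                 [(i, i % 10) for i in range(100)])

wanted = [3, 7]  # category ids that would otherwise go into IN (...)
conn.execute("CREATE TEMPORARY TABLE cat_filter (id_categorie INTEGER)")
conn.executemany("INSERT INTO cat_filter VALUES (?)", [(c,) for c in wanted])

rows = conn.execute(
    "SELECT e.id_enchere FROM enchere e "
    "JOIN cat_filter f ON e.id_categorie = f.id_categorie"
).fetchall()
print(len(rows))  # 20: 10 auctions in each of the two matched categories
```

Within a session, the temp table can be reused across several queries, which is where the pattern pays off most.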
Are you storing images (img1, img2, img3) in the database? I would 
recommend against that in  a high load database, it bloats the 
database size forcing the database to use a lot more RAM to cache the 
database. It also prevents you from creating a database with fixed 
length records. Keeping the images as files will push the loading of 
the images out to the file system and web server.
What kind of RAID setup do you have? You just said you had 73GB 10K 
disks. Why didn't you go with 15k disks? Cost?

On Jan 26, 2004, at 3:42 PM, Adam Goldstein wrote:

Yes, I saw this post before... I am not sure why I cannot allocate 
more ram on this box- It is a clean 10.3 install, with the 10.3.2 update. 
 I got this box as I love OSX, and have always loved apple, but, this 
is not working out great. Much less powerful (and less expensive) 
units can do a better job of this (the entire site was run on ONE 
dual athlon box with 3G ram, and it seems to have made -NO- 
difference moving the mysql to the dedicated G5.)

Obviously, there is something wrong somewhere- And, I need to find 
where. My client (site creator) is depending on me to help him boost 
the ability of the site to handle more users, but we've always been 
able to do it on a light budget. I need to know where to look first, 
as we are running out of time... His users are a fickle bunch, and 
will likely migrate off to other sites if this slowness continues (it 
has been degrading for past 3-4 months from slow at peak, to dead for 
all peak hours).

These are example queries from the heavier pages:

1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS 
fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 
4204 LIMIT 0,1
Time: 0.0004551410675 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, 
e.img3, e.prix_depart, e.price_present, e.fnb_buyitnow_price, 
e.date_debut, e.date_debut_origine, e.date_fin, e.fin_bids, 
e.fvc_active FROM enchere e WHERE e.id_categorie IN (4204) AND 
(e.date_fin >= '2004-01-26 14:41:59') ORDER BY date_fin ASC LIMIT 0, 
80
Time: 37.60733294 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM

Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
I have added these settings to my newer my.cnf, including replacing the 
key_buffer=1600M with this 768M... It was a touch late today to see if 
it has a big effect during the heavy load period (~3am to 4pm EST, site 
has mostly european users)

I did not have any of these settings explicitly set in my latest my.cnf 
trials, except key_buffer, and I omitted the innodb ones, as we are 
not (currently) using innodb... would there be any benefit? 
Transactions are not a priority, so says my client, so he does not use 
them.

I see the query_cache_size is rather large here, but I am unsure what 
the default size would be. I do not know, yet, how large I can/should 
make either setting, but, it does appear to work without malloc/memory 
errors appearing in the log. Note: while it bitched in the logs about 
the malloc setting, the server did not crash, but, kept running. 
Obviously with an undetermined amount of cache. I cannot seem to find 
any good way to know how much ram (cache/buffer/other) mysql uses, as 
the top output from osx is not very appealing... not that linux top 
tells me much more either.  On average, on the old system (all on one 
box) mysql was said to be using about 350MB avg in top... except after 
the nightly optimize/repair script which left it using 1.2G of ram for 
hours, and making all queries rather slow.

Also- a more G5 specific question: as MySql is supposed to gain much 
from the OS disk caching, how does OSX/HFS+ compare to other *nIX 
filesystems... such as Linux 2.4 w/reiserfs?

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 11:49 AM, Gabriel Ricard wrote:

2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 
10.3 increased this to 4GB per-process. I've gotten MySQL running with 
3GB of RAM on the G5 previously.

This is an excerpt from a prior email to the list from back in October 
when I was first testing MySQL on the G5:

 query_cache_size=1024M
 bulk_insert_buffer_size=256M
 tmp_table_size=128M
 sort_buffer=8M
 read_rnd_buffer_size=8M
 key_buffer=768M
 record_buffer=32M
 myisam_sort_buffer_size=512M
 innodb_buffer_pool_size=1024M
 innodb_additional_mem_pool_size=32M
 However, for some reason, when I swapped the values key_buffer and 
query_cache_size to try and give
 key_buffer 1GB, it failed. I swapped the values back and it worked 
fine... odd.

- Gabriel

On Jan 26, 2004, at 11:16 AM, Brent Baisley wrote:

Yes, MySQL is capable of using more than 2GB, but it still must obey 
the limits of the underlying OS. This means file sizes, memory 
allocation and whatever else. Have you heard of anybody allocating 
more than 2GB using OSX? I've heard of quite a bit more using Linux or 
other Unix flavors, but not OSX.

As for optimizing settings, you need to profile your workload. You 
may actually run into an I/O, CPU, or network bottleneck before you hit a 
memory bottleneck. You need to run things and find where the 
bottleneck is to optimize performance.

On Jan 26, 2004, at 11:09 AM, Adam Goldstein wrote:

Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though I am unsure whether a 
combination of other memory settings is adding up to more than 
2GB in total)

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577







Re: Export Database Structure sans data

2004-01-26 Thread Adam Goldstein
phpMyAdmin also gives a nice, simple frontend for doing this... copy 
tables or DBs with/without data.

Personally, I don't think mysql should be used on a box without 
phpMyAdmin on it, at least as a backup admin tool ;)
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net

On Jan 26, 2004, at 5:05 PM, Daniel Kasak wrote:

David Perron wrote:

I'm looking for the function that will allow me to export the database
structure into a static file, without the actual data.
I would like to create an ERD diagram with the output file.
David


mysqldump -d

--
Daniel Kasak
IT Developer
NUS Consulting Group
Level 5, 77 Pacific Highway
North Sydney, NSW, Australia 2060
T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989
email: [EMAIL PROTECTED]
website: http://www.nusconsulting.com.au




Re: can't install DBI on panther

2004-01-26 Thread Adam Goldstein
Fink works excellently for DBI, and even for mysql. You can also change  
the mysql.info file to add compiler options, like G5 optimizations,  
openssl, etc.

http://fink.sourceforge.net/
--  
Adam Goldstein
White Wolf Networks
http://whitewlf.net

On Jan 26, 2004, at 8:50 PM, tait sanders wrote:

i'm already logged in as SU so sudo won't help.
thanks anyways.
ta
tait
On 27/01/2004, at 12:12 PM, Douglas Sims wrote:

I installed DBI and dbd:mysql on 10.3.2 a few months ago and had all  
sort of problems but I finally got it to work.

I don't exactly remember what finally did it, though.  I think it  
might have been running the install with sudo, as in:
sudo perl -MCPAN ... etc.

but I'm not sure.  If you haven't tried that, perhaps it will work.

I've been trying to install GD (gd-2.0.21) off and on for a few days  
now and meeting with the same frustration.  I build zlib, libpng,  
jpeg-6b without any errors, but gd blows up when I make.

Here is the specific part of the build which blows up:

gcc -DHAVE_CONFIG_H -I. -I. -I. -I/usr/X11R6/include  
-I/usr/X11R6/include/freetype2 -I/usr/X11R6/include -g -O2 -MT  
gd_jpeg.lo -MD -MP -MF .deps/gd_jpeg.Tpo -c gd_jpeg.c  -fno-common  
-DPIC -o .libs/gd_jpeg.lo
gd_jpeg.c:41:21: jpeglib.h: No such file or directory
gd_jpeg.c:42:20: jerror.h: No such file or directory
gd_jpeg.c:54: error: parse error before cinfo
gd_jpeg.c: In function `fatal_jpeg_error':

I don't find gd_jpeg.lo anywhere, but I'm not sure what this means.

I find gd_jpeg.Plo in the .deps directory.  I'm sorry, I don't know  
what a .Plo or .lo file is.  The .Plo file just contains #dummy.   
This is a bit off-topic from mysql or the original question, I'm  
afraid.



tait sanders wrote:

i've run both 'perl -MCPAN -eshell' and 'make DBI' and both come  
back  reporting heaps of errors like the following:

from Perl.xs:1:
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
380: 30: sys/types.h: No such file or directory
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
411: 19: ctype.h: No such file or directory
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
423: 23: locale.h: No such file or directory
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
440: 20: setjmp.h: No such file or directory
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
446: 26: sys/param.h: No such file or directory
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
451: 23: stdlib.h: No such file or directory

/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1749:  error: parse error before STRLEN
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1749:  warning: type defaults to `int' in declaration of `STRLEN'
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1749:  warning: data definition has no type or storage class
In file included from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/handy.h: 
121,
 from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1805,
 from DBIXS.h:19,
 from Perl.xs:1:
/usr/include/gcc/darwin/3.3/inttypes.h:33:72: sys/cdefs.h: No such  
file  or directory
/usr/include/gcc/darwin/3.3/inttypes.h:34:56: machine/ansi.h: No  
such  file or directory
In file included from /usr/include/gcc/darwin/3.3/inttypes.h:35,
 from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/handy.h: 
121,
 from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1805,
 from DBIXS.h:19,
 from Perl.xs:1:
/usr/include/gcc/darwin/3.3/stdint.h:24:27: machine/types.h: No such  
 file or directory
In file included from /usr/include/gcc/darwin/3.3/inttypes.h:35,
 from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/handy.h: 
121,
 from   
/System/Library/Perl/5.8.1/darwin-thread-multi-2level/CORE/perl.h: 
1805,
 from DBIXS.h:19,
 from Perl.xs:1:
/usr/include/gcc/darwin/3.3/stdint.h:34: error: parse error before   
uint8_t
/usr/include/gcc/darwin/3.3/stdint.h:35: error: parse error before   
uint16_t
/usr/include/gcc/darwin/3.3/stdint.h:36: error: parse error before   
uint32_t
/usr/include/gcc/darwin/3.3/stdint.h:37: error: parse error before   
uint64_t
/usr/include/gcc/darwin/3.3/stdint.h:41: error: parse error before   
int_least8_t
/usr/include/gcc/darwin/3.3/stdint.h:42: error: parse error before   
int_least16_t
/usr/include/gcc/darwin/3.3/stdint.h:43: error: parse error before   
int_least32_t

etc etc etc

this just goes on and on...

what am I to do to get DBI installed on my OS10.3??

please help

ta
tait


On 22/01/2004, at 5:42 AM, Moritz von Schweinitz wrote:

i've never used MT, but this kinda sounds as if you don't have the  
DBI installed:

$ perl -MCPAN -eshell
install DBI
cheers,
M.
tait

Re: Startup error on 4.0.15

2004-01-26 Thread Adam Goldstein
Try commenting out the entry in the file /Library/MySQL/var/my.cnf

If it does not exist, copy one of the files from 
/Library/MySQL/dist/my-{small,medium,large}.cnf, renamed, to the above 
location, or to /etc/my.cnf.

Though, you may want to try uninstalling the package and reinstalling 
it again, just to be sure, as 
that 'innodb_buffer_pool_size = 70M' entry seems valid.
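A sketch of what the commented-out entry would look like (hypothetical my.cnf contents; only the innodb line is taken from the error message):

```ini
[mysqld]
# Comment out InnoDB settings if this mysqld build does not recognize
# them (e.g. compiled without InnoDB, or skip-innodb is in effect):
# innodb_buffer_pool_size = 70M
```

A mysqld that does not know a variable refuses to start rather than ignoring it, which is why commenting the line out gets the server up.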

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 9:07 PM, Kev wrote:

I just installed the server logistics package of mysql 4.0.15 and am 
getting
the following error entry in the error log on attempting to start the
server:



040126 20:43:45 mysqld started

/Library/MySQL/libexec/mysqld: ERROR: unknown variable
'innodb_buffer_pool_size = 70M'
040126 20:43:45 mysqld ended



The directory referenced in the error message only contains the mysqld

file:



Kevins-Computer:/Library/MySQL/libexec kevinbarry$ ls mysqld
Kevins-Computer:/Library/MySQL/libexec kevinbarry$


where is the innodb file reference being picked up? What am I missing? 
There is no config file in the error directory; is that the problem?



I upgraded from 3.0.x to 4.0.17 on Linux over the weekend and the 
install
went easier than it has thus far on Mac OS X!!! go figure.







Re: migration to 64 bit - successful!

2004-01-14 Thread Adam Goldstein
What kind of my.cnf file are you using with that setup?

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 8, 2004, at 2:48 PM, [EMAIL PROTECTED] wrote:

We moved our main production server to a dual opteron last night,
running SuSE 9.0 x86_64 (kernel 2.4.21), and the binaries mysql offer
for mysql4 work great.
The only hitches doing the change were between the chair and the desk.
MYD/MYI/frm are all binary compatible, and the server speed is awesome
mostly down to all that extra memory bandwidth over our previous intel
box.
If anyone else is wondering whether linux+mysql is stable enough on
64bit, well, at least for us, it is. (so far - touch wood). Both kernel
and server feel solid. The server handles 2000 questions per second, 
150
mysqld processes, and about 400 tables, from 6gb in size down, in a 
20gb
database. So far it appears to be about 4x faster than then 1.4ghz
pentium IIIs it replaced, but with other advantages as well, not the
least of which is the 16gb of memory the motherboard now has!

-J.





Any word on G5/64bit compiles of mysql?

2003-11-24 Thread Adam Goldstein
Any word on G5/64bit compiles of mysql?

I know I am not the only one wanting to know Does it work, and How to 
compile best for it.

or am I alone in the universe?
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net


G5 64Bit Questions again

2003-11-13 Thread Adam Goldstein
I posted this before (Subject: "64bit G5 Panther compiles") but 
received no reply.

Is there any answer yet on whether it is possible to compile a 
working, 64-bit MySQL on OS X Panther?

Would you use the GCC compiler flags -mpowerpc-gpopt and -mpowerpc64?

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net


64bit G5 Panther compiles

2003-11-10 Thread Adam Goldstein
Is there any definitive answer yet on compiling a working, 64-bit 
MySQL?

The MySQL documentation says that MySQL performs far better on heavy 
queries when built 64-bit. I assume the higher memory limits also help 
if you let MySQL take advantage of them, as I have read in some recent 
mailing list entries. (The G5 limit is not 8GB, by the way, but 16GB 
according to the tech docs... 2GB sticks are just not really 'normal' 
yet.)

I am trying to move a client with a very heavy load off his current 
Linux box (dual Athlon, dual U160) onto a dual 2GHz G5 as the primary 
server (3GB currently, 5GB shortly), keeping the Linux box as a 
backup/slave machine, for offloading mail and other services, and for 
hot rollover. His site currently produces huge amounts of load, mostly 
due to a large number of queries per page and many concurrent users.

Some info I found is below; I was wondering what would be the best 
configuration to compile MySQL with to produce the best results.

http://developer.apple.com/technotes/tn/tn2087.html
...
Take advantage of the full precision hardware square root
The G5 has a full-precision hardware square root implementation. If 
your code executes square root, check for the availability of the 
hardware square root in the G5 and execute code calling the instruction 
directly (e.g. __fsqrt()) instead of calling the sqrt() routine. (Use 
__fsqrts() for single-precision.) You can use the GCC compiler 
flags -mpowerpc-gpopt and -mpowerpc64 to transform sqrt() function calls 
directly into the PPC sqrt instruction.

Align hot code for maximum instruction dispatch

Alignment of code in memory is a more significant factor for 
performance on the G5 than on the G4. The G5 fetches instructions in 
aligned 32 byte blocks. Therefore, it is often profitable to align hot 
loops, branches, or branch targets to fetch boundaries. GCC 3.3 offers 
the following flags to align code: -falign-functions=32, 
-falign-labels=32, -falign-loops=32, -falign-jumps=32. Additionally, you 
may need to specify -falign-[functions, labels, loops, 
jumps]-max-skip=[15,31] to get the desired alignment.

...
I assume that adding -mpowerpc-gpopt and -mpowerpc64 to the GCC flags 
will generally target the code at the G5 and 64-bit... then again, I do 
not know much about compiling beyond using configure and make ;)
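
For what it's worth, a configure-and-make build along those lines might look like the sketch below. The flag set is an assumption pieced together from the technote quoted above, not a tested recipe; verify each flag against your GCC 3.3 install and the MySQL source-install docs before relying on it:

```shell
# Hypothetical: configure MySQL with the G5/64-bit flags from the Apple
# technote. Untested assumption -- adjust flags and prefix to taste.
CFLAGS="-O3 -mpowerpc-gpopt -mpowerpc64 -falign-loops=32 -falign-jumps=32" \
CXXFLAGS="-O3 -mpowerpc-gpopt -mpowerpc64 -falign-loops=32 -falign-jumps=32" \
./configure --prefix=/usr/local/mysql
make
make install
```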

I need it to be the fastest setup, using the least resources, but 
stable... able to handle as many requests/sec as possible. Sort of a 
high-performance build of the Apache2/PHP/MySQL set, with MySQL being 
the most important. ;)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net