Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-28 Thread Adam Goldstein
Raid 5 is just as common as any other raid in software, and on my other 
boxes it does not present any problem at all...  I have seen excellent 
tests with raid5 in software, and many contend that software raid 5 on 
a high-powered system is faster than hardware raid 5 using the same 
disks -- I haven't seen proof of this, however. I have seen the CPUs 
used in many raid5 hardware cards and they are surprisingly slow (avg 
33MHz).

The record sizes for our database are completely random, and therefore 
would likely require a multitude of disk reads, which would then be 
likely to need waits on spindles, etc (I am not aware of anyone syncing 
spindles anymore, or if it would have any effect if we did).

We are almost ready to switch to Gbit enet, however, I am unsure it 
will help either... according to my graphs, internal traffic (to/from 
the mysql/G5 server) is only an average of ~1.3Mbs / 1.0Mbs, with 
peaks to 5.7Mbs/5.0Mbs (I dunno if the below graph will make it through 
the list...). This graph is from the Apache/php server.

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 28, 2004, at 11:33 AM, Brent Baisley wrote:

The split setup may be faster because you don't have contention for 
resources. Depending on how much data is being moved over the network 
connection, making it Gb ethernet may speed things up more.

In a RAID, ideally the strip size would match the record size in your 
database. So one record equals one read. Stripe sizes that are too 
small require multiple reads per record, stripe sizes that are too 
large require extraneous data to be read. Read-ahead often doesn't 
work that well with databases, since the access is totally random -- 
unless you are accessing the database in the same order the records 
were written.
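As a rough sanity check on that stripe-vs-record sizing, MyISAM reports an average row length per table (a sketch only; `enchere` is one of the table names that appears later in this thread):

```sql
-- Avg_row_length (bytes) in the output indicates how many records fit
-- in one stripe; with random access, a stripe much larger than one
-- record mostly reads data that gets thrown away.
SHOW TABLE STATUS LIKE 'enchere';
```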

Did you have a software-based RAID 5 setup on the Linux box? I've never 
heard of implementing RAID 5 in software. I'm not sure what the CPU 
overhead would be on that, especially with 8 disks. So what exactly is 
your current setup (computers, disks, ram, software, database 
locations, etc)?



On Jan 27, 2004, at 10:48 PM, Adam Goldstein wrote:

I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the 
load was even higher... The explanation for this could be that at 
high IO rates the data is not 100% synced across the spindles, and 
therefore smaller files (ie files smaller than the chunk size on each 
physical disk) must wait to be passed under the heads on all the 
disks... While larger chunk sizes may help this, I'm not sure.  A 
large ram buffer and read ahead on a dedicated raid system is more 
likely to work in that case, but, that would require either yet 
another fileserver (fairly expensive), or a hw dedicated Raid server 
(much more expensive), like the Xraid, which did not produce any real 
difference in the mysql bench results previously posted here. In 
fact, going by those simple benchmarks alone, my box already beat the 
Xserve/Xraid system in most of the tests.

Of course, the validity or relevance of those tests to a real-world, 
heavily used server may be in question. :) I also am having trouble 
finding comparable bench data for other good power systems (ie. I would 
like to see how this stacks up against an 8+G dual/quad xeon or 
sparc, etc)

I will ensure his nightly optimize/repair scripts feature the flush.

But, none of this yet explains why testing from the linux box using 
the remote G5/mysql server (over only 100Mbit switch) gives better 
results than testing directly on the server.

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-27 Thread Brent Baisley
I don't think there would be any benefit to using InnoDB, at least not 
from a transaction support view.

After your nightly optimize/repair are you also doing a flush? That may 
help.
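For reference, the nightly pass could end along these lines (a sketch only; the table names are taken from the directory listing elsewhere in this thread, and a full FLUSH TABLES will briefly stall queries while tables reopen):

```sql
-- Rebuild/defragment the busiest MyISAM tables, then close and reopen
-- them so no stale key blocks or file handles linger.
OPTIMIZE TABLE enchere, message, rating;
FLUSH TABLES;
```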

I haven't seen any direct comparisons between HFS+ and file systems 
supported by Linux. I would believe that Linux would be faster since 
Linux tends to be geared towards performance first rather than 
usability. But you shouldn't rely on disk caching only. The disks still 
need to be read in order to fill the cache, so you want to get the best 
disk performance you can. Based on your other email, it looks like you 
are using individual disks for storing your data. While I understand 
what you were trying to do by separating your data onto different 
disks, you would get far better performance by combining your disks in 
a RAID, even a software RAID.
If you are using software based RAID, you would need to choose between 
mirroring or striping. Both will give you better read speeds, mirroring 
will slow down writes. If you are striping, the more drives you use the 
better performance you'll get, although I wouldn't put more than 4 
drives on a single SCSI card.
I think you can use Apple's RAID software for your SCSI disk, but 
SoftRAID (softraid.com) would give you more options. Moving to RAID 
should improve things across the board and will give the best bang for 
your buck (SoftRAID is $129). Personally, I think you should always use 
some form of RAID on all servers.

On Jan 26, 2004, at 5:41 PM, Adam Goldstein wrote:

I have added these settings to my newer my.cnf, including replacing 
the key_buffer=1600M with this 768M... It was a touch late today to 
see if it has a big effect during the heavy load period (~3am to 4pm 
EST, site has mostly european users)

I did not have any of these settings explicitly set in my latest 
my.cnf trials, except key_buffer, and I omitted the innodb ones, as 
we are not (currently) using innodb... would there be any benefit? 
transactions are not a priority, so says my client, so he does not use 
them.

I see the query_cache_size is rather large here, but I am unsure what 
the default size would be. I do not know, yet, how large I can/should 
make either setting, but, it does appear to work without malloc/memory 
errors appearing in the log. Note: while it bitched in the logs about 
the malloc setting, the server did not crash, but, kept running. 
Obviously with an undetermined amount of cache. I cannot seem to find 
any good way to know how much ram (cache/buffer/other) mysql uses, as 
the top output from osx is not very appealing... not that linux top 
tells me much more either.  On average, on the old system (all on one 
box) mysql was said to be using about 350MB avg in top... except after 
the nightly optimize/repair script which left it using 1.2G of ram for 
hours, and making all queries rather slow.
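Rather than reading top, the server's own counters give a usable picture of key-buffer consumption (a sketch; these status variables exist in MySQL 4.0):

```sql
-- Key_blocks_used * 1KB (the MyISAM key block size) = index cache in use.
-- Key_reads / Key_read_requests is the key-cache miss rate; if it is
-- already well under 1%, a larger key_buffer will not buy much.
SHOW STATUS LIKE 'Key%';
SHOW VARIABLES LIKE 'key_buffer_size';
```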

Also- a more G5 specific question: as MySql is supposed to gain much 
from the OS disk caching, how does OSX/HFS+ compare to other *nIX 
filesystems... such as Linux 2.4 w/reiserfs?

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-27 Thread Adam Goldstein
I have managed to get what looks like 2G for the process, but, it does  
not want to do a key_buffer of that size

I gave it a Key_buffer of 768M and a query cache of 1024M, and it seems  
happier.. though, not noticeably faster.

[mysqld]
key_buffer   = 768M
max_allowed_packet = 8M
table_cache   = 512
sort_buffer_size = 2M
read_buffer_size   = 2M
myisam_sort_buffer_size = 512M
thread_cache = 8
thread_concurrency = 8
max_connections = 1000
skip-name-resolve
skip-bdb
skip-innodb
skip-locking
ft_min_word_len= 2
join_buffer_size = 3M
query_cache_size=1024M
bulk_insert_buffer_size=256M
tmp_table_size =128M
sort_buffer =8M
read_rnd_buffer_size=8M
record_buffer=32M
open_files_limit=15000
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
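One caveat on the 1024M query cache: every write to a table invalidates all cached results that use it, so on a write-heavy workload a huge cache can cost more than it saves. Whether it is earning its keep can be checked (a sketch; the Qcache counters are available from 4.0.1 on):

```sql
-- Qcache_hits vs. Qcache_inserts shows how often cached results are
-- actually reused; many Qcache_lowmem_prunes or a low hit rate argue
-- for a much smaller query_cache_size.
SHOW STATUS LIKE 'Qcache%';
```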
Benchmarks are just plain weird. Here is from the linux server to the  
G5:

alter-table: Total time: 11 wallclock secs ( 0.03 usr  0.02 sys +  0.00  
cusr  0.00 csys =  0.05 CPU)
ATIS: Failed  
(output/ATIS-mysql_fast-Linux_2.4.22_21mdkenterprise_i686-cmp-mysql- 
fink-64)
big-tables: Total time: 15 wallclock secs ( 4.31 usr  2.79 sys +  0.00  
cusr  0.00 csys =  7.10 CPU)
connect: Total time: 324 wallclock secs (46.64 usr 30.27 sys +  0.00  
cusr  0.00 csys = 76.91 CPU)
create: Total time: 105 wallclock secs ( 2.04 usr  1.10 sys +  0.00  
cusr  0.00 csys =  3.14 CPU)
insert: Total time: 1237 wallclock secs (295.16 usr 73.22 sys +  0.00  
cusr  0.00 csys = 368.38 CPU)
select: Total time: 134 wallclock secs (32.39 usr  6.77 sys +  0.00  
cusr  0.00 csys = 39.16 CPU)
wisconsin: Failed  
(output/wisconsin-mysql_fast-Linux_2.4.22_21mdkenterprise_i686-cmp- 
mysql-fink-64)

and here is on the G5 locally:

alter-table: Total time: 38 wallclock secs ( 0.07 usr  0.05 sys +  0.00  
cusr  0.00 csys =  0.12 CPU)
ATIS: Total time: 20 wallclock secs ( 7.90 usr  7.77 sys +  0.00 cusr   
0.00 csys = 15.67 CPU)
big-tables: Total time: 27 wallclock secs ( 6.49 usr 16.10 sys +  0.00  
cusr  0.00 csys = 22.59 CPU)
connect: Total time: 167 wallclock secs (39.79 usr 52.78 sys +  0.00  
cusr  0.00 csys = 92.57 CPU)
create: Total time: 106 wallclock secs ( 6.12 usr  2.94 sys +  0.00  
cusr  0.00 csys =  9.06 CPU)
insert: Total time: 1257 wallclock secs (388.48 usr 311.51 sys +  0.00  
cusr  0.00 csys = 699.99 CPU)
select: Total time: 132 wallclock secs (40.22 usr 27.92 sys +  0.00  
cusr  0.00 csys = 68.14 CPU)
wisconsin: Total time:  5 wallclock secs ( 1.89 usr  1.65 sys +  0.00  
cusr  0.00 csys =  3.54 CPU)

Some of the strangeness is due to it being a live server, tested during  
low use hours. How accurate are these benchmarks, and do they 
represent the overall strength of the mysql server 
to handle large loads?  I can't get a good idea a to how many  
queries/sec it should be able to handle, considering I can't tell how  
complex the queries are. All I can say is the site serves 12mil  
pages/month (~100mil hits/mo), 80% concentrated into 13H of the day,  
with perhaps 40% in just 4-5hours... About 1million hits to the  
heaviest sql page/month, broken up into the above portions.
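For an actual queries/sec figure, the server keeps a running counter (a sketch; sampling it twice during peak hours gives a peak rate rather than an all-day average):

```sql
-- Questions / Uptime = average queries per second since startup.
SHOW STATUS LIKE 'Questions';
SHOW STATUS LIKE 'Uptime';
```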

Of course, that is also not including the amount of people potentially  
trying to access the site during this time, which by eyeball estimates  
on the graphs suggest easily 25-150% more, plus the amount more that  
would come if the site could handle them and they were happy. (We all  
know web users are a fickle bunch, and will drop a slow loading site  
like a hot potato.)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 12:05 PM, Brad Eacker wrote:

Adam,
 Off the wall question, but is White Wolf Networks related in
any way to White Wolf Publishing?
You may be hitting an OSX limit. While you can install more than 2GB
on a system, I don't think any one process is allowed to allocate
more than 2GB of RAM to itself. It's not a 64-bit OS yet. You should
be able to search the Apple website for this limit.
 Could you take a look at the ulimit man page to see if it will
allow greater than a signed 32 bit value (2G).  If it does not then
there is still a 32 bit limitation on process size due to this basic
constraint.
 Brad Eacker ([EMAIL PROTECTED])




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-27 Thread Adam Goldstein
I have had linux on soft-raid5 (6x18G, 8x9G, 4x18G) systems, and the 
load was even higher... The explanation for this could be that at high 
IO rates the data is not 100% synced across the spindles, and therefore 
smaller files (ie files smaller than the chunk size on each physical 
disk) must wait to be passed under the heads on all the disks... While 
larger chunk sizes may help this, I'm not sure.  A large ram buffer and 
read ahead on a dedicated raid system is more likely to work in that 
case, but, that would require either yet another fileserver (fairly 
expensive), or a hw dedicated Raid server (much more expensive), like 
the Xraid, which did not produce any real difference in the mysql bench 
results previously posted here. In fact, going by those simple 
benchmarks alone, my box already beat the Xserve/Xraid system in most 
of the tests.

Of course, the validity or relevance of those tests to a real-world, 
heavily used server may be in question. :) I also am having trouble 
finding comparable bench data for other good power systems (ie. I would 
like to see how this stacks up against an 8+G dual/quad xeon or sparc, 
etc)

I will ensure his nightly optimize/repair scripts feature the flush.

But, none of this yet explains why testing from the linux box using the 
remote G5/mysql server (over only 100Mbit switch) gives better results 
than testing directly on the server.

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 27, 2004, at 9:45 AM, Brent Baisley wrote:

I don't think there would be any benefit to using InnoDB, at least not 
from a transaction support view.

After your nightly optimize/repair are you also doing a flush? That 
may help.

I haven't seen any direct comparisons between HFS+ and file systems 
supported by Linux. I would believe that Linux would be faster since 
Linux tends to be geared towards performance first rather than 
usability. But you shouldn't rely on disk caching only. The disks 
still need to be read in order to fill the cache, so you want to get 
the best disk performance you can. Based on your other email, it looks 
like you are using individual disks for storing your data. While I 
understand what you were trying to do by separating your data onto 
different disks, you would get far better performance by combining 
your disks in a RAID, even a software RAID.
If you are using software based RAID, you would need to choose between 
mirroring or striping. Both will give you better read speeds, 
mirroring will slow down writes. If you are striping, the more drives 
you use the better performance you'll get, although I wouldn't put 
more than 4 drives on a single SCSI card.
I think you can use Apple's RAID software for your SCSI disk, but 
SoftRAID (softraid.com) would give you more options. Moving to RAID 
should improve things across the board and will give the best bang for 
your buck (SoftRAID is $129). Personally, I think you should always 
use some form of RAID on all servers.

On Jan 26, 2004, at 5:41 PM, Adam Goldstein wrote:

I have added these settings to my newer my.cnf, including replacing 
the key_buffer=1600M with this 768M... It was a touch late today to 
see if it has a big effect during the heavy load period (~3am to 4pm 
EST, site has mostly european users)

I did not have any of these settings explicitly set in my latest 
my.cnf trials, except key_buffer, and I omitted the innodb ones, as 
we are not (currently) using innodb... would there be any benefit? 
transactions are not a priority, so says my client, so he does not 
use them.

I see the query_cache_size is rather large here, but I am unsure what 
the default size would be. I do not know, yet, how large I can/should 
make either setting, but, it does appear to work without 
malloc/memory errors appearing in the log. Note: while it bitched in 
the logs about the malloc setting, the server did not crash, but, 
kept running. Obviously with an undetermined amount of cache. I 
cannot seem to find any good way to know how much ram 
(cache/buffer/other) mysql uses, as the top output from osx is not 
very appealing... not that linux top tells me much more either.  On 
average, on the old system (all on one box) mysql was said to be 
using about 350MB avg in top... except after the nightly 
optimize/repair script which left it using 1.2G of ram for hours, and 
making all queries rather slow.

Also- a more G5 specific question: as MySql is supposed to gain much 
from the OS disk caching, how does OSX/HFS+ compare to other *nIX 
filesystems... such as Linux 2.4 w/reiserfs?

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Brent Baisley
You may be hitting an OSX limit. While you can install more than 2GB on 
a system, I don't think any one process is allowed to allocate more 
than 2GB of RAM to itself. It's not a 64-bit OS yet. You should be able 
to search the Apple website for this limit.

On Jan 26, 2004, at 6:10 AM, Adam Goldstein wrote:

I cannot seem to allocate any large amounts of memory to Mysql on our 
system...

Can anyone suggest any settings/changes/etc to get this running to the 
best of its ability?

Dual 2Ghz G5, 4G ram, OSX 10.3.2, 73G-10Krpm Sata Raptor drives

Using both the 'Complete' Mysql 4.0.15 and 'Standard' binary package 
4.0.17, I cannot seem to get the daemon to accept using a large 
Key_buffer setting, such as 1900M or 2600M (though 1600M seemed to 
work the first startup).
I get this error:

Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:40:16  mysqld started
*** malloc: vm_allocate(size=2597892096) failed (error code=3)
*** malloc[14345]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:43:00  mysqld started
*** malloc: vm_allocate(size=1984614400) failed (error code=3)
*** malloc[14378]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
This is a dedicated mysql backend server using MyISAM tables 
(currently) and we need it to run a fairly heavy load.

This is only the relevant conf data:

[mysqld]
skip-locking
key_buffer = 1990M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 3M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
thread_concurrency = 8
max_connections = 1200
skip-name-resolve
skip-bdb
skip-innodb
ft_min_word_len = 2
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
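One note on the ft_min_word_len = 2 above: the setting only affects FULLTEXT indexes built after the change, so existing indexes need a rebuild (a sketch; I am guessing axxx_search is the table carrying the FULLTEXT index):

```sql
-- QUICK rebuilds only the index file, leaving the data file untouched.
REPAIR TABLE axxx_search QUICK;
```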
These are the largest tables in the db (other clipped):
Raptor: ../data root# ls -lhS /Library/MySQL/var/db1/
total 9177432
-rw-rw  1 mysql  mysql975M 21 Jan 20:30 axxx_search.MYD
-rw-rw  1 mysql  mysql619M 21 Jan 20:30 axxx_search.MYI
-rw-rw  1 mysql  mysql571M 21 Jan 20:05 
td_visitor_archive.MYD
-rw-rw  1 mysql  mysql492M 21 Jan 20:37 message.MYD
-rw-rw  1 mysql  mysql435M 21 Jan 20:30 
axxx_description.MYD
-rw-rw  1 mysql  mysql412M 21 Jan 20:37 enxxx.MYD
-rw-rw  1 mysql  mysql336M 21 Jan 20:37 enxxx.MYI
-rw-rw  1 mysql  mysql200M 23 Dec 09:05 
axxx_title_images.MYD
-rw-rw  1 mysql  mysql 97M 21 Jan 20:06 rating.MYD
-rw-rw  1 mysql  mysql 81M 21 Jan 20:06 rating.MYI
-rw-rw  1 mysql  mysql 49M 21 Jan 20:24 bxx.MYI
-rw-rw  1 mysql  mysql 28M 21 Jan 20:24 bxx.MYD
...clip...

These are our best benchmarks:
alter-table: Total time:  6 wallclock secs ( 0.03 usr  0.04 sys +  
0.00 cusr  0.00 csys =  0.07 CPU)
ATIS: Total time: 21 wallclock secs (17.20 usr  3.37 sys +  0.00 cusr  
0.00 csys = 20.57 CPU)
big-tables: Total time: 15 wallclock secs ( 4.30 usr  3.60 sys +  0.00 
cusr  0.00 csys =  7.90 CPU)
connect: Total time:  4 wallclock secs ( 0.61 usr  0.29 sys +  0.00 
cusr  0.00 csys =  0.90 CPU)
create: Total time: 98 wallclock secs (11.63 usr  3.02 sys +  0.00 
cusr  0.00 csys = 14.65 CPU)
insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys +  0.00 
cusr  0.00 csys = 391.22 CPU)
select: Total time: 122 wallclock secs (33.21 usr  7.03 sys +  0.00 
cusr  0.00 csys = 40.24 CPU)
wisconsin: Total time:  8 wallclock secs ( 5.00 usr  0.49 sys +  0.00 
cusr  0.00 csys =  5.49 CPU)
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though, I am unsure if there is a 
combination of other memory settings working together to create a >2GB 
situation combined)

Even at 1.6G, which seems to work (though, -not- why we got 4G of 
expensive ram), does anyone have any advice for optimizing the 
settings?  Or are they pretty optimized as it is?  (according to 
benchmarks, anyways)

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 10:13 AM, Brent Baisley wrote:

You may be hitting an OSX limit. While you can install more than 2GB 
on a system, I don't think any one process is allowed to allocate 
more than 2GB of RAM to itself. It's not a 64-bit OS yet. You should 
be able to search the Apple website for this limit.

On Jan 26, 2004, at 6:10 AM, Adam Goldstein wrote:

I cannot seem to allocate any large amounts of memory to Mysql on our 
system...

Can anyone suggest any settings/changes/etc to get this running to 
the best of its ability?

Dual 2Ghz G5, 4G ram, OSX 10.3.2, 73G-10Krpm Sata Raptor drives

Using both the 'Complete' Mysql 4.0.15 and 'Standard' binary package 
4.0.17, I cannot seem to get the daemon to accept using a large 
Key_buffer setting, such as 1900M or 2600M (though 1600M seemed to 
work the first startup).
I get this error:

Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:40:16  mysqld started
*** malloc: vm_allocate(size=2597892096) failed (error code=3)
*** malloc[14345]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
040126 05:43:00  mysqld started
*** malloc: vm_allocate(size=1984614400) failed (error code=3)
*** malloc[14378]: error: Can't allocate region
/usr/local/mysql/bin/mysqld: ready for connections.
Version: '4.0.17-standard-log'  socket: '/tmp/mysql.sock'  port: 3306
This is a dedicated mysql backend server using MyISAM tables 
(currently) and we need it to run a fairly heavy load.

This is only the relevant conf data:

[mysqld]
skip-locking
key_buffer = 1990M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
join_buffer_size = 3M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
thread_concurrency = 8
max_connections = 1200
skip-name-resolve
skip-bdb
skip-innodb
ft_min_word_len = 2
tmpdir  = /tmp/ 
log-bin = /var/log/mysql/raptor-bin.log
These are the largest tables in the db (other clipped):
Raptor: ../data root# ls -lhS /Library/MySQL/var/db1/
total 9177432
-rw-rw  1 mysql  mysql975M 21 Jan 20:30 axxx_search.MYD
-rw-rw  1 mysql  mysql619M 21 Jan 20:30 axxx_search.MYI
-rw-rw  1 mysql  mysql571M 21 Jan 20:05 
td_visitor_archive.MYD
-rw-rw  1 mysql  mysql492M 21 Jan 20:37 message.MYD
-rw-rw  1 mysql  mysql435M 21 Jan 20:30 
axxx_description.MYD
-rw-rw  1 mysql  mysql412M 21 Jan 20:37 enxxx.MYD
-rw-rw  1 mysql  mysql336M 21 Jan 20:37 enxxx.MYI
-rw-rw  1 mysql  mysql200M 23 Dec 09:05 
axxx_title_images.MYD
-rw-rw  1 mysql  mysql 97M 21 Jan 20:06 rating.MYD
-rw-rw  1 mysql  mysql 81M 21 Jan 20:06 rating.MYI
-rw-rw  1 mysql  mysql 49M 21 Jan 20:24 bxx.MYI
-rw-rw  1 mysql  mysql 28M 21 Jan 20:24 bxx.MYD
...clip...

These are our best benchmarks:
alter-table: Total time:  6 wallclock secs ( 0.03 usr  0.04 sys +  
0.00 cusr  0.00 csys =  0.07 CPU)
ATIS: Total time: 21 wallclock secs (17.20 usr  3.37 sys +  0.00 cusr 
 0.00 csys = 20.57 CPU)
big-tables: Total time: 15 wallclock secs ( 4.30 usr  3.60 sys +  
0.00 cusr  0.00 csys =  7.90 CPU)
connect: Total time:  4 wallclock secs ( 0.61 usr  0.29 sys +  0.00 
cusr  0.00 csys =  0.90 CPU)
create: Total time: 98 wallclock secs (11.63 usr  3.02 sys +  0.00 
cusr  0.00 csys = 14.65 CPU)
insert: Total time: 1405 wallclock secs (312.96 usr 78.26 sys +  0.00 
cusr  0.00 csys = 391.22 CPU)
select: Total time: 122 wallclock secs (33.21 usr  7.03 sys +  0.00 
cusr  0.00 csys = 40.24 CPU)
wisconsin: Total time:  8 wallclock secs ( 5.00 usr  0.49 sys +  0.00 
cusr  0.00 csys =  5.49 CPU)
--
Adam Goldstein
White Wolf Networks
http://whitewlf.net



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577




Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Gabriel Ricard
2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 10.3 
increased this to 4GB per-process. I've gotten MySQL running with 3GB 
of RAM on the G5 previously.

This is an excerpt from a prior email to the list from back in October 
when I was first testing MySQL on the G5:

 query_cache_size=1024M
 bulk_insert_buffer_size=256M
 tmp_table_size=128M
 sort_buffer=8M
 read_rnd_buffer_size=8M
 key_buffer=768M
 record_buffer=32M
 myisam_sort_buffer_size=512M
 innodb_buffer_pool_size=1024M
 innodb_additional_mem_pool_size=32M
 However, for some reason, when I swapped the values key_buffer and 
query_cache_size to try and give
 key_buffer 1GB, it failed. I swapped the values back and it worked 
fine... odd.

- Gabriel

On Jan 26, 2004, at 11:16 AM, Brent Baisley wrote:

Yes, MySQL is capable of using more than 2GB, but it still must obey 
the limits of the underlying OS. This means file sizes, memory 
allocation and whatever else. Have you heard of anybody allocating 
more than 2GB using OSX? I've heard of quite a bit more using Linux or 
other Unix flavors, but not OSX.

As for optimizing settings, you need to profile your workload. You may 
actually run into I/O, CPU or Network bottleneck before you hit a 
memory bottleneck. You need to run things and find where the 
bottleneck is to optimize performance.

On Jan 26, 2004, at 11:09 AM, Adam Goldstein wrote:

Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though, I am unsure if there is a 
combination of other memory settings working together to create a 
>2GB situation combined)

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577





Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
Yes, I saw this port before... I am not sure why I cannot allocate more 
ram on this box- It is a clean 10.3 install, with 10.3.2 update.  I got 
this box as I love OSX, and have always loved apple, but, this is not 
working out great. Much less powerful (and less expensive) units can do 
a better job of this (the entire site was run on ONE dual athlon box 
with 3G ram, and it seems to have made -NO- difference moving the mysql 
to the dedicated G5.)

Obviously, there is something wrong somewhere- And, I need to find 
where. My client (site creator) is depending on me to help him boost 
the ability of the site to handle more users, but we've always been 
able to do it on a light budget. I need to know where to look first, as 
we are running out of time... His users are a fickle bunch, and will 
likely migrate off to other sites if this slowness continues (it has 
been degrading for past 3-4 months from slow at peak, to dead for all 
peak hours).

These are example queries from the heavier pages:

1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS 
fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 4204 
LIMIT 0,1
Time: 0.0004551410675 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, e.img3, 
e.prix_depart, e.price_present, e.fnb_buyitnow_price, e.date_debut, 
e.date_debut_origine, e.date_fin, e.fin_bids, e.fvc_active FROM enchere 
e WHERE e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 
14:41:59') ORDER BY date_fin ASC LIMIT 0, 80
Time: 37.60733294 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM enchere e WHERE 
e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 14:41:59')
Time: 0.9267110825 sec / Type: Buffered
4: INSERT INTO td_loadtime SET fvc_page = '/liste.php', fvc_query = 
'language=E&cat=4204&sql_log=Y', fdt_date = '2004-01-26', fdt_time = 
'14:42:38', fnb_seconds = 39.22
Time: 0.005410909653 sec / Type: Buffered

making the page take > 40 seconds to load.

A few minutes later it can take only 1 second to load the same page... 
Strangely enough, even after shutting off apache on the primary server, 
it still took 33sec to execute query #2 above directly on the G5/mysql 
server. A few moments before shutting off apache, my client informed me 
a page just took 220sec to load. The apache/php server had a load > 8 
at that time, and there were numerous apache connections/children 
running at that time.

 Now, a few minutes later:

Page generated in 33.96 seconds.
1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS 
fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 4204 
LIMIT 0,1
Time: 0.0008599758148 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, e.img3, 
e.prix_depart, e.price_present, e.fnb_buyitnow_price, e.date_debut, 
e.date_debut_origine, e.date_fin, e.fin_bids, e.fvc_active FROM enchere 
e WHERE e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 
15:18:47') ORDER BY date_fin ASC LIMIT 0, 80
Time: 1.710601091 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM enchere e WHERE 
e.id_categorie IN (4204) AND (e.date_fin >= '2004-01-26 15:18:47')
Time: 30.83186793 sec / Type: Buffered
4: INSERT INTO td_loadtime SET fvc_page = '/liste.php', fvc_query = 
'language=E&cat=4204&sql_log=Y', fdt_date = '2004-01-26', fdt_time = 
'15:19:21', fnb_seconds = 33.96
Time: 0.01498007774 sec / Type: Buffered

Query 2 is fast, and 3 is slow. Completely reversed.
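EXPLAIN on the slow variant would show whether MySQL is using an index on (id_categorie, date_fin) or falling back to a table scan (a sketch of the query above; the comparison operator looks to have been eaten by the archive, so >= is assumed):

```sql
-- 'type: range' or 'ref' with a non-NULL 'key' means an index is used;
-- 'type: ALL' means every row of enchere is scanned for the COUNT.
EXPLAIN SELECT COUNT(e.id_enchere) AS nbre
FROM enchere e
WHERE e.id_categorie IN (4204)
  AND e.date_fin >= '2004-01-26 15:18:47';
```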

Here is a snapshot of stats (my own script) at the time of the first, 
larger, slowdown (with explanations):

[EMAIL PROTECTED] root]# webstat
 14:54:57 up 22:48,  2 users,  load average: 8.62, 7.39, 6.82
Page Connections: 408    --- avg of several netstat outputs on 
webIP:80 (apache/php)
Image Connections: 538   --- avg of several netstat outputs on 
imageIP:80 (thttpd)
Mysql Connections: 386   --- avg of several netstat outputs on :3306
Mail Connections: 28     --- avg of several netstat outputs on :25
Apache Processes: 402    --- apache child process count
Mysql Processes: 0
php Processes: 2         --- number of currently running cron'd 
php/shell scripts
procs ---memory-- ---swap-- -io --system-- cpu
 r  b   swpd   free   buff  cache   si   so    bi    bo    in    cs  us sy id wa
 1  3  2 18 55   132600   14379  240   211 10  6 84  0
 1  1  2 15 54   133000  4448  2784 1351  2753 31 27 42  0
 3  1  2  9 54   133100  3332  2960 1275  3146 28 29 43  0
 4  2  2  9 53   133200  2564  2892 1722  3213 25 25 50  0

We also have lots of good graph info from SystemStats php/rrd grapher, 
but only from the main apache server. I am adding it to the G5/sql 
server now. The main thing on the graph is the inverse reaction of 
Load, Processes, and Bandwidth usage. The bandwidth drops from 
~2-3Mbits to 50-80% less, then slowly rises again, creating vivid 
dips. It is 

Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Brent Baisley
Have you tried reworking your queries a bit? I try to avoid using IN 
as much as possible. What does EXPLAIN say about how the long queries 
are executed? If I have to match something against a lot of values, I 
select the values into a HEAP table and then do a join, especially if 
you are going to be reusing the values within the current session.
Are you storing images (img1, img2, img3) in the database? I would 
recommend against that in a high-load database: it bloats the database 
size, forcing the database to use a lot more RAM to cache it, and it 
also prevents you from creating a database with fixed-length records. 
Keeping the images as files will push the loading of the images out to 
the file system and web server.
What kind of RAID setup do you have? You just said you had 73GB 10K 
disks. Why didn't you go with 15k disks? Cost?
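A rough sketch of the HEAP-table trick I mean (the temporary table name and the extra category ids are made up):

```sql
-- Load the IN() values once, then join against them instead of
-- repeating a long IN (...) list in every query.
CREATE TEMPORARY TABLE tmp_cats (id INT NOT NULL, PRIMARY KEY (id)) TYPE=HEAP;
INSERT INTO tmp_cats VALUES (4204), (4205), (4206);

SELECT e.id_enchere, e.titre, e.date_fin
FROM tmp_cats c, enchere e
WHERE e.id_categorie = c.id
ORDER BY e.date_fin ASC LIMIT 0, 80;

DROP TABLE tmp_cats;
```

The HEAP table lives in memory and disappears with the session, so it costs almost nothing to build.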

On Jan 26, 2004, at 3:42 PM, Adam Goldstein wrote:

Yes, I saw this post before... I am not sure why I cannot allocate 
more ram on this box- It is a clean 10.3 install, with 10.3.2 update.  
I got this box as I love OSX, and have always loved Apple, but this 
is not working out great. Much less powerful (and less expensive) 
units can do a better job of this (the entire site was run on ONE dual 
athlon box with 3G ram, and it seems to have made -NO- difference 
moving the mysql to the dedicated G5.)

Obviously, there is something wrong somewhere- And, I need to find 
where. My client (site creator) is depending on me to help him boost 
the ability of the site to handle more users, but we've always been 
able to do it on a light budget. I need to know where to look first, 
as we are running out of time... His users are a fickle bunch, and 
will likely migrate off to other sites if this slowness continues (it 
has been degrading for past 3-4 months from slow at peak, to dead for 
all peak hours).

These are example queries from the heavier pages:

1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS 
fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 4204 
LIMIT 0,1
Time: 0.0004551410675 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, e.img3, 
e.prix_depart, e.price_present, e.fnb_buyitnow_price, e.date_debut, 
e.date_debut_origine, e.date_fin, e.fin_bids, e.fvc_active FROM 
enchere e WHERE e.id_categorie IN (4204) AND (e.date_fin > 
'2004-01-26 14:41:59') ORDER BY date_fin ASC LIMIT 0, 80
Time: 37.60733294 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM enchere e WHERE 
e.id_categorie IN (4204) AND (e.date_fin > '2004-01-26 14:41:59')
Time: 0.9267110825 sec / Type: Buffered
4: INSERT INTO td_loadtime SET fvc_page = '/liste.php', fvc_query = 
'language=E&cat=4204&sql_log=Y', fdt_date = '2004-01-26', fdt_time = 
'14:42:38', fnb_seconds = 39.22
Time: 0.005410909653 sec / Type: Buffered

making the page take ~40 seconds to load.
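For reference, the usual fix for a shape like query 2 (filter on one column, sort on another) is a combined index, assuming one does not already exist; the index name here is hypothetical:

```sql
-- Hypothetical composite index: lets MySQL resolve the id_categorie
-- filter and the date_fin sort from a single index.
ALTER TABLE enchere ADD INDEX idx_cat_datefin (id_categorie, date_fin);
```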

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
The primary server (Dual Athlon) has several U160 scsi disks, 10K and 
15K rpm... Approximately half the full size images are on one 73G U160, 
the other half on another (about 120G of large images alone being 
stored... I am trying to get him to abandon/archive old/unused images). 
  The system/logs run on a 36G 10K, Mysql used to run on another 36G 
15k, and /home (with thumbnails and php files) is on another 36G 10K... 
There is also a 250G U133 drive for archives/backups. 
Apache2.0.47/PHP4.3.4

We are going to upgrade the rest of the 10Krpm drives to 15Krpm, but 
that does not (yet) help the G5... it is a full tower unit at the 
moment, though we are now looking at replacing it with a G5 Xserve. The 
desktop unit can only contain 2xSATA drives internally, and we do not 
have an external raid/scsi/FC system to use on it.. yet. My thought 
when setting this up was to use more RAM cache than disk for the DB. 
The entire DB is about 5.5GB total, currently, and resides on its own 
partition on its own disk.

The G5 is using std. HFS+ on all disks, but the Athlon/linux server is 
using reiserfs on most disks.

I will relay the HEAP/EXPLAIN info to my client, as I do not work on 
that portion of the system... He does the code, I keep the systems 
up/running.  We are trying to implement load balancing and, eventually, 
failover redundancy... The initial thought was the G5 and Dual Athlon 
being cooperative/redundant machines, but it is looking like we 
will need several frontends with the G5/D.Athlon as backends...

All of this needs to be done in the tightest budget & shortest time 
possible... we are looking at adding 3-5 1U frontend machines, but only 
if we can make sure the G5/D.Athlon boxes can handle it. Obviously 
there need to be some larger changes, but we want to avoid throwing 
hardware & money at it without reason.

We also have a second 'frontend' machine temporarily being used, a Dual 
PIII/850 w/2G ram and 4xscsi drives. It seems strangely unable to 
handle much user load at all... Initially I tried simple DNS load 
balancing, but that was quickly discarded for subdomain/site-topic 
separation. It can handle only about 20% of the main server's userload, 
it seems. (php files reside local on it, all images are served via main 
server/thttpd, some dynamic includes are done via NFS mount to main 
server).

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 4:39 PM, Brent Baisley wrote:

Have you tried reworking your queries a bit? I try to avoid using IN 
as much as possible. What does EXPLAIN say about how the long queries 
are executed? If I have to match something against a lot of values, I 
select the values into a HEAP table and then do a join, especially if 
you are going to be reusing the values within the current session.
Are you storing images (img1, img2, img3) in the database? I would 
recommend against that in a high-load database: it bloats the 
database size, forcing the database to use a lot more RAM to cache 
it, and it also prevents you from creating a database with fixed-
length records. Keeping the images as files will push the loading of 
the images out to the file system and web server.
What kind of RAID setup do you have? You just said you had 73GB 10K 
disks. Why didn't you go with 15k disks? Cost?

On Jan 26, 2004, at 3:42 PM, Adam Goldstein wrote:

Yes, I saw this post before... I am not sure why I cannot allocate 
more ram on this box- It is a clean 10.3 install, with 10.3.2 update. 
 I got this box as I love OSX, and have always loved Apple, but this 
is not working out great. Much less powerful (and less expensive) 
units can do a better job of this (the entire site was run on ONE 
dual athlon box with 3G ram, and it seems to have made -NO- 
difference moving the mysql to the dedicated G5.)

Obviously, there is something wrong somewhere- And, I need to find 
where. My client (site creator) is depending on me to help him boost 
the ability of the site to handle more users, but we've always been 
able to do it on a light budget. I need to know where to look first, 
as we are running out of time... His users are a fickle bunch, and 
will likely migrate off to other sites if this slowness continues (it 
has been degrading for past 3-4 months from slow at peak, to dead for 
all peak hours).

These are example queries from the heavier pages:

1: SELECT E AS name, id_parent, fvc_parent_path, fvc_parent_path_E AS 
fvc_parent_path_txt, fvc_childs, items FROM categories WHERE id = 
4204 LIMIT 0,1
Time: 0.0004551410675 sec / Type: Buffered
2: SELECT e.id_enchere, e.id_vendeur, e.titre, e.img1, e.img2, 
e.img3, e.prix_depart, e.price_present, e.fnb_buyitnow_price, 
e.date_debut, e.date_debut_origine, e.date_fin, e.fin_bids, 
e.fvc_active FROM enchere e WHERE e.id_categorie IN (4204) AND 
(e.date_fin > '2004-01-26 14:41:59') ORDER BY date_fin ASC LIMIT 0, 
80
Time: 37.60733294 sec / Type: Buffered
3: SELECT COUNT(e.id_enchere) AS nbre FROM 

Re: Memory Problems on G5/OSX/MySql4.0.17

2004-01-26 Thread Adam Goldstein
I have added these settings to my newer my.cnf, including replacing the 
key_buffer=1600M with this 768M... It was a touch late today to see if 
it has a big effect during the heavy load period (~3am to 4pm EST; the 
site has mostly European users).

I did not have any of these settings explicitly set in my latest my.cnf 
trials, except key_buffer, and I omitted the innodb ones, as we are 
not (currently) using innodb... would there be any benefit? 
Transactions are not a priority, so says my client, so he does not use 
them.

I see the query_cache_size is rather large here, but I am unsure what 
the default size would be. I do not know, yet, how large I can/should 
make either setting, but it does appear to work without malloc/memory 
errors appearing in the log. Note: while it bitched in the logs about 
the malloc setting, the server did not crash but kept running, 
obviously with an undetermined amount of cache. I cannot seem to find 
any good way to know how much ram (cache/buffer/other) mysql uses, as 
the top output from osx is not very appealing... not that linux top 
tells me much more either.  On average, on the old system (all on one 
box) mysql was said to be using about 350MB avg in top... except after 
the nightly optimize/repair script, which left it using 1.2G of ram for 
hours and made all queries rather slow.
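One way to see what mysqld itself thinks it has allocated, rather than trusting top (a sketch; the Qcache counters exist as of 4.0.1):

```sql
-- Ask the server for its configured sizes and current cache usage.
SHOW VARIABLES LIKE 'key_buffer_size';
SHOW VARIABLES LIKE 'query_cache_size';
SHOW STATUS LIKE 'Qcache%';         -- query-cache memory and hit counters
SHOW STATUS LIKE 'Key_blocks_used'; -- x 1024 bytes ~= key buffer in use (4.0)
```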

Also- a more G5 specific question: as MySql is supposed to gain much 
from the OS disk caching, how does OSX/HFS+ compare to other *nIX 
filesystems... such as Linux 2.4 w/reiserfs?

--
Adam Goldstein
White Wolf Networks
http://whitewlf.net
On Jan 26, 2004, at 11:49 AM, Gabriel Ricard wrote:

2GB was the per-process memory limit in Mac OS X 10.2 and earlier. 
10.3 increased this to 4GB per-process. I've gotten MySQL running with 
3GB of RAM on the G5 previously.

This is an excerpt from a prior email to the list from back in October 
when I was first testing MySQL on the G5:

 query_cache_size=1024M
 bulk_insert_buffer_size=256M
 tmp_table_size=128M
 sort_buffer=8M
 read_rnd_buffer_size=8M
 key_buffer=768M
 record_buffer=32M
 myisam_sort_buffer_size=512M
 innodb_buffer_pool_size=1024M
 innodb_additional_mem_pool_size=32M
 However, for some reason, when I swapped the values key_buffer and 
query_cache_size to try and give
 key_buffer 1GB, it failed. I swapped the values back and it worked 
fine... odd.
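For what it's worth, my assumption is that it is the sum of the large buffers that has to fit under the per-process cap, not key_buffer alone. An illustration only (not something I have tested on 4.0.17):

```ini
# my.cnf sketch: keep key_buffer + query_cache_size (plus per-thread
# buffers times max connections) comfortably under the process limit.
key_buffer       = 1024M
query_cache_size = 512M
```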

- Gabriel

On Jan 26, 2004, at 11:16 AM, Brent Baisley wrote:

Yes, MySQL is capable of using more than 2GB, but it still must obey 
the limits of the underlying OS. This means file sizes, memory 
allocation and whatever else. Have you heard of anybody allocating 
more than 2GB using OSX? I've heard of quite a bit more using Linux or 
other Unix flavors, but not OSX.

As for optimizing settings, you need to profile your workload. You 
may actually run into an I/O, CPU or network bottleneck before you hit 
a memory bottleneck. You need to run things and find where the 
bottleneck is to optimize performance.

On Jan 26, 2004, at 11:09 AM, Adam Goldstein wrote:

Others on this list have claimed to be able to set over 3G, and my 
failure is with even less than 2G (though I am unsure if there is a 
combination of other memory settings working together to create a 
>2GB situation combined)

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


