Re: Way too slow Load Data Infile

2006-07-29 Thread C.R.Vegelin

Hi Mike,

Try the following:

ALTER TABLE tblname DISABLE KEYS;
LOAD DATA INFILE ...
ALTER TABLE tblname ENABLE KEYS;
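-- note: DISABLE KEYS only suspends the non-unique indexes (unique and primary keys stay active);
-- ENABLE KEYS then rebuilds the suspended indexes in one pass after the load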

hth, Cor

- Original Message - 
From: mos [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Saturday, July 29, 2006 4:50 AM
Subject: Re: Way too slow Load Data Infile


I ran a file monitor and it appears MySQL has been updating the table's
index for the past several hours as part of the Load Data Infile process.
Is there any way to speed this up? I'm using MyISAM tables in v4.1.10. Here
are the settings I'm using. Is there anything in there that will speed up
the re-indexing?


TIA
Mike

+-+-+
| Variable_name   | Value |
+-+-+
| back_log| 50 |
| basedir | u:\mysql\ |
| bdb_cache_size  | 8388600 |
| bdb_home| |
| bdb_log_buffer_size | 0 |
| bdb_logdir  | |
| bdb_max_lock| 1 |
| bdb_shared_data | OFF |
| bdb_tmpdir  | |
| binlog_cache_size   | 32768 |
| bulk_insert_buffer_size | 33554432 |
| character_set_client| latin1 |
| character_set_connection| latin1 |
| character_set_database  | latin1 |
| character_set_results   | latin1 |
| character_set_server| latin1 |
| character_set_system| utf8 |
| character_sets_dir  | u:\mysql\share\charsets/ |
| collation_connection| latin1_swedish_ci |
| collation_database  | latin1_swedish_ci |
| collation_server| latin1_swedish_ci |
| concurrent_insert   | ON |
| connect_timeout | 5 |
| datadir | u:\mysql_data\ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | OFF |
| delayed_insert_limit| 100 |
| delayed_insert_timeout  | 300 |
| delayed_queue_size  | 1000 |
| expire_logs_days| 0 |
| flush   | OFF |
| flush_time  | 1800 |
| ft_boolean_syntax   | + -()~*:| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit| 20 |
| ft_stopword_file| (built-in) |
| group_concat_max_len| 1024 |
| have_archive| NO |
| have_bdb| DISABLED |
| have_compress   | YES |
| have_crypt  | NO |
| have_csv| NO |
| have_example_engine | NO |
| have_geometry   | YES |
| have_innodb | DISABLED |
| have_isam   | NO |
| have_ndbcluster | NO |
| have_openssl| NO |
| have_query_cache| YES |
| have_raid   | NO |
| have_rtree_keys | YES |
| have_symlink| YES |
| init_connect| |
| init_file   | |
| init_slave  | |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb   | 0 |
| innodb_buffer_pool_size | 8388608 |
| innodb_data_file_path   | |
| innodb_data_home_dir| |
| innodb_fast_shutdown| ON |
| innodb_file_io_threads  | 4 |
| innodb_file_per_table   | OFF |
| innodb_flush_log_at_trx_commit  | 1 |
| innodb_flush_method | |
| innodb_force_recovery   | 0 |
| innodb_lock_wait_timeout| 50 |
| innodb_locks_unsafe_for_binlog  | OFF |
| innodb_log_arch_dir | |
| innodb_log_archive  | OFF |
| innodb_log_buffer_size  | 1048576 |
| innodb_log_file_size| 5242880 |
| innodb_log_files_in_group   | 2 |
| innodb_log_group_home_dir   | |
| innodb_max_dirty_pages_pct  | 90 |
| innodb_max_purge_lag| 0 |
| innodb_mirrored_log_groups  | 1 |
| innodb_open_files   | 300 |
| innodb_table_locks  | ON |
| innodb_thread_concurrency   | 8 |
| interactive_timeout | 28800 |
| join_buffer_size| 33550336 |
| key_buffer_size | 67108864 |
| key_cache_age_threshold | 300 |
| key_cache_block_size| 1024 |
| key_cache_division_limit| 100 |
| language| u:\mysql\share\english\ |
| large_files_support | ON |
| license | GPL |
| local_infile

concurrency problem on mysql5?

2006-07-29 Thread Christian Mittendorf

Dear list,

we are running a Mysql 5.0.22 server that is accessed by a number of  
java web applications (jdbc driver 3.1) over the network. The  
database layer was realized using the Cayenne framework and InnoDB  
was chosen as the table type. All applications share the same  
database because they use the same type of objects. These  
applications were running fine for some time now, but during the last  
weeks a problem came up that occurs randomly from time to time:  
duplicate key exceptions.


The Cayenne framework uses a single table to keep the primary keys (PK)
for its objects. The method accessing this table is surrounded by lock /
unlock commands, so no other thread can read the same PK from the table
while another thread is reading and incrementing it. Each thread accessing
the table increments the value by 20, caching PK values to reduce the load
on the database.
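
In MySQL terms the sequence is roughly the following (a minimal sketch;
Cayenne's actual table and column names may differ):

LOCK TABLES AUTO_PK_SUPPORT WRITE;
SELECT NEXT_ID FROM AUTO_PK_SUPPORT WHERE TABLE_NAME = 'SomeEntity';
UPDATE AUTO_PK_SUPPORT SET NEXT_ID = NEXT_ID + 20 WHERE TABLE_NAME = 'SomeEntity';
UNLOCK TABLES;
-- if the connection is dropped between LOCK and UNLOCK, MySQL releases the
-- lock implicitly, and a reconnected session continues without holding it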


What we now experience in all of our applications is duplicate key error
messages from the database. By analyzing the log files we found out that
different applications were using the same set of PK values for the objects
they were creating, something that should not be possible because of the
lock / unlock logic described above. When different threads use the same set
of PK values, only one thread can win the insert; the other will of course
receive a duplicate key error from the database. In those cases the log file
of an application shows 20 exceptions, due to the caching mechanism.


However, this error does not happen all the time; 99% of the time the
applications run as expected without errors, and we can see no pattern to
which we can relate the occurrence of the problem.


Other developers of Cayenne have done some load and concurrency tests  
after I reported this problem to the mailing list. But nobody was  
able to reproduce the described problem.


In the beginning some of our apps were using autoReconnect=true as an old
relic, and we thought that might be the cause of our problem. Suppose the
connection to the db host is lost for some reason inside the lock / unlock
block: the lock on the table is gone, but the connection is re-established
by the jdbc driver. The Cayenne framework won't notice the lost connection
and will continue to read the PK values. Without the lock, another thread
can easily read the PK table, and that would explain our problem. Therefore
we removed all autoReconnect statements so that the jdbc driver would fall
back to its default autoReconnect=false and let Cayenne handle the
reconnect itself. But in the end this change didn't solve the problem;
the error messages still occur.


Does anybody have an idea how to solve this problem? In what  
direction could we search next? Is there a switch to let mysql log a  
lost db connection? Any hints are appreciated!


Thanks!

Christian



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Who's locking ?

2006-07-29 Thread Marco Simon
Hello list,

I have a question about understanding table locks on MyISAM tables:

From time to time my processlist gets flooded by queries that are waiting
for a locked table. Quickly there are 100-500 waiting queries in the queue,
some of them waiting for more than 1000 seconds.

My question is: how can I see who (which query) set the lock that all the
other queries are waiting to have released? I thought there should be at
least one UPDATE query in the processlist which set the lock and which is
still working, but there are only locked queries for the locked table - no
updates, no deletes.

Could somebody explain how I can investigate who's blocking the line?







protecting sensitive info in data directory

2006-07-29 Thread Les Schaffer
i am working on a data collection system for an epidemiological study.
the study holds, among other things, sensitive personal information. the
personal info is stored in ONE table (i am using innodb_file_per_table)
which we encrypt and wipe after the server is shut down.

the question is: where else could this data sit on disk? since i am new
to MySQL, i am trying to get clear on the log files, what they contain,
and which ones hold data. i gather that if we want to be able to perform
rollbacks, we need to keep the log files around, and since they store SQL
transactions, we would have to encrypt (and wipe) them as well after the
data collection session is over and the server is shut down.

right now i see ib_logfile{0,1} files and an ibdata1 file in addition to
our *.frm and *.ibd tables. need we encrypt the *.frm files as well?
which of the ibXXX files need we encrypt? are there any other MySQL
files ANYWHERE that i should be aware of? does MySQL ever write anything
to the Windows temp directories? (Windows-only app)
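
a quick way to see where the server itself may write data is to check the
relevant variables from the SQL prompt (a minimal sketch; which logs exist
depends on how the server is configured):

SHOW VARIABLES LIKE 'datadir';
SHOW VARIABLES LIKE 'tmpdir';   -- on-disk temporary tables and sort files can land here
SHOW VARIABLES LIKE 'log%';     -- general query log, binary log, slow log settings, if enabled

if the general query log or binary log is enabled, the statements (and thus
the sensitive values) appear in those files in plain text, so they would
belong on the encrypt-and-wipe list as well.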

yes, i am aware of MySQL encryption on a per-field basis. we were asked
to encrypt the whole table.

many thanks

les schaffer



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Way too slow Load Data Infile

2006-07-29 Thread mos

At 03:56 AM 7/29/2006, C.R.Vegelin wrote:

Hi Mike,

Try the following:

ALTER TABLE tblname DISABLE KEYS;
LOAD DATA INFILE ...
ALTER TABLE tblname ENABLE KEYS;

hth, Cor



Hi,
I had tried that on another (larger) table last year, and it does of course 
load the data slightly faster because the non-unique keys are disabled (the 
unique keys & primary key are still enabled), but when it tries to rebuild 
the index with ALTER TABLE ... ENABLE KEYS, the same thing happens. It runs 
for days trying to rebuild the index. It looks like rebuilding the index is 
disk bound, even though there is 400MB of RAM still free and available for 
use. I could go out and buy more RAM, but I doubt that with my current MySQL 
settings it's going to do any good. So if there is a way to improve the index 
building by adjusting my MySQL settings, I'd certainly like to hear from 
you. If I don't get this solved soon, I may have to abandon MySQL and use 
another database. :(
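
The MyISAM settings that usually decide whether ENABLE KEYS can rebuild the
indexes by sorting (fast) rather than through the key cache (very slow) are
key_buffer_size, myisam_sort_buffer_size and myisam_max_sort_file_size. A
minimal sketch of raising them - the numbers are examples only and have to
fit in the available RAM and free disk space:

SHOW VARIABLES LIKE 'myisam%';                              -- check the current values first
SET GLOBAL myisam_sort_buffer_size   = 256*1024*1024;       -- sort buffer used when rebuilding indexes
SET GLOBAL myisam_max_sort_file_size = 20*1024*1024*1024;   -- allow large temporary sort files
SET GLOBAL key_buffer_size           = 256*1024*1024;       -- MyISAM index cache

If SET GLOBAL is refused on 4.1, the same names can go into the [mysqld]
section of my.ini and take effect after a restart.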


Mike



- Original Message - From: mos [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Saturday, July 29, 2006 4:50 AM
Subject: Re: Way too slow Load Data Infile


I ran a file monitor and it appears MySQL has been updating the table's 
index for the past several hours as part of the Load Data Infile process. 
Is there any way to speed this up? I'm using MyISAM tables in v4.1.10. 
Here are the settings I'm using. Is there anything in there that will 
speed up the re-indexing?


TIA
Mike

+-+-+
| Variable_name   | Value |
+-+-+
| back_log| 50 |
| basedir | u:\mysql\ |
| bdb_cache_size  | 8388600 |
| bdb_home| |
| bdb_log_buffer_size | 0 |
| bdb_logdir  | |
| bdb_max_lock| 1 |
| bdb_shared_data | OFF |
| bdb_tmpdir  | |
| binlog_cache_size   | 32768 |
| bulk_insert_buffer_size | 33554432 |
| character_set_client| latin1 |
| character_set_connection| latin1 |
| character_set_database  | latin1 |
| character_set_results   | latin1 |
| character_set_server| latin1 |
| character_set_system| utf8 |
| character_sets_dir  | u:\mysql\share\charsets/ |
| collation_connection| latin1_swedish_ci |
| collation_database  | latin1_swedish_ci |
| collation_server| latin1_swedish_ci |
| concurrent_insert   | ON |
| connect_timeout | 5 |
| datadir | u:\mysql_data\ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | OFF |
| delayed_insert_limit| 100 |
| delayed_insert_timeout  | 300 |
| delayed_queue_size  | 1000 |
| expire_logs_days| 0 |
| flush   | OFF |
| flush_time  | 1800 |
| ft_boolean_syntax   | + -()~*:| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit| 20 |
| ft_stopword_file| (built-in) |
| group_concat_max_len| 1024 |
| have_archive| NO |
| have_bdb| DISABLED |
| have_compress   | YES |
| have_crypt  | NO |
| have_csv| NO |
| have_example_engine | NO |
| have_geometry   | YES |
| have_innodb | DISABLED |
| have_isam   | NO |
| have_ndbcluster | NO |
| have_openssl| NO |
| have_query_cache| YES |
| have_raid   | NO |
| have_rtree_keys | YES |
| have_symlink| YES |
| init_connect| |
| init_file   | |
| init_slave  | |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb   | 0 |
| innodb_buffer_pool_size | 8388608 |
| innodb_data_file_path   | |
| innodb_data_home_dir| |
| innodb_fast_shutdown| ON |
| innodb_file_io_threads  | 4 |
| innodb_file_per_table   | OFF |
| innodb_flush_log_at_trx_commit  | 1 |
| innodb_flush_method | |
| innodb_force_recovery   | 0 |
| innodb_lock_wait_timeout| 50 |
| innodb_locks_unsafe_for_binlog  | OFF |
| innodb_log_arch_dir | |
| innodb_log_archive  | OFF |
| innodb_log_buffer_size  | 1048576 |
| innodb_log_file_size| 5242880 |
| 

Re: Can Innodb reuse the deleted rows disk space?

2006-07-29 Thread Jochem van Dieten

On 7/28/06, Dan Nelson wrote:

In the last episode (Jul 28), leo huang said:


So, the deleted rows' disk space in the tablespace can't be re-used when I
use InnoDB, can it? And the tablespace keeps growing when we update the
tables, even though the number of rows does not increase.


It can be re-used after the transaction has been committed


After all transactions that were started before the deleting transaction
committed have themselves either been committed or rolled back.
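
As a practical footnote (a minimal sketch, with a made-up table name): space
freed inside the shared ibdata tablespace is only reused internally and the
file itself never shrinks; with innodb_file_per_table enabled, rebuilding
the table is the usual way to hand space back to the filesystem:

OPTIMIZE TABLE mytable;   -- for InnoDB this is mapped to a table rebuild (recreate + analyze)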

Jochem

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Why release 5.0.23 instead of 5.0.24?

2006-07-29 Thread Abdullah Ibn Hamad Al-Marri
Hello,
 
Why release MySQL 5.0.23 when it has a serious bug?
 
I thought you guys would release 5.0.24 instead.
 
Security fix: If a user has access to MyISAM table t, that user can create a 
MERGE table m that accesses t. However, if the user's privileges on t are 
subsequently revoked, the user can continue to access t by doing so through m. 
If this behavior is undesirable, you can start the server with the new 
--skip-merge option to disable the MERGE storage engine. (Bug#15195)
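
To illustrate the behaviour that entry describes (a hypothetical sketch, with
made-up table names): a MERGE table is only a wrapper over MyISAM tables, and
access through the wrapper is not re-checked against the underlying table
once privileges on it are revoked:

CREATE TABLE t (id INT) ENGINE=MyISAM;
CREATE TABLE m (id INT) ENGINE=MERGE UNION=(t) INSERT_METHOD=LAST;
-- even after REVOKE SELECT ON db.t FROM 'someuser'@'%', that user can still
-- read the rows of t through m, unless the server runs with --skip-merge
SELECT * FROM m;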
 
 MySQL 5.0.23 contained a fix for Bug#10952 that has been reverted in 5.0.24 
because it introduced the risk of unintended data loss.
 
Thank you,
 
-Abdullah

Re: Why release 5.0.23 instead of 5.0.24?

2006-07-29 Thread Jim Winstead
On Sat, Jul 29, 2006 at 08:35:41AM -0700, Abdullah Ibn Hamad Al-Marri wrote:
 Why release MySQL 5.0.23 when it has a serious bug?
 
 I thought you guys would release 5.0.24 instead.

This is still the case. 5.0.23 will not be released, and 5.0.24 will be
released as soon as it is ready.

(I see that some 5.0.23 binaries are listed on the 5.0 download page --
this is a mistake.)

Jim Winstead
MySQL Inc.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why release 5.0.23 instead of 5.0.24?

2006-07-29 Thread Abdullah Ibn Hamad Al-Marri
That's why I wondered.
 
So when will we get MySQL 5.0.25 for FreeBSD?
 
-A
On Sat, Jul 29, 2006 at 08:35:41AM -0700, Abdullah Ibn Hamad Al-Marri wrote:
 Why release MySQL 5.0.23 when it has a serious bug?
 
 I thought you guys would release 5.0.24 instead.

This is still the case. 5.0.23 will not be released, and 5.0.24 will be
released as soon as it is ready.

(I see that some 5.0.23 binaries are listed on the 5.0 download page --
this is a mistake.)

Jim Winstead
MySQL Inc.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Re: Who's locking ?

2006-07-29 Thread Eric Bergen

Marco,
  It is also possible to set table locks by hand with a LOCK TABLES
statement. Those connections will simply show up as sleeping or doing
something else; there currently isn't a SHOW LOCKS command in MySQL.
Depending on what the process list looks like, tracking it down could be
as simple as looking for the sleeping connection that has been asleep for
about as long as the longest locked SELECT has been waiting. If it locks
tables and then goes on to do something else, it may be more difficult.
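
A rough starting point from the SQL prompt (a sketch only; neither statement
names the locking thread directly):

SHOW FULL PROCESSLIST;  -- look for a long-running or long-sleeping thread whose Time
                        -- roughly matches the oldest query stuck in the Locked state
SHOW OPEN TABLES;       -- the In_use column counts the table locks or lock requests
                        -- currently held or pending on each open table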


On 7/29/06, Marco Simon [EMAIL PROTECTED] wrote:

Hello list,

I have a question about understanding table locks on MyISAM tables:

From time to time my processlist gets flooded by queries that are waiting
for a locked table. Quickly there are 100-500 waiting queries in the queue,
some of them waiting for more than 1000 seconds.

My question is: how can I see who (which query) set the lock that all the
other queries are waiting to have released? I thought there should be at
least one UPDATE query in the processlist which set the lock and which is
still working, but there are only locked queries for the locked table - no
updates, no deletes.

Could somebody explain how I can investigate who's blocking the line?









--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Looking for how to test database performance on different hardware

2006-07-29 Thread Eric Bergen

Vahric,
 There are so many variations between hardware platforms that can affect
performance for your specific application that it's better to run benchmarks
on the actual hardware if you have it. If not, then Peter Zaitsev's
performance blog is the next best resource:
http://www.mysqlperformanceblog.com/category/hardware/
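
If the boxes are available but a full application benchmark is not yet set
up, the built-in BENCHMARK() function at least gives a crude, CPU-only data
point (a sketch; it says nothing about disk, memory or concurrency behaviour):

SELECT BENCHMARK(5000000, MD5('hardware comparison test'));
-- run the same statement on each machine and compare the elapsed times
-- reported by the mysql client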

On 7/27/06, Vahric MUHTARYAN [EMAIL PROTECTED] wrote:

Hello,

I want to test MySQL on dual Intel Xeon (2MB cache) and dual AMD Opteron
platforms to find out which one is better than the other. I found some
material, but I want to ask the list: does anybody have experience with
this kind of test?

Regards

Vahric MUHTARYAN
Sistem Mühendisi/System Engineer
DorukNet
Tel / Phone : +90 212 326 92 00
Fax : +90 212 227 28 11
E-mail : [EMAIL PROTECTED]
http://www.doruk.net.tr
http://www.doruk.net.tr/english.html







--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com


Re: Can Innodb reuse the deleted rows disk space?

2006-07-29 Thread Dan Nelson
In the last episode (Jul 29), Jochem van Dieten said:
 On 7/28/06, Dan Nelson wrote:
 In the last episode (Jul 28), leo huang said:
 So, the deleted rows' disk space in the tablespace can't be re-used when I
 use InnoDB, can it? And the tablespace keeps growing when we update
 the tables, even though the number of rows does not increase.
 
 It can be re-used after the transaction has been committed
 
 After all transactions that were started before the deleting transaction
 committed have themselves either been committed or rolled back.

Ouch.

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



MySQL 5.0.22 install problem with NetBSD

2006-07-29 Thread Clay R White
From: [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Subject: MySQL 5.0.22 install problems with NetBSD

Description:
Started the install (new install). Reached the step:
scripts/mysql_install_db --user=mysql.
Received the following errors:
scripts/mysql_install_db: line 86: ./bin/my_print_defaults: cannot execute
binary file
Neither host 'roswell.lazyrockinw.com' nor 'localhost' could be looked up
with
./bin/resolveip
Please configure the 'hostname' command to return a correct hostname.
If you want to solve this at a later stage, restart this script with
the --force option

I then tried to run ./bin/resolveip localhost and received:

-bash: ./bin/resolveip: cannot execute binary file

file is currently set as: -rwxr-xr-x  1 root  wheel  1144116 May 26 13:24
bin/resolveip

The distribution was mysql-standard-5.0.22-freebsd6.0-i386.tar.gz, although
I've also tried the OpenBSD distribution. I followed the instructions in the
INSTALL_BINARY file; both distributions failed with identical errors on the
same script.

How-To-Repeat:

 Repeated whenever I attempted to run scripts/mysql_install_db

Fix:
I did notice in the support pages that this was a known bug back in one of
the 4.x distributions, but I could not find a fix, patch or workaround.

Submitter-Id:  
Originator:Clay White
Organization:
 CJ Enterprises
Bertram, TX 78605
MySQL support: none
Synopsis:  Unable to install MySQL on NetBSD due to unknown
installation script errors
Severity:  serious
Priority:  medium 
Release:   mysql-5.0.22-standard (MySQL Community Edition - Standard
(GPL))

C compiler:gcc (GCC) 3.4.4 [FreeBSD] 20050518
C++ compiler:  gcc (GCC) 3.4.4 [FreeBSD] 20050518
Environment:
machine, os, target, libraries (multiple lines)
System: NetBSD roswell.lazyrockinw.com 3.99.20 NetBSD 3.99.20 (piggy) #2: Mon Jun 12 22:59:37 CDT 2006 [EMAIL PROTECTED]:/stuff/src_current/sys/arch/i386/compile/obj/piggy i386


Some paths:  /usr/pkg/bin/perl /usr/bin/make /usr/pkg/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Using built-in specs.
Configured with: /home/nick/work/netbsd/src/tools/gcc/../../gnu/dist/gcc/configure --enable-long-long --disable-multilib --enable-threads --disable-symvers --build=i386-unknown-netbsdelf2.0. --host=i386--netbsdelf --target=i386--netbsdelf
Thread model: posix
gcc version 3.3.3 (NetBSD nb3 20040520)
Compilation info: CC='ccache gcc'  CFLAGS=''  CXX='ccache gcc'  CXXFLAGS=''  LDFLAGS=''  ASFLAGS=''
LIBC:
lrwxr-xr-x  1 root  wheel  16 Jun 13 01:15 /lib/libc.so -> libc.so.12.128.2
lrwxr-xr-x  1 root  wheel  16 Jun 13 01:15 /lib/libc.so.12 -> libc.so.12.128.2
-r--r--r--  1 root  wheel  881693 Jun 13 01:15 /lib/libc.so.12.128.2
lrwxr-xr-x  1 root  wheel  16 Jun 13 01:15 /lib/libc.so -> libc.so.12.128.2
lrwxr-xr-x  1 root  wheel  16 Jun 13 01:15 /lib/libc.so.12 -> libc.so.12.128.2
-r--r--r--  1 root  wheel  881693 Jun 13 01:15 /lib/libc.so.12.128.2
-r--r--r--  1 root  wheel  1702186 Jun 13 01:15 /usr/lib/libc.a
lrwxr-xr-x  1 root  wheel  21 Jun 13 01:15 /usr/lib/libc.so -> /lib/libc.so.12.128.2
lrwxr-xr-x  1 root  wheel  21 Jun 13 01:15 /usr/lib/libc.so.12 -> /lib/libc.so.12.128.2
lrwxr-xr-x  1 root  wheel  21 Jun 13 01:15 /usr/lib/libc.so.12.128.2 -> /lib/libc.so.12.128.2

Configure command: ./configure '--prefix=/usr/local/mysql' '--localstatedir=/usr/local/mysql/data' '--libexecdir=/usr/local/mysql/bin' '--with-comment=MySQL Community Edition - Standard (GPL)' '--with-server-suffix=-standard' '--enable-thread-safe-client' '--enable-local-infile' '--enable-assembler' '--with-pic' '--disable-shared' '--with-zlib-dir=bundled' '--with-big-tables' '--with-yassl' '--with-readline' '--with-archive-storage-engine' '--with-innodb' '--with-extra-charsets=complex' 'CC=ccache gcc' 'CXX=ccache gcc'