Re: Anyone using LVM for backing up?

2009-06-22 Thread David Sparks
Little, Timothy wrote:
 We have a 20 gig db (that includes the MYIs and MYDs and FRMs).
 
 We are wondering how long LVM snapshots take.. in that how long might
 the DB be read-locked?  Do we have to read-lock it and flush tables?

Take a look at mylvmbackup, which takes care of flushing tables, creating and
destroying the snapshot, etc.:

http://www.lenzg.net/mylvmbackup/

Expect a serious performance hit while the lvm snapshot is active.
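
For reference, here's a dry-run sketch of roughly what mylvmbackup automates.
The volume group, LV, and mount point names below are made up; the script only
echoes the commands it would run:

```shell
#!/bin/sh
# Dry-run sketch of the snapshot-backup steps mylvmbackup automates.
# Hypothetical names: volume group "vg0", LV "mysql", mount /mnt/snap.
run() { echo "$@"; }   # swap "echo" for real execution at your own risk

run mysql -e "FLUSH TABLES WITH READ LOCK"    # quiesce and read-lock tables
run lvcreate --snapshot --size 1G --name mysql_snap /dev/vg0/mysql
run mysql -e "UNLOCK TABLES"                  # lock held only for seconds
run mount /dev/vg0/mysql_snap /mnt/snap
run rsync -a /mnt/snap/ /backup/mysql/        # copy at leisure from the snapshot
run umount /mnt/snap
run lvremove -f /dev/vg0/mysql_snap           # snapshot CoW overhead ends here
```

The read lock only needs to span the lvcreate call, which is why snapshot
backups read-lock the DB for seconds rather than for the length of the copy.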

ds


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Oracle , what else ?

2009-04-24 Thread David Sparks
Joshua D. Drake wrote:
 I would expect that MySQL in two years likely won't exist except on the
 most tertiary level. Most new projects will be developed in either
 PostgreSQL, Interbase or one of the forks (MariaDB, Drizzle).
 
 Sincerely,
 
 Joshua D. Drake
 
 --
 PostgreSQL - XMPP: jdr...@jabber.postgresql.org

Your FUD would be better posted on a Postgres list, alongside all the ongoing
discussions about how MySQL doesn't support foreign keys, transactions, etc.

Begone Postgres troll!

ds




Re: Oracle , what else ?

2009-04-24 Thread David Sparks
Glyn Astill wrote:
 Begone Postgres troll!
 
 Oh the hostility of a scorned mysql user. Joshua has posted no more FUD
 than you mysql chaps have done yourselves over the past few days. You were
 worried about the future and he's posted a few ideas of how you can
 prepare.

No he didn't.  He posted doom and gloom:

It will be a supported but second class citizen from Oracle.

Oracle is not interested in the 1000/yr business. For the most
part that is where MySQL revenue is.

maintain it long enough to allow MySQL to kill itself.

I would expect that MySQL in two years likely won't exist except on the
most tertiary level.

One more time: begone Postgres troll!


Switching gears ...

All said, I'm cautiously optimistic that Oracle taking over the reins of MySQL
will benefit everyone.  MySQL is the long-running leader in the open source
database space, and with the DB smarts of Oracle behind it I expect to see the
gap between MySQL and the other open source DB servers widen, not close up.

MySQL is getting better at a pace that is making the other open source DB
servers irrelevant.

ds




Re: MySQL runs on 16-cores server

2009-04-13 Thread David Sparks
 Right now if you want a more scalable *current* version of
 MySQL, you need to look to the Google patches, the Percona builds (and
 Percona XtraDB, a fork of InnoDB), or OurDelta builds.

Is there a webpage somewhere that compares and contrasts the above patchsets?

I thought the Google patches were mostly in the OurDelta patchset?

ds




Partitioning suggestion/workaround needed

2009-04-09 Thread David Sparks
Hi all,

I'm just creating my first partitioned table and have run into a bit of a
snag.  The table primary key is a double and I want to create partitions based
on ranges of the key.

I have read the Partition Limitations section in the docs, which states that
the partition key must be, or resolve to, an integer.  I can't figure out how
to cast/convert the double value into an integer that is usable by the
partition calculation.

This is basically what I'm trying to do:

create table t1 (
    id double not null,
    data char(32) not null,
    primary key (id)
)
engine=innodb
partition by range(cast(id as int)) (
    partition p1 VALUES LESS THAN (1239257000),
    partition p2 VALUES LESS THAN (1239258000),
    partition p3 VALUES LESS THAN MAXVALUE
);


What workarounds are there to partition a table keyed on a double?

Note: the double value is a Perl HiRes time.
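
One workaround I'm considering (untested, and the column names are hypothetical):
store the Time::HiRes value as integer microseconds in a BIGINT key, which RANGE
partitioning accepts directly.  The conversion is trivial:

```shell
#!/bin/bash
# Hypothetical workaround sketch: convert a Perl Time::HiRes double
# (seconds.microseconds) to integer microseconds for a BIGINT key.
hires=1239257123.456789     # example timestamp, made up for illustration
usec=$(awk -v t="$hires" 'BEGIN { printf "%.0f", t * 1000000 }')
echo "id_usec = $usec"
# The table could then be partitioned on the integer column, e.g.
#   partition by range (id) (partition p1 VALUES LESS THAN (1239257000000000), ...)
```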

Thanks!




Re: Upgraded to 5.0.x from 4.1.x and ORDER BY sucks!

2007-05-22 Thread David Sparks
Gmail User wrote:
 I had perfectly working complex queries both with LEFT JOIN and without
 and they were returning results in under a second. After upgrade to
 5.0.x, the same queries would return results in 20-30 second range.

I had a similar problem once (not related to 4.x-5.x though); it turned
out that after some maintenance MySQL had lost the statistics for the table and
was doing a terrible job of optimizing queries.  A quick ANALYZE of all
the tables in the db fixed the problem.

mysqlcheck -h$host -u$user -p$pass --analyze $dbname

ds



 Through trial and error, I discovered that in case of SELECT ... FROM
 table1, table2 ... ORDER BY table2.column will be very slow (10s of
 seconds), while the same query ordered by table1.column will be in under
 1 second. If I switch the order of tables in FROM, the same will hold
 true in reverse order. Is this a bug or I missed something in my
 googling? More importantly, what can I do change this--I need those
 sorts! :-(
 
 I got same results with 5.0.22, 5.0.27 (Linux).
 
 
 TIA,
 
 Ed
 
 





mysqlmanager can't tell diff between clean shutdown and crash?

2007-01-30 Thread David Sparks
I shut down a database using:

mysqladmin -uroot -pxxx shutdown

and the db shutdown as expected.  But then it restarted!  My only guess
is that mysqlmanager can't tell the difference between a clean shutdown
and a crash.

Maybe this is expected?  But then what good is the shutdown command
available via mysqladmin?  My guess is that this was never thought about
until now.

Anyone else getting frustrated with mysqlmanager?

ds




mysqlmanager safe?

2007-01-24 Thread David Sparks
I've switched over to the mysqlmanager startup system instead of the old
mysqld_safe because that's the only supported method in MySQL 5.

I needed to restart a DB so I did a `/etc/init.d/mysqlmanager restart`
which seemed to work, but there were some problems:

- the daemon was no longer accepting connections
- ps showed 2 copies of mysqld running

I also noticed a lot of errors like this in mysqld.err:

070124 15:27:02 [ERROR] /usr/sbin/mysqld: Incorrect information in file:
'./databasename/table.frm'

One of the daemon processes would not respond to kill so eventually I
fired up gdb and killed it (it was stuck on a futex operation).  I then
restarted mysql and it went through a huge crash recovery.

A co-worker tells me that mysqlmanager has been known to do this for
ages.  Is this true?  The DB in question is all InnoDB, approx 150GB in
12 tables.  mysql version 5.0.30.

Thanks!




Re: Fwd: innodb_log_files_in_group

2006-12-28 Thread David Sparks

Hi Juan,

The default (and recommended) is 2.  The log files save the transactions into
files in circular order.  These files are like redo log files in Oracle.  The
log files are useful when you recover your database after a crash, for
example, or when you use MySQL replication.

For innodb_buffer_pool_size you don't have any restrictions (normally 80% of
physical mem).

Now, for innodb_log_file_size (25% of buffer_pool_size), the combined size
of the InnoDB log files must not exceed 4 GB.  If you have 2 log files, you
can make each at most 2000 MB in size.

When you start up MySQL using InnoDB, in your my.cnf you can set the
variable innodb_log_files_in_group.  For example, if you set
innodb_log_files_in_group=2 (ib_log_file001, ib_log_file002 are created)
the total size must not exceed 4G.


Thanks, I think I recall reading that text before.  It doesn't really
answer my question though.  Why is this option there?  Is it really of
no benefit?  Why was time spent coding and debugging the feature to have
more than 2 logfiles if there is no benefit in doing it?

If this option is useless, why wasn't it removed in MySQL 5.x?
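
For what it's worth, the 4 GB ceiling quoted above is easy to sanity-check with
a little arithmetic (2000 MB per file is taken from the text):

```shell
#!/bin/bash
# Check the combined InnoDB log size against the 4 GB ceiling.
innodb_log_file_size=$((2000 * 1024 * 1024))   # 2000 MB each, per the text
innodb_log_files_in_group=2
combined=$((innodb_log_file_size * innodb_log_files_in_group))
limit=$((4 * 1024 * 1024 * 1024))              # 4 GB
if [ "$combined" -lt "$limit" ]; then
    echo "OK: $combined bytes combined, under the 4 GB limit"
else
    echo "Too big: shrink innodb_log_file_size"
fi
```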

Cheers,

ds


Is there any benefit/reason to set innodb_log_files_in_group to
something other than 2?







innodb_log_files_in_group

2006-12-27 Thread David Sparks
Is there any benefit/reason to set innodb_log_files_in_group to 
something other than 2?


Thanks,

ds




Re: interesting benchmark at tweakers.net

2006-12-19 Thread David Sparks
Jochem van Dieten wrote:
 On 12/19/06, David Sparks wrote:
 I noticed an interesting benchmark at tweakers.net that shows mysql not
 scaling very well on hyperthreading and multicore cpus (see links at end
 of email).

 Does anyone know what engine they are using for their tests? (Innodb,
 myisam, berkdb heheh)
 
 InnoDB, the first installment of the series of tests had the following
 configuration: http://tweakers.net/reviews/620/2

I don't see where they say what engine they use; I just see that they
slightly tuned a few InnoDB parameters.  They also tuned the MyISAM
parameters, so the configuration section doesn't really answer that question.


 And I would venture that these results are not because they did
 horrible things to their MySQL configuration. On the second
 installment of the series http://tweakers.net/reviews/633/7 engineers
 from Sun were brought in and they consulted with engineers from MySQL
 and on the last installment Peter Zaitsev of the MySQL Performance
 Blog did a review of their configuration:
 http://tweakers.net/reviews/660/6

So it's confirmed that MySQL has serious problems scaling on concurrent
hardware (hyperthreading, multicore, and multiple CPUs).

This sucks ... our newest DB server is 2x dual-core.

ds




interesting benchmark at tweakers.net

2006-12-18 Thread David Sparks
I noticed an interesting benchmark at tweakers.net that shows mysql not
scaling very well on hyperthreading and multicore cpus (see links at end
of email).

Does anyone know what engine they are using for their tests? (Innodb,
myisam, berkdb heheh)

In fact they seem to show that Postgres is a faster DB overall.  This
goes against my personal experience: I benchmarked a real-world app
we have and found MySQL 10 to 100 times faster, and that was ignoring
both Postgres's poor connection performance and the hideous vacuum
rigmarole.  But that was 2 years ago; maybe Postgres performance has
finally caught up?

Any other recent benchmark links?

http://tweakers.net/reviews/657

http://tweakers.net/reviews/646/10

ds




Re: mysqldump slows to crawl

2006-12-11 Thread David Sparks
Mathieu Bruneau wrote:
 I never experienced a dump that was slow due to the indexes.  The indexes
 aren't dumped anyway; they will be recreated when you import the data back,
 so it shouldn't matter.  (And dropping them will cause problems if the db is
 running.)  So I wouldn't drop the indexes on your table if I were you...

Good point.


 You're getting quite a compression ratio: 2.7G = 270 Megs

Oops, I wasn't clear: I killed the dump when it was < 10% done.  It never
would've finished.


 , is it possible
 that your dump is CPU bound ? I have seen this quite often when using
 bzip2 for example which makes the dump takes very long! You can see that
 from top when the dump is running. If that's the case you could try gzip
 which takes much less cpu (but will give a bigger dump size)

I am using gzip ... the cpu utilization is at 0%.  The dump runs on a
different server than the DB.


 Also about using the mysqldump 5.0 on a mysql 4.1 server... hmmm not
 sure about which side effect that may have! I usually use the version
 that comes with the server...

I guess I could copy the binary and libs to another server to test this.
 However strace suggests that mysqldump is waiting for the server to
send data (it's reading the socket).

I just checked my latest dump attempt and it has now spent 128077
seconds trying to dump the 29GB table and making almost no progress (1
row every 30 seconds as estimated by strace).  I guess the MVCC
implementation is pushed to its limits because I can see other queries
not finishing in a timely manner. :(

Anyone have any other ideas?

ds




Re: Issues with MySQL x86_64 crashing

2006-12-07 Thread David Sparks
Kevin Old wrote:
 Hello everyone,
 
 We have a 4 CPU master server running the 5.0.27 RPM x86_64 version of
 MySQL with a mix of InnoDB and MyISAM tables.
 
 We normally run at 1500 queries/per second and lately, the server will
 all of a sudden lock up and we are forced to restart mysql.

That isn't a crash.

Can you still connect to the db?  If so, the output of 'show full
processlist\G' etc. would be useful.

If it is truly locked up then running mytop in a background shell
somewhere might be a good idea.  You would know what queries were
running when it locked up.

ds




Re: alter table ... import tablespace NFG?

2006-12-07 Thread David Sparks
David Sparks wrote:
 I want to move 3 100GB .ibd files into a new DB.
 
 I followed the instructions here:
 
 http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html
 
 But it doesn't work:
 
 mysql> alter table reports discard tablespace;
 Query OK, 0 rows affected (0.04 sec)
 
 mysql> alter table reports import tablespace;
 ERROR 1030 (HY000): Got error -1 from storage engine

Oh no!

http://bugs.mysql.com/bug.php?id=5904

[5 Oct 2004 20:04] John David Duncan

Description:
Allow ALTER TABLE t IMPORT TABLESPACE to import a tablespace
created on some other MySQL server.  This requires changing the
tablespace number (unless it is available...) and resetting the
transaction identifiers in the imported .ibd file.
---%---

Argh!  Note that "some other MySQL server" above also applies to the same
hardware with just a different mysql instance, i.e. if you're trying to
restore from backup.  IMPORT TABLESPACE has little use without being
able to move the files between server instances. :(

ds




alter table ... import tablespace NFG?

2006-12-06 Thread David Sparks
I want to move 3 100GB .ibd files into a new DB.

I followed the instructions here:

http://dev.mysql.com/doc/refman/5.0/en/multiple-tablespaces.html

But it doesn't work:

mysql> alter table reports discard tablespace;
Query OK, 0 rows affected (0.04 sec)

mysql> alter table reports import tablespace;
ERROR 1030 (HY000): Got error -1 from storage engine


mysqld.err says:

061206 14:57:35  InnoDB: Error: tablespace id in file
'./money/reports.ibd' is 88, but in the InnoDB
InnoDB: data dictionary it is 34.
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: commands DISCARD TABLESPACE and IMPORT TABLESPACE?
InnoDB: Please refer to
InnoDB:
http://dev.mysql.com/doc/mysql/en/InnoDB_troubleshooting_datadict.html
InnoDB: for how to resolve the issue.
061206 14:57:35  InnoDB: cannot find or open in the database directory
the .ibd file of
InnoDB: table `money/reports`
InnoDB: in ALTER TABLE ... IMPORT TABLESPACE


How to fix this?

ds




speed up index creation on existing tables?

2006-10-05 Thread David Sparks
I have a table with ~100,000,000 rows.  I recently discovered that I
need to start using one of the non-indexed columns in WHERE clauses.  As
expected the performance is horrid.  I decided to bite the bullet and
create an index (innodb):

mysql> show full processlist\G
*** 1. row ***
 Id: 109496
   User: root
   Host: localhost
 db: orson
Command: Query
   Time: 161079
  State: copy to tmp table
   Info: create index ix_card on game (ender)


It's already been running 2 days.  I probably need to index some more
columns in another table -- is there anything that can be done to speed
this up?

Dump and re-import is impractical.

Server is decent -- 4xcpu, 16GB RAM...

Thanks,

ds




Re: daemon crash when shutting down large databases

2005-09-30 Thread David Sparks
Gleb Paharenko wrote:

Hello.

  = 77591546 K


Really - something is wrong with your memory settings - MySQL is using
about 77G of memory


Unfortunately getting the daemon to not go above the theoretical limit
has tanked performance.  In reality I never see the daemon go above 45%
RAM usage when using the settings that can theoretically go to 77GB RAM
usage.

What if I added an 80GB swap file?  Would this not make sure there is
available RAM if the daemon really needs it, and eliminate all memory
exhaustion cases from the crash?

Cheers,

ds


 (or you have such a cool server :)! Please send the 
output of 'SHOW VARIABLES' statement, 'SHOW STATUS' statement and your 
configuration file. Include the amount of physical memory.



David Sparks wrote:
  

mysql usually crashes when being shut down.  The machine is a dual AMD64 
w/ 8GB RAM running mysql-4.1.14 on Gentoo linux with a ~40GB database.  I 
had similar crashes running 4.0.24 on an x86 running a ~275GB database.

I always use `mysqladmin shutdown` rather than the init scripts to 
shutdown the daemon.

Are there any known problems with shutting down large databases?

Thanks,

ds

050923 10:41:58  InnoDB: Starting shutdown...
050923 10:44:00  InnoDB: Assertion failure in thread 1174235488 in file 
os0sync.c line 634
InnoDB: Failing assertion: 0 == pthread_mutex_destroy(fast_mutex)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong
and this may fail.

key_buffer_size=2147483648
read_buffer_size=33550336
max_used_connections=217
max_connections=768
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 77591546 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.






  






config diff: query time went from 70 mins to 20 seconds

2005-09-30 Thread David Sparks
Here is a config diff that made MySQL usable again.  As the database 
grew in size, buffer sizes in the config were increased to try to boost 
MySQL performance.

Unfortunately it didn't work as expected.  As the config was tweaked, 
MySQL slowed down even more.  Removing all settings from the my.cnf 
restored performance.

So what was the setting below that was tanking MySQL performance?  I 
suspect that innodb_log_buffer_size=32M was the culprit.



 #skip-innodb
 key_buffer = 2048M
 max_allowed_packet = 1M
-table_cache= 1536
-sort_buffer_size   = 256M
-net_buffer_length  = 64K
-read_buffer_size   = 256M
-read_rnd_buffer_size   = 256M
-myisam_sort_buffer_size= 256M
+#table_cache   = 1536
+#sort_buffer_size  = 256M
+#net_buffer_length = 64K
+#read_buffer_size  = 256M
+#read_rnd_buffer_size  = 256M
+#myisam_sort_buffer_size   = 256M
 language   = /usr/share/mysql/english
 myisam_data_pointer_size   = 6

 # daves: this may not quite work ...
-join_buffer_size   = 256M
+#join_buffer_size  = 256M
 long_query_time= 10
 log-long-format
 log-slow-queries
@@ -117,9 +117,9 @@
 # the rest of the innodb config follows:
 # don't eat too much memory, we're trying to be safe on 64Mb boxes.
 # you might want to bump this up a bit on boxes with more RAM
-innodb_buffer_pool_size= 2048M
+innodb_buffer_pool_size= 6144M
 # this is the default, increase if you have lots of tables
-innodb_additional_mem_pool_size= 32M
+innodb_additional_mem_pool_size= 4M
 #
 # i'd like to use /var/lib/mysql/innodb, but that is seen as a 
database :-(
 # and upstream wants things to be under /var/lib/mysql/, so that's the 
route

@@ -134,7 +134,7 @@
 # sensible values range from 1MB to 
(1/innodb_log_files_in_group*innodb_buffer_pool_size)

 innodb_log_file_size   = 1G
 # this is the default, increase if you have very large transactions.
-innodb_log_buffer_size = 32M
+innodb_log_buffer_size = 1M
 # this is the default, and won't hurt you.
 # you shouldn't need to tweak it.
 set-variable   = innodb_log_files_in_group=2
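
For what it's worth, a back-of-envelope reading of the removed settings (hedged;
the log buffer I suspect above may well have contributed too): sort, read,
read_rnd, and join buffers are allocated per session, so at 256M each the old
config invited enormous per-connection allocations:

```shell
#!/bin/bash
# Per-connection memory implied by the removed 256M buffer settings above.
mb=$((1024 * 1024))
per_conn=$(( (256 + 256 + 256 + 256) * mb ))   # sort + read + read_rnd + join
echo "$((per_conn / mb)) MB per connection"    # prints "1024 MB per connection"
# A few dozen connections of that dwarfs the 2048M key_buffer, which is
# another plausible reason removing these settings restored performance.
```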




Re: daemon crash when shutting down large databases

2005-09-28 Thread David Sparks

Gleb Paharenko wrote:

Hello.

  = 77591546 K


Really - something is wrong with your memory settings - MySQL is using
about 77G of memory (or you have such a cool server :)! Please send the 
output of 'SHOW VARIABLES' statement, 'SHOW STATUS' statement and your 
configuration file. Include the amount of physical memory.


Thanks for pointing that out ... the machine only has 8GB ram.  I 
massaged the config values to get under the 8GB limit but still suffer 
the same problem:


mysqld got signal 11;
...
key_buffer_size=1073741824
read_buffer_size=4190208
max_used_connections=257
max_connections=256
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 4193278 K

bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
--%


The server is dual AMD64 w/ 8GB RAM w/ 500GB disk.  It has 4 databases, 
1 mostly innodb and 3 myisam.  The innodb and one of the myisam dbs see 
100x the usage of the other 2 dbs.  If anyone spots any problems in the 
my.cnf I'd love to hear about it. :)




Hopefully this isn't line-wrapped into oblivion:
%--
[client]
#password   = your_password
port= 3306
socket  = 
/var/run/mysqld/mysqld.sock


[mysql]
character-sets-dir=latin1
default-character-set=latin1

[mysqladmin]
character-sets-dir=latin1
default-character-set=latin1

[mysqlcheck]
character-sets-dir=latin1
default-character-set=latin1

[mysqldump]
character-sets-dir=latin1
default-character-set=latin1

[mysqlimport]
character-sets-dir=latin1
default-character-set=latin1

[mysqlshow]
character-sets-dir=latin1
default-character-set=latin1

[myisamchk]
character-sets-dir=latin1

[myisampack]
character-sets-dir=latin1

# use [safe_mysqld] with mysql-3
[mysqld_safe]
err-log = /var/log/mysql/mysql.err

# add a section [mysqld-4.1] or [mysqld-5.0] for specific configurations.
[mysqld]
max_connections = 256
character-set-server= latin1
default-character-set   = latin1
user= mysql
port= 3306
socket  = 
/var/run/mysqld/mysqld.sock

pid-file= /var/run/mysqld/mysqld.pid
log-error   = /var/log/mysql/mysqld.err
basedir = /usr
datadir = /var/lib/mysql
#skip-locking
#skip-innodb
key_buffer  = 1024M
max_allowed_packet  = 1M
table_cache = 1556
sort_buffer_size= 8M
net_buffer_length   = 64K
read_buffer_size= 4M
read_rnd_buffer_size= 1M
myisam_sort_buffer_size = 8M
language= /usr/share/mysql/english
myisam_data_pointer_size= 6

# daves: this may not quite work ...
join_buffer_size= 8M
long_query_time = 10
log-long-format
log-slow-queries
max_binlog_size = 10G
binlog_cache_size   = 1M
expire_logs_days= 1
tmp_table_size  = 64M

query_cache_limit   = 1M
query_cache_size= 64M
query_cache_type= 1


#security:
#using localhost in connects use sockets by default
#skip-networking
#bind-address   = 127.0.0.1

log-bin
server-id   = 1

# Point the following paths to different dedicated disks
tmpdir  = /tmp/
#log-update = 
/path-to-dedicated-directory/hostname


# you need debug use flag enabled to use this ones.
# if needed uncomment them, start the server and issue
# #tail -f /tmp/mysqld.sql /tmp/mysqld.trace
# this will show you *exactly* what's appening in your server ;)

#log= /tmp/mysqld.sql
#gdb
#debug  = d:t:i:o,/tmp/mysqld.trace
#one-thread

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 4M
#bdb_max_lock   = 1

# The following is the InnoDB configuration
# if you wish to disable innodb instead
# uncomment just the next line
#skip-innodb
#
# the rest of the innodb config follows:
# don't eat too much memory, we're trying to be safe on 64Mb boxes.
# you might want to bump this up a bit on boxes with more RAM
innodb_buffer_pool_size = 1024M
# this is the default, increase if you have lots 

Re: daemon crash when shutting down large databases

2005-09-28 Thread David Sparks
I forgot to include the output of show variables and show status in the 
last message :(



mysql> show variables\G
*** 1. row ***
Variable_name: back_log
Value: 50
*** 2. row ***
Variable_name: basedir
Value: /usr/
*** 3. row ***
Variable_name: binlog_cache_size
Value: 1048576
*** 4. row ***
Variable_name: bulk_insert_buffer_size
Value: 8388608
*** 5. row ***
Variable_name: character_set_client
Value: latin1
*** 6. row ***
Variable_name: character_set_connection
Value: latin1
*** 7. row ***
Variable_name: character_set_database
Value: latin1
*** 8. row ***
Variable_name: character_set_results
Value: latin1
*** 9. row ***
Variable_name: character_set_server
Value: latin1
*** 10. row ***
Variable_name: character_set_system
Value: utf8
*** 11. row ***
Variable_name: character_sets_dir
Value: /usr/share/mysql/charsets/
*** 12. row ***
Variable_name: collation_connection
Value: latin1_swedish_ci
*** 13. row ***
Variable_name: collation_database
Value: latin1_swedish_ci
*** 14. row ***
Variable_name: collation_server
Value: latin1_swedish_ci
*** 15. row ***
Variable_name: concurrent_insert
Value: ON
*** 16. row ***
Variable_name: connect_timeout
Value: 5
*** 17. row ***
Variable_name: datadir
Value: /var/lib/mysql/
*** 18. row ***
Variable_name: date_format
Value: %Y-%m-%d
*** 19. row ***
Variable_name: datetime_format
Value: %Y-%m-%d %H:%i:%s
*** 20. row ***
Variable_name: default_week_format
Value: 0
*** 21. row ***
Variable_name: delay_key_write
Value: ON
*** 22. row ***
Variable_name: delayed_insert_limit
Value: 100
*** 23. row ***
Variable_name: delayed_insert_timeout
Value: 300
*** 24. row ***
Variable_name: delayed_queue_size
Value: 1000
*** 25. row ***
Variable_name: expire_logs_days
Value: 1
*** 26. row ***
Variable_name: flush
Value: OFF
*** 27. row ***
Variable_name: flush_time
Value: 0
*** 28. row ***
Variable_name: ft_boolean_syntax
Value: + -><()~*:""&|
*** 29. row ***
Variable_name: ft_max_word_len
Value: 84
*** 30. row ***
Variable_name: ft_min_word_len
Value: 4
*** 31. row ***
Variable_name: ft_query_expansion_limit
Value: 20
*** 32. row ***
Variable_name: ft_stopword_file
Value: (built-in)
*** 33. row ***
Variable_name: group_concat_max_len
Value: 1024
*** 34. row ***
Variable_name: have_archive
Value: NO
*** 35. row ***
Variable_name: have_bdb
Value: NO
*** 36. row ***
Variable_name: have_blackhole_engine
Value: NO
*** 37. row ***
Variable_name: have_compress
Value: YES
*** 38. row ***
Variable_name: have_crypt
Value: YES
*** 39. row ***
Variable_name: have_csv
Value: NO
*** 40. row ***
Variable_name: have_example_engine
Value: NO
*** 41. row ***
Variable_name: have_geometry
Value: NO
*** 42. row 

daemon crash when shutting down large databases

2005-09-23 Thread David Sparks
mysql usually crashes when being shut down.  The machine is a dual AMD64 
w/ 8GB RAM running mysql-4.1.14 on Gentoo linux with a ~40GB database.  I 
had similar crashes running 4.0.24 on an x86 running a ~275GB database.


I always use `mysqladmin shutdown` rather than the init scripts to 
shutdown the daemon.


Are there any known problems with shutting down large databases?

Thanks,

ds

050923 10:41:58  InnoDB: Starting shutdown...
050923 10:44:00  InnoDB: Assertion failure in thread 1174235488 in file 
os0sync.c line 634

InnoDB: Failing assertion: 0 == pthread_mutex_destroy(fast_mutex)
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong

and this may fail.

key_buffer_size=2147483648
read_buffer_size=33550336
max_used_connections=217
max_connections=768
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 77591546 K

bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
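
The report's formula is easy to reproduce.  sort_buffer_size isn't shown in the
excerpt, so the value below (~64M) is an assumption chosen to make the
arithmetic come out to the logged figure:

```shell
#!/bin/bash
# Reproduce mysqld's worst-case memory estimate from the crash report.
key_buffer_size=2147483648
read_buffer_size=33550336
sort_buffer_size=67108856      # not in the log excerpt; assumed ~64M
max_connections=768
total=$((key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections))
echo "$((total / 1024)) K"     # prints "77591546 K" -- far beyond 8GB of RAM
```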





Re: explain not explaining long running query?

2005-08-08 Thread David Sparks
Hi all!

Gleb Paharenko wrote:
 Hello.
 
 
 
I have a query that is taking days to complete (not good).  If I change
 
 
 Really, not good. What does SHOW PROCESSLIST report about the thread of 
 this query?

The query has been running for ~5 days now:


 Id: 27977
   User: root
   Host: localhost
 db: fractyl
Command: Query
   Time: 421540
  State: Sending data
   Info: select count(*) from msgs where message_id > 112000 and
message_id < 112111


I also tried using the BETWEEN syntax but it has exactly the same problem.

Any other ideas?

ds




explain not explaining long running query?

2005-08-04 Thread David Sparks
I have a query that is taking days to complete (not good).  If I change
the query so that it selects fewer rows, it runs fast.

I ran an EXPLAIN on both queries and it didn't give any hints as to why
the one query is taking days to run.  In fact EXPLAIN estimates how many
rows each query will examine.

Please help explain this behavior to me.

Thanks,

ds



The output of running the queries:

mysql> select count(*) from msgs where message_id > 112000 and
message_id < 112001;
+--+
| count(*) |
+--+
|6 |
+--+
1 row in set (0.00 sec)


mysql> select count(*) from msgs where message_id > 112000 and
message_id < 112111;
(running for 2 days now)
-%-

The output of explain on both queries:

mysql> explain select count(*) from msgs where message_id > 112000
and message_id < 112111\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: msgs
 type: range
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 8
  ref: NULL
 rows: 580
Extra: Using where; Using index
1 row in set (0.00 sec)

mysql> explain select count(*) from msgs where message_id > 112000
and message_id < 112001\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: msgs
 type: range
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 8
  ref: NULL
 rows: 5
Extra: Using where; Using index
1 row in set (0.00 sec)
-%


The table description:

mysql> describe messages\G
*** 1. row ***
  Field: message_id
   Type: double(15,5) unsigned
   Null:
Key: PRI
Default: 0.00000
  Extra:
*** 2. row ***
  Field: abc1
   Type: int(10) unsigned
   Null:
Key:
Default: 0
  Extra:
*** 3. row ***
  Field: r_datetime
   Type: datetime
   Null: YES
Key:
Default: 0000-00-00 00:00:00
  Extra:
*** 4. row ***
  Field: abc2
   Type: int(10) unsigned
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 5. row ***
  Field: abc3
   Type: int(10) unsigned
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 6. row ***
  Field: abc4
   Type: varchar(255)
   Null: YES
Key:
Default:
  Extra:
*** 7. row ***
  Field: abc5
   Type: float
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 8. row ***
  Field: abc6
   Type: int(10) unsigned
   Null:
Key: MUL
Default: 0
  Extra:
*** 9. row ***
  Field: abc7
   Type: int(10) unsigned
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 10. row ***
  Field: abc8
   Type: int(10) unsigned
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 11. row ***
  Field: abc9
   Type: int(10) unsigned
   Null: YES
Key: MUL
Default: 0
  Extra:
*** 12. row ***
  Field: abc10
   Type: int(10) unsigned
   Null:
Key:
Default: 0
  Extra:
*** 13. row ***
  Field: abc11
   Type: int(10) unsigned
   Null:
Key:
Default: 0
  Extra:
13 rows in set (0.00 sec)




mysql-test-run --external?

2005-06-21 Thread David Sparks
According to the README, mysql-test-run supports an --external option:

db1 mysql-test # grep -a1 external README

If you want to run the test with a running MySQL server use the --external
option to mysql-test-run.


However it doesn't actually support it:

db1 mysql-test # ./mysql-test-run --external
Unrecognized option: --external


How do I get the test suite to run with an existing DB?

Cheers,

ds




Re: mysql-test-run --external?

2005-06-21 Thread David Sparks
Petr,

Thanks for the reply!

 I think you are looking for --extern option of the test suite. I
 corrected the README file. The changes should be propagated to the
 public repository soon, but you could check the commit mail for more
 details right now:
 http://lists.mysql.com/internals/26266

Ahah, this explains it.  We had tried the --extern option without any
args but it didn't work due to the fallback behavior.  A useful
enhancement would be to send a warning to STDERR saying that the
--extern option is being disregarded.

The example shows how to run 2 tests.  I thought there were (a lot) more
tests available?  Why isn't there a simple way of running all available
tests on an installed and running mysql server?

And when I try it this happens:

db1 mysql-test # ./mysql-test-run --extern alias analyze
Starting Tests

TESTRESULT
---
alias  [ fail ]

Errors are (from /usr/share/mysql/mysql-test/var/log/mysqltest-time) :
./mysql-test-run: line 1637: /usr/share/mysql/bin/mysqltest: No such
file or directory
-%-


A couple of fixes need to be made to this script similar to the patch
below which does:

a) doesn't try to use a full path to mysqltest (as it has already been
installed)
b) fixes a case sensitivity problem with $MYSQL_TEST_ARGS (there seems
to be several more places in the script where it needs patching)

Cheers,

ds

--- mysql-test-run.orig 2005-06-21 16:58:18.419746418 -0700
+++ mysql-test-run  2005-06-21 17:06:46.286679482 -0700
@@ -1634,9 +1634,9 @@
 $RM -f r/$tname.*reject
 mysql_test_args=-R $result_file $EXTRA_MYSQL_TEST_OPT
 if [ -z $DO_CLIENT_GDB ] ; then
-  `$MYSQL_TEST  $mysql_test_args < $tf 2> $TIMEFILE`;
+  `mysqltest  $MYSQL_TEST_ARGS < $tf 2> $TIMEFILE`;
 else
-  do_gdb_test $mysql_test_args $tf
+  do_gdb_test $MYSQL_TEST_ARGS $tf
 fi

 res=$?





myisam insta corruption in 4.1.12

2005-06-21 Thread David Sparks
db1 corruption # cat > my.sql
DROP TABLE IF EXISTS service_contacts;
CREATE TABLE service_contacts (
  croeated datetime NOT NULL default '0000-00-00 00:00:00'
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
INSERT INTO service_contacts VALUES ('2006-06-14 10:27:40');

db1 corruption # mysqladmin -u root -p create test1
Enter password:

db1 corruption # mysql -u root -p test1  my.sql
Enter password:

db1 corruption # /etc/init.d/mysql stop
 * Stopping mysqld (/etc/mysql/my.cnf) ...
 [ ok ]

db1 corruption # myisamchk /var/lib/mysql/test1/*MYI
Checking MyISAM file: /var/lib/mysql/test1/service_contacts.MYI
Data records:   1   Deleted blocks:   0
- check file-size
myisamchk: warning: Datafile is almost full,  9 of  7 used
- check record delete-chain
- check key delete-chain
- check index reference
MyISAM-table '/var/lib/mysql/test1/service_contacts.MYI' is usable but
should be fixed
--%--

1 row in a 1 column table and it is already corrupt.  Yikes!  The
problem is related to this my.cnf setting:

myisam_data_pointer_size = 8

I seem to be having this problem on x86 (Gentoo + SuSE) and x86_64 (Gentoo
+ Debian), and also with the MySQL binary build (static glibc 2.2).

Does anyone understand what is going on under the hood here?
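For reference, the pointer-size arithmetic: myisam_data_pointer_size sets how many bytes MyISAM uses for each data-file pointer, which caps the data file at 2^(8*ptr) - 1 bytes. A quick sketch of that relationship (plain arithmetic, assumed to match the documented formula; it doesn't explain the warning, but shows what the setting controls):

```python
# Max MyISAM data file size implied by myisam_data_pointer_size (in bytes).
def max_data_file_length(ptr_bytes: int) -> int:
    # Each data-file pointer is ptr_bytes wide, so offsets run 0 .. 2^(8*ptr)-1.
    return 2 ** (8 * ptr_bytes) - 1

for n in (2, 4, 7, 8):
    print(n, max_data_file_length(n))
```

A pointer size of 8 pushes the limit to 2^64 - 1, well past what the filesystem (or MyISAM's internal size accounting in 4.1) can represent, which may be where the bogus "Datafile is almost full" check comes from.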

Thanks,

ds
