Re: problems w/ Replication over the Internet

2008-04-25 Thread Jan Kirchhoff
Hmmm...
no more ideas or suggestions anybody? :(




Re: problems w/ Replication over the Internet

2008-04-22 Thread Jan Kirchhoff
Eric Bergen schrieb:
 TCP checksums aren't as strong as encryption. It's rare but corruption
 can happen.

But it happens every other day? That would mean at least one error per
4GB of data (I have around 2GB of binlogs per day). Statistically, every
DVD ISO you download would be corrupt?

 Where are you reading the positions from and how are you taking the
 snapshot to restore the slave?

From the log file:

080415  6:39:20 [ERROR] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with SLAVE START. We stopped at log
'mysql-bin.045709' position 172


I use rsync to set up the slave...



 On Mon, Apr 21, 2008 at 12:30 AM, Jan Kirchhoff [EMAIL PROTECTED] wrote:
  Eric Bergen schrieb:
   Hi Jan,
   You have two separate issues here. First the issue with the link
   between the external slave and the master. Running mysql through
   something like stunnel may help with the connection and data loss
   issues.

  I wonder how any corruption could happen on a TCP connection, as TCP
  has its own checksums and a connection would break down in case of a
  missing packet?

   The second problem is that your slave is corrupt. Duplicate key errors
   are sometimes caused by a corrupt table but more often by restarting
   replication from an incorrect binlog location. Try recloning the slave
   and starting replication again through stunnel.

  The duplicate key errors happen after I start at the beginning of a
  logfile (master_log_pos=0) when the position that mysql reports as its
  last position does not work.

  I think I have 2 issues:
  #1: How can this kind of binlog corruption happen on a TCP link although
  TCP has its checksums and resends lost packets?

  #2: Why does mysql report a master log position that is obviously wrong?
  mysql reports log position 172, which does not work at all in a change
  master to command; my only option is to start with master_log_pos=0, and
  the number of duplicate key errors and such that I have to skip after
  starting from master_log_pos=0 shows me that the real position where
  mysql stopped processing the binlog must be in the thousands or tens of
  thousands, not 172?!

  Jan

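A side note on the rsync-based setup mentioned above: the clone only
yields a usable binlog position if the master is quiesced while both the
files and the coordinates are taken. A minimal sketch for MyISAM tables
(host and path names are placeholders; the first mysql session must stay
open until the copy is done):

-- on the master, in a session that stays open:
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;    -- note File and Position

# meanwhile, on the slave:
slave# rsync -av master:/var/lib/mysql/ /var/lib/mysql/

-- back in the still-open master session:
mysql> UNLOCK TABLES;

-- on the slave, using the noted coordinates:
mysql> CHANGE MASTER TO MASTER_LOG_FILE='<File>', MASTER_LOG_POS=<Position>;
mysql> SLAVE START;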



Re: problems w/ Replication over the Internet

2008-04-21 Thread Jan Kirchhoff
Eric Bergen schrieb:
 Hi Jan,

 You have two separate issues here. First the issue with the link
 between the external slave and the master. Running mysql through
 something like stunnel may help with the connection and data loss
 issues.

I wonder how any corruption could happen on a TCP connection, as TCP
has its own checksums and a connection would break down in case of a
missing packet?

 The second problem is that your slave is corrupt. Duplicate key errors
 are sometimes caused by a corrupt table but more often by restarting
 replication from an incorrect binlog location. Try recloning the slave
 and starting replication again through stunnel.

The duplicate key errors happen after I start at the beginning of a
logfile (master_log_pos=0) when the position that mysql reports as its
last position does not work.

I think I have 2 issues:
#1: How can this kind of binlog corruption happen on a TCP link although
TCP has its checksums and resends lost packets?

#2: Why does mysql report a master log position that is obviously wrong?
mysql reports log position 172, which does not work at all in a change
master to command; my only option is to start with master_log_pos=0, and
the number of duplicate key errors and such that I have to skip after
starting from master_log_pos=0 shows me that the real position where
mysql stopped processing the binlog must be in the thousands or tens of
thousands, not 172?!

Jan
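
A rough sketch of the stunnel setup suggested above (service name, ports
and certificate path are made up; stunnel 4 config syntax assumed):

# /etc/stunnel/stunnel.conf on the master:
cert = /etc/stunnel/stunnel.pem
[mysql-repl]
accept  = 3307
connect = 127.0.0.1:3306

# /etc/stunnel/stunnel.conf on the slave:
client = yes
[mysql-repl]
accept  = 127.0.0.1:3307
connect = master.example.com:3307

-- then point the slave at the local tunnel endpoint:
mysql> CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307;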




problems w/ Replication over the Internet

2008-04-15 Thread Jan Kirchhoff
I have a setup with a master and a bunch of slaves in my LAN as well as
one external slave that is running on a Xen-Server on the internet.
All servers run Debian Linux and its mysql package, version 5.0.32.
Binlogs are around 2 GB per day. I have no trouble at all with my local
slaves, but the external one hangs once every two days.
As this server has no other problems like crashing programs, kernel
panics, corrupted files or such, I am pretty sure that the hardware is OK.

the slave's log:

Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [ERROR] Error
reading packet from server: Lost connection to MySQL server during query
( server_errno=2013)
Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave
I/O thread: Failed reading log event, reconnecting to retry, log
'mysql-bin.045709' position 7334981
Apr 15 06:39:19 db-extern mysqld[24884]: 080415  6:39:19 [Note] Slave:
connected to master '[EMAIL PROTECTED]:1234',replication resumed in log
'mysql-bin.045709' at position 7334981
Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
in Log_event::read_log_event(): 'Event too big', data_len: 503316507,
event_type: 16
Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
reading relay log event: slave SQL thread aborted because of I/O error
Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Slave:
Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary log), the slave's relay log is corrupted
(you can check this by running 'mysqlbinlog' on the relay log), a
network problem, or a bug in the master's
or slave's MySQL code. If you want to check the master's binary log or
slave's relay log, you will be able to know their names by issuing 'SHOW
SLAVE STATUS' on this slave. Error_code: 0
Apr 15 06:39:20 db-extern mysqld[24884]: 080415  6:39:20 [ERROR] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with SLAVE START. We stopped at log
'mysql-bin.045709' position 172
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
I/O thread killed while reading event
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
I/O thread exiting, read up to log 'mysql-bin.045709', position 23801854
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
SQL thread initialized, starting replication in log 'mysql-bin.045709'
at position 172, relay log './db-extern-relay-bin.01' position: 4
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
I/O thread: connected to master '[EMAIL PROTECTED]:1234',  replication
started in log 'mysql-bin.045709' at position 172
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Error
reading packet from server: error reading log entry ( server_errno=1236)
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [ERROR] Got
fatal error 1236: 'error reading log entry' from master when reading
data from binary log
Apr 15 06:40:01 db-extern mysqld[24884]: 080415  6:40:01 [Note] Slave
I/O thread exiting, read up to log 'mysql-bin.045709', position 172

slave start;
doesn't help.

slave stop; reset slave; change master to
master_log_file='mysql-bin.045709', master_log_pos=172; slave start;
does not help either.

The only way to get this up and running again is to do a change master
to master_log_file='mysql-bin.045709', master_log_pos=0 and use
sql_slave_skip_counter when I get duplicate key errors. This sucks.
When this problem occurs, the log positions are always a small number; I
would say less than 500.

I also get connection errors in the log from time to time, but it
recovers itself:
Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [ERROR] Error
reading packet from server: Lost connection to MySQL server during query
( server_errno=2013)
Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave
I/O thread: Failed reading log event, reconnecting to retry, log
'mysql-bin.045705' position 34671615
Apr 14 22:27:17 db-extern mysqld[24884]: 080414 22:27:17 [Note] Slave:
connected to master '[EMAIL PROTECTED]:1234',replication resumed in log
'mysql-bin.045705' at position 34671615

Sometimes I get:
Apr 13 23:22:04 db-extern mysqld[24884]: 080413 23:22:04 [ERROR] Slave:
Error 'You have an error in your SQL syntax; check the manual that
corresponds to your MySQL server version for the right syntax to use
near '^\' at line 1' on query.
Apr 13 23:22:04 db-extern mysqld[24884]: 080413 23:22:04 [ERROR] Error
running query, slave SQL thread aborted. Fix the problem, and restart
the slave SQL thread with SLAVE START. We stopped at log
'mysql-bin.045699' position 294101453
But this time
slave stop; reset slave; change master to
master_log_file='mysql-bin.045699', master_log_pos=294101453; slave start;
helps!

master# mysqlbinlog --position=172 mysql-bin.045709
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
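
Since mysqlbinlog exits with an error as soon as it hits an event it
cannot parse, a quick check of both ends would look like this (file
names taken from the log excerpts above; the exact relay log name may
differ):

master# mysqlbinlog mysql-bin.045709 > /dev/null && echo "master binlog ok"
slave#  mysqlbinlog db-extern-relay-bin.01 > /dev/null && echo "relay log ok"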

Re: Mass insert on InnoDB

2008-01-29 Thread Jan Kirchhoff

David Schneider-Joseph schrieb:

Hi all,

I am attempting to convert a very large table (~23 million rows) from 
MyISAM to InnoDB.  If I do it in chunks of one million at a time, the 
first million are very fast (approx. 3 minutes or so), and then it 
gets progressively worse, until by the time I get even to the fourth 
chunk, it's taking 15-20 minutes, and continuing to worsen.  This is 
much worse degradation than the O(log N) that you would expect.



[...]


This problem can even be reproduced in a very simple test case, where 
I continuously insert approximately 1 million rows into a table, with 
random data.  `big_table` can be any table with approximately one 
million rows in id range 1 through 1000000 (we're not actually using 
any data from it):



[...]

Any ideas, anyone?

Thanks,
David




what hardware are you running on and how much memory do you have? what 
version of mysql?

How did you set innodb_buffer_pool_size? You might want to read
http://dev.mysql.com/doc/refman/5.0/en/innodb-tuning.html
and do some tuning.

In case that doesn't help you, you'll need to post more info on your config.

Jan
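
For illustration only (the numbers are placeholders, not a recommendation
for this particular box), the relevant my.cnf section would look like:

[mysqld]
# a large share of RAM on a dedicated InnoDB server:
innodb_buffer_pool_size = 2G
# larger logs reduce checkpoint pressure during bulk loads:
innodb_log_file_size    = 256M
innodb_log_buffer_size  = 8M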





Re: Mass insert on InnoDB

2008-01-29 Thread Jan Kirchhoff

David Schneider-Joseph schrieb:

On Jan 29, 2008, at 6:09 PM, Jan Kirchhoff wrote:

what hardware are you running on and how much memory do you have? 
what version of mysql?

How did you set innodb_buffer_pool_size?


Hardware:
Dual AMD Opteron 246 2.0 GHz
4 GB DDR RAM (no swap being used)
Dual 146 GB SCSI drives with a RAID 1

Software:
RedHat Linux, kernel version 2.6.9-55.ELsmp
MySQL 5.0.45-community-log


[...]
As best I can tell, our server is tuned appropriately.  We've 
definitely spent effort on tuning it already.


This is on my Replication-Slave I use for backups:

mysql> create table test (id int NOT NULL auto_increment, x int NOT 
NULL, primary key (id), key (x)) ENGINE=InnoDB;

Query OK, 0 rows affected (0.05 sec)

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (20.20 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (17.60 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (15.67 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (14.91 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (17.89 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

mysql> insert into test (x) select cast(rand()*1000 as unsigned) 
from verybigtable limit 1000000;

Query OK, 1000000 rows affected (16.24 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

your innodb_log_buffer_size and innodb_log_file_size look tiny, this is 
my config on the system:
(dual-socket dual-core opteron 2216 with Areca sata-raid10 (w/ bbu) and 
16gb RAM)

+-+---+
| Variable_name   | Value |
+-+---+
| innodb_additional_mem_pool_size | 16777216  |
| innodb_autoextend_increment | 1000  |
| innodb_buffer_pool_awe_mem_mb   | 0 |
| innodb_buffer_pool_size | 1048576   |
| innodb_checksums| ON|
| innodb_commit_concurrency   | 0 |
| innodb_concurrency_tickets  | 500   |
| innodb_data_file_path   | /var/lib/mysql/ibdata:100M:autoextend |
| innodb_data_home_dir|   |
| innodb_doublewrite  | ON|
| innodb_fast_shutdown| 1 |
| innodb_file_io_threads  | 4 |
| innodb_file_per_table   | ON|
| innodb_flush_log_at_trx_commit  | 0 |
| innodb_flush_method |   |
| innodb_force_recovery   | 0 |
| innodb_lock_wait_timeout| 120   |
| innodb_locks_unsafe_for_binlog  | OFF   |
| innodb_log_arch_dir | /var/lib/mysql2/innodb/   |
| innodb_log_archive  | OFF   |
| innodb_log_buffer_size  | 33554432  |
| innodb_log_file_size| 536870912 |
| innodb_log_files_in_group   | 3 |
| innodb_log_group_home_dir   | /var/lib/mysql2/innodb/   |
| innodb_max_dirty_pages_pct  | 90|
| innodb_max_purge_lag| 0 |
| innodb_mirrored_log_groups  | 1 |
| innodb_open_files   | 300   |
| innodb_rollback_on_timeout  | OFF   |
| innodb_support_xa   | OFF   |
| innodb_sync_spin_loops  | 20|
| innodb_table_locks  | ON|
| innodb_thread_concurrency   | 16|
| innodb_thread_sleep_delay   | 1 |
+-+---+

play around

Re: How to encrypt Text and still be able to use full text search? 3rd Attempt ++

2007-10-27 Thread Jan Kirchhoff

mos schrieb:
I posted this message twice in the past 3 days, and it never gets on 
the mailing list. Why?

Here it is again:

I have a Text field that contains paragraph text and for security 
reasons I need to have it encrypted. If I do this, how can I still 
implement full text search on it?
Also, I have a lot of Float columns that need to be protected but the 
user has to use comparison operators like < and > on them. Any 
recommendations?

Mike,
What size is the database?
Could you create some kind of (temporary) table holding the data 
unencrypted?
As fulltext search is only possible on myisam tables, you might want to 
put this on a ramdisk and create it during mysql startup (just make a 
symlink like /var/lib/mysql/UnencryptedDatabase -> /ramdisk/ and use the 
--init-file parameter for mysqld to create and fill the table).
It would at least make it more difficult to get the data for somebody 
who has physical access to the machine, as long as you have all your 
partitions encrypted as well and have to enter your password during 
startup.

I know there is still danger: somebody at the ISP could shut down the 
server and modify your initrd and try to get your password when you 
enter it during startup, but as long as you won't host the machine 
yourself, there probably is no better option. Get rackspace that has 
doors and can be locked... a little more security, but usually the ISP 
has a second key in their safe :(
Or you might set it up so you have to enter 2 passwords, the first one 
to decrypt and start a small program that checksums the kernel and 
initrd that is in memory, and then a second one to mount the partitions...

If the value of the data is really a million, host it on your own and 
install security systems etc. and a 24/7 NOC keeping an eye on your 
server looking for hackers and so on.

If your budget is only $100/month I would do it the way I described above.

Jan
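
A minimal sketch of the --init-file idea (database, table and column
names are made up, and the decryption step depends on how the data was
encrypted in the first place):

# start the server with: mysqld --init-file=/etc/mysql/init.sql
-- /etc/mysql/init.sql, executed at every server startup:
CREATE DATABASE IF NOT EXISTS UnencryptedDatabase;
-- MyISAM so that FULLTEXT works; the directory of this database is a
-- symlink onto the ramdisk:
CREATE TABLE IF NOT EXISTS UnencryptedDatabase.docs_plain (
  id INT NOT NULL PRIMARY KEY,
  body TEXT NOT NULL,
  FULLTEXT (body)
) ENGINE=MyISAM;
INSERT INTO UnencryptedDatabase.docs_plain
  SELECT id, AES_DECRYPT(body_enc, 'the-key') FROM secure_db.docs;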




Re: MySQL Configuration for a powerful server?

2007-10-03 Thread Jan Kirchhoff

Ratheesh K J schrieb:

Hello all,

What is the best possible values in my.cnf for a 8 processor (Quad core-2 cpu) 
8 GB RAM machine dedicated for MySQL server only. No other application will run 
on this machine.

the innodb_buffer_pool_size cannot accept values above 2000 MB due to 32 bit 
machine constraint. So what other parameters can be tweaked to make use of this 
powerful server to its best?

NOTE: All our tables are of INNODB storage engine.
  


You simply cannot make use of your server's power on a 32bit OS.
Since it is a dedicated server anyway, install a 64bit OS (I prefer 
debian linux, but install whatever 64bit linux you know best) and set 
the buffer pool to around 6.5GB. All other variables depend on your 
usage: are you running a web application with lots of small queries, 
or only very few connections running big queries? What is the size 
of your DB? You'll need to provide more information to get help here.


Jan
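
As a starting point only (placeholder values, to be tuned against the
real workload), such a box could begin with:

[mysqld]
innodb_buffer_pool_size        = 6500M
innodb_log_file_size           = 256M
innodb_log_buffer_size         = 8M
innodb_flush_log_at_trx_commit = 1
# MyISAM buffers can stay small on an InnoDB-only server:
key_buffer_size                = 32M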




Re: [Replication] - urgent

2007-10-03 Thread Jan Kirchhoff

Ratheesh K J schrieb:

Thanks,
 
It helped me a lot. I wanted to know


   1. what are the various scenarios where my replication setup can
  fail? (considering even issues like network failure and server
  reboot etc). What is the normal procedure to correct the failure
  when something unpredicted happens?

You should first read the right parts of the manual at 
http://dev.mysql.com/doc before asking such questions.

Basically:
-Use good hardware with ECC-RAM and RAID-Controllers in order to 
minimize trouble with faulty hardware.
-Never write on the slaves without knowing what this could do to your 
replication setup
-Watch the diskspace and make sure there is always enough space for the 
binlogs. Otherwise you might end up with half-written binlogs on either 
the slave or the master because of a full disk, which can cause trouble 
and some work to get it up and running again.


When a master goes down or network connection is lost, the slave 
automatically tries to reconnect once a minute or so. Restarting the 
master or exchanging some network equipment is no problem. When the 
slave reboots, it tries to reconnect on startup, too.


This is out-of-the-box behaviour. You can modify it in the my.cnf 
(i.e. use the skip-slave-start option etc.)



   1. What are the scenarios where the SQL THREAD stops running and
  what are the scenarios where the IO THREAD stops running?

SQL thread stops when it can't run a SQL query from the binlogs for any 
reason, as you have experienced when the table already existed.


The IO-Thread only stops when it has an error reading a binlog from the 
master. When it's only a lost connection, it automatically reconnects.
Other problems (i.e. unable to read a binlog) should never happen as 
long as you don't delete binlogs on the master that have not yet been 
copied over to the slave by the io-thread (see the "show master status" 
and "show slave status" commands and their output), or you have faulty 
hardware (io errors on the harddisk or such things).



   1. Does SQL_SLAVE_SKIP_COUNTER skip the statement of the master
  binlog from being replicated to the slave relay log OR Has the
  statement already been copied into the slave relay log and has
  been skipped from the relay log?

It skips the entry on the local copy of the binlog. The IO-Thread 
replicates the whole binlog, and the sql-thread skips an entry in it 
when you use sql_slave_skip_counter.


   1. How do I know immediately that replication has failed? (I
  have heard that the enterprise edition has some technique for
  this)?

Watch the logfile, it is written there. Or run a cronjob once a minute 
with something like
mysql -e 'show slave status\G' | grep '_Running:' > /dev/null || bash 
my_alarm_script_that_sends_mail_or_whatever.sh
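
For example, as a crontab entry that also catches an explicit "No" from
either thread (a sketch, untested):

* * * * * mysql -e 'show slave status\G' | grep -q '_Running: No' && bash my_alarm_script_that_sends_mail_or_whatever.sh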




regards
Jan




Re: [Replication] - urgent

2007-10-02 Thread Jan Kirchhoff

Ratheesh K J schrieb:

Hello all,

I issued a create table statement on the master for a table which was not 
present on the master but present on the slave.
I did this purposely to see the error on slave.

I am a newbie to replication. Now when I see slave status on the slave machine 
it shows that the SQL Thread has stopped.

When I start the SQL thread it does not start and gives the error message that 
the table exists. How do i correct this and how do I calculate the next 
position that the slave must start executing from the relay log.

Is there any article on MySQL replication that tells me how to deal when errors 
occur.

Thanks & regards,
Ratheesh
  


You have 2 options:

1.
on the slave, enter "SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;" and then 
"SLAVE START;". This skips the upcoming entry in the binlog, 
which is the create table command that causes your problem.

2.
if you don't have any data in the table on the slave, just drop the 
table and do a "slave start;"; it will then create the table again, as 
this is the next command in the binlog.
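
For the record, option 1 at the slave's prompt (repeat the pair if
further statements conflict):

mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
mysql> SLAVE START;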


Remember: never write on the slave without knowing what you do and 
you'll be happy with your replication ;)


Jan




Re: Innodb log sequence error - urgent

2006-12-10 Thread Jan Kirchhoff

Ratheesh K J schrieb:

Hello all,

yesterday we separated our app server and db server. We moved our 70GB of data from our app server to a new DB server. We installed MySQL 4.1.11 on the DB server. 


Now the following happened. On the DB server the ibdata1 and all the databases 
are the old ones (which were copied from the app server). But when MySQL was 
installed, the ib_logfile0, ib_logfile1 and ib_logfile2 were created freshly on 
the DB server. Each of these log files was created with 5M size. On the app 
server these files were 256M in size (innodb_log_file_size = 256M). On the DB 
server it is (innodb_log_file_size = 5M).

Today morning when I checked the error log, there seems to be a lot of error 
msg flowing in.

061211 11:41:47  InnoDB: Error: page 203046 log sequence number 87 3002891543
InnoDB: is in the future! Current system log sequence number 86 4025048037.
InnoDB: Your database may be corrupt.


You cannot just copy innodb databases to other servers without adjusting 
your my.cnf: once you have created an innodb database, you cannot change 
parameters like innodb_log_file_size any more.
(This is explained in the manual; you should read the chapter about 
backing up and restoring innodb databases.) So when you copy the database 
to the new server, be sure to copy the settings from the my.cnf, too!
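
Concretely, before starting mysqld on the new server, its my.cnf has to
carry the old server's InnoDB settings, e.g. (value taken from the post
above; the other innodb_* parameters must match as well):

[mysqld]
innodb_log_file_size = 256M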


Jan




Re: 5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - slave hangs

2006-02-08 Thread Jan Kirchhoff

Kishore Jalleda schrieb:

Hi, you may be having issues with the byte order on the Opterons and the
P4s; this was asked earlier on the list, and here's what Jimmy from MySQL
had to say 

Kishore,
Thanks for the suggestion, but all x86 have the same byte order... and 
as I wrote it's not a cluster problem but a replication problem :(


btw: I just started the mysql-tests and it hangs, too:

db5:/usr/local/mysql/mysql-test# ./mysql-test-run
Installing Test Databases
Removing Stale Files
Installing Master Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables 
--basedir=.. --datadir=mysql-test/var/master-data 
--skip-innodb --skip-ndbcluster --skip-bdb
Installing Master Databases 1
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables 
--basedir=.. --datadir=mysql-test/var/master-data1 
--skip-innodb --skip-ndbcluster --skip-bdb
Installing Slave Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables 
--basedir=.. --datadir=mysql-test/var/slave-data 
--skip-innodb --skip-ndbcluster --skip-bdb
Manager disabled, skipping manager start.

Starting ndbcluster
Starting ndbd
Starting ndbd
Waiting for started...
NDBT_ProgramExit: 0 - OK
Connected to Management Server at: localhost:9350
Cluster Configuration
-
[ndbd(NDB)] 2 node(s)
id=1@127.0.0.1  (Version: 5.0.18, Nodegroup: 0, Master)
id=2@127.0.0.1  (Version: 5.0.18, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=3@127.0.0.1  (Version: 5.0.18)

[mysqld(API)]   4 node(s)
id=4 (not connected, accepting connect from any host)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)

Loading Standard Test Databases
Starting Tests

TEST                            RESULT
--------------------------------------
alias  [ pass ]  
alter_table[ pass ]  
analyse[ pass ]  
analyze[ pass ]  
ansi   [ pass ]  
archive[ pass ]  
archive_gis[ pass ]  


Now nothing happens, CPU load is at 0 - any ideas?


Jan




Re: 5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - slave hangs

2006-02-07 Thread Jan Kirchhoff

A neverending story.
I thought it worked (without having any idea what the problem had been), 
but it broke down again after a few hours.
but it broke down again after a few hours.

My current set up is:

-A p4 production server (Server1) running debian linux, 2.4 kernel, 
mysql 4.1.13-standard-log. This server is replicating to several other 
production-servers.
-Two new Dual-Opteron Servers (Server2+Server3) with 6GB RAM each, 3ware 
SATA-RAID, custom kernel 2.6.15.1 SMP, mysql 5.0.18-max-log.


Server2 is replicating from Server1 with a few 
Replicate_Ignore_DB/Replicate_Wild_Ignore_Table rules. I have
had problems getting this server running at first since it always hung 
on replicated queries (different ones), and the only thing that helped
was to kill -9 the mysqld. At some point it suddenly worked and is 
running for almost a week now - having replicated at least 20-30GB so far.
Server3 was supposed to become a slave of the first one, but it shows 
the same problems I had with Server2 at first: it starts to replicate 
and some query hangs after a few minutes. These are no complicated 
mass-inserts (those 1-5MB mass-inserts work without trouble), but simple 
queries like "insert into table (a,b,c) values (1,2,3)" or "update table 
set a=1 where b=2".


I tried kernel 2.6.8, 2.6.15, SMP and non-SMP (debian-kernels and 
self-compiled), the official mysql-max and mysql-standard-binaries and a 
self-compiled mysql 5.0.18. I disabled Innodb and Cluster, I put all 
variables back to the standard values and played around with lots of 
settings.  lspci and the output of /proc/cpuinfo are the same on both 
servers. I have exactly the same BIOS-settings on both servers (I was 
going nuts comparing these bios-screens with a KVM in a loud 
server-room). Both servers have exactly the same debian-packages 
installed. lsmod shows the same on both systems.


I have had trouble with mysql-replication in 3.2x and 4.x in the last 
years, but I always got everything working, and it was working well 
without bigger trouble once it was up and running. But this time I have 
no clue what else to try.


I currently have no other server that is powerful enough to handle all 
the updates being replicated in order to test a 5.0.18 on some other 
CPU. I'll probably try to get my workstation (p4 3ghz, 1GB RAM) running 
as a slave hoping the IDE-disk is fast enough, but no matter if that 
works or not - I don't know what to change/try on my new servers?!?


any ideas anybody?
thanks
Jan


Jan Kirchhoff schrieb:
I thought I found the reason for my problems with the change in 
join-behaviour in mysql 5, but I was wrong :( there is more trouble :(


my replication hangs with simple queries like "insert into table 
(a,b,c) values (1,2,3)" on a myisam-table.
It just hangs forever with no cpu-load on the slave. I have to kill 
and restart mysql with the following commands:


killall -9 mysqld;sleep 2;mysqladmin shutdown;sleep 
5;/etc/init.d/mysql start;sleep 2;mysql -e 'slave start'


I can find the changed row in the table, so the query was processed 
correctly.
Then it runs again for some time and hangs again with some other 
simple insert.


I disabled innodb and cluster, took all my variables out of my.cnf 
except max_allowed_packet = 16M, which I need for the replication to 
work, and I have no clue what the reason for my problem is.


what else could I try?










Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-02-06 Thread Jan Kirchhoff
I just managed to get two identical test-servers running, both being 
slaves of my production system replicating a few databases including two 
of the heavy-use tables.
One server uses heap-tables, on the other one I changed the table-format 
to innodb.


I've had some problems with the replication but now it seems like 
everything is running - although I still don't know what the problem was/is.
I hope I'll be able to do some testing during the next days... I'll give 
more feedback later this week. Thanks for the help!


Jan



sheeri kritzer schrieb:

I can confirm that using a large buffer pool, putting all the hot data
in there, and setting the logfiles large, etc. works in the real world
-- that's what we do, and all our important data resides in memory. 
The wonder of transactions, foreign keys, etc., with the speed of
memory tables.

-Sheeri

On 2/5/06, Heikki Tuuri [EMAIL PROTECTED] wrote:
  

Jan,

if you make the InnoDB buffer pool big enough to hold all your data, or at
least all the 'hot data', and set ib_logfiles large as recommended at
http://dev.mysql.com/doc/refman/5.0/en/innodb-configuration.html, then
InnoDB performance should be quite close to MEMORY/HEAP performance for
small SQL queries. If all the data is in the buffer pool, then InnoDB is
essentially a 'main-memory' database. It even uses automatically built hash
indexes.

This assumes that you do not bump into extensive deadlock issues. Deadlocks
can occur even with single row UPDATEs if you update indexed columns.
Setting innodb_locks_unsafe_for_binlog will reduce deadlocks, but read the
caveats about it.

Best regards,

Heikki

Oracle Corp./Innobase Oy
InnoDB - transactions, row level locking, and foreign keys for MySQL

InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
http://www.innodb.com/order.php
- Original Message -
From: Jan Kirchhoff [EMAIL PROTECTED]
Newsgroups: mailing.database.myodbc
Sent: Tuesday, January 31, 2006 1:09 PM
Subject: Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?




Hi,

I am currently experiencing trouble getting my new mysql 5 servers
running as slaves on my old 4.1.13 master.
Looks like I'll have to dump the whole 30GB database and import it on
the new servers :( At this moment I do not see any opportunity to do
this before the weekend, since the longest time I can block any of our
production systems is only 2-3 hours between midnight and 2am :(

I am still curious if Innodb could handle the load of my updates on the
heavy-traffic tables, since it's disk-bound and does transactions.

What I would probably need is an in-memory table without any kind of
locking - at least not table locks! But there is no such engine in
mysql. If a cluster can handle that (although it has the transaction
overhead), it would probably be perfect for us, since it even adds high
availability in a very easy way...

Jan

Jan Kirchhoff schrieb:

 sheeri kritzer schrieb:

  No problem:

  Firstly, how are you measuring your updates on a single table?  I took
  a few binary logs, grepped out for things that changed the table,
  counting the lines (using wc) and then dividing by the # of seconds
  the binary logs covered.  The average for one table was 108 updates
  per second.
  I'm very intrigued as to how you came up with 2-300 updates per second
  for one table. . . did you do it that way?  If not, how did you do it?
  (We are a VERY heavily trafficked site, having 18,000 people online
  and active, and that accounts for the 108 updates per second.  So if
  you have more traffic than that. . . wow!)

 Thanks for your hardware/database information. I will look at that
 close tomorrow since I want to go home for today - it's already 9 pm
 over here... I need beer ;)

 We are not running a webservice here (actually we do, too, but that's
 on other systems). This is part of our database with data of major
 stock exchanges worldwide that we deliver realtime data for.
 Currently there are around 900,000 quotes; during trading hours they
 change all the time... We have much more updates than selects on the
 main database.
 Our application that receives the datastream writes blocks (INSERT ...
 ON DUPLICATE KEY UPDATE ...) with all records that changed since the
 last write. It gives me debug output like "[timestamp] Wrote 19427
 rows in 6 queries" every 30 seconds - and those are numbers that I can
 rely on.

 Jan



Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread Jan Kirchhoff


As I already wrote, I am trying to get a replication running from a 
mysql 4.1.13 (32bit) master to a 5.0.18 (64bit) slave. It only runs for 
a few minutes and then a query hangs.
I think I now found out why:
I modified a multi-table update that hung into a select. The same query 
on the absolutely identical tables gives totally different explains on 
both systems:
While my query has a cardinality of 23,124*1=23,124 on mysql4, it has 
6,412*34,341=220,194,492 on mysql5 - and takes forever and makes me 
think everything hangs?!
I verified this with a dump of the two tables that I imported on a few 
different systems. I created a new test database, piped the dump into 
that and ran the following queries.

It looks like the join behaviour of mysql has totally changed in mysql5!
This seems to affect quite a lot of queries here.
Am I making really stupid mistakes, or did I miss a major chapter in the 
"upgrading to mysql5" documentation that I read over and over during the 
last days?

Jan

I put the mysql versions in front of the mysql prompts:


4.1.13-standard mysql> show table status\G
*************************** 1. row ***************************
           Name: dlstm_data_d
         Engine: MyISAM
        Version: 9
     Row_format: Dynamic
           Rows: 48621
 Avg_row_length: 49
    Data_length: 2428108
Max_data_length: 4294967295
   Index_length: 1921024
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:57
    Update_time: 2006-02-01 11:55:08
     Check_time: 2006-02-01 11:55:09
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
*************************** 2. row ***************************
           Name: stm_data_d
         Engine: MyISAM
        Version: 9
     Row_format: Dynamic
           Rows: 480772
 Avg_row_length: 105
    Data_length: 50816164
Max_data_length: 4294967295
   Index_length: 57697280
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:55:09
    Update_time: 2006-02-01 11:57:00
     Check_time: 2006-02-01 11:58:58
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
2 rows in set (0.00 sec)


5.0.18-max-log mysql> show table status\G
*************************** 1. row ***************************
           Name: dlstm_data_d
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 48621
 Avg_row_length: 49
    Data_length: 2384860
Max_data_length: 281474976710655
   Index_length: 1836032
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:05
    Update_time: 2006-02-01 11:54:07
     Check_time: 2006-02-01 11:54:07
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
*************************** 2. row ***************************
           Name: stm_data_d
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 480772
 Avg_row_length: 104
    Data_length: 50192768
Max_data_length: 281474976710655
   Index_length: 52738048
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-02-01 11:54:07
    Update_time: 2006-02-01 11:55:12
     Check_time: 2006-02-01 11:55:40
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: 
        Comment: 
2 rows in set (0.00 sec)


4.1.13-standard mysql> explain select * from dlstm_data_d s, stm_data_d t where s.sym_cd_ is null and s.sys_cdd=t.sys_cdd and s.local=t.local and s.local is not null and s.local!='' and s.sys_cdd is not null and s.sys_cdd!='' and t.sym_cd is not null;
+----+-------------+-------+------+---------------+-----+---------+-----+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
Re: 5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - slave hangs

2006-02-01 Thread Jan Kirchhoff
I thought I found the reason for my problems with the change in 
join-behaviour in mysql 5, but I was wrong :( there is more trouble :(


my replication hangs with simple queries like "insert into table 
(a,b,c) values (1,2,3)" on a myisam-table.
It just hangs forever with no cpu-load on the slave. I have to kill and 
restart mysql with the following commands:


killall -9 mysqld;sleep 2;mysqladmin shutdown;sleep 5;/etc/init.d/mysql 
start;sleep 2;mysql -e 'slave start'


I can find the changed row in the table, so the query was processed 
correctly.
Then it runs again for some time and hangs again with some other simple 
insert.


I disabled innodb and cluster, took all my variables out of my.cnf except
max_allowed_packet = 16M, which I need for the replication to work, and I 
have no clue what the reason for my problem is.


what else could I try?







Re: Totally different join-behaviour in mysql 4 and 5?

2006-02-01 Thread Jan Kirchhoff


Comma separated JOINS strikes again!!! 
  

[...]

Here is where you will find this change documented in the manual:
http://dev.mysql.com/doc/refman/5.0/en/upgrading-from-4-1.html
  
I read that page over and over again... probably too late at night. 
Thanks for that info. Thanks to Peter, too.
I made that select out of a multi-table update that hung in the 
processlist. That query came out of a very old script floating 
around... I found a few more and modified the updates; I think I 
shouldn't have that problem any more.
I think nobody over here ever used comma-joins except for those few 
times. At least I hope so. ;)


But that was only part of my problems, my replication still hangs every 
now and then... I'd be glad if you could have a look at my other posting 
"5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - 
slave hangs".

Maybe you have any ideas what I could try to get this working...

thanks
Jan
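
The fix itself is just making the join explicit; a sketch using the two
tables from the earlier posting (conditions shortened):

-- old style, comma join:
select * from dlstm_data_d s, stm_data_d t
where s.sys_cdd=t.sys_cdd and s.local=t.local;

-- explicit join, unambiguous under the 5.0 rules:
select * from dlstm_data_d s
inner join stm_data_d t on s.sys_cdd=t.sys_cdd and s.local=t.local;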





Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-31 Thread Jan Kirchhoff

Hi,

I am currently experiencing trouble getting my new mysql 5 servers 
running as slaves on my old 4.1.13 master.
Looks like I'll have to dump the whole 30GB database and import it on 
the new servers :( At this moment I do not see any opportunity to do 
this before the weekend, since the longest time I can block any of our 
production systems is only 2-3 hours between midnight and 2am :(

I am still curious if Innodb could handle the load of my updates on the 
heavy-traffic tables, since it's disk-bound and does transactions.

What I would probably need is an in-memory table without any kind of 
locking - at least not table locks! But there is no such engine in 
mysql. If a cluster can handle that (although it has the transaction 
overhead), it would probably be perfect for us, since it even adds high 
availability in a very easy way...

Jan

Jan Kirchhoff schrieb:

sheeri kritzer schrieb:

No problem:

Firstly, how are you measuring your updates on a single table?  I took
a few binary logs, grepped out for things that changed the table,
counting the lines (using wc) and then dividing by the # of seconds
the binary logs covered.  The average for one table was 108 updates
per second.
  I'm very intrigued as to how you came up with 2-300 updates per second
for one table. . . did you do it that way?  If not, how did you do it?
 (We are a VERY heavily trafficked site, having 18,000 people online
and active, and that accounts for the 108 updates per second.  So if
you have more traffic than that. .  .wow!)
  
Thanks for your hardware/database information. I will look at that 
close tomorrow since I want to go home for today - it's already  9 pm 
over here... I need beer ;)


We are not running a webservice here (actually we do, too, but that's 
on other systems). This is part of our database with data of major 
stock exchanges worldwide that we deliver realtime data for.
Currently there are around 900,000 quotes; during trading hours they 
change all the time... We have much more updates than selects on the 
main database.
Our application that receives the datastream writes blocks (INSERT ... 
ON DUPLICATE KEY UPDATE ...) with all records that changed since the 
last write. It gives me debug output like "[timestamp] Wrote 19427 
rows in 6 queries" every 30 seconds - and those are numbers that I can 
rely on.


Jan








5.0.18-max-log as a slave of a 4.1.13-standard-log master problem - slave hangs

2006-01-31 Thread Jan Kirchhoff

I've been trying to get my new mysql-5.0.18-servers running as slaves of our 
production systems to check if all our applications work fine with mysql 5 and 
to do some tests and tuning on the new servers.

The old servers are all P4s, 3GB RAM running debian-linux, 2.4-kernel and 
official mysql 4.1.13-standard-log binaries:
d1 is the master, d2 and d3 are slaves.

my new servers are dual-opterons, 6 GB RAM, running debian-linux with a 
2.6.15-SMP-kernel, official mysql 5.0.18-max-log-binary. their names are d4 and 
d5. I am currently trying to get d4 running as a slave of d1. d5 should later 
become a slave of d4.

The old servers only have myisam and memory-tables, innodb is disabled. The new 
ones had innodb and mysql-cluster enabled (datanodes running on the same 
servers, management-node running on d3) since I wanted to do some testing with 
the different engines, but I disabled both temporarily without any change in 
this weird problem:

No matter if I do a copy of the /var/lib/mysql of d1 (and dump the contents of the memory-tables) while a "flush tables with read lock" is active and copy that to d4 (doing a "change master to ..." on d4 afterwards), or if I do a mysqldump --master-data=1: 


The replication runs for maybe a minute or two and then hangs. "show slave status" says everything is OK, but a 
replicated replace hangs in the processlist and nothing happens. CPU load goes down to zero. Even after 2 hours 
nothing has changed; a "slave stop" hangs, too. When I kill the replicated replace-process nothing happens, and 
I can't stop the mysql server and have to kill it with "killall -9 mysqld" in the shell :(
At first I thought this was a problem with a temporary table, but after having 
reloaded a new dump a few times I had the same problem with really simple 
inserts/updates like:

A new dump, everything works for a few minutes, then this query hangs:
|  4 | system user | | nachrichten | Connect | 11164 | update |replace into nachrichten.x_symbole (symbol,syscode,nachrichten_id) values('KUN','de','99949') 


(taken directly from show processlist)

Info about the simple table:

CREATE TABLE `x_symbole` (
 `symbol` char(20) NOT NULL default '',
 `syscode` char(6) NOT NULL default '',
 `nachrichten_id` int(11) NOT NULL default '0',
 PRIMARY KEY  (`symbol`,`syscode`,`nachrichten_id`),
 KEY `nachrichten_id` (`nachrichten_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 




I have to kill the mysqld with "killall -9 mysqld", do a mysqladmin shutdown 
again and then restart mysql and issue the query in the mysql shell: it works!
Then I issue a "start slave", everything works again for a minute or two and 
hangs with some different query.

I am going nuts with this! I spent so much time on this problem and did not get any further, and I have absolutely no idea what the problem is. Nothing in the error log. 


Can anybody suggest something that might help? I have no idea what's wrong!

regards
Jan


d4:
mysql> show variables;
| auto_increment_increment | 1 |
| auto_increment_offset | 1 |
| automatic_sp_privileges | ON |
| back_log | 50 |
| basedir | /usr/local/mysql-max-5.0.18-linux-x86_64-glibc23/ |
| binlog_cache_size | 32768 |
| bulk_insert_buffer_size | 15728640 |
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /usr/local/mysql-max-5.0.18-linux-x86_64-glibc23/share/mysql/charsets/ |
| collation_connection | latin1_swedish_ci |

Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jan Kirchhoff

Hi,

Did anybody ever benchmark heap-tables against a cluster?
I have a table with 900.000 rows (40 fields, CHARs, INTs and DOUBLEs, 
Avg_row_length=294) that gets around 600 updates/sec (grouped in about 12 
extended inserts a minute inserting/updating 3000 rows each).
This is currently a HEAP-table (and gets replicated onto a slave, too). I 
experience locking problems on both the master and the slave: queries that 
usually respond within 0.0x seconds suddenly hang and take 10 seconds or 
sometimes even longer.
I wonder if a cluster setup would give me any speedup here? I will be 
doing some benchmarking myself next week, but it would be very helpful if 
anybody could share experiences with me so I don't have to start from 
scratch... It is difficult and very time-consuming to set up a test suite 
comparable to our production systems... Any tips will help! Thanks!

regards
Jan 





ERROR 1025 when doing ALTER TABLE to change a myisam-table to a CLUSTER table?

2006-01-27 Thread Jan Kirchhoff

Hello,

I am just doing my first testing on a mysql-cluster system. Currently, I have 1 
management node running and 2 data nodes that also run a mysqld each.
The servers are Dual-Opterons with 6GB of RAM each.

I did a dump of a database of one of our production systems (about 1.5GB 
mysqldump file) and piped that into the first of the new servers.
I then started doing "alter table abc type=ndb" queries and everything looked 
fine at the beginning. After having moved 70-80% of the tables into the NDB 
engine (they all show up correctly on the other mysql server and everything 
seems to work) I suddenly got the following error:

ERROR 1025 (HY000): Error on rename of './master/#sql-e80_1' to 
'./master/status_system_hist' (errno: 708)

I could not find any information on how to fix this and what the reason could be. 
When I manually create a new table with the same definition in the NDB engine and then do an "insert into ... select from ...", I have no trouble. I should not be hitting the memory limit yet; ndbd only uses 56% of the RAM so far. 


I attached some SQL-Output, part of the config.ini and the top-output.

Can anybody help me with this? 

thanks 
Jan  



[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=4200M  
IndexMemory=1000M 
NoOfFragmentLogFiles=100

MaxNoOfConcurrentOperations=50




mysql> show table status\G
[...]
           Name: status_system_hist
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 270413
 Avg_row_length: 91
    Data_length: 24721832
Max_data_length: 281474976710655
   Index_length: 4409344
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2006-01-27 15:03:48
    Update_time: 2006-01-27 15:04:12
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
[...]

mysql> alter table status_system_hist type=ndb;
ERROR 1025 (HY000): Error on rename of './master/#sql-e80_1' to 
'./master/status_system_hist' (errno: 708)
mysql> show create table status_system_hist;
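
The NDB error behind an errno like 708 can be decoded with the perror
utility that ships with MySQL (a sketch; the output wording varies by
version):

shell> perror --ndb 708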


Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jan Kirchhoff

sheeri kritzer schrieb:

Why are you using a heap table?
  
We started out with a myisam-table years ago when the table was much 
smaller and less frequently updated. We tried innodb about 2 or 3 years 
ago and couldn't get a satisfying result. We then changed it to HEAP and 
everything was fine.
Now we are getting locking problems as the number of updates and selects 
constantly increases, and we need to upgrade our server hardware anyway. I 
like the scalability of clusters for load-balancing and HA, and we have 
had problems with our mysql replication on the heavy-load servers 
(total > 2,000 updates/sec average) every 2-3 months that we couldn't 
reproduce. Other replications with less throughput run stable for years 
(same kernel, same mysqld). I'd get rid of all my replication problems 
when I put the most frequently updated tables into a cluster...

My company has tables with much more information than that, that get
updated much more frequently.  We use InnoDB tables, with very large
buffer sizes and have tweaked which queries use the cache and which
don't, on a system with lots of RAM (10Gb).  Basically we've set it up
so everything is in memory anyway.

Perhaps a similar setup would help for you?
  
That sounds interesting, since we couldn't get good performance using 
innodb in our case - but that's a few years ago. Things may have changed? 
I'll definitely give it a try next week, too.
Could you give me more information on your system? Hardware, size of the 
table, average number of updates/sec?


thanks for your suggestions
Jan




Re: Performance of MEMORY/HEAP-tables compared to mysql-cluster?

2006-01-27 Thread Jan Kirchhoff

sheeri kritzer schrieb:

No problem:

Firstly, how are you measuring your updates on a single table?  I took
a few binary logs, grepped out for things that changed the table,
counting the lines (using wc) and then dividing by the # of seconds
the binary logs covered.  The average for one table was 108 updates
per second.
I'm very intrigued as to how you came up with 2-300 updates per second
for one table. . . did you do it that way?  If not, how did you do it?
(We are a VERY heavily trafficked site, having 18,000 people online
and active, and that accounts for the 108 updates per second.  So if
you have more traffic than that. . . wow!)
  
Thanks for your hardware/database information. I will look at that close 
tomorrow since I want to go home for today - it's already 9 pm over 
here... I need beer ;)

We are not running a webservice here (actually we do, too, but that's on 
other systems). This is part of our database with data of major stock 
exchanges worldwide that we deliver realtime data for.
Currently there are around 900,000 quotes; during trading hours they 
change all the time... We have much more updates than selects on the 
main database.
Our application that receives the datastream writes blocks (INSERT ... 
ON DUPLICATE KEY UPDATE ...) with all records that changed since the last 
write. It gives me debug output like "[timestamp] Wrote 19427 rows in 6 
queries" every 30 seconds - and those are numbers that I can rely on.


Jan
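
The write pattern described above, as a sketch (table and column names
are invented; VALUES() refers to the value the row would have inserted):

INSERT INTO quotes (symbol, exchange, last_price, last_update)
VALUES
  ('AAA', 'XETRA', 12.34, NOW()),
  ('BBB', 'XETRA', 56.78, NOW())
ON DUPLICATE KEY UPDATE
  last_price  = VALUES(last_price),
  last_update = VALUES(last_update);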





Re: will a cluster be faster than a heap-table?

2005-10-28 Thread Jan Kirchhoff

Hi Brent,

Wow, it seems like you are going to extremes. To jump from myisam to  
heap is a big step. Did you try using InnoDB? It would handle locking  
issues much better since it doesn't lock the table. Heap tables can  
be pretty dangerous since it's all in memory. If the machine crashes,  
you'll lose the data.


I know that, but I do regular (cronjob) backups to myisam-tables and I 
can reconstruct the whole table from the machines that insert/update the 
data.


Based on your information, you want to get the best disk I/O you can.  
You won't get that out of a single IDE drive, even if it is one of  
the latest SATA based with command queuing. I don't think you'll get  
anything faster than heap tables and tons of RAM. But there is  
certainly finite scalability because of the use of RAM. Clusters may  
be the way to go for scalability, but I would work on getting your  
data disk based for maximum scalability.


For my case, scalability means more updates/second and more 
selects/second. Not larger tables. At least not much larger, and this 
table is using less than 300 MB of memory right now. So I see no point 
in using anything disk-based. 2 years ago we started with myisam, then 
changed to innodb, found out it wouldn't give any better performance in 
our case and switched back to myisam since that makes the setup of 
replications much easier. Then we changed it again 6 months ago and now 
use memory-tables.


I would try InnoDB and maximize you disk setup. I don't know how many  
disks you have in your RAID and if it's hardware or software based.  
More disks will add speed by splitting the load across more disks.  
Just keep in mind the limits of your SCSI card too. You may need to  
add a card to split the load.


These two systems have hardware RAID (SCSI storage controller: LSI Logic 
/ Symbios Logic (formerly NCR) 53c1030 (rev 07)) with 2 disks in RAID1, 
and the DB is maybe 40 GB in size. I have no performance trouble on any 
other table.


Jan




will a cluster be faster than a heap-table?

2005-10-27 Thread Jan Kirchhoff

Hi,

I am currently using a replication setup on two servers with mysql
4.1.13-standard-log (master/slave each a P4 2.4ghz, 3GB RAM, Hardware 
SCSI-RAID).

I have a table that has lots of updates and selects. We converted this table
(along with other tables) from a myisam to a heap-table 6 months ago which
increased the performance a lot, but we are hitting the limits again - on the
master as well as on the slave.

We are only talking about 50-60 queries/second, in peaks maybe 90 q/sec (which
means more selects, but not much more inserts), but the inserts are
bulk inserts (each around 2500-3000 rows) doing INSERT INTO ... ON DUPLICATE KEY
UPDATE queries, updating a total of around 50,000 rows/minute in the daytime.
We are getting locking problems, with selects having to wait for 5 seconds or
sometimes even much longer.
We expect that the number of inserts will increase slowly while the selects
will grow much faster pretty soon. The selects are all optimized and respond
within 0.x or 0.0x seconds in a mysql shell in case they are not locked by an
insert. It is weird that those inserts that usually take only 1-2 seconds
(I never saw anything older in the processlist) now sometimes take 10 seconds or
more while more and more selects are waiting in the Locked status. I saw this
behaviour 3 weeks ago for the first time and maybe 4 or 5 more times since 
then...

I am just wondering if a cluster-setup would help us speed up the system.
If I understand this right, it is no problem to mix NDB-tables an
memory/myisam-tables. I'd just have to install a cluster-enabled version of
mysqld and set up 2 or more NDB-Nodes, right? I could then alter the few
speed-critical tables to the NDB-storage-type and would not have to change any
SQL?
I'd just take 2 simple Athlon64-PCs with 1GB-2GB RAM each, Gbit-Ethernet and
an IDE-Harddisk, no expensive Server-Hardware? Or would I need at least 4
Nodes to have a speed-improvement?

I've been trying to find answers on this on mysql.com but was not successful.
There is no info about possibilities of mixing NDB- and myisam-tables and all
documentation on mysql-cluster focuses more on HA than on speed, especially
the speed of cluster-tables compared to traditional mysql-memory-tables...

thanks for any help on this!

Jan

in case this is important:
show table status:
*** 25. row ***
   Name: memtable_spr
 Engine: HEAP
Version: 9
 Row_format: Fixed
   Rows: 777330
 Avg_row_length: 294
Data_length: 234729984
Max_data_length: 856336152
   Index_length: 52598232
  Data_free: 294
 Auto_increment: NULL
Create_time: NULL
Update_time: NULL
 Check_time: NULL
  Collation: latin1_swedish_ci
   Checksum: NULL
 Create_options:
Comment:


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication suddenly stops on mysql 4.1.7 with Slave_IO_Running: No

2005-02-07 Thread Jan Kirchhoff
We've had very good performance with the official mysql-icc-binaries, so 
I upgraded to 4.1.8 last weekend since there is no official 4.1.9 binary 
on the mysql.com-site...

It didn't help with my problems, I still have replication-crashes almost
every other hour. I put a fresh snapshot from the master onto the slave
but it didn't help either :(
A simple slave start helps, so I have a cronjob running right now
checking for the replication-status and issuing a slave start if
necessary.
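A minimal sketch of what such a check has to do (the cronjob script itself
is not shown here; SLAVE START is the pre-5.0 spelling used above):

  SHOW SLAVE STATUS\G   -- look at Slave_IO_Running / Slave_SQL_Running
  SLAVE START;          -- issue only when one of them says 'No'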

I have no other idea but to try the gcc build of 4.1.9 in about 3 weeks; I
have no possibility to take the master database down anytime before that :(

Gleb Paharenko schrieb:
Hello.

But I use 4.1.7, not 4.0.21 ...weird.

As said at:
 http://dev.mysql.com/doc/mysql/en/news-4-1-8.html

Fixed a bug which caused a crash when only the slave I/O thread was
stopped and started. (Bug #6148)

I suggest you to upgrade to the latest release (4.1.9 now).

Jan Kirchhoff [EMAIL PROTECTED] wrote:
[...]

Re: Replication suddenly stops on mysql 4.1.7 with Slave_IO_Running: No

2005-02-01 Thread Jan Kirchhoff
Gleb Paharenko schrieb:
Hello.

I've looked through the bug database, and the only thing
that I've found was an already-closed bug:
 http://bugs.mysql.com/bug.php?id=6148
 

I had been looking around the Changelogs, but I had not found that one. 
Sounds pretty much like my problem :(
But I use 4.1.7, not 4.0.21 ...weird.

Check that your server passes rpl_relayspace.test. Go to the mysql-test
directory and execute:
 ./mysql-test-run t/rpl_relayspace.test   
 

This one runs without errors on the master and the slave...:
hostname:/usr/local/mysql-standard-4.1.7-pc-linux-i686-icc-glibc23/mysql-test# 
./mysql-test-run t/rpl_relayspace.test  
Installing Test Databases
Removing Stale Files
Installing Master Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables 
--basedir=.. --datadir=mysql-test/var/master-data --skip-innodb 
--skip-ndbcluster --skip-bdb
Installing Slave Databases
running  ../bin/mysqld --no-defaults --bootstrap --skip-grant-tables 
--basedir=.. --datadir=mysql-test/var/slave-data --skip-innodb 
--skip-ndbcluster --skip-bdb
Manager disabled, skipping manager start.
Loading Standard Test Databases
Starting Tests

TESTRESULT
---
rpl_relayspace [ pass ]  
---

Ending Tests
Shutting-down MySQL daemon
Master shutdown finished
Slave shutdown finished
All 1 tests were successful.
I'm not able to swap the mysql-software itself (I use the
icc-binary) for a gcc-version or to upgrade to 4.1.9 in the next 2-3
weeks. And looking at the changelogs on mysql.com I don't think it would 
change anything...
Hasn't anybody else had such problems with 4.1.x?

hostname:/usr/local/mysql-standard-4.1.7-pc-linux-i686-icc-glibc23/bin# 
./mysqld --version
./mysqld  Ver 4.1.7-standard for pc-linux on i686 (Official 
MySQL-standard binary)

(more detailed information on my systems in my initial mail from 2005-1-27)
btw: I also ran mysqlcheck -q and mysqlcheck -o on all tables last week 
to make sure the tables are OK...

 



Jan Kirchhoff [EMAIL PROTECTED] wrote:

Hi,

My problem still goes on... After having had the problem 2 more times
within 1 day, I decided to re-do the replication (copy the whole
database onto the slave with rsync and reset master and slave). That
only lasted for little more than 1 day and I ended up with the same error:

Could not parse relay log event entry. The possible reasons are: the
master's binary log is corrupted (you can check this by running
'mysqlbinlog' on the binary log), the slave's relay log is corrupted
(you can check this by running 'mysqlbinlog' on the relay log), a
network problem, or a bug in the master's or slave's MySQL code. If you
want to check the master's binary log or slave's relay log, you will be
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

I can look at the binlog with mysqlbinlog on the master and the slave;
no errors or problems.
After a simple SLAVE START without having done any changes to the
database, the slave thread started again and caught up with the master.

I've been using mysql's replication-feature since it first came up in
1999 or 2000 and dealt with lots of problems and workarounds, but this
one is weird. Any ideas anybody?

Jan


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Replication suddenly stops on mysql 4.1.7 with Slave_IO_Running: No

2005-01-31 Thread Jan Kirchhoff
Hi,
My problem still goes on... After having had the problem 2 more times 
within 1 day, I decided to re-do the replication (copy the whole 
database onto the slave with rsync and reset master and slave). That 
only lasted for little more than 1 day and I ended up with the same error:

Could not parse relay log event entry. The possible reasons are: the 
master's binary log is corrupted (you can check this by running 
'mysqlbinlog' on the binary log), the slave's relay log is corrupted 
(you can check this by running 'mysqlbinlog' on the relay log), a 
network problem, or a bug in the master's or slave's MySQL code. If you 
want to check the master's binary log or slave's relay log, you will be 
able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.

I can look at the binlog with mysqlbinlog on the master and the slave; 
no errors or problems.
After a simple SLAVE START without having done any changes to the 
database, the slave thread started again and caught up with the master.

I've been using mysql's replication-feature since it first came up in 
1999 or 2000 and dealt with lots of problems and workarounds, but this 
one is weird. Any ideas anybody?

Jan
Hello,
I have a replication setup on two linux boxes (debian woody, kernel
2.4.21-xfs, mysql 4.1.7-standard official intel-compiler binary from 
mysql.com).

master:~# mysqladmin status
Uptime: 464848  Threads: 10  Questions: 296385136  Slow queries: 1752  
Opens: 2629  Flush tables: 1  Open tables: 405  Queries per second 
avg: 637.596

slave:~# mysqladmin  status
Uptime: 463460  Threads: 2  Questions: 292885156  Slow queries: 6  
Opens: 2510  Flush tables: 1  Open tables: 327  Queries per second 
avg: 631.953

both systems have identical hardware (P4 2.4ghz, 3GB RAM,
SCSI-Hardware-RAID); connection is gigabit-ethernet.

Everything used to work fine, but I wanted to get rid of InnoDB since
I did only use that for very big tables containing historical data and
those tables were moved to another server. I ran out of disc space,
innodb-datafiles can only grow but not shrink and I didn't need it
anyway, so it had to go.
I stopped the slave, changed all left over innodb-tables to myisam, 
added skip-innodb  to my.cnf on the master and the slave, restarted 
the server, renewed the replication by doing it the classical way: 
flush tables with read lock, copy the /var/lib/mysql on the slave (not
much, just around 20GB), reset master, unlock tables. Then start the 
slave-mysqld, reset slave, slave start.

Everything was fine and very fast for 4 days (from saturday till 
wednesday afternoon), then suddenly the slave stopped.
this is where the weird stuff starts:
"show slave status" tells me everything is fine, just
"Slave_IO_Running: No" is wrong.
After typing slave start, it says "Slave_IO_Running: Yes" and
"Slave_SQL_Running: No". Very strange. Now I did a slave stop;slave
start; and everything is fine again, the slave catches up and goes
on. Today (thursday afternoon), the same thing happens again and can
be solved again by slave stop;slave start;. Now it happened again
around 10pm. Again, the stop-start-trick made it work again.

I add the output of my mysql-shell
Can anybody help me with that?
This is a production system under heavy load and I can't play around 
with different mysql-versions and such...
If I don't find a solution really quick, I'll have to help myself
with some shell-script daemon checking if replication is running and
issuing stop slave;start slave-commands otherwise... not really the
way it should be :(

Thanks
Jan
SLAVE:
slave:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Pentium(R) 4 CPU 2.40GHz
stepping: 7
cpu MHz : 2392.077
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 4771.02

slave:~# free
             total       used       free     shared    buffers     cached
Mem:       3105104    2355364     749740          0        440    1514104
-/+ buffers/cache:     840820    2264284
Swap:       779144     428072     351072
MASTER
master:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Pentium(R) 4 CPU 2.40GHz
stepping: 7
cpu MHz : 2392.163
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 4771.02

master:~# free

Re: How big is too big?

2005-01-28 Thread Jan Kirchhoff
Misao schrieb:
Our production databases here are really growing and getting to be rather
big. The question on our minds is; when is a database or table just too big?
 

We have a few 20-30GB-InnoDB-Tables (growing) without any problems 
(mysql 4.1.5gamma).
The limits of mysql are somewhere in the terabyte-area I think, there is 
information on that in the manual.
I assume your problem would probably be hardware/performance at some point.

The machine that we are running that big database on is a dual-Opteron, 
8gigs of RAM, 750GB RAID 1+0 SATA-Hotswap.
No problems so far... nice piece of hardware ;)

I have 2 or 3 tables that the MySQL Administrator can't even get a size on. It
reports it as 0 bytes, but the little picture bar shows that these tables
take up almost 1/3 of the database size. I think these tables could be as
big as 8GB, but we have quite a few above 1GB.
 

Seems like a problem of MySQL Administrator. Check that you are using the
newest version, else change your frontend or file a bug-report.

Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Replication suddenly stops on mysql 4.1.7 with Slave_IO_Running: No

2005-01-27 Thread Jan Kirchhoff
Hello,
I have a replication setup on two linux boxes (debian woody, kernel 2.4.21-xfs,
mysql 4.1.7-standard official intel-compiler binary from mysql.com).

master:~# mysqladmin status
Uptime: 464848  Threads: 10  Questions: 296385136  Slow queries: 1752  Opens: 
2629  Flush tables: 1  Open tables: 405  Queries per second avg: 637.596

slave:~# mysqladmin  status
Uptime: 463460  Threads: 2  Questions: 292885156  Slow queries: 6  Opens: 2510 
 Flush tables: 1  Open tables: 327  Queries per second avg: 631.953

both systems have identical hardware (P4 2.4ghz, 3GB RAM, SCSI-Hardware-RAID);
connection is gigabit-ethernet.

Everything used to work fine, but I wanted to get rid of InnoDB since I did
only use that for very big tables containing historical data and those tables
were moved to another server. I ran out of disc space, innodb-datafiles can
only grow but not shrink and I didn't need it anyway, so it had to go.
I stopped the slave, changed all left over innodb-tables to myisam, added 
skip-innodb  to my.cnf on the master and the slave, restarted the server, 
renewed the replication by doing it the classical way: flush tables with
read lock, copy the /var/lib/mysql on the slave (not much, just around 20GB),
reset master, unlock tables. Then start the slave-mysqld, reset slave, slave 
start.
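Spelled out as statements, that classical way is (a sketch; the copy has to
happen while the read lock is still held):

  -- on the master:
  FLUSH TABLES WITH READ LOCK;
  -- ... copy /var/lib/mysql over to the slave ...
  RESET MASTER;
  UNLOCK TABLES;
  -- on the slave, after starting its mysqld:
  RESET SLAVE;
  SLAVE START;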

Everything was fine and very fast for 4 days (from saturday till wednesday 
afternoon), then suddenly the slave stopped.
this is where the weird stuff starts:
"show slave status" tells me everything is fine, just "Slave_IO_Running: No"
is wrong.
After typing slave start, it says "Slave_IO_Running: Yes" and
"Slave_SQL_Running: No". Very strange. Now I did a slave stop;slave start;
and everything is fine again, the slave catches up and goes on. Today
(thursday afternoon), the same thing happens again and can be solved again by
slave stop;slave start;. Now it happened again around 10pm. Again, the
stop-start-trick made it work again.

I add the output of my mysql-shell
Can anybody help me with that?
This is a production system under heavy load and I can't play around with 
different mysql-versions and such...
If I don't find a solution really quick, I'll have to help myself with some
shell-script daemon checking if replication is running and issuing stop
slave;start slave-commands otherwise... not really the way it should be :(

Thanks
Jan
SLAVE:
slave:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Pentium(R) 4 CPU 2.40GHz
stepping: 7
cpu MHz : 2392.077
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 4771.02

slave:~# free
             total       used       free     shared    buffers     cached
Mem:       3105104    2355364     749740          0        440    1514104
-/+ buffers/cache:     840820    2264284
Swap:       779144     428072     351072
MASTER
master:~# cat /proc/cpuinfo
processor   : 0
vendor_id   : GenuineIntel
cpu family  : 15
model   : 2
model name  : Intel(R) Pentium(R) 4 CPU 2.40GHz
stepping: 7
cpu MHz : 2392.163
cache size  : 512 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 2
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips: 4771.02

master:~# free
             total       used       free     shared    buffers     cached
Mem:       3105104    3096016       9088          0        648    2087780
-/+ buffers/cache:    1007588    2097516
Swap:       779144     391732     387412

Slave shell:
wpdb2:~# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 23083 to server version: 4.1.7-standard
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
wpdb2 mysql> show slave status\G
*** 1. row ***
 Slave_IO_State:
Master_Host: 192.168.10.26
Master_User: repl
Master_Port: 3306
  Connect_Retry: 10
Master_Log_File: mysql-bin.000210
Read_Master_Log_Pos: 146168522
 Relay_Log_File: wpdb2-relay-bin.000210
  Relay_Log_Pos: 146168608
  Relay_Master_Log_File: mysql-bin.000210
   Slave_IO_Running: No
  Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
 Replicate_Do_Table:
 Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
 

Re: very large HEAP-tables in 4.1.3

2004-08-07 Thread Jan Kirchhoff
Harrison, thanks for your mail,
I think mysql uses way too much memory (overhead) to store my data.

How much overhead do you think it is using?  Each row is 61 bytes in 
geldbrief, which is *exactly* the amount needed for the datatypes you 
have.
[...]
Now if you take 61 * 2449755 (number of rows) = 149435055 bytes used:

  157468096  Real amount
  149435055  Data size
  ---------
    8033041  Overhead
       5.1%  Total overhead of data
I don't see how you could get it to be much smaller than that.  Even 
with zero overhead it would only be 5% smaller, which still would 
require *a lot* of memory to store it all.  The primary key itself is 
only using about 8 bytes of memory per row (because it is a hashed 
index, btree would be much larger), which is also very compact.
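For comparison: as of 4.1 the index type on a HEAP table can be chosen
explicitly, so hash and btree memory use can be measured side by side. A
sketch with a hypothetical table:

  CREATE TABLE t_hash  (id INT NOT NULL, INDEX USING HASH (id))  ENGINE=HEAP;
  CREATE TABLE t_btree (id INT NOT NULL, INDEX USING BTREE (id)) ENGINE=HEAP;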

With your own in-memory database, do you use some sort of compression
algorithm?  That is the only way that I could see it taking up much
less space.  MySQL is pretty close to as efficient as you can get 
without compression.
I did that calculation after my last post, too. I should have done that
earlier ;) There is compression in our old solution, but I never thought
it was very efficient. I was definitely wrong, we get around 20% more
data in our old database. That was what I thought mysql's overhead had
to be since I didn't take the compression too seriously.

With all of that being said, I would just go with InnoDB, which can 
buffer the data in memory as well.  In a later email you mention that 
you need to delete a lot of rows per hour.  HEAP wouldn't work all 
that well for that since it uses table level locks.  If it took 5 
seconds to delete a large portion of rows, then the table would be 
locked for the duration of that.  InnoDB, with its row level locking, 
would be much better for that purge process.  If you turn off the 
innodb_flush_log_at_trx_commit (which would most likely be OK in your
case) then most of the inserting would be done in memory anyways, and 
only written to disk in batches.

The other option is as you said before using myisam merge tables.  
That will make dropping old records easier if you partition the data 
into the separate days.  The only thing I would be worried about in 
that scenario is if inserts are coming from multiple threads.  In that 
case you could still possibly run into locking issues with the inserts 
locking each other.

I'm away next week, but I guess I'll just give all options a try on our 
development-system afterwards. I guess if something works with 2GB of 
RAM, it should also work with 6GB (Opteron). Or does mysql's behaviour
change with very big heap-tables or key_buffers?

Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


very large HEAP-tables in 4.1.3

2004-08-06 Thread Jan Kirchhoff
I was just wondering if anybody has been using very large HEAP-tables
and if there are ways to have mysql use the memory more efficiently.
(I have no experience with heap-tables other than using them as temporary
tables...)

I just started testing with 2 heap-tables on a development-system (p4 
3.2ghz, 2GB RAM) to get an idea of what's possible and what's not:

I think mysql uses way too much memory (overhead) to store my data. I've 
fed the database with realtime-data for 1 hour now and I need at least 
30 times as much in the tables. (200 times would be perfect - of course 
on some other machine with more memory)
Right now top tells me that mysql is using around 10% of the memory.

I already increased  max_heap_table_size and I will have to increase it 
much more, but right now it seems that I would need 6GB of RAM to get my 
minimum amount of data in those tables. Which means I'd need a 
64bit-system. But can mysql deal with 6GB-HEAP-tables???

So has anybody tried something like this, yet? We are currently using 
our own, self-written databases for this, but we are thinking about 
either using a sql-database in order to be able to make joins to other 
tables or adding features to our own software (64bit-support, sql-like 
interface etc.). If it works with mysql, we'd probably prefer that since
it's much less work for us and easier to handle. It doesn't matter if
mysql uses a little more memory, but right now it seems like mysql is 
wasting way too much memory :(

thanks for any help!
Jan
| max_heap_table_size | 49744 |


mysql show table status\G
*** 1. row ***
  Name: geldbrief
Engine: HEAP
   Version: 9
Row_format: Fixed
  Rows: 2449755
Avg_row_length: 61
   Data_length: 157468096
Max_data_length: 60634
  Index_length: 19690688
 Data_free: 0
Auto_increment: NULL
   Create_time: NULL
   Update_time: NULL
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options: max_rows=2000
   Comment:
*** 2. row ***
  Name: umsaetze
Engine: HEAP
   Version: 9
Row_format: Fixed
  Rows: 236425
Avg_row_length: 45
   Data_length: 11402880
Max_data_length: 535713975
  Index_length: 1942648
 Data_free: 0
Auto_increment: NULL
   Create_time: NULL
   Update_time: NULL
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options: max_rows=2000
   Comment:
2 rows in set (0.00 sec)
CREATE TABLE `geldbrief` (
 `symbol` char(12) NOT NULL default '',
 `quelle` int(10) unsigned NOT NULL default '0',
 `kurszeit` datetime NOT NULL default '0000-00-00 00:00:00',
 `ticknumber` int(10) unsigned NOT NULL default '0',
 `bid` double(16,4) default NULL,
 `bidsize` double(16,4) default NULL,
 `ask` double(16,4) default NULL,
 `asksize` double(16,4) default NULL,
 PRIMARY KEY  (`symbol`,`quelle`,`kurszeit`,`ticknumber`)
) ENGINE=HEAP DEFAULT CHARSET=latin1 MAX_ROWS=2000
CREATE TABLE `umsaetze` (
 `symbol` char(12) NOT NULL default '',
 `quelle` int(10) unsigned NOT NULL default '0',
 `kurszeit` datetime NOT NULL default '0000-00-00 00:00:00',
 `ticknumber` int(10) unsigned NOT NULL default '0',
 `kurs` double(16,4) default NULL,
 `umsatz` double(16,4) default NULL,
 PRIMARY KEY  (`symbol`,`quelle`,`kurszeit`,`ticknumber`)
) ENGINE=HEAP DEFAULT CHARSET=latin1 MAX_ROWS=2000
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: very large HEAP-tables in 4.1.3

2004-08-06 Thread Jan Kirchhoff
Philippe Poelvoorde wrote:
Maybe you should try to normalize your table,
'symbol' could have its own table, that would reduce data and index.
And then try to reduce the size of your rows; bidsize and asksize
should be integers I think. Maybe 'float' would be enough.
What represents the 'quelle' column ?
Is kurszeit necessary in your primary key ?
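A sketch of that first point, with an invented lookup table (column sizes
are guesses):

  CREATE TABLE symbols (
    symbol_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
    symbol    CHAR(12) NOT NULL,
    PRIMARY KEY (symbol_id),
    UNIQUE KEY (symbol)
  ) ENGINE=MyISAM;

geldbrief and umsaetze would then carry the 2-byte symbol_id instead of the
12-byte CHAR column.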

I changed a few columns, bidsize and asksize are integer now, and I
changed ticknumber to smallint unsigned.
At first I used the ticknumbers from the feedserver, now I count up to
65,000 and then reset the counter back to 0. I need that additional
column to handle multiple ticks within one second.
Now I have a row_length of 41 instead of 61 on the geldbrief-table,
but there is still just way too much memory-usage.

quelle is the stock-exchange (source);
That table should store trades and bid/asks of stock-exchanges, so the 
primary key has to include:

symbol ( i.e. IBM)
quelle (numeric code for the stock-exchange)
date and time
ticknumber (in order to be able to handle multiple ticks per second)
any more suggestions?
Maybe I'll test how an InnoDB-table with a huge innodb_buffer_pool_size
will work. But since I'll have to do big deletes once every hour (kick
old records) I have no idea if that would work out on a table with much
more than 100,000,000 rows and inserts coming in all the time...
Another idea is to use a bunch of myisam-tables (4 or more for each day) 
and a merge-table. I could then do a flush tables with write 
lock;truncate table big_merge_table;unlock tables; on the myisam-tables 
to delete the old rows.
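A sketch of that merge-table layout (table names and the shortened column
list are hypothetical; CREATE TABLE ... LIKE needs 4.1):

  CREATE TABLE ticks_a (
    symbol   CHAR(12) NOT NULL,
    kurszeit DATETIME NOT NULL,
    kurs     DOUBLE(16,4),
    INDEX (symbol, kurszeit)
  ) ENGINE=MyISAM;
  CREATE TABLE ticks_b LIKE ticks_a;

  CREATE TABLE ticks_all (
    symbol   CHAR(12) NOT NULL,
    kurszeit DATETIME NOT NULL,
    kurs     DOUBLE(16,4),
    INDEX (symbol, kurszeit)
  ) ENGINE=MERGE UNION=(ticks_a, ticks_b) INSERT_METHOD=LAST;

  TRUNCATE TABLE ticks_a;   -- dropping the oldest slice stays cheap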
I don't think that the disc-based table engines can respond quickly enough
and handle all the inserts at the same time... but I might give it a try 
next week.

Does anybody have comments on those two ideas in case my 
in-memory-concept doesn't work..

thanks
Jan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: very large HEAP-tables in 4.1.3

2004-08-06 Thread Jan Kirchhoff
Philippe Poelvoorde wrote:
Hi,
I changed a few columns, bidsize and asksize are integer now, and I
changed ticknumber to smallint unsigned.
At first I used the ticknumbers from the feedserver, now I count up to
65,000 and then reset the counter back to 0. I need that additional
column to handle multiple ticks within one second.
Now I have a row_length of 41 instead of 61 on the geldbrief-table,
but there is still just way too much memory-usage.

quelle is the stock-exchange (source);

So normally a contract is traded on a principal exchange, not two, I 
would eventually suggest doing that :
Hi Philippe,
That might be right if you only watch one country, but we currently have 
35 international exchanges. So we don't have just one major exchange for 
IBM but we have NYSE, LSE, Xetra (german) etc.

It looks like we'll try something disc-based and have a memory-database 
only with ask- and asksize for the important exchanges. We want to get 
rid of our old solution and it seems like the massive amount of data 
just doesn't fit into memory with mysql because of the overhead mysql has.
The idea of introducing a numeric code instead of the char(12)-symbols 
and having a translation-table might be interesting. It makes everything a
little less comfortable but saves a few bytes... I could split the data 
in individual tables for each exchange... It would save another 2 bytes 
for the exchange-ID... but the applications will have to choose the 
right table... I guess we'll have some discussion on that here in the 
company next week.

I don't think that the disc-based table engines can respond quickly
enough and handle all the inserts at the same time... but I might 
give it a try next week.

or one table per symbol maybe ?
ehmmm..  30 tables? not a good idea ;) I'd split it in tables by 
the time since that makes cleaning it up much easier (truncate table is 
much faster than delete from table where
datefield < date_sub(now(), interval 5 day))

thanks for your help!
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Problem with Mysql 4.0.18 + Debian

2004-08-03 Thread Jan Kirchhoff
That sounds like a typical mod_perl-problem. The script is making new 
connections and doesn't close the old ones.
You should add debug-code to your script and add
* * * * * root mysql -e 'show processlist' > /tmp/mysql_processlist_debug_`date +%s`.txt
to your /etc/crontab in order to log the processlist once every minute 
in a txt-file in /tmp

Jan
[EMAIL PROTECTED] wrote:
Thanks for the two responses.
William Mussatto said:
 

Are you running mod_perl?
   

Yes I am, with Perl 5.8.3.
Victor Pendleton said:
 

What does mysql show processlist look like?
   

Here is what it looks like currently, but the system is not in its
unresponsive phase right now.  I can't force it to go all wonky on me,
it will probably be tomorrow before the process count explodes again.
+----+---------+-----------+---------+---------+------+-------+------------------+
| Id | User    | Host      | db      | Command | Time | State | Info             |
+----+---------+-----------+---------+---------+------+-------+------------------+
|  8 | citidel | localhost | citidel | Sleep   |    0 |       | NULL             |
| 71 | citidel | localhost | citidel | Sleep   | 2192 |       | NULL             |
| 72 | citidel | localhost | citidel | Sleep   | 2141 |       | NULL             |
| 78 | citidel | localhost | citidel | Sleep   | 1503 |       | NULL             |
| 79 | citidel | localhost | citidel | Sleep   | 1503 |       | NULL             |
| 87 | citidel | localhost | citidel | Sleep   |  741 |       | NULL             |
| 88 | citidel | localhost | citidel | Sleep   |  730 |       | NULL             |
| 89 | citidel | localhost | citidel | Sleep   |  607 |       | NULL             |
| 95 | citidel | localhost | citidel | Query   |    0 | NULL  | show processlist |
+----+---------+-----------+---------+---------+------+-------+------------------+


Ryan Richardson said:
 

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 8/2/04 9:16 AM
Subject: Problem with Mysql 4.0.18 + Debian
Hello:
 I posted this before but I have not gotten a response.  I have a
Debian
(woody) server running a good sized database (7.2GB of db files), Mysql
4.0.18.  I am running Apache 1.3.29 + perl and using mysql as the
backend.  In my.cnf, I have max_connections=300.
Here's the problem.  I had the site up several days, with everything
running perfectly.  Ordinarily there would be about 11 mysql processes
running.  However, after a few days of running smoothly, the number of
mysql processes increases to over 170, and the site crashes.  Formerly
I'd
get errors like
DBI connect('yada yada, ...) failed: Too many connections at DB.pm line
25
However once I set max_connections to 300 (default is 90), mysql will
still accept connections, but it is still way too slow to be usable, so
the website becomes unreachable.
I've read on this list that people running MySQL w/ FreeBSD can have
similar sounding problems.  I am wondering if there is a connection.
I know that the site is getting virtually no traffic, so the problem is
not that it is being overloaded.  I have tried this scenario at least a
dozen times, and the same thing always happens.
Ryan
   



 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: help tuning a DB installation

2004-08-03 Thread Jan Kirchhoff
Jim wrote:
Hi.  I'm wondering if anyone can help me tune this database so it runs 
better on my hardware.  I've made some attempts, but either they've 
made it worse or not changed anything.  Changing the database design 
itself has shown the most improvement, but I'd still like to know how 
to tune things on the db side.  I understand from the manual that the
two most important variables to configure are key_buffer_size and 
table_cache, but HOW to do this is another question.

The machine is a 1.2 GHz Celeron, with 512 MB of RAM and an IDE disk.
[...]
Top looks like this:
PID   USER   PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
15899 mysql  26   10 45004  12M  3864 S N  35.1  2.4  98:58   0 mysqld
What's the output of free? I assume there are no other applications 
running on that server except mysql and you are using myisam-tables?

increase key_buffer_size *a lot*. You have 512megs of RAM that mysql 
could use, but it's only using 2.4% of that. You should leave some RAM 
for the OS and its caching, but I'd start setting key_buffer_size to 
300megs. And change table_cache to 64 or more - depending on the number 
of concurrent connections.

Your table_cache is set to 4, that means all mysql-processes may have 
only 4 tables open at a time - in total!
Have a look at the docs at mysql.com, they explain what the variables
mean and how to find out how useful your settings are.

Looking at your settings I'd say your server spends most of the time
waiting for the disc since it does no caching (increase key_buffer_size 
so mysql can cache its indexes in memory) and the processes/clients are 
waiting to open a table since all of them may only have 4 of them open 
at a time...
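In my.cnf terms the suggestion above would look something like this (the
values are starting points to measure against, not a recommendation):

[mysqld]
key_buffer_size = 300M
table_cache     = 64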

Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Problem with Mysql 4.0.18 + Debian

2004-08-03 Thread Jan Kirchhoff
Jocelyn Fournier wrote:
Hi,
A quick fix would be to set the wait_timeout variable in the my.cnf to a
much smaller value than 28800 (default value).
Try to add wait_timeout=60 in the my.cnf for example; the connections should
be automatically closed after 60 seconds if they are not used anymore.
 

But as you wrote it's only a quick fix and when the load on the server
increases the problem will show up again... sooner or later.
Better find the bug in the perl-script, it's probably just a missing
$dbh->disconnect (or it's in the wrong place) as somebody else here
suggested already.
we did a lot of bigger web-projects with heavy load and load-balancing
on linux/apache/mysql/mod_perl - environments and it works perfectly,
and all problems I've had before that were like Ryan's were just because
of mistakes in my database-connection/disconnection functions.
Try google, there are lots of webpages dealing with the traps of
mod_perl and what one has to watch out for!
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: slave should not stop

2004-07-29 Thread Jan Kirchhoff
put the following option in your my.cnf on the slave in order to ignore 
errors. Just use the error-numbers you'd like to ignore:
slave-skip-errors=1053
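For skipping a single bad statement by hand there is also the counter
variant (a sketch; run it on the slave):

slave stop;
set global sql_slave_skip_counter = 1;
slave start;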

Jan
Jim Nachlin wrote:
Is there any way within mysql to have the slaves not stop replicating 
on an error.  For some reason, my application is trying to insert 
duplicate keys.  This fails and the slaves stop replicating from the 
master. Ideally, the command that failed would just be skipped 
automatically and the whole system keeps going.

If there's no way to fix this problem, has anyone come up with a 
workaround, like some sort of clever script that will keep things 
replicating?

Thanks,
Jim

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: MySQL and SSL

2004-07-28 Thread Jan Kirchhoff
Michael Dykman wrote:
could someone please tell me which versions of mysql support SSL
connections for both clients and replication slaves?
 

As far as I understand SSL was introduced in 4.0 and SSL-replication was 
introduced in 4.1.1.
http://dev.mysql.com/doc/mysql/en/SSL_options.html
http://dev.mysql.com/doc/mysql/en/Replication_Options.html

We've started using replication over the Internet in 2001 using 
SSH-Tunnels (SSH-Port-Forwarding) which works fine, too. We haven't had 
any problems.
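Such a tunnel is a one-liner plus a CHANGE MASTER; a sketch with invented
host names and the forwarded port 3307:

ssh -f -N -L 3307:127.0.0.1:3306 user@master
mysql> CHANGE MASTER TO MASTER_HOST='127.0.0.1', MASTER_PORT=3307;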

regards
Jan Kirchhoff

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-23 Thread Jan Kirchhoff
David Griffiths wrote:
We just put a new dual-Opteron server into our production environment. 
We ordered a Megaraid SCSI card and five 10k drives, and a 3Ware 
Escalade SATA card with six 7200 RPM drives (Maxtor) to see which ones 
were best.

Our network guy did a bunch of benchmarking on the drives and found 
that SCSI-RAID5 was a bit faster than SATA-RAID0+1.

The SATA was significantly cheaper (the 3Ware card was the same price 
as the Megaraid card, however). You might be able to tie a 10K SCSI 
rig if you went with the Western Digital Raptor drives.

We ended up putting the SATA drives in production - some bug in the 
SCSI driver kept crashing MySQL on index-creation, etc.

High Performance MySQL mentions that SCSI 15K drives are worth the 
extra money.

Thanks David for your post,
Does anybody else in this list have experience with SATA-RAIDs?
After having done some research it looks like we'll go with a
dual-Opteron, 8-12GB of RAM and a SATA-RAID10 with 8-10
250GB-SATA-discs. We are just waiting for the NCQ-SATA-drives to be
available and for 2 colleagues to return from vacation since we want 
everybody to be here when we do that major change. (looks like we'll 
order the system in 2-3 weeks if the harddiscs are available)

Our most important tables that get selects all the time and get updated 
up to 30 times a second each (or even more often depending on the time 
of the day) are of a total size of about 5-6 gigs.
Is it realistic thinking that mysql/innodb would keep those tables 
totally in memory and reply to all selects without reading from the disc 
when we increase innodb_buffer_pool_size to 7 or 8 gigs (assuming we 
have 12gigs of RAM)?

I just wanted to make sure nobody has hit problems with such systems. If
you could just send a short "We're doing something like that and it
works fine" I could definitely sleep better ;)

thanks for all the posts so far and pointing me towards the right direction!
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: find out who was online at a given time

2004-07-21 Thread Jan Kirchhoff
[EMAIL PROTECTED] wrote:
Problem: Spam Abuse
IP of offender: 66.50.xxX.245
Date of offense: 2004-07-05
Time of offense: 16:15
Now if I query the database based on date and ip address, I get the
following:
Id   Date        Time      Record Type  Full Name          IP Address
===  ==========  ========  ===========  =================  =============
349  2004-07-05  11:21:08  Start        [EMAIL PROTECTED]  66.50.xxX.245
345  2004-07-05  11:21:09  Start        [EMAIL PROTECTED]  66.50.xxX.245
413  2004-07-05  11:22:32  Stop         [EMAIL PROTECTED]  66.50.xxX.245
[...]
a time data type. What I need to be able to do is find the start before
the offense time, and the stop after the offense time so I know that the
person with the start and the stop is the one that committed the abuse.
 

What about the very simple approach?
This should be very fast if you have indexes on ip, date, time and
record_type.

select * from table where ip='1.1.1.1' and datefield='2004-07-05' and
timefield <= '16:15:00' and record_type='Start' order by timefield desc
limit 1;
select * from table where ip='1.1.1.1' and datefield='2004-07-05' and
timefield >= '16:15:00' and record_type='Stop' order by timefield limit 1;
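One combined index would serve both lookups (table and index names are
placeholders, as in the queries above):

alter table logtable
  add index abuse_lookup (ip, datefield, record_type, timefield);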


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: RAM-usage and hardware upgrade 10gb RAM

2004-07-20 Thread Jan Kirchhoff
Egor Egorov wrote:
Money is not really an issue but of course we don't want to waste it for 
scsi-hardware if we can reach almost the same speed with hardware 
sata-raids.
   

'Almost' is a key word. Some SCSI disks are working at 15k RPM, which will give
you a HUGE MySQL performance growth compared to 10k disks.

AFAIR, there are no 15k RPM SATA disks yet. 
 

But shouldn't a sata-based RAID10 with 8 discs do the job as well? Writes
would be spread on 4 discs...
Has anybody experience with those external SCSI-to-SATA RAIDs?
A SCSI-solution would cost twice as much, but would it really speed 
things up compared to a massive use of parallel (raid0) sata-discs?
I know disc i/o is the bottleneck in our case, of course we want the 
fastest disc/raid-system we can possibly get for our money.
Is our thinking too simple or shouldn't it be possible to reach the 
speed of fast scsi-discs by simply taking 2-3 fast sata-discs in a 
hardware raid0?
Our goal is a raid10, so reading should be even faster.

Money is not really an issue but of course we don't want to waste it for
scsi-hardware if we can reach almost the same speed with hardware
sata-raids.
We'd like to stay with x86 because all our hardware is intel/amd and all 
our servers are running debian-linux. Can we expect better performance 
or problems using kernel 2.6.x?
   

You can expect better performance on kernel 2.6.x of course, especially on
multiple requests. 
 

Has anybody experience with RAM-usage and cpu-architecture (please have
a look at my earlier post)?

thanks
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RAM-usage and hardware upgrade 10gb RAM

2004-07-19 Thread Jan Kirchhoff
Hi,
We are currently using a 4.0.16-replication-setup (debian-linux, kernel 
2.4.21, xfs) of two 2.4ghz Intel-Pentium4 systems with  3gig RAM each 
and SCSI-Hardware-Raid, connected via gigabit-ethernet. We are reaching 
the limit of those systems and are going to buy new hardware as well as 
upgrade to mysql 4.1.x. We will start testing our applications on 4.1.3 
within the next few weeks but our main problem is that we are not quite 
sure what hardware to buy...

We are planning to buy something like a dual-xeon system with 10-16gb of 
RAM and hardware raid10 with 8 sata-disks and as much cache as possible.
Will mysql be able to use the ram efficiently or are we hitting limits? 
AMD or Intel? 32bit or 64bit?
Money is not really an issue but of course we don't want to waste it for 
scsi-hardware if we can reach almost the same speed with hardware 
sata-raids.
We'd like to stay with x86 because all our hardware is intel/amd and all 
our servers are running debian-linux. Can we expect better performance 
or problems using kernel 2.6.x?
If it really adds performance we might change to something else but x86 
or change the OS, but definitely not for 2-5%.
We are going to keep the old servers as replication-slaves for big, time 
consuming selects and making backups.

We will have around 60,000,000 inserts/updates a day and lots of selects 
with joins on tables of all sizes (historical tables with 400,000,000 
rows as well as small tables with less than 500,000 rows) The whole size 
of the database will be around 200gb, growing up to 400gb in the next 12 
months. We are using innodb because we had big problems with the 
locking-issues of myisam. Some of the smaller tables that are updated 
all the time can be kept in memory if possible since their data is also
cached/backed up by the applications that insert/update the data. Has
anybody experienced problems with a innodb_buffer_pool_size > 10gb?
Disk-I/O is our main problem since all the updates go to various tables 
spread on the discs. Since most of the data can be reconstructed in case
of a crash it is ok for us to have delayed inserts and inserts being
cached in memory. Are there more options for innodb-tables than 
increasing innodb_buffer_pool_size and setting 
innodb_flush_log_at_trx_commit=0 that could speed up inserts/updates?
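In my.cnf terms, the knobs mentioned would look something like this (sizes
are placeholders for a 12GB box; innodb_log_file_size is not mentioned above
but is one more setting commonly raised for insert-heavy loads):

[mysqld]
innodb_buffer_pool_size        = 8192M
innodb_flush_log_at_trx_commit = 0
innodb_log_file_size           = 256M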

thanks for any help/suggestions...
Jan
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


replication crashed, a bug?

2001-01-20 Thread Jan Kirchhoff


My replication just crashed once again...

my my.cnf on the slave contains:

master-host = 123.123.123.123
master-user = rep
master-password = hidden

replicate-do-db = db1
server-id   = 6
replicate-ignore-table=db1.specials

I created a new DB on the master called "specials".

created 100 tables in specials called data00, data01, data02 etc.

I then did a 
replace into specials.data10 select * from db1.specials where mynumber LIKE 
'10%';
on the master which works fine, but the slave crashed and I found *nothing* 
in mysql.err, it was empty, even after I tried "SLAVE START" several times. I
then restarted the mysql-server on the slave and did "SLAVE START" again and
then found in mysql.err:

ERROR: 1146  Table 'specials.data10' doesn't exist 
010120 16:19:20  Slave:  error running query 'replace into 
specials.data10 select * from db1.specials where mynumber LIKE '10%''
010120 16:19:20  Error running query, slave aborted. Fix the problem, and 
re-start the slave thread with mysqladmin start-slave 
 
Of course, the database specials and the table don't exist on the slave; they
are not in the replication.

I now did a mysqldump -d specials > file
and created the database and tables on the slave to get the slave running 
again, but that's not how the problem should be solved.

Is this a bug? mysql 3.23.28-gamma

Thanks

Jan

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




inserts with now() resulting in replication-problems

2001-01-19 Thread Jan Kirchhoff


My Replication-Slave crashed a few days ago with an error in the error-log 
saying something like "duplicate primary key processing query "INSERT INTO 
testtable (time,name,number) VALUES (NOW(),'Hello',10)"

testtable's primary key was (time,name)
time: datetime
name: varchar
number: int

What I was wondering after that:

I do a "INSERT INTO testtable (time,name,number) VALUES (NOW(),'Hello',10)" on
the master. Then I do the *same* query again 1 minute later. No problem,
since time has passed and NOW() is now different, so I do not have a
duplicate key.

If the Slave gets out of sync (loses the connection to the server), connects 
again and tries to catch up, it processes those two queries *within one 
second* which would result in a duplicate primary key on the slave. The
Slave would abort with an error and the replication would stop. 

Is that right or did I misunderstand the concept?
I changed the query to REPLACE instead of INSERT which is, in this case, fine 
with me, but is it possible to tell mysql to ignore such problems and keep 
the Replication up? I'd like to make the server send me a mail or SMS on my
mobile phone if that happens. So if I could make the slave keep on
replicating but write a line into a logfile like "serious-errors.log", I
could then make some shell-script send me a mail...
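For the record, the REPLACE form of the query above is simply:

REPLACE INTO testtable (time,name,number) VALUES (NOW(),'Hello',10);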

I'm running mysql 3.23.28-gamma on debian linux on Athlon-PCs.

thanks for any explaination!

Jan


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: ANNOUNCE: myrepl - automate inital mysql replication setup on master server without down-time

2001-01-19 Thread Jan Kirchhoff


 * It'd be handy to create a compressed tar file (.tar.gz). I'll
   probably add that.

great ;) but I'll transfer it compressed with scp, so it's no big problem for
me. But a "-z"-switch would probably be useful for lots of people.

 * It'd be nice to specify which databases/tables not to snapshot
   (or to say "only snapshot these databases/tables"). If folks
   need that, it can be added also.

yes, just like mysqldump...? ;)


 Comments and feedback welcome. Half the reason I wrote this was to see
 if it would work. The other half is that I figured it'd be useful if
 it did. :-)

It surely is useful. I'll write two or three shell-scripts around it, so it
automatically transfers the tar-file to the slave and starts another script
there that puts the data in the right directory and restarts the
replication... So in case the replication crashes, it's just a "click" and
everything's fine again. ;) That'll save me a lot of time!

but I need to be able to select *one* database and exclude *one* table from
that database... if there's nobody else needing that, I'll just make some
changes in your script and customize it for my system, otherwise I (or you?)
could add that feature, although I won't find much time for that before
february. I'll see what the response is by then.


 Thanks,

 Jeremy

thanks to you for the script!
(I thought about programming something like that, too, but just didn't come
any further than thinking about it... I just didn't have more time. :( )

Jan

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




replication of tables with 10 million rows?

2001-01-19 Thread Jan Kirchhoff


one of my tables has ~10 million rows (but just 4 columns:
int,double,double,date (date,int as primary key)) and when I started a
replication on that database it crashed within 48 hours without any messages
in the error-log. The mysql-server stays up, just the replication dies. I
don't have much possibility of testing for a reproducible case since it now
is a production system with too much load for playing around with it. I have
backups of the whole database and might be able to test that in about 3 weeks
on other systems, but I'll test the new mysql-version first...

I just wondered if someone has a replication running with that number of rows
or even more in one table?

those crashes were really weird, we now excluded the table from the
replication and I'll split it up in 100 small ones, which we wanted to do
anyway since mysql's table-locking is really annoying when doing inserts or
updates... and we do about 200.000 updates/inserts each day, half of them
within one hour at night :(

I'm running mysql 3.23.28-gamma...

bye
Jan

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php