Re: Repairing packed MyISAM tables with no index file (.MYI)

2006-03-21 Thread Kayra Otaner
My question wasn't about how to compress tables; it was about how to
regenerate the indexes of a myisampack'ed table.


On Tue, 2006-03-21 at 16:40 +0530, Dilipkumar wrote:
 Hi,
 
 First Use this  to compress : 
 
  /usr/local/mysql/bin/myisampack   (table name) troubleticket.*
 
 then use this
 
  /usr/local/mysql/bin/myisamchk -rq (table name) troubleticket.*
 
 
 This might help you.
 
 
 Kayra Otaner wrote:
 
 Hello, 
 
 I have been using myisampack to pack old MyISAM tables to archive huge
 amounts of data. To save more space I decided to get rid of the index (.MYI)
 files, on the assumption that I could reconstruct those indexes
 whenever I needed them. I've rebuilt indexes on plain MyISAM tables with no
 problem. I always use: 
 
 repair table TABLENAME USE_FRM; 
 
 from the MySQL console to rebuild index files from scratch. When I try the
 same on packed MyISAM tables, MySQL fails. First it gives me a bunch of
 errors of the same type: 
 
 | test.z_976287758_978107517 | repair | info | Found block that points
 outside data file at 382300672 | 
 
 Then, when it finishes complaining about blocks outside the data file, it
 actually deletes the data file (.MYD): 
 
 -rw-r- 1 0 Mar 20 21:58 z_976287758_978107517.MYD 
 -rw-rw 1 1.0K Mar 20 21:59 z_976287758_978107517.MYI 
 -rw-r- 1 8.7K Mar 20 20:15 z_976287758_978107517.frm 
 
 Typically I would expect USE_FRM not to touch the actual data, but just
 rebuild the index file. When I try the same with the myisamchk command-line
 utility, it does the same thing. I use 5.0.18 on RHEL4 and RHEL3. I've tested
 this only on 5.x so far; I haven't tried 4.x, since its utilities seem to be
 different. 
 
 
 Any idea what is going on? Did I hit a bug? 
 
 Thanks. 
 
 Kayra Otaner
 
 
   
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Repairing packed MyISAM tables with no index file (.MYI)

2006-03-20 Thread Kayra Otaner
Hello, 

I have been using myisampack to pack old MyISAM tables to archive huge
amounts of data. To save more space I decided to get rid of the index (.MYI)
files, on the assumption that I could reconstruct those indexes
whenever I needed them. I've rebuilt indexes on plain MyISAM tables with no
problem. I always use: 

repair table TABLENAME USE_FRM; 

from the MySQL console to rebuild index files from scratch. When I try the
same on packed MyISAM tables, MySQL fails. First it gives me a bunch of
errors of the same type: 

| test.z_976287758_978107517 | repair | info | Found block that points
outside data file at 382300672 | 

Then, when it finishes complaining about blocks outside the data file, it
actually deletes the data file (.MYD): 

-rw-r- 1 0 Mar 20 21:58 z_976287758_978107517.MYD 
-rw-rw 1 1.0K Mar 20 21:59 z_976287758_978107517.MYI 
-rw-r- 1 8.7K Mar 20 20:15 z_976287758_978107517.frm 

Typically I would expect USE_FRM not to touch the actual data, but just
rebuild the index file. When I try the same with the myisamchk command-line
utility, it does the same thing. I use 5.0.18 on RHEL4 and RHEL3. I've tested
this only on 5.x so far; I haven't tried 4.x, since its utilities seem to be
different. 
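A defensive step worth adding before any repair attempt (a sketch of my own, not something from this thread): copy the .MYD aside first, since a failed repair can truncate it to zero bytes as the listing above shows. The datadir here is a temporary stand-in, and the real myisamchk invocation is left commented out:

```shell
# Back up the packed data file before repairing; a failed
# REPAIR ... USE_FRM can leave a zero-byte .MYD (see the listing above).
TABLE=z_976287758_978107517
DATADIR=$(mktemp -d)                           # stand-in for the real MySQL datadir
printf 'packed rows' > "$DATADIR/$TABLE.MYD"   # fake data file for the demo
cp -p "$DATADIR/$TABLE.MYD" "$DATADIR/$TABLE.MYD.bak"
# a real run would be something like:
# /usr/local/mysql/bin/myisamchk -rq "$DATADIR/$TABLE"
cmp -s "$DATADIR/$TABLE.MYD" "$DATADIR/$TABLE.MYD.bak" && echo "backup ok"
```

If the repair then destroys the .MYD, the backup copy can simply be moved back into place.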


Any idea what is going on? Did I hit a bug? 

Thanks. 

Kayra Otaner





Re: Unbelieveable with 4 Gbytes - MySQL 5 only 350 concurrent mysql connection... on Linux Redhat 9

2006-02-24 Thread Kayra Otaner
It can be either a Linux kernel setting or a MySQL configuration issue.

It can be a kernel issue because:
- Although you have 4G on your machine, your kernel may be set to give only
2G to a user-space process. You may need to upgrade to a 'hugemem' kernel,
which gives you either a 3/1 or a 4/4 memory split (you need 8G of memory
for the latter).
- The kernel's number-of-open-files limit (directly correlated with the
number of open tables in MySQL) can be lower than what your MySQL instance
needs.

It can be a MySQL configuration issue because:
- You may have incorrect settings in your my.cnf config file. Say you
assigned 2G to the InnoDB buffer pool and other buffers, and your kernel lets
a user-space process go up to 2G. For each incoming connection MySQL
dedicates at least 128K (thread stack) of memory, so for 350 connections you
end up using about 43M more, which pushes MySQL's total memory usage to
2G + 43M. This may be why you're seeing 'Can't create a new thread'.
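The arithmetic above can be checked quickly in a shell (numbers from this message: 350 connections, 128K thread stack):

```shell
# per-connection thread stack times number of connections
threads=350
thread_stack_kb=128
total_kb=$((threads * thread_stack_kb))                  # 44800 KB
total_mb=$(awk "BEGIN { printf \"%.2f\", $total_kb/1024 }")
echo "$total_kb KB = $total_mb MB"
```

And that is only the thread stacks; per-connection sort and read buffers come on top of it.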


Kayra Otaner

On Fri, 2006-02-24 at 20:51 +0700, Ady Wicaksono wrote:
 I have MySQL 5 installed on My Machine,
 
 But it could handle only a max of about 350 concurrent MySQL connections
 
 Unbelievable: I have 4 Gbytes, but on the 351st connection I always
 got Can't create a new thread (errno 12); if you are not out of 
 available memory, you can consult the manual for
  a possible OS-dependent bug in
 
 Is it mysql bug, incorrect Linux setting or what?
 
 
 
 
 
 





Index merge optimization with joined tables?

2005-12-01 Thread Kayra Otaner
Hello,

Is MySQL 5.0's index merge optimization also available for multiple-table
joins? When I run EXPLAIN on a 3-table join, only one key per table
is used:

SELECT R.* FROM L, M, R
WHERE
    L.key1 = 1
AND L.key2 = M.key2
AND M.key1 = R.key1

R table has 10 rows
M table has 24 million rows
L table has 9 million rows

The EXPLAIN plan shows only ref / eq_ref type optimization being used. The M
table has 2 key fields used in the joins, and only one of them is used. The
tables are InnoDB; their total size is around 9G. I'm using MySQL-Max
5.0.16.
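One thing worth trying (a sketch of my own; the index name and column order are assumptions, not from this thread) is a composite index on M covering both join columns, plus explicit JOIN syntax so the intended join order is obvious:

```sql
-- composite index so both of M's join columns can be used together
ALTER TABLE M ADD INDEX idx_key2_key1 (key2, key1);

EXPLAIN
SELECT R.*
FROM L
JOIN M ON M.key2 = L.key2
JOIN R ON R.key1 = M.key1
WHERE L.key1 = 1;
```

With a single composite index the optimizer no longer needs to merge two separate indexes on M at all.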

Any suggestions?

Thank you

Kayra Otaner






Re: Optimization suggestions when only using Innodb

2005-12-01 Thread Kayra Otaner
Please see my comments below :

On Thu, 2005-12-01 at 14:06 -0600, Ross Anderson wrote:
 I have read the suggestions for optimization for innodb however I'm 
 curious if anyone can help me understand which buffers are common (thus 
 used by innodb action) and which are specific to myiasm.  The server has 
 2GB of ram but runs multiple services. I have a number of different 
 types of connections also. Some are repeated queries to static tables 
 and others are dynamic read write to large tables.
 
 Thanks in advance!
 Ross Anderson
 
 mysql 4.0.25
 linux-2.6.14
 
 

MyISAM storage engine specific :
 key_buffer = 128M
 myisam_sort_buffer_size = 16M


Affecting all engines:
 thread_cache = 8
 #query_cache_type = ON
 query_cache_size= 32M
 # Try number of CPU's*2 for thread_concurrency
 thread_concurrency = 4
 sort_buffer_size = 2M
 read_buffer_size = 2M
 table_cache = 256

InnoDB specific:
 innodb_buffer_pool_size = 768M
 innodb_additional_mem_pool_size = 20M
 innodb_log_file_size = 256M
 innodb_log_buffer_size = 8M
 innodb_flush_log_at_trx_commit = 0
 innodb_lock_wait_timeout = 50
 innodb_status_file=0

Also, if there is too much I/O, try increasing the number of InnoDB log
files from 2 to something like 4. 

Check
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html

for a full list of system variables.
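The log-file suggestion above would look like this in my.cnf (a sketch; the sizes are illustrative, and on these MySQL versions resizing redo logs requires a clean shutdown and removing the old ib_logfile* files before restarting):

```ini
[mysqld]
innodb_log_files_in_group = 4
innodb_log_file_size      = 256M
```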






Re: MySQL Replicaiton

2003-08-29 Thread Kayra Otaner
Hi,

It looks like your slave can't connect to the master you've defined.
Here are the steps you need to follow:
1- On a shell, type:
mysql -h mail.dbi.tju.edu -urepl -pPASSWORD
Replace the word PASSWORD with your actual password. This will show whether
you can connect to the master from your slave machine. If you cannot
connect, check your password and make sure that on the master server the
user 'repl' has permission to connect from your slave machine. You can see
the permissions by checking the user table in the mysql db. 
2- If you can access the master server from the slave, the next step is to
make sure you have entered your password correctly in the /etc/my.cnf file.
You can change your replication user's password by typing: 
 change master to master_user = 'repl', master_password = 'PASSWORD'

After all these steps, type 'start slave' on the slave server to see if it
is working.
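Steps 1 and 2 can be scripted as a sketch (the host and user are from this thread; the password is a placeholder to fill in), building the statement once so the value tested in step 1 and the one given to CHANGE MASTER can't drift apart:

```shell
# build the CHANGE MASTER statement from one set of variables
MASTER_HOST=mail.dbi.tju.edu
REPL_USER=repl
REPL_PASS='PASSWORD'        # placeholder, replace with the real password
SQL="CHANGE MASTER TO MASTER_HOST='$MASTER_HOST', MASTER_USER='$REPL_USER', MASTER_PASSWORD='$REPL_PASS'; START SLAVE;"
echo "$SQL"
# when ready, feed it to the slave:
# echo "$SQL" | mysql -uroot -p
```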


Kayra Otaner



On Fri, 2003-08-29 at 15:10, Leo Genyuk wrote:
 I am having problems starting MySQL replication. I followed all the 
 steps outlined on the website, but replication is not working. The slave 
 status shows the following:
 
 mysql> show slave status\G
 *** 1. row ***
Master_Host: mail.dbi.tju.edu
Master_User: repl
Master_Port: 3306
  Connect_retry: 60
Master_Log_File: mail-bin.001
Read_Master_Log_Pos: 3651
 Relay_Log_File: blade4-relay-bin.001
  Relay_Log_Pos: 3133
 Relay_Master_Log_File: mail-bin.001
   Slave_IO_Running: No
  Slave_SQL_Running: Yes
Replicate_do_db:
Replicate_ignore_db:
 Last_errno: 0
 Last_error:
   Skip_counter: 0
Exec_master_log_pos: 3651
Relay_log_space: 3133
 1 row in set (0.00 sec)
 
 
 As you can see Slave_IO_Running is set to NO.
 
 I tried to start it manually with the following command:
   slave start IO_THREAD;
 without any luck. I have also tried to stop and start the slave server, 
 also without any luck.
 
 Thank you in advance for any help.
 
 Leo.
 
 




Re: Optimizing Mysql for large tables

2003-08-22 Thread Kayra Otaner
Joseph,

How big are your table files? Are they MyISAM or InnoDB? Do you have
indexes? How much memory do you have? Is your MySQL running on a
dedicated server, or do you run anything else on your db server?

These questions need to be answered before suggesting anything specific,
but some general suggestions are:

1 - Add indexes to your tables, and use those indexes when you're
selecting rows.
2 - Increase key_buffer_size to 32M, 64M, or even more, depending on your
server's memory.
3 - Add memory to your server if you have less than 256MB and want
fast db performance.
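Suggestions 1 and 2 as concrete statements, purely illustrative (the table, column and index names are placeholders, not from this thread):

```sql
-- 1: index the column your WHERE clauses filter on
ALTER TABLE big_table ADD INDEX idx_lookup (lookup_col);
-- verify the optimizer picks it up
EXPLAIN SELECT * FROM big_table WHERE lookup_col = 42;
```

For suggestion 2, set something like key_buffer_size = 64M under the [mysqld] section of my.cnf and restart the server.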

Kayra Otaner



On Thu, 2003-08-21 at 13:50, Joseph Norris wrote:
 Group,
 
 I have been working with Mysql for about 5 years - mainly in LAMP shops. The
 tables have been between 20-100 thousand records size.  Now I have a project
 where the tables are in the millions of records.
 
 This is very new to me and I am noticing that my queries are really
 slow!
 
 What are the options that I have to speed my queries on the mysql side with
 regards to the my.cnf file. I have a fair understanding of
 sql optimization and I understand explain. I just want to see if
 there is something that I can do with the server also.
 
 Thanks to all.
 
 #Joseph Norris (Perl - what else is here?/Linux/CGI/Mysql)
 print @c=map chr $_+100,(6,17,15,16,-68,-3,10,11,16,4,1,14,-68,12,1,14,8,
 -68,4,-3,-1,7,1,14,-68,-26,11,15,1,12,4,-68,-22,11,14,14,5,15,-90);
 
 




Re: DIFFERENTIAL BACKUPS

2003-08-21 Thread Kayra Otaner
This is my differential backup script. I've put it into cron so it runs
every hour automatically. I keep a master SQL dump in '/master/DBNAME', and
I compare the most recent dump against that file to calculate the diff.
Basically, it mysqldumps the database without buffering (-q), quoting table
and field names (-Q). It then splits this file into pieces of 500,000 lines
each. If you don't have big SQL dumps (more than 700-800MB) you can skip
this step; it exists to reduce the memory required by diff. I should say
that I'm running this diff on a server with 2GB of RAM. If you have less
memory, you can reduce the number of lines per split file, e.g. 50,000
lines instead of 500,000.
After splitting the files, I compare the differences between the /master
and /target directories. Just make sure your master directory contains
master database dumps split with the same number of lines.
The last step compresses the diff file with bzip2 and securely transfers it
to a designated backup server.


#!/bin/bash
# Copyright (C) 2002-2003 Kayra Otaner [EMAIL PROTECTED]
# This software may be freely redistributed under the terms of the GNU
# public license.
#
# MySQL differential backup script.

FILENAME=`date --date today +%Y-%m-%d_%H`
clear
cd /home/backup/db/
mysqldump -CQq --add-drop-table DBNAME -ubackup -pX > ./target/DBNAME.sql
cd target
split -l 50 DBNAME.sql DBNAME
cd ..
rm ./target/DBNAME.sql
diff -N -H -U 2 ./master ./target > diff
 
cp diff $FILENAME
bzip2 -9f $FILENAME
scp $FILENAME.bz2 [EMAIL PROTECTED]:~/db/
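The split-then-diff mechanics can be exercised without a MySQL server using throwaway files (a minimal sketch of my own; the filenames mirror the script above, and the bzip2/scp steps are omitted):

```shell
# two small "dumps" that differ in one row
WORK=$(mktemp -d)
mkdir "$WORK/master" "$WORK/target"
printf 'row1\nrow2\nrow3\n'  > "$WORK/master.sql"   # yesterday's dump
printf 'row1\nrow2b\nrow3\n' > "$WORK/target.sql"   # today's dump
# split both dumps into equal-sized chunks, as the script does
( cd "$WORK/master" && split -l 2 ../master.sql DBNAME )
( cd "$WORK/target" && split -l 2 ../target.sql DBNAME )
# unified diff of the two directories; diff exits 1 when they differ
diff -N -U 2 "$WORK/master" "$WORK/target" > "$WORK/diff" || true
changed=$(grep -c '^[+-]row' "$WORK/diff")
echo "$changed changed lines"
```

Only the chunks that actually changed show up in the diff, which is what keeps the hourly backups small.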




As I told you, this script looks old-fashioned, but it is working perfectly
for us. If there is a binary diff tool you can use, try the same with a
binary diff; binary diffs are commercial, though, and not 100% reliable.
Don't forget to change the user name, password and dbname parameters in the
script above. Let me know if you're having trouble customizing it.

Kayra Otaner




On Wed, 2003-08-20 at 19:42, Miguel Perez wrote:
 Do you mind being more explicit with your method please, I really appreciate 
 your help.
 
 Thanx in advanced...
 
 From: Kayra Otaner [EMAIL PROTECTED]
 To: Miguel Perez [EMAIL PROTECTED]
 Subject: Re: DIFFERENTIAL BACKUPS
 Date: 20 Aug 2003 19:01:17 -0400
 
 I do differential backups using an old-fashioned but solid method. Since
 there isn't a safe binary diff, I SQL-dump the database to a text file,
 unified-diff it, bzip2 it and send it to another server.
 So, it is possible with this method.
 
 Let me know if you need details.
 
 
 Kayra Otaner
 
 
 
 
 On Wed, 2003-08-20 at 18:33, Miguel Perez wrote:
   Hi list,
  
   Does anyone know how to do a differential backup for mysql databases. Is
   it possible?
  
   I use mysql 4.0.12 on a redhat 7.3 box.
  
   Greetings and thnx in advanced
  
  
 
 




MySQL 4.0.13 Memory problem under heavy load

2003-08-20 Thread Kayra Otaner
Hi all,

I'd like to get your opinions on how to increase available/free memory and
performance on a high-volume database server.

I have MySQL 4.0.13 running on RH 7.2, replicated to another RH 7.2 running
the same MySQL version.
Recently our master database server (2 AMD CPUs + 2GB memory + 2GB swap
space) started to suffer from memory outages because of heavy load.
During the day, available free memory varies from 200MB down to 5MB, and
when it reaches 5MB MySQL starts to give 'Too many connections' messages.
The db server handles 45-70 queries/second and more than 25,712 connections
per hour. There are 10-13 active threads serving the databases.

To increase available free memory, I've done the following:
1- Optimized all tables
2- Removed unnecessary/old indexes
3- Moved unused databases to the replication server
4- Increased key_buffer_size from 8MB to 128MB
5- Increased max_connections from 100 to 150
6- Increased thread_cache to 5

These changes helped a bit, but memory is still a problem for MySQL. What
should I do to prevent 'Too many connections' messages and have more memory
available on the database server? Should I remove more indexes from the
tables? Should I increase key_buffer_size to 256MB or more?
key_buffer_size doesn't look like the problem, since key efficiency is
close to 100% most of the time.
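A rough worst-case estimate for the settings in this message (128M key buffer, 150 connections); the per-connection buffer sizes below are assumed defaults, not taken from this server's my.cnf:

```shell
key_buffer_mb=128
max_conn=150
# sort_buffer (2M) + read_buffer (128K) + thread_stack (128K), in KB -- assumed values
per_conn_kb=$((2048 + 128 + 128))
worst_mb=$(awk "BEGIN { printf \"%.1f\", $key_buffer_mb + $max_conn * $per_conn_kb / 1024 }")
echo "worst case ~ $worst_mb MB"
```

Running the same arithmetic against the actual my.cnf values shows whether 150 simultaneous connections can even fit in 2GB alongside everything else on the box.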


Thank you for your time

Kayra Otaner

