Re: Queue / FIFO in MySQL?

2009-09-07 Thread Alex Arul Lurthu
Please check out http://q4m.31tools.com/. It is a message-queue storage
engine with an SQL interface. We haven't used it in a production setup,
though we did some tests.
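For anyone curious, a minimal sketch of how a Q4M queue is driven (from
memory of its documented interface; table and column names are illustrative,
and it assumes the Q4M plugin is installed):

CREATE TABLE msg_queue (
    body VARCHAR(255) NOT NULL
) ENGINE=QUEUE;

-- producer side
INSERT INTO msg_queue (body) VALUES ('hello');

-- consumer side: take ownership of one row, read it, then consume it
SELECT queue_wait('msg_queue');  -- blocks until a row is available
SELECT body FROM msg_queue;      -- in owner mode, sees only the owned row
SELECT queue_end();              -- removes the consumed row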

-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Reset of Status Parameters

2008-06-20 Thread Alex Arul Lurthu
I would also add Baron's Maatkit (http://www.maatkit.org/) and innotop for
InnoDB details to the arsenal.
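For the current rate rather than the lifetime average (the question below),
mysqladmin can also print deltas between samples; a sketch:

mysqladmin -u root -p -r -i 10 extended-status | grep -w Questions

With -r (relative) and -i 10, each report shows how much the counters changed
over the last 10 seconds instead of the totals since startup.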

On Fri, Jun 20, 2008 at 3:11 PM, Ian Simpson [EMAIL PROTECTED] wrote:

 I tend to use the 'mytop' program, which shows the average
 queries/second for the entire lifetime and for the last 5 seconds, as
 well as showing a bunch of other statistics and a list of running
 queries. It's a handy little monitoring tool.

 On Fri, 2008-06-20 at 12:17 +0530, Venu Madhav Padakanti wrote:
  I am using MySQL version 5.0.22 and I am interested in knowing the current
  performance of MySQL.
 
  With the status command we can get the queries per second, but that is
  averaged since the time MySQL was started, not the current rate.
 
  Is there any way to reset that counter so that the data reflects the
  current rate, without restarting MySQL?
 
  Thanks in advance
  ..venu
 --
 Ian Simpson
 System Administrator
 MyJobGroup





-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Please check whether the my.cnf configurations are the same.

What are your configuration parameters in terms of
innodb_flush_log_at_trx_commit, binary logging (log-bin), sync_binlog and
innodb_locks_unsafe_for_binlog?

If the systems have RAID, check whether the battery-backed write cache (BBWC)
is enabled on the new host and the controller is in write-back (WB) mode.
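For example, to compare the relevant settings on both hosts from the client
(a sketch):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW VARIABLES LIKE 'sync_binlog';
SHOW VARIABLES LIKE 'innodb_locks_unsafe_for_binlog';
SHOW VARIABLES LIKE 'log_bin';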


On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi list,

 Have a bit of a mystery here that I hope somebody can help with.

 I've just got a new server that I'm using as a dedicated MySQL server.
 In terms of hardware it's pretty much identical, if not slightly
 superior to an existing server already in production use.

 It's having a real struggle processing INSERT statements to InnoDB
 tables; it's maxing out at around 100 inserts per second, even with very
 simple two column tables (inserts into MyISAM tables run fine).
 Meanwhile, the original server can happily process around 1000
 inserts/sec into an identical table.

 The MySQL configuration of the two databases is identical, except for
 the tablespace file size (the new server has a larger tablespace
 defined), and the InnoDB logs (again, new server has larger logs).

 Can anybody suggest an area of investigation as to the cause?

 Thanks,
 --
 Ian Simpson





-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Also, how often do you issue a COMMIT? Batching the inserts inside a
transaction might help; see the sketch below.
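For example, instead of one implicit commit per row (a sketch; table and
values illustrative):

SET autocommit = 0;
INSERT INTO t (a, b) VALUES (1, 'x');
INSERT INTO t (a, b) VALUES (2, 'y');
-- ... a few hundred more rows ...
COMMIT;

Each COMMIT forces a log flush with innodb_flush_log_at_trx_commit=1, so
grouping many rows per transaction cuts the number of flushes dramatically.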

On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED] wrote:

 Check iostat to see whether the disk is heavily used.

 On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:

 Hi Alex,

 Configurations are identical, other than the differences I initially
 mentioned. I've diffed both the configuration files and the output of
 SHOW VARIABLES on both servers.

 I've contacted my hosting provider to ask about the RAID settings.

 Variable_name: innodb_flush_log_at_trx_commit
Value: 1
 Variable_name: sync_binlog
Value: 0
 Variable_name: innodb_locks_unsafe_for_binlog
Value: OFF

 Thanks

 --
 Ian Simpson

 On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu wrote:
  Please check whether the my.cnf configurations are the same.
 
   What are your configuration parameters in terms of innodb flush log
  trx commit, bin logging, sync binlog and innodb unsafe for binlog ?
 
  If the systems have raid, check if the BBWC is enabled on the new host
  and WB is enabled.
 
 
  On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson [EMAIL PROTECTED]
  wrote:
  Hi list,
 
  Have a bit of a mystery here that I hope somebody can help
  with.
 
  I've just got a new server that I'm using as a dedicated MySQL
  server.
  In terms of hardware it's pretty much identical, if not
  slightly
  superior to an existing server already in production use.
 
  It's having a real struggle processing INSERT statements to
  InnoDB
  tables; it's maxing out at around 100 inserts per second, even
  with very
  simple two column tables (inserts into MyISAM tables run
  fine).
  Meanwhile, the original server can happily process around 1000
  inserts/sec into an identical table.
 
  The MySQL configuration of the two databases is identical,
  except for
  the tablespace file size (the new server has a larger
  tablespace
  defined), and the InnoDB logs (again, new server has larger
  logs).
 
  Can anybody suggest an area of investigation as to the cause?
 
  Thanks,
  --
  Ian Simpson
 
 
 
 
  --
  Thanks
  Alex
  http://alexlurthu.wordpress.com






-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Very slow inserts into InnoDB tables

2008-06-13 Thread Alex Arul Lurthu
Replication-based inserts are serial, whereas inserts on the master are
mostly concurrent; this leads to the slaves falling behind. To tackle this we
have used the following strategies:

1. Use RAID 0 on the slaves (the master uses RAID 10) so as to speed up writes.

2. Prefetch and cache the data that needs to be modified by the slave SQL
thread.
3. Set innodb_flush_log_at_trx_commit to 2 or even 0.
4. Out of desperation, sometimes disable the InnoDB doublewrite buffer and XA
support; see the sketch below.
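A my.cnf sketch of points 3 and 4 (these trade durability for speed, and the
option spellings assume MySQL 5.0):

# point 3: write the InnoDB log to the OS on commit, fsync about once a second
innodb_flush_log_at_trx_commit = 2
# point 4: desperation measures
skip-innodb_doublewrite
innodb_support_xa = 0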

On Fri, Jun 13, 2008 at 7:33 PM, Ian Simpson [EMAIL PROTECTED] wrote:



 Hi guys, thanks for pitching in.

 The inserts are from replication; we're not using transactions on the
 master (yet), and I don't think there's a way of telling MySQL to batch
 incoming replication statements if they're not already in a transaction.

 Disk usage: the older server (the one that's running fine) is running
 more transactions per second, but has lower blocks written and read per
 second than the new server:

 The working server (which in addition to replicating is also handling a
 bunch of read queries)

 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
 sda 88.47 782.20 998.77 9046888130 11551757459

 The new server, which is just trying to handle replication

 Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
 sda 77.83 1367.55 2914.72 358474084 764029986

 Thanks,
 
 --
 Ian Simpson




 On Fri, 2008-06-13 at 19:15 +0530, Alex Arul Lurthu wrote:
  also how often do you issue a commit. batching the inserts inside a
  transaction might help.
 
  On Fri, Jun 13, 2008 at 6:53 PM, Ananda Kumar [EMAIL PROTECTED]
  wrote:
  Check iostat to see whether the disk is heavily used.
 
 
  On 6/13/08, Ian Simpson [EMAIL PROTECTED] wrote:
  Hi Alex,
 
  Configurations are identical, other than the
  differences I initially
  mentioned. I've diffed both the configuration files
  and the output of
  SHOW VARIABLES on both servers.
 
  I've contacted my hosting provider to ask about the
  RAID settings.
 
  Variable_name: innodb_flush_log_at_trx_commit
  Value: 1
  Variable_name: sync_binlog
  Value: 0
  Variable_name: innodb_locks_unsafe_for_binlog
  Value: OFF
 
  Thanks
 
  --
  Ian Simpson
 
  On Fri, 2008-06-13 at 17:43 +0530, Alex Arul Lurthu
  wrote:
   Please check whether the my.cnf configurations are the
  same.
  
   What are your configuration parameters in terms of
  innodb flush log
   trx commit, bin logging, sync binlog and innodb
  unsafe for binlog ?
  
   If the systems have raid, check if the BBWC is
  enabled on the new host
   and WB is enabled.
  
  
   On Fri, Jun 13, 2008 at 5:02 PM, Ian Simpson
  [EMAIL PROTECTED]
   wrote:
   Hi list,
  
   Have a bit of a mystery here that I hope
  somebody can help
   with.
  
   I've just got a new server that I'm using as
  a dedicated MySQL
   server.
   In terms of hardware it's pretty much
  identical, if not
   slightly
   superior to an existing server already in
  production use.
  
   It's having a real struggle processing
  INSERT statements to
   InnoDB
   tables; it's maxing out at around 100
  inserts per second, even
   with very
   simple two column tables (inserts into
  MyISAM tables run
   fine).
   Meanwhile, the original server can happily
  process around 1000
   inserts/sec into an identical table.
  
   The MySQL configuration of the two databases
  is identical,
   except for
   the tablespace file size (the new server has
  a larger
   tablespace
   defined), and the InnoDB logs (again, new
  server has larger
   logs).
  
   Can anybody suggest an area of investigation
  as to the cause?
  
   Thanks,
   --
   Ian Simpson
  
  
  
  
   --
   Thanks
   Alex
   http://alexlurthu.wordpress.com
 

Re: Circular replication

2007-12-05 Thread Alex Arul Lurthu
Chain replication is fine as long as reading stale data from the last slave
in your chain is OK; the staleness depends on the write throughput and the
capacity of the intermediate slaves. But chain replication combined with
circular replication is a definite no-no in production: if any intermediate
node fails, you will not be able to restore it easily and the data goes out
of sync.

On Dec 5, 2007 12:31 AM, Krishna Chandra Prajapati [EMAIL PROTECTED]
wrote:

 Hi All,

 Is circular replication or chain replication suitable for a production
 environment? Has any testing been done? If yes, please let me
 know. Are there any other issues related to circular replication?

 Thanks
 --
 Krishna Chandra Prajapati
 MySQL DBA,
 Ed Ventures e-Learning Pvt.Ltd.
 1-8-303/48/15, Sindhi Colony
 P.G.Road, Secunderabad.
 Pin Code: 53
 Office Number: 040-66489771
 Mob: 9912924044
 URL: ed-ventures-online.com
 Email-id: [EMAIL PROTECTED]




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Determining Table Storage Engine Type on Crashed Table

2007-12-01 Thread Alex Arul Lurthu
How about looking at the .frm file of the table?
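A related quick check from the shell (a sketch; paths illustrative): in that
era's layout, a MyISAM table has .MYD/.MYI files next to its .frm, while an
InnoDB table living in the shared tablespace has only the .frm:

ls -l /var/lib/mysql/blah/table_name.*
# table_name.frm only            -> likely InnoDB (shared tablespace)
# table_name.frm + .MYD + .MYI   -> MyISAM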

On 11/28/07, Richard Edward Horner [EMAIL PROTECTED] wrote:
 FYI, this did not work :)

 Thanks though!

 Rich(ard)

 On Nov 23, 2007 3:37 AM, Paul McCullagh [EMAIL PROTECTED]
 wrote:
  Maybe this will work:
 
  SHOW CREATE TABLE table_name;
 
 
  On Nov 21, 2007, at 9:42 PM, Richard Edward Horner wrote:
 
   Hey everybody,
  
   Hopefully some of you are already enjoying time off. I am not...yet :)
  
   Anyway, is there a way to determine what storage engine a table is
   using if it's crashed?  When it's fine, I can just run:
  
   mysql> show table status like 'table_name'\G
   *** 1. row ***
              Name: table_name
            Engine: MyISAM
           Version: 10
        Row_format: Fixed
              Rows: 985984
    Avg_row_length: 13
       Data_length: 12817792
   Max_data_length: 3659174697238527
      Index_length: 34238464
         Data_free: 0
    Auto_increment: 1182153
       Create_time: 2007-11-15 17:44:28
       Update_time: 2007-11-21 15:28:07
        Check_time: 2007-11-21 15:28:18
         Collation: latin1_swedish_ci
          Checksum: NULL
    Create_options:
           Comment:
   1 row in set (0.00 sec)
  
   As you can see, the second column returned is the Engine. In this
   case, MyISAM. Now, if I crash the table, it doesn't work:
  
   mysql> show table status like 'table_name'\G
   *** 1. row ***
              Name: table_name
            Engine: NULL
           Version: NULL
        Row_format: NULL
              Rows: NULL
    Avg_row_length: NULL
       Data_length: NULL
   Max_data_length: NULL
      Index_length: NULL
         Data_free: NULL
    Auto_increment: NULL
       Create_time: NULL
       Update_time: NULL
        Check_time: NULL
         Collation: NULL
          Checksum: NULL
    Create_options: NULL
           Comment: Table './blah/table_name' is marked as crashed and should be repaired
   1 row in set (0.00 sec)
  
   Now, let's assume for a moment this were an InnoDB table. If I were to
   try and run repair, it would say that the storage engine does not
   support repair, so clearly it knows what the storage engine is. How do
   I get it to tell me? Or, I guess, a broader and more helpful question
   would be: what are all the ways to determine a table's storage engine
   type?
  
   Thanks,
   --
   Richard Edward Horner
   Engineer / Composer / Electric Guitar Virtuoso
   [EMAIL PROTECTED]
   http://richhorner.com - updated June 28th
  
  
 
 



 --
 Richard Edward Horner
 Engineer / Composer / Electric Guitar Virtuoso
 [EMAIL PROTECTED]
 http://richhorner.com - updated June 28th




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: show slave staus

2007-11-27 Thread Alex Arul Lurthu
You can set the mysql client's pager command to grep out unwanted fields.
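For example, inside the mysql client (a sketch):

mysql> pager grep -E 'Running|Behind'
mysql> show slave status\G
mysql> nopager

Only the matching lines (here the thread states and the replication lag) are
printed.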

On 11/28/07, bruce [EMAIL PROTECTED] wrote:
 hi..

 simple question!!

 mysql> show slave status

 returns a number of fields, with information on the status of the slave. is
 there a way to only return the field(s) i'm interested in...

 i thought that i had figured this out, but i can't recall, and it's been
 awhile since i've played with this!

 thanks





-- 
Thanks
Alex
http://alexlurthu.wordpress.com




Re: SELECT Speed

2007-11-26 Thread Alex Arul Lurthu
The second query might be faster due to caching: the first run reads from
disk, while repeat runs are served from the query cache or from pages already
in memory.
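To separate query-cache hits from genuinely faster execution when timing, a
sketch:

mysql> RESET QUERY CACHE;   -- empty the query cache between runs
mysql> SELECT SQL_NO_CACHE count(*) FROM acct WHERE day >= '2007-09-01';

If the repeat run is still fast with SQL_NO_CACHE, the speedup comes from
data pages already cached in the buffer pool or OS cache rather than from the
query cache.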

On 11/26/07, Alexander Bespalov [EMAIL PROTECTED] wrote:
 Hi,

 I have a problem with SELECT speed. The first execution takes up to several
 minutes while the next (with the same statement) takes not more then several
 seconds.

 The statement example is:
 select nas.nasIpAddress, count(distinct(acct.user_id)), count(*),
        sum(acct.acctOutputOctets)
   from acct, nas, can, acctStatusType
  where acct.can_id = can.id
    and acct.acctStatusType_id = acctStatusType.id
    and acct.day >= '2007-09-01'
    and acct.day <= '2007-11-01'
    and acct.nas_id = nas.id
    and can.can = 10
    and acctStatusType.acctStatusType = 'Stop'
  group by acct.nas_id
 ;

 EXPLAIN shows the following:
 id  select_type  table           type    possible_keys           key             key_len  ref              rows   Extra
 1   SIMPLE       can             const   PRIMARY,can             can             2        const            1      Using filesort
 1   SIMPLE       acctStatusType  const   PRIMARY,acctStatusType  acctStatusType  10       const            1
 1   SIMPLE       acct            ref     index1                  index1          4        const,const      63827  Using where
 1   SIMPLE       nas             eq_ref  PRIMARY                 PRIMARY         4        GWF.acct.nas_id  1

 I have the following table with 59742411 rows:
 mysql> describe acct;

 Field                  Type                   Null  Key  Default              Extra
 date                   datetime                          0000-00-00 00:00:00
 user_id                int(10) unsigned             PRI  0
 nas_id                 int(10) unsigned             PRI  0
 can                    smallint(5) unsigned   YES        NULL
 can_id                 smallint(5) unsigned   YES   MUL  NULL
 acctStatusType_id      tinyint(3) unsigned          PRI  0
 acctTerminateCause_id  tinyint(3) unsigned               0
 sweetEventContext_id   tinyint(3) unsigned               0
 acctSessionId          varchar(8)                   PRI
 acctDelayTime          mediumint(8) unsigned             0
 acctSessionTime        mediumint(8) unsigned  YES        NULL
 acctInputOctets        bigint(20) unsigned    YES        NULL
 acctOutputOctets       bigint(20) unsigned    YES        NULL
 wisprBwMaxUp           int(10) unsigned       YES        NULL
 wisprBwMaxDown         int(10) unsigned       YES        NULL
 day                    date                         PRI  0000-00-00
 acctMultiSessionId     varchar(27)            YES        NULL
 mysql> show index from acct;

 Table  Non_unique  Key_name  Seq_in_index  Column_name        Collation  Cardinality  Sub_part  Packed  Null  Index_type
 acct   0           PRIMARY   1             user_id            A          53341        NULL      NULL          BTREE
 acct   0           PRIMARY   2             nas_id             A          277871       NULL      NULL          BTREE
 acct   0           PRIMARY   3             acctStatusType_id  A          558340       NULL      NULL          BTREE
 acct   0           PRIMARY   4             acctSessionId      A          59742411     NULL      NULL          BTREE
 acct   0           PRIMARY   5             day                A          59742411     NULL      NULL          BTREE
 acct   1           index1    1             can_id             A          467          NULL      NULL    YES   BTREE
 acct   1           index1    ...

Re: Database page corruption on disk occurring during mysqldump on a fresh database

2007-09-05 Thread Alex Arul Lurthu
The checksum errors might be due to various reasons. We had a similar issue
where we restored the database multiple times and replaced the RAM sticks;
nothing helped. Finally we drilled the issue down to the chassis. I recommend
testing the restore on a different machine to rule out any hardware issue.

-- 
Thanks
Alex
http://alexlurthu.wordpress.com

On 9/5/07, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 Hi
 This might be happening due to two reasons:
 1. The system date might not be correct.
 2. Something is wrong with the log position (incorrect log position).

 Regards,
 Krishna Chandra Prajapati

 On 8/31/07, Maurice Volaski [EMAIL PROTECTED] wrote:
 
  A 64-bit Gentoo Linux box had just been upgraded from MySQL 4.1 to
  5.0.44 fresh (by dumping in 4.1 and restoring in 5.0.44) and almost
  immediately after that, during which time the database was not used,
  a crash occurred during a scripted mysqldump. So I restored and days
  later, it happened again. The crash details seem to be trying to
  suggest some other aspect of the operating system, even the memory or
  disk is flipping a bit. Or could I be running into a bug in this
  version of MySQL?
 
  Here's the output of the crash
  ---
  InnoDB: Database page corruption on disk or a failed
  InnoDB: file read of page 533.
  InnoDB: You may have to recover from a backup.
  070827  3:10:04  InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex
 
  [dump itself deleted for brevity]
 
  InnoDB: End of page dump
  070827  3:10:04  InnoDB: Page checksum 646563254,
  prior-to-4.0.14-form checksum 2415947328
  InnoDB: stored checksum 4187530870, prior-to-4.0.14-form stored
  checksum 2415947328
  InnoDB: Page lsn 0 4409041, low 4 bytes of lsn at page end 4409041
  InnoDB: Page number (if stored to page already) 533,
  InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
  InnoDB: Page may be an index page where index id is 0 35
  InnoDB: (index PRIMARY of table elegance/image)
  InnoDB: Database page corruption on disk or a failed
  InnoDB: file read of page 533.
  InnoDB: You may have to recover from a backup.
  InnoDB: It is also possible that your operating
  InnoDB: system has corrupted its own file cache
  InnoDB: and rebooting your computer removes the
  InnoDB: error.
  InnoDB: If the corrupt page is an index page
  InnoDB: you can also try to fix the corruption
  InnoDB: by dumping, dropping, and reimporting
  InnoDB: the corrupt table. You can use CHECK
  InnoDB: TABLE to scan your table for corruption.
  InnoDB: See also InnoDB:
  http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
  InnoDB: about forcing recovery.
  InnoDB: Ending processing because of a corrupt database page.
 
  --
 
  Maurice Volaski, [EMAIL PROTECTED]
  Computing Support, Rose F. Kennedy Center
  Albert Einstein College of Medicine of Yeshiva University
 
 
 


 --
 Krishna Chandra Prajapati
 MySQL DBA,

 Ed Ventures e-Learning Pvt. Ltd,
 201,202, Ashoka Bhoopal Chambers,
 S P Road, Secunderabad 53.
 Ph. No. - 040-39188771
 Url: www.ed-ventures-online.com



Re: recovering from 'disk full' mysql error

2007-09-05 Thread Alex Arul Lurthu
stop slave;

change master to master_log_file='<Relay_Master_Log_File>',
master_log_pos=<Exec_Master_Log_Pos>;

# Substitute the Relay_Master_Log_File and Exec_Master_Log_Pos values
# from your SHOW SLAVE STATUS output.

start slave;

The above process should fix your problem.


On 9/4/07, Russell E Glaue [EMAIL PROTECTED] wrote:

 I had a disk full error on the master MySQL (4.1.22), which was
 replicating to a
 slave (4.1.22).

 My question is, how do I recover a slave replica when the master had a
 disk full
 error, space was freed, but the 'disk full' issue left a corrupted master
 binlog?
 1) Do I have to reinitialize the slave from scratch,
 2) or can I use some CLI tool to repair the damaged binlog (see below)
 3) or can I skip to the next binlog

 This is assuming I performed a stop and start of the master mysql, then
 'purge
 binary logs' from master.
 And would the resolution be the same if I did not start and stop the
 master mysql?


 According to bug 9949 (http://bugs.mysql.com/bug.php?id=9949) this was
 addressed
 in 4.1.9.
 I am running 4.1.22, and have experienced this issue as outlined.

 I am not finding a good sum of reasoning and process to proceed with
 resolving
 this type of predicament. Can anyone clue me in to resolving this?


 I purged enough binlogs to free up disk space on master, and then started
 the
 slave replica process on the slave.

 I get this famous error:
 -
 070904 12:46:26 [Note] Slave I/O thread: connected to master
 '[EMAIL PROTECTED]:3306',  replication started in log '
 master1-bin.54' at
 position 138164107
 070904 12:46:26 [ERROR] Error reading packet from server: binlog truncated
 in
 the middle of event (server_errno=1236)
 070904 12:46:26 [ERROR] Got fatal error 1236: 'binlog truncated in the
 middle of
 event' from master when reading data from binary log
 -

 I do a binlog dump on 'master1-bin.54', and I get the below error
 ('Error in
 log format or read error') at the end of the output:

 -
 #070904 11:46:22 server id 2  end_log_pos 137541177 Query thread_id=48871 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137655584 Query thread_id=48871 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137655719 Query thread_id=48773 exec_time=0 error_code=0
 #070904 11:46:22 server id 2  end_log_pos 137770204 Query thread_id=48773 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137770339 Query thread_id=48870 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137962851 Query thread_id=48870 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 137962986 Query thread_id=48871 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138152129 Query thread_id=48871 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138152264 Query thread_id=48773 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138163972 Query thread_id=48773 exec_time=0 error_code=0
 #070904 11:46:24 server id 2  end_log_pos 138164107 Query thread_id=48870 exec_time=0 error_code=0
 ERROR: Error in Log_event::read_log_event(): 'read error', data_len: 81904, event_type: 2
 Could not read entry at offset 138164107: Error in log format or read error
 -


 -RG





-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Delete query question

2007-09-05 Thread Alex Arul Lurthu
reply inline

On 9/5/07, Olaf Stein [EMAIL PROTECTED] wrote:

 delete from geno_260k where ident=(select ident from geno_260k where a1=0
 group by ident having count(a1) > 25);


When a subquery returns more than one row in a WHERE clause, the "="
should be replaced by "in".
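A sketch of the corrected statement. Note that MySQL refuses to select from
the same table that is being deleted from (error 1093), so one workaround is
to stage the idents first (temporary table name illustrative):

create temporary table bad_idents as
  select ident from geno_260k
   where a1 = 0
   group by ident
  having count(a1) > 25;

delete from geno_260k where ident in (select ident from bad_idents);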

-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: Simple questio SQL

2007-09-05 Thread Alex Arul Lurthu
select * from Products order by Visited desc limit 10; should give
you the top 10 products.


On 9/5/07, Tomas Abad [EMAIL PROTECTED] wrote:

 HI All,



 I have a Table and want to know the most visited products.



 Products

 -  Id

 -  Name

 -  Visited



 Visited is numeric.




-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: innodb engine status

2007-08-29 Thread Alex Arul Lurthu
To get a good understanding of the SHOW INNODB STATUS output, check out
http://www.mysqlperformanceblog.com/2006/07/17/show-innodb-status-walk-through/

One area you can look at is the LATEST DETECTED DEADLOCK section. But in most
cases I have found calculations on the status variables more helpful.

--
Alex
http://alexlurthu.wordpress.com

On 8/29/07, Ananda Kumar [EMAIL PROTECTED] wrote:

 Hi Friends,

 When I do SHOW INNODB STATUS\G it gives me details of the transactions
 happening on the InnoDB engine.

 Please let me know what all information i should consider from this output
 to ensure if everything is fine or there are issue which i should address.

 I am using mysql version 5.0.41-community-log

 regards
 anandkl



Re: SCRIPT OR TOOL TO GIVE CHANGES IN DATABASES

2007-08-29 Thread Alex Arul Lurthu
If you want to track schema-level changes, updates, inserts and so on, you
always have the binlogs/update logs. If you want to track SELECT queries as
well, you have the general query log. Check out
http://dev.mysql.com/doc/refman/4.1/en/log-files.html for the different logs
available. Of course, be wary of the performance implications.
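For example, to see what changed today from the binlog (a sketch; file name
illustrative):

mysqlbinlog --start-datetime="2007-08-29 00:00:00" \
    /var/lib/mysql/mysql-bin.000042 | less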

-- 
Alex
http://alexlurthu.wordpress.com/

On 8/29/07, Krishna Chandra Prajapati [EMAIL PROTECTED] wrote:

 Hi All,

 Is there is any script or tool that generate the report and send an email
 for changes done in the databases;

 There is any otherway (manual) So that i can look that what changes has
 been
 done in the database today.

 Regards,
 Krishna



Re: thread_concurrency in linux

2007-08-29 Thread Alex Arul Lurthu
I am not sure whether you are talking about innodb_thread_concurrency. If so,
please check out
http://www.mysqlperformanceblog.com/2006/06/05/innodb-thread-concurrency for
more details. innodb_thread_concurrency works on Linux.

Thanks
Alex

On 8/29/07, Andrew Braithwaite [EMAIL PROTECTED] wrote:

 Hi,

 Does anyone know if thread_concurrency works in linux or is it just
 limited to Solaris and Windows?

 I know the general rule is number of CPU's*2 but will this actually have
 any effect with Linux's threading model?

 Thanks for any help :)

 Andrew






-- 
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu


Re: [MySQL] Fastest way to Search from a Table

2007-08-07 Thread Alex Arul Lurthu
On 8/7/07, Tharabar [EMAIL PROTECTED] wrote:

 SELECT `field1`,`field2`,`field3` FROM `table` WHERE `field5` LIKE
 '%keyword1%' AND `field5` LIKE '%keyword2%'


The most trivial way to speed up a query is to add an index on the columns
that are used in the WHERE clause. But a LIKE operation whose pattern begins
with '%' renders the index useless.

If you use MyISAM you can always use full-text search on a text field for
such use cases.
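A sketch of the MyISAM full-text variant (index name illustrative; the table
and column follow the quoted query):

ALTER TABLE `table` ADD FULLTEXT INDEX ft_field5 (field5);

SELECT field1, field2, field3
  FROM `table`
 WHERE MATCH(field5) AGAINST('+keyword1 +keyword2' IN BOOLEAN MODE);

Boolean mode with '+' requires both words, roughly matching the two ANDed
LIKEs, though it matches whole words rather than arbitrary substrings.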


-- 
Thanks
Alex
http://alexlurthu.wordpress.com


Re: MySQL database move

2007-07-09 Thread Alex Arul Lurthu

Hi Ace,

If you can't afford downtime and you are using InnoDB, try removing
autoextend from the current data file, then create a data file on a different
partition and put autoextend on that one. If you are using MyISAM, you can
move a few tables to a different disk and use symlinks.
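A my.cnf sketch of the InnoDB variant (paths and sizes illustrative; the
first file must be pinned at exactly its current on-disk size, and only the
last file may carry the autoextend flag):

innodb_data_home_dir =
innodb_data_file_path = /disk1/ibdata1:10G;/disk2/ibdata2:10M:autoextend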


--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu

On 7/9/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Ace,
Can't you zip or move old bin-logs to a different disk and release some
free
space on the current drive.

regards
anandkl


On 7/9/07, Ace [EMAIL PROTECTED] wrote:

 Hi,

   We have crisis. Disk with MySQL database is full. Now we want to move
 database to another disk. How can we do it?

 --
 Thanks,
 Rajan




Re: bin-log position.

2007-07-08 Thread Alex Arul Lurthu

Anand,

If the dump was taken from the slave, then the binlog positions that were
dumped out are those of the slave. Now you might do two things:

1. Set up a slave of your master

  To do this, use the binlog positions that you obtained using the
command SHOW SLAVE STATUS on your slave. The master log file will be
mysql-bin.29. But your SHOW SLAVE STATUS output doesn't show the
Exec_Master_Log_Pos; you need to use that position for setting up a new
slave of your master.

2. Set up a slave of your slave

 To do this, use the binlog positions in your mysqldump output.

P.S.: Make sure you set up the hostnames in the CHANGE MASTER command
appropriately.


--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu


Re: bin-log position.

2007-07-08 Thread Alex Arul Lurthu

You will be taking a chance if you use Read_Master_Log_Pos: 106683511. It is
not guaranteed that whatever has been read from the master has definitely
been applied. The apply of the read SQL might have stopped somewhere, and in
that case Read_Master_Log_Pos and Exec_Master_Log_Pos might be different.

--
Thanks
Alex
http://blog.360.yahoo.com/alex.lurthu

On 7/9/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Alex,
Thanks a lot,
I missed on taking the exe_master_log position value.
If i start from Read_Master_Log_Pos: 106683511, will it cause any harm.

regards
anandkl



On 7/9/07, Alex Arul Lurthu [EMAIL PROTECTED] wrote:

 Anand,

 If the dump was taken from the slave then the binlog positions that were
 dumped out are that of the slave. Now you might do 2 things :

 1. Setup a slave of your master

To do this use the binlog positions that you obtained using the
 command show slave status on your slave. The master log file will be 
 mysql-bin.29. But your show slave status output doesn't show the
 Exec_Master_Log_Pos. You need to use that position for setting up a new
 slave of your master.

 2. Setup a slave of your slave

   To do this use the binlog positions in your mysqldump output.

 P.S: Make sure you setup the hostnames in the change master command
 appropriately.


 --
 Thanks
 Alex
 http://blog.360.yahoo.com/alex.lurthu






Re: Performance problem MySQL 4.0.20

2007-07-07 Thread Alex Arul Lurthu

On 7/6/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


How does the cpu power influence the speed of a query?



Sort is a CPU-intensive process.

Check whether you are suffering from locks on the tables you look up.

Also, was the table on the system where the queries run faster rebuilt
recently? You can try rebuilding the tables.

How is the overall system load on both machines? Is one more loaded than the
other? The machine which takes longer to execute the query might be busy
serving some process other than MySQL.


Thanks
Alex


Re: bin-log position.

2007-07-07 Thread Alex Arul Lurthu

On 7/6/07, Ananda Kumar [EMAIL PROTECTED] wrote:



-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.03',
MASTER_LOG_POS=311;

As see in the mysqldump is saying to start recovery from bin-log file '
mysql-bin.03' and position=MASTER_LOG_POS=311,




You need to use the positions specified in the mysqldump, since when you
restore the data you will get data only up to the snapshot captured in the
dump.

Thanks
Alex


Re: mysqldump on slave.

2007-07-07 Thread Alex Arul Lurthu

You haven't mentioned which version of MySQL you are on. If on an older
version, I would recommend the option --opt. Also, if you are running a dump
using the --single-transaction option on a loaded system with writes, things
might slow down significantly due to the different row versions which the
server needs to maintain for the duration of the dump.

Thanks
Alex

On 7/6/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I am planning to take mysqldump on slave running innodb engine.

I going to use the below command

/usr/bin/mysqldump --user=root --password= --flush-logs --all-databases
--master-data=2 --single-transaction > alldbdump.sql

Before doing the mysqldump, should I stop the slave, or is the above command
enough? Please let me know, and also please let me know if there is any other
option I can use to speed up mysqldump.

regards
anandkl



Re: Cannot Connect: Can't create a new thread (errno 11)

2007-07-05 Thread Alex Arul Lurthu

Chris,

Did you check whether there are any user-level MAX_USER_CONNECTIONS
restrictions set while granting access?
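A sketch of how to check both levels (the per-account column assumes MySQL
5.0's grant tables):

SHOW VARIABLES LIKE 'max_user_connections';
SELECT user, host, max_user_connections FROM mysql.user;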

Thanks
Alex

On 7/4/07, Chris Faust [EMAIL PROTECTED] wrote:


Folks,



I'm wondering is someone could help us in trying to resolve the above
error,
we have read through what threads we could find and tried all the
suggestions, but have had no luck.



There are no memory problems, in fact when this happens if there is still
a
localhost client connection to the server you can do whatever you want,
everything is running normal. It just refuses all new connections. When it
happens there is plenty of memory on the machine and no load at all
(everything is running just as it should). Once things fall below 90,
everything comes back.



The problem happens when the process list hits the 91st process every
time: 90 processes, no problems; the 91st process/connection gets the above
error, and any after that simply get Can't connect to MySQL server.



We've tried all the obvious things, messing with the number of
connections,
open files etc. Nothing seems to work.



We are running MySQL 5 on Centos 5 (flavor of RedHat). The machine is
dedicated to MySQL.



Below is a link to a text file that shows the OS limits for the mysql
account on the machine and the output from show variables (including it
made
the message to large for the list).



http://208.3.90.212/wtresults.txt



If anyone has any suggestions, it would be appreciated.



Thanks!






Re: Performance problem MySQL 4.0.20

2007-07-05 Thread Alex Arul Lurthu

Hi,

Run EXPLAIN on both machines and post the output.

~Alex

On 7/4/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


Hello Ananda,

yes, the testmachine has the same data.


Regards,
Spiker
--
GMX FreeMail: 1 GB Postfach, 5 E-Mail-Adressen, 10 Free SMS.
Alle Infos und kostenlose Anmeldung: http://www.gmx.net/de/go/freemail





Re: Blob data

2007-06-25 Thread Alex Arul Lurthu

Ratheesh,

If you still want to keep your blob data in the database, separate the blob
storage out into its own table. This will help alleviate a few of your
performance and maintenance problems.

~Alex

On 6/22/07, Steve Edberg [EMAIL PROTECTED] wrote:


At 12:11 PM +0530 6/22/07, Ratheesh K J wrote:
Hello All,

I want a clarification. We run a forum wherein people send messages
with/without attachments. Attachments may contain images, documents,
etc. We are actually storing the attachment in a blob column.
Sometimes the attachments are big, and today the table size has
grown to 40 GB. This has created a headache for any maintenance
task: backup, restoration, etc.

I want to know whether this is the right approach, or should we
actually store the attachments in directories and just store the
attachment path in the database.

Kindly suggest the best approach so that I can reduce the database size.

Thanks in advance


Yes, storing files - especially non-textual files - in the file
system instead of the database is generally considered the best
practice.

At one point I had created a document management system that stored
everything in the database as you are doing; my rationale was that it
allowed me to manage permissions using the existing database
permissions, and to back up the whole database using mysqldump, vs
mysqldump + doing a tar of the files. However, I abandoned this
approach for the following reasons:

(1) Storing non-plaintext items (eg; pictures) in the database makes
it bigger and slower without added value - you can't (at least not
yet, or in the foreseeable future) do a meaningful search on a blob.

(2) It becomes more difficult to split storage out onto multiple
filesystems; eg, leaving the database files in /var/database, putting
the documents themselves into /home/docmanager, etc.

(3) It makes queries on the commandline unwieldy; if you have a blob
field, doing a select * to check a record's contents can dump a lot
of garbage on the screen.

(4) It can make doing incremental backups more difficult; if the
documents themselves are relatively static, but the document metadata
stored in the database is very dynamic, it becomes simple to do a
compact daily database dump + a weekly document directory backup (for
example) if the files are not in the database.

What I do is create a unique SHA1 hash when a file is uploaded (eg;
sha1(rand())). The original filename and the 40-character hash are
stored in the database, and the document is stored in the filesystem
using the hash as the filename. I can optionally compress and encrypt
the document as well, storing the encryption key in the database.
This gives (for me) adequate document security. An additional
advantage is that you can take advantage of the filesystem tree if
you have a large number of documents. For example, if a document hash
is 'f0118e9bd2c4fb29c64ee03abce698b8', you can store the file in the
directory tree f0/11/8e/f0118e9bd2c4fb29c64ee03abce698b8 (extending
to as many levels as you feel necessary). By keeping the number of
files per directory fairly small, file retrieval becomes relatively
fast. As the hashes approximate a random distribution, you should
always have a close-to-balanced tree.

Lastly, I store a hash of the document itself in the database as
well. This allows me to detect if duplicate files are uploaded, and
to determine if a previously-uploaded file has been corrupted in some
way.

steve

--
+--- my people are the people of the dessert, ---+
| Steve Edberghttp://pgfsun.ucdavis.edu/ |
| UC Davis Genome Center[EMAIL PROTECTED] |
| Bioinformatics programming/database/sysadmin (530)754-9127 |
+ said t e lawrence, picking up his fork +





Re: replication

2007-05-15 Thread Alex Arul Lurthu

Check whether you are seeing any access-denied errors in the MySQL error log.

~Alex

On 5/14/07, richard [EMAIL PROTECTED] wrote:


Ian P. Christian wrote:
 richard wrote:

 as far as I can see, these commands select which db's to replicate on
 the slave that currently exist on the master server.
 What i am asking is, if I set a server up as a slave, can I have other
 databases on the slave that are not part of the replication system?


 Yes - I've been doing this for some time, works fine.




mmm, ok.
I've set replication up and have show master and show slave output
displaying the following.
Any updates I make to the master are not transferred to the slave.
The log position increases on the master but doesn't change on the
slave. Any ideas?
cheers
Rich


mysql> show master status \G
*** 1. row ***
File: bla-bin.004
Position: 681
Binlog_do_db: foo_db
Binlog_ignore_db:
1 row in set (0.00 sec)



mysql> show slave status \G
*** 1. row ***
Slave_IO_State: Checking master version
Master_Host: 192.168.1.110
Master_User: slave_user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: bla-bin.004
Read_Master_Log_Pos: 512
Relay_Log_File: db_box-relay-bin.01
Relay_Log_Pos: 4
Relay_Master_Log_File: bla-bin.004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: foo_db
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 512
Relay_Log_Space: 4
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
1 row in set (0.00 sec)







Re: Redo logs take to much disk space

2007-05-15 Thread Alex Arul Lurthu

The answer lies in expire_logs_days. If you set this to a suitable number of
days, binary logs older than that will be purged automatically.
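A sketch of both the automatic and the one-off variants (values illustrative):

# my.cnf: purge binary logs older than 7 days automatically
expire_logs_days = 7

-- or one-off, from the client:
PURGE MASTER LOGS BEFORE '2007-05-08 00:00:00';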

~Alex


On 5/15/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:


 One question about this, is it safe to turn off log_bin?
I think you can. The log is necessary for data replication and sometimes
for data recovery.
you can read about it here:
http://dev.mysql.com/doc/refman/5.0/en/binary-log.html
 Or can you tweak it somehow so that it won't' take some much disk space?
you can bzip old logs if you need them but don't want them to take so much
space :)







Re: backup solution.

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi All,
I have take a mysqldump of my entire database, is it possible to restore
just one table from this mysqldump.



Yes, that's possible.

cat your-dump-filename | grep '<tablename you want to restore>' | mysql -u
user -ppassword should do it.
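Plain grep only works when every row of the table sits on its own complete
INSERT line. A more robust sketch uses mysqldump's section markers (table
name illustrative; it assumes the dump was taken with the default comments
enabled):

sed -n '/^-- Table structure for table `mytable`/,/^-- Table structure for table/p' \
    alldbdump.sql | mysql -u user -ppassword dbname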



Also, i have taken binary backup of my database, and one of the datafile
is
corrupt, is it possible to restore just this file from backup and start
the
mysqldb. Mysql database is running in innodb engine.



If all the files are from the same cold binary backup, then you will be able
to do a restore. Otherwise it is highly unlikely to work.

~Alex


Re: Memory Problems

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Christoph Klünter [EMAIL PROTECTED] wrote:




I have set the sort_buffer_size to 1G but even this doesn't help.
Any hints ? Should we try a 64Bit-OS ?



Setting sort_buffer_size to 1GB is not recommended. It is a thread-specific
configuration parameter, which means each thread will be eligible to get a
sort_buffer_size of 1GB.
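A back-of-the-envelope sketch of why (numbers illustrative):

-- worst case if every connection sorts at once:
--   max_connections (200) x sort_buffer_size (1G) = 200G requested
SET GLOBAL sort_buffer_size = 2097152;  -- a saner 2M default for new sessions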

On a 32-bit system, you can push to approximately 3.2 GB of usage for the
database by applying the kernel-level hacks mentioned at
http://www.puschitz.com/TuningLinuxForOracle.shtml#GrowingTheOracleSGATo2.7GBInx86RHEL2.1WithoutVLM
or by using VLM. But if you are using a 32-bit MySQL build, then you will not
be able to use more than 4GB theoretically.

Using a 64-bit OS and a 64-bit build of MySQL will enable you to use memory
greater than 4GB effectively.

~Alex


Re: backup solution.

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Ananda Kumar [EMAIL PROTECTED] wrote:


Hi Alex,
Thanks for the info,
For the second question, do you mean i should restore the entire backup or
just that one file from my backup.

All the files should be from the same backup. AFAIK, MySQL doesn't have an
option to recover only one datafile. Hence restore the entire backup.

~Alex


Re: Memory Problems

2007-05-15 Thread Alex Arul Lurthu

On 5/15/07, Micah Stevens [EMAIL PROTECTED] wrote:


I think you may be able to get around this by using multiple key
buffers? (MySQL 4.1 or later)

Key buffers cache only MyISAM index data; they don't help with sorting the
way sort_buffer does, and they don't affect the InnoDB engine. Even while
using multiple key buffers, the memory allocated belongs to the same process,
and the limitation is at the process level.
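For reference, the multiple key caches the quoted suggestion refers to look
like this (MySQL 4.1+, MyISAM indexes only; cache and table names
illustrative):

SET GLOBAL hot_cache.key_buffer_size = 134217728;  -- dedicated 128M cache
CACHE INDEX busy_table IN hot_cache;               -- assign that table's indexes
LOAD INDEX INTO CACHE busy_table;                  -- optionally preload them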

~Alex


Re: replication

2007-05-14 Thread Alex Arul Lurthu

Technically, yes, you can. Replication is all about the queries that get
executed on the master being executed on the slave. So, as long as the
database being replicated exists on both the master and the slave, you should
have no problems.
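On the slave side, the filtering mentioned below goes into my.cnf (a sketch;
note the option is spelled replicate-do-db):

# replicate only foo_db; other local databases are left untouched
replicate-do-db = foo_db
# or, inversely:
# replicate-ignore-db = some_local_db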

Thanks
Alex

On 5/14/07, richard [EMAIL PROTECTED] wrote:



as far as I can see, these commands select which db's to replicate on
the slave that currently exist on the master server.
What i am asking is, if I set a server up as a slave, can I have other
databases on the slave that are not part of the replication system?




MySQL 中文网 wrote:

 of course

 set it up via the option replicate-do-db or replicate-ignore-db :)

 richard wrote:

 Hi,

 I have a server (master version 4.0.1) that has a database that I
 want to replicate on another server (slave version 4.1.20).
 The slave already has another db on it. Is is possible to have a
 mixture of replicated and non replicated dbs running on a slave?

 cheers
 Rich



 --







 **
 MySQL 中文网 (MySQL Chinese Network)
 叶金荣, independent MySQL consultant, system architect and trainer

 Services: LAMP architecture design, solutions, load balancing; MySQL consulting, training and support
 Mobile: 13466608698
 [EMAIL PROTECTED] (or [EMAIL PROTECTED])
 Website: http://imysql.cn
 **
 http://imysql.cn offers users a professional, open Chinese-language MySQL community; visitors welcome

 Wishing you more blessings than you can handle!

 2007/05/14 17:50:01







Re: Mysqldump Files

2007-01-30 Thread Alex Arul Lurthu

On a different note, just running a slave will not help when there is an
accidental deletion of data, tables or databases. Hence have a slave as
Mikhail mentioned, and also run regular backups on the slave: preferably a
tar of your MySQL data directory after shutting the server down. Also ensure
you have log-bin and log-slave-updates enabled on your slave. Push the tar to
more than one location and host so that you can survive disk corruption. This
way you can recover from most possible database corruption scenarios.
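A sketch of that slave-side routine (paths illustrative):

mysqladmin -u root -p shutdown
tar czf /backup/mysql-$(date +%F).tar.gz /var/lib/mysql
mysqld_safe --user=mysql &
scp /backup/mysql-*.tar.gz otherhost:/backups/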

Thanks
Alex

On 1/30/07, Mikhail Berman [EMAIL PROTECTED] wrote:


Hi David,

Is the space on the hard drive your major concern, or the ability to
recover from a crash?

Backups are usually taken to be able to recover from a crash, which in
turn means that if there is a way to recover faster, it is better. Having a
slave that is constantly updated gives you a very quick way of recovering
if the master goes down. Just point your PHP scripts to the slave and be happy.

If you need additional backups, do them from the full slave. Stop it for a
while, do the backups and then restart the slave again.

For huge backups in our office we use old, and I mean old, Dells with
huge 500GB drives running one of the *nixes.


Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, January 29, 2007 3:27 PM
To: Mikhail Berman
Cc: mysql
Subject: RE: Mysqldump Files

Hi Mikhail,

I don't think that would save much space, in terms of file size. The
tables that are actively getting inserts are large and growing larger
(~750,000 records), and those that have no activity are either currently

empty or have less than a hundred records in them. So just dumping the
active tables will comprise I'd guess 99% or more of the database size.

David

--

Hi David,

Let me point you in a bit different direction. You are already running
replication as it seems from your E-mail

So, why not just run chained replication from second to the third server
and use replicate-do-table = [table_name] in my.cnf of the third
server to limit selection of tables to be used by web server. Or do full
replication to another server from the first one for full backup?

Regards,

Mikhail Berman

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, January 29, 2007 2:33 PM
To: mysql
Subject: Mysqldump Files

Howdy Guys and Gals,

We are acquiring data on background radiation in a master-slave server
environment (RH9.0, MySQL 4.0.24) at the rate of approximately 19,000
records per day.  The data are insert-only into about 25 of 31 tables in

the database -- no updates are ever applied to the data. Information
from
the database is used via select statements for graphical display and
report generation amongst other uses.

A PHP backup script using mysqldump runs as a cron job each night from a

third server which also functions as an intranet webserver. After 1 1/2
years of operation, the mysqldump file of the entire database is roughly

760 MB, and this takes under 2 minutes to create.  Once bzipped and
tarred, the entire file is 31.7 MB in size, and this part of the backup
process now takes 46-47 minutes.

The rate of acquisition of data will be fairly constant, and up to 3
years
of data will be kept on the live master-slave, so simply doubling all
these values seems a realistic expectation for a full backup of the
database after 3 years.  Data older than 3 years would be deleted from
the
master-slave system.

How long would it be reasonable to keep doing a full dump of the database
versus using mysqldump with a WHERE clause, i.e., doing a daily
incremental backup, say of the last 24 hours?

Also, what are the key mysqldump and/or server variables to pay
attention
to in order to process these large, maybe multi-gigabyte dump files?

Thanks,

David
