Re: Triggers - Accessing all NEW data

2011-09-13 Thread Luis Motta Campos
On 8 Sep 2011, at 16:23, Chris Tate-Davies wrote:

 Hello. I want to know if there is a special way I can access all the data in 
 the NEW/OLD rows?
 
 I realise I can access it by referencing NEW.fieldname, but I want to 
 serialise the whole NEW object so I can save it as a string. Is this possible, 
 or do I need to write a function?


Hi, 

You'll have to write your own function for that. 

Cheers
--
Luis Motta Campos
is a DBA, Foodie, and Photographer





Re: Triggers - Accessing all NEW data

2011-09-13 Thread Chris Tate-Davies

Thanks,

I kinda guessed that, but I'm not sure how to pass the OLD object to it 
as MySQL cannot handle a rowset datatype.


Has anyone had any experience with this? Not sure where to start or how 
to proceed.


Chris





--

Chris Tate-Davies
Software Development
Inflight Productions Ltd
Telephone: 01295 269 680
15 Stukeley Street | London | WC2B 5LT
Email: chris.tatedav...@inflightproductions.com
Web: www.inflightproductions.com








Re: Triggers - Accessing all NEW data

2011-09-13 Thread Claudio Nanni
Hi,
Just from quickly reading your email (forgive me if I'm mistaken):
what about serializing using CONCAT(OLD.f1, '|||', OLD.f2, '|||', OLD.f3)
('|||' = any separator that works for you)
and deserializing inside the function?
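
For example, a minimal sketch (the table and audit-table names here are made
up; note that CONCAT_WS silently skips NULL values, while plain CONCAT returns
NULL if any argument is NULL):

  DELIMITER $$
  CREATE TRIGGER t_audit_upd BEFORE UPDATE ON t
  FOR EACH ROW
  BEGIN
    -- serialize the OLD row into a single delimited string
    INSERT INTO t_audit (row_data)
    VALUES (CONCAT_WS('|||', OLD.f1, OLD.f2, OLD.f3));
  END$$
  DELIMITER ;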

does this make any sense to you?

Cheers

Claudio



Re: Triggers - Accessing all NEW data

2011-09-13 Thread Chris Tate-Davies


I could do that, but I was hoping I wouldn't have to specify the 
individual fields, and could just pass the whole collection for parsing.

If I were to add any fields I would have to rewrite the trigger, which 
is something I was trying to avoid.
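
One possible workaround (not suggested in this thread, just a sketch): a
trigger can't run dynamic SQL, but you can generate the serialization
expression from INFORMATION_SCHEMA whenever the table changes and paste it
into a regenerated CREATE TRIGGER statement. The table name 't' is
illustrative:

  -- builds the CONCAT_WS expression covering every column of table t
  SELECT CONCAT('CONCAT_WS(''|||'', ',
         GROUP_CONCAT(CONCAT('OLD.', COLUMN_NAME)
                      ORDER BY ORDINAL_POSITION SEPARATOR ', '),
         ')') AS serialize_expr
  FROM INFORMATION_SCHEMA.COLUMNS
  WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = 't';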






Question about slow storage and InnoDB compression

2011-09-13 Thread Maria Arrea
Hello

 I have upgraded our backup server from MySQL 5.0.77 to MySQL 5.5.15. We are 
using Bacula as backup software, and all the info from backups is stored in a 
MySQL database. Today I upgraded from MySQL 5.0 to 5.5 using IUS repository 
RPMs and the mysql_upgrade procedure; no problems so far. This backup system 
holds the Bacula daemon, the MySQL server, and the backups of 100 other 
systems (Solaris/Linux/Windows).

 Our server has 6 GB of RAM, one quad-core Intel Xeon E5520, and 46 TB of 
RAID-6 SATA disks (7200 rpm) connected to a Smart Array P812 controller, 
running Red Hat Enterprise Linux 5.7 x64. Our MySQL database has dozens of 
millions of rows, and we are using InnoDB as the storage engine for Bacula's 
internal data. We add hundreds of thousands of rows per day (files are 
incrementally backed up daily from our 100 servers). So we have 7-8 concurrent 
writes (to different rows, of course), and in theory we only read from MySQL 
when we restore from backup.

 Every day a cron job runs OPTIMIZE TABLE on each table of our database to 
compact it; this takes almost an hour. We are going to increase the memory of 
the server from 6 to 12 GB in a couple of weeks, and I will change my.cnf to 
reflect the extra memory. My current my.cnf is included below:
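
 (The job is essentially something like the following one-liner; the exact 
invocation here is illustrative:)

  # run daily from cron: OPTIMIZE every table in every database
  mysqlcheck --optimize --all-databases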


 These are my questions:

 - We have really slow storage (RAID-6 SATA) but plenty of CPU and RAM. Should 
I enable InnoDB compression to make this MySQL server faster?
 - This system is IOPS-constrained for MySQL (fine for backups, though). 
Should I add an SSD just to hold the MySQL data?
 - Are there any additional settings I should use to tune this MySQL server?



 my.cnf content:

 [client]
 port = 3306
 socket = /var/lib/mysql/mysql.sock


 [mysqld]
 innodb_flush_method=O_DIRECT
 max_connections = 15
 wait_timeout = 86400
 port = 3306
 socket = /var/lib/mysql/mysql.sock
 key_buffer = 100M
 max_allowed_packet = 2M
 table_cache = 2048
 sort_buffer_size = 16M
 read_buffer_size = 16M
 read_rnd_buffer_size = 12M
 myisam_sort_buffer_size = 384M
 query_cache_type=1
 thread_cache_size = 16
 query_cache_size = 250M
 thread_concurrency = 6
 tmp_table_size = 1024M
 max_heap_table_size = 1024M


 skip-federated
 innodb_buffer_pool_size= 2500M
 innodb_additional_mem_pool_size = 32M

 [mysqldump]
 max_allowed_packet = 16M

 [mysql]
 no-auto-rehash

 [isamchk]
 key_buffer = 1250M
 sort_buffer_size = 384M
 read_buffer = 8M
 write_buffer = 8M

 [myisamchk]
 key_buffer = 1250M
 sort_buffer_size = 384M
 read_buffer = 8M
 write_buffer = 8M

 [mysqlhotcopy]
 interactive-timeout


 Regards

 Maria


Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
I would recommend going for a 15K rpm SSD raid-10 to keep the MySQL data, and
adding the Barracuda file format with innodb_file_per_table enabled, plus 3 to
4 GB of InnoDB buffer pool depending on the ratio of MyISAM vs. InnoDB in your
db. Check the current stats, reduce the tmp and heap table sizes to lower
values, and reduce the remaining buffers and caches as well. (A sketch of the
Barracuda change follows below.)
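
A minimal sketch of the Barracuda/compression change on 5.5 (the table name
below is illustrative; both server settings are prerequisites before
ROW_FORMAT=COMPRESSED takes effect):

  # my.cnf, [mysqld] section
  innodb_file_per_table = 1
  innodb_file_format = Barracuda

  -- then, per table:
  ALTER TABLE File ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;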





-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
Thanks for correcting me on the disk stats, Singer. That was a typo: I meant
15K rpm SAS, not SSD.

Compression may not increase the memory requirements. Quoting the manual:
To minimize I/O and to reduce the need to uncompress a page, at times the
buffer pool contains both the compressed and uncompressed form of a database
page. To make room for other required database pages, InnoDB may “evict”
from the buffer pool an uncompressed page, while leaving the compressed page
in memory. Or, if a page has not been accessed in a while, the compressed
form of the page may be written to disk, to free space for other data. Thus,
at any given time, the buffer pool may contain both the compressed and
uncompressed forms of the page, or only the compressed form of the page, or
neither.
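
If compression is tried, InnoDB 1.1 (MySQL 5.5) exposes counters that show
whether compress/uncompress operations are actually hurting, e.g.:

  -- per page size: attempted/successful compressions and decompressions
  SELECT page_size, compress_ops, compress_ops_ok, uncompress_ops
  FROM INFORMATION_SCHEMA.INNODB_CMP;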

More details on the benefits of the Barracuda file format, and its pros and
cons, can be found at the URLs below:

http://dev.mysql.com/doc/refman/5.6/en/glossary.html#glos_antelope
http://dev.mysql.com/doc/innodb/1.1/en/glossary.html#glos_barracuda
http://www.mysqlperformanceblog.com/2008/04/23/real-life-use-case-for-barracuda-innodb-file-format/
http://dev.mysql.com/doc/innodb/1.1/en/innodb-other-changes-file-formats.html

I would go with Singer's suggestions in the What you want to do part.

Thanks
Suresh Kuna


On Wed, Sep 14, 2011 at 7:21 AM, Singer X.J. Wang w...@singerwang.com wrote:

 Comments:
 1) There is no such thing as a 15K RPM SSD... SSDs are NON-ROTATIONAL
 STORAGE, therefore RPMs make no sense.
 2) Upgrading to the Barracuda file format isn't really worth it in this case;
 you're not going to get any real benefits. In your scenario I doubt InnoDB
 table compression will help, as it will significantly increase your memory
 requirements: it has to keep uncompressed and compressed copies in RAM.

 Questions:
 1) Why are you putting your MySQL data on the same volume as your Bacula
 backups? Bacula does large sequential I/O, while MySQL will do random I/O
 based on the structure.

 What you want to do is:

 1) You have 5MB InnoDB log files; that's a major bottleneck. I would use
 256MB or 512MB x 2 InnoDB log files (see the sketch after this list).
 2) Dump and import the database using innodb_file_per_table, so that
 optimization will actually free up space.
 3) Are you running Bacula on the server as well? If so, decrease the buffer
 pool to 1-2GB, since you need some memory for Bacula; if not, bump it up to
 3GB.

 And 4, this is the most important one:
 How big is your MySQL data? It's not that big; I figure in the 80-100GB
 range. Get yourself a pair of 240GB SSDs and mount them locally for MySQL.
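
 A sketch of the 5.5 log-resize procedure from point 1 (paths assume a default
 datadir; take a backup first):

   -- in mysql, so the clean shutdown flushes everything:
   SET GLOBAL innodb_fast_shutdown = 0;

   # then, from the shell:
   mysqladmin shutdown
   # edit my.cnf: innodb_log_file_size = 256M, innodb_log_files_in_group = 2
   mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/
   mysqld_safe &   # InnoDB creates new log files at the new size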

 S




