A tidbit for those of us who want to play with InnoDB compression

2011-10-04 Thread Johan De Meersman
As noted in the title, I'm messing about a bit with InnoDB compressed tables. 
As such, I found a rather glaring hole in the Internet: how the hell do you 
turn compression off again? :-D 

After messing about a lot and googling until my fingers hurt, I happened upon 
this bug report: http://bugs.mysql.com/bug.php?id=56628 

So, you turn compression on a table off by: 

set session innodb_strict_mode=off; 
alter table YOURTABLEHERE engine=InnoDB row_format=compact key_block_size=0; 

Of course, if you're running 5.1.55+ or 5.5.9+, you'll not need to tinker with 
your innodb_strict_mode; but it's still a glaring hole in the documentation. 
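
Putting it together, the full round trip looks like this (the table name is a
placeholder):

```sql
-- Turn compression on (needs innodb_file_per_table=1 and the Barracuda format):
ALTER TABLE yourtable ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Turn it off again; on servers older than 5.1.55 / 5.5.9, strict mode
-- must be disabled first, or the KEY_BLOCK_SIZE attribute is retained:
SET SESSION innodb_strict_mode=OFF;
ALTER TABLE yourtable ENGINE=InnoDB ROW_FORMAT=COMPACT KEY_BLOCK_SIZE=0;
```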

-- 
Bier met grenadyn 
Is als mosterd by den wyn 
Sy die't drinkt, is eene kwezel 
Hy die't drinkt, is ras een ezel 


Re: A tidbit for those of us who want to play with InnoDB compression

2011-10-04 Thread Andrew Moore
Nice one Johan, thanks for the info.

On Tue, Oct 4, 2011 at 2:17 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 As noted in the title, I'm messing about a bit with InnoDB compressed
 tables. As such, I found a rather glaring hole in the Internet: how the hell
 do you turn compression off again? :-D

 After messing about a lot and googling until my fingers hurt, I happened
 upon this bug report: http://bugs.mysql.com/bug.php?id=56628

 So, you turn compression on a table off by:

 set session innodb_strict_mode=off;
 alter table YOURTABLEHERE engine=InnoDB row_format=compact
 key_block_size=0;

 Of course, if you're running 5.1.55+ or 5.5.9+, you'll not need to tinker
 with your innodb_strict_mode ; but it's still a glaring hole in the
 documentation.

 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel



Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Maria Arrea
The server hosting bacula and the database only has one kind of disk: SATA; 
maybe I should buy a couple of SSDs for mysql.

 I have read all your mails, and I am still not sure whether I should enable innodb 
compression. My ibdata file is 50 GB, though.

 Regards

 Maria


 Questions:
 1) Why are you putting your MySQL data on the same volume as your Bacula 
backups? Bacula does large sequential I/O and MySQL will do random I/O based on 
the structure.





 What you want to do is:

 1) you have 5MB InnoDB log files; that's a major bottleneck. I would use at 
least 256MB or 512MB x 2 InnoDB log files.
 2) dump and import the database using innodb_file_per_table so that 
optimization will free up space.
 3) are you running Bacula on the server as well? If so, decrease the buffer 
pool to 1-2GB; if not, bump it up to 3GB, as you need some memory for bacula

 and 4, this is the most important one:
 How big is your MySQL data? It's not that big, I figure in the 80-100GB range. 
Get yourself a pair of 240GB SSDs, mount them locally for MySQL.
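
Point 1 (resizing the redo logs) cannot be done online on 5.1/5.5; a rough
sketch of the usual procedure, with paths, sizes and the init script name all
assumed rather than taken from this thread:

```shell
# Sketch only -- adjust paths and sizes to your installation before running.
mysql -e "SET GLOBAL innodb_fast_shutdown = 1"   # ensure a clean shutdown
service mysqld stop
# In my.cnf, set:  innodb_log_file_size = 256M
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/  # keep copies
service mysqld start   # InnoDB recreates the log files at the new size
```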

 S



 On Tue, Sep 13, 2011 at 21:19, Suresh Kuna  sureshkumar...@gmail.com  wrote:
 I would recommend to go for a 15K rpm SSD raid-10 to keep the mysql data and
 add the Barracuda file format with innodb file per table settings, 3 to 4 GB
 of innodb buffer pool depending on the ratio of myisam vs. innodb in your db.
 Check the current stats and reduce the tmp and heap table sizes to lower
 values, and reduce the remaining buffers and caches as well.

 On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea  maria_ar...@gmx.com  wrote:

  Hello
 
  I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
  are using bacula as backup software, and all the info from backups is stored
  in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using IUS
  repository RPMS and the mysql_upgrade procedure, no problem so far. This
  backup system holds the bacula daemon, the mysql server and the backups of
  other 100 systems (Solaris/Linux/Windows)
 
  Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
  SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red Hat
  Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and we
  are using InnoDB as storage engine for bacula internal data. We add hundreds
  of thousands of lines/day to our mysql (files are incrementally backed up
  daily from our 100 servers). So, we have 7-8 concurrent writes (to
  different lines, of course), and theoretically we only read from mysql when
  we restore from backup.
 
  Daily we launch a cron job that executes an optimize table in each table
  of our database to compact the database. It takes almost an hour. We are
  going to increase the memory of the server from 6 to 12 GB in a couple of
  weeks, and I will change my.cnf to reflect more memory. My actual my.cnf is
  attached below:
 
 
  These are my questions:
 
 
  - We have really slow storage (raid 6 SATA), but plenty of CPU and RAM. Should
  I enable innodb compression to make this mysql faster?
  - This system is IOPS-constrained for mysql (fine for backup, though).
  Should I add an SSD only to hold mysql data?
  - Any additional setting I should use to tune this mysql server?
 
 
 
  my.cnf content:
 
  [client]
  port = 3306
  socket = /var/lib/mysql/mysql.sock
 
 
  [mysqld]
  innodb_flush_method=O_DIRECT
  max_connections = 15
  wait_timeout = 86400
  port = 3306
  socket = /var/lib/mysql/mysql.sock
  key_buffer = 100M
  max_allowed_packet = 2M
  table_cache = 2048
  sort_buffer_size = 16M
  read_buffer_size = 16M
  read_rnd_buffer_size = 12M
  myisam_sort_buffer_size = 384M
  query_cache_type=1
  query_cache_size=32M
  thread_cache_size = 16
  query_cache_size = 250M
  thread_concurrency = 6
  tmp_table_size = 1024M
  max_heap_table_size = 1024M
 
 
  skip-federated
  innodb_buffer_pool_size= 2500M
  innodb_additional_mem_pool_size = 32M
 
  [mysqldump]
  max_allowed_packet = 16M
 
  [mysql]
  no-auto-rehash
 
  [isamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M
 
  [myisamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M
 
  [mysqlhotcopy]
  interactive-timeout
 
 
  Regards
 
  Maria
 


--
 Thanks
 Suresh Kuna
 MySQL DBA
 -- The best compliment you could give Pythian for our service is a referral.


Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Reindl Harald


Am 14.09.2011 09:50, schrieb Maria Arrea:
  I have read all your mails, and still not sure if I should enable innodb 
 compression

if you have enough free CPU resources and I/O is your problem, then simply yes,
because the volume transferred from/to disk will not be as high as uncompressed









Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Maria Arrea
I have finally enabled compression:


 
+++-++---++-+-+--+---++-+-+-+---+--+-+-+
 | Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | 
Max_data_length | Index_length | Data_free | Auto_increment | Create_time | 
Update_time | Check_time | Collation | Checksum | Create_options | Comment |
 
+++-++---++-+-+--+---++-+-+-+---+--+-+-+
 | BaseFiles | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 16384 | 0 | 1 | 
2011-09-14 10:33:04 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | CDImages | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 0 | 0 | | 
2011-09-14 10:33:04 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Client | InnoDB | 10 | Compressed | 46 | 356 | 16384 | 0 | 16384 | 0 | 53 | 
2011-09-14 10:33:04 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Counters | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 0 | 0 | | 
2011-09-14 10:33:04 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Device | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 0 | 0 | 1 | 
2011-09-14 10:33:04 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | File | InnoDB | 10 | Compressed | 106551231 | 129 | 13763608576 | 0 | 
7449083904 | 7340032 | 516304137 | 2011-09-14 12:53:45 | | | latin1_swedish_ci 
| | row_format=COMPRESSED KEY_BLOCK_SIZE=16 | |
 | FileSet | InnoDB | 10 | Compressed | 8 | 2048 | 16384 | 0 | 0 | 0 | 11 | 
2011-09-14 11:26:17 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Filename | InnoDB | 10 | Compressed | 39608549 | 62 | 2455764992 | 0 | 
3063939072 | 4194304 | 49584798 | 2011-09-14 13:11:41 | | | latin1_swedish_ci | 
| row_format=COMPRESSED KEY_BLOCK_SIZE=16 | |
 | Job | InnoDB | 10 | Compressed | 3499 | 454 | 1589248 | 0 | 212992 | 4194304 
| 10200 | 2011-09-14 13:11:42 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | JobHisto | MyISAM | 10 | Dynamic | 0 | 0 | 0 | 281474976710655 | 1024 | 0 | 
| 2011-09-14 11:42:30 | 2011-09-14 11:42:30 | 2011-09-14 11:42:30 | 
latin1_swedish_ci | | | |
 | JobMedia | InnoDB | 10 | Compressed | 52788 | 69 | 3686400 | 0 | 2637824 | 
4194304 | 150064 | 2011-09-14 13:11:44 | | | latin1_swedish_ci | | 
row_format=COMPRESSED KEY_BLOCK_SIZE=16 | |
 | Location | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 0 | 0 | 1 | 
2011-09-14 11:42:32 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | LocationLog | MyISAM | 10 | Dynamic | 0 | 0 | 0 | 281474976710655 | 1024 | 0 
| 1 | 2011-09-14 11:42:32 | 2011-09-14 11:42:32 | | latin1_swedish_ci | | | |
 | Log | InnoDB | 10 | Compressed | 31578 | 349 | 11026432 | 0 | 1589248 | 
4194304 | 285940 | 2011-09-14 13:11:45 | | | latin1_swedish_ci | | 
row_format=COMPRESSED KEY_BLOCK_SIZE=16 | |
 | Media | MyISAM | 10 | Dynamic | 39 | 142 | 5568 | 281474976710655 | 4096 | 0 
| 47 | 2011-09-14 11:42:33 | 2011-09-14 11:42:33 | 2011-09-14 11:42:33 | 
latin1_swedish_ci | | | |
 | MediaType | InnoDB | 10 | Compressed | 1 | 16384 | 16384 | 0 | 0 | 0 | 2 | 
2011-09-14 11:42:33 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Path | InnoDB | 10 | Compressed | 4681359 | 81 | 380452864 | 0 | 581959680 | 
7340032 | 4527256 | 2011-09-14 13:13:23 | | | latin1_swedish_ci | | 
row_format=COMPRESSED KEY_BLOCK_SIZE=16 | |
 | PathHierarchy | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 16384 | 0 | | 
2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | PathVisibility | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 16384 | 0 | 
| 2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Pool | InnoDB | 10 | Compressed | 8 | 2048 | 16384 | 0 | 16384 | 0 | 9 | 
2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Status | InnoDB | 10 | Compressed | 21 | 780 | 16384 | 0 | 0 | 0 | | 
2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Storage | InnoDB | 10 | Compressed | 1 | 16384 | 16384 | 0 | 0 | 0 | 2 | 
2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | UnsavedFiles | InnoDB | 10 | Compressed | 0 | 0 | 16384 | 0 | 0 | 0 | 1 | 
2011-09-14 11:44:16 | | | latin1_swedish_ci | | row_format=COMPRESSED 
KEY_BLOCK_SIZE=16 | |
 | Version | InnoDB | 10 | Compressed | 1 | 16384 | 16384

Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Reindl Harald


Am 14.09.2011 14:50, schrieb Maria Arrea:
 I have finally enabled compression:
 I am still benchmarking, but I see a 15-20% performance gain after enabling 
 compression using bacula gui
as expected, if disk I/O is the only bottleneck;
the same holds for NTFS compression inside a VMware machine on modern hardware





Re: Question about slow storage and InnoDB compression

2011-09-14 Thread Suresh Kuna
 I am still benchmarking, but I see a 15-20% performance gain after
enabling compression using bacula gui (bat).

This is a very good performance improvement. How much disk space did you
save here?

Can you benchmark with 4kb and 8kb key_block_size as well, to check
what suits your application? I have seen performance improve by
adjusting this one too.
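
Trying a different page size is just another ALTER; afterwards, the
compression statistics in information_schema show whether the smaller blocks
cause excessive recompressions (the table name here is just an example from
the schema above):

```sql
-- Rebuild with 8 KB compressed pages:
ALTER TABLE File ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Compression/decompression counters per page size (MySQL 5.5):
SELECT page_size, compress_ops, compress_ops_ok, uncompress_ops
  FROM information_schema.INNODB_CMP;
```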


On Wed, Sep 14, 2011 at 6:20 PM, Maria Arrea maria_ar...@gmx.com wrote:

 I have finally enabled compression:



  
 [quoted SHOW TABLE STATUS output snipped]

Question about slow storage and InnoDB compression

2011-09-13 Thread Maria Arrea
Hello

 I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We are 
using bacula as backup software, and all the info from backups is stored in a 
mysql database. Today I have upgraded from mysql 5.0 to 5.5 using IUS 
repository RPMS and the mysql_upgrade procedure, no problem so far. This 
backup system holds the bacula daemon, the mysql server and the backups of the other 
100 systems (Solaris/Linux/Windows)

 Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6 SATA 
disks (7200 rpm) connected to a Smart Array P812 controller  Red Hat 
Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and we are 
using InnoDB as storage engine for bacula internal data. We add hundreds of 
thousands of lines/day to our mysql (files are incrementally backed up daily from 
our 100 servers). So, we have 7-8 concurrent writes (to different lines, of 
course), and theoretically we only read from mysql when we restore from backup.

 Daily we launch a cron job that executes an optimize table in each table of 
our database to compact the database. It takes almost an hour. We are going to 
increase the memory of the server from 6 to 12 GB in a couple of weeks, and I 
will change my.cnf to reflect more memory. My actual my.cnf is attached below:


 These are my questions:


 - We have really slow storage (raid 6 SATA), but plenty of CPU and RAM. Should I 
enable innodb compression to make this mysql faster?
 - This system is IOPS-constrained for mysql (fine for backup, though). Should 
I add an SSD only to hold mysql data?
 - Any additional setting I should use to tune this mysql server?



 my.cnf content:

 [client]
 port = 3306
 socket = /var/lib/mysql/mysql.sock


 [mysqld]
 innodb_flush_method=O_DIRECT
 max_connections = 15
 wait_timeout = 86400
 port = 3306
 socket = /var/lib/mysql/mysql.sock
 key_buffer = 100M
 max_allowed_packet = 2M
 table_cache = 2048
 sort_buffer_size = 16M
 read_buffer_size = 16M
 read_rnd_buffer_size = 12M
 myisam_sort_buffer_size = 384M
 query_cache_type=1
 query_cache_size=32M
 thread_cache_size = 16
 query_cache_size = 250M
 thread_concurrency = 6
 tmp_table_size = 1024M
 max_heap_table_size = 1024M


 skip-federated
 innodb_buffer_pool_size= 2500M
 innodb_additional_mem_pool_size = 32M

 [mysqldump]
 max_allowed_packet = 16M

 [mysql]
 no-auto-rehash

 [isamchk]
 key_buffer = 1250M
 sort_buffer_size = 384M
 read_buffer = 8M
 write_buffer = 8M

 [myisamchk]
 key_buffer = 1250M
 sort_buffer_size = 384M
 read_buffer = 8M
 write_buffer = 8M

 [mysqlhotcopy]
 interactive-timeout


 Regards

 Maria


Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
I would recommend to go for a 15K rpm SSD raid-10 to keep the mysql data and
add the Barracuda file format with innodb file per table settings, 3 to 4 GB
of innodb buffer pool depending on the ratio of myisam vs. innodb in your db.
Check the current stats and reduce the tmp and heap table sizes to lower
values, and reduce the remaining buffers and caches as well.

On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea maria_ar...@gmx.com wrote:

 Hello

  I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
 are using bacula as backup software, and all the info from backups is stored
 in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using IUS
  repository RPMS and the mysql_upgrade procedure, no problem so far. This
  backup system holds the bacula daemon, the mysql server and the backups of
  other 100 systems (Solaris/Linux/Windows)

  Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
 SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red Hat
 Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and we
 are using InnoDB as storage engine for bacula internal data. We add hundreds
 of thousands of lines/day to our mysql (files are incrementally backed up
 daily from our 100 servers). So, we have 7-8 concurrent writes (to
 different lines, of course), and theoretically we only read from mysql when
 we restore from backup.

  Daily we launch a cron job that executes an optimize table in each table
 of our database to compact the database. It takes almost an hour. We are
 going to increase the memory of the server from 6 to 12 GB in a couple of
 weeks, and I will change my.cnf to reflect more memory. My actual my.cnf is
 attached below:


  These are my questions:


  - We have really slow storage (raid 6 SATA), but plenty of CPU and RAM. Should
 I enable innodb compression to make this mysql faster?
  - This system is IOPS-constrained for mysql (fine for backup, though).
 Should I add an SSD only to hold mysql data?
  - Any additional setting I should use to tune this mysql server?



  my.cnf content:

  [client]
  port = 3306
  socket = /var/lib/mysql/mysql.sock


  [mysqld]
  innodb_flush_method=O_DIRECT
  max_connections = 15
  wait_timeout = 86400
  port = 3306
  socket = /var/lib/mysql/mysql.sock
  key_buffer = 100M
  max_allowed_packet = 2M
  table_cache = 2048
  sort_buffer_size = 16M
  read_buffer_size = 16M
  read_rnd_buffer_size = 12M
  myisam_sort_buffer_size = 384M
  query_cache_type=1
  query_cache_size=32M
  thread_cache_size = 16
  query_cache_size = 250M
  thread_concurrency = 6
  tmp_table_size = 1024M
  max_heap_table_size = 1024M


  skip-federated
  innodb_buffer_pool_size= 2500M
  innodb_additional_mem_pool_size = 32M

  [mysqldump]
  max_allowed_packet = 16M

  [mysql]
  no-auto-rehash

  [isamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M

  [myisamchk]
  key_buffer = 1250M
  sort_buffer_size = 384M
  read_buffer = 8M
  write_buffer = 8M

  [mysqlhotcopy]
  interactive-timeout


  Regards

  Maria




-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Question about slow storage and InnoDB compression

2011-09-13 Thread Suresh Kuna
Thanks for correcting me on the disk stats, Singer. It was a typo: SSD
instead of SAS 15k rpm.

Compression may not increase the memory requirements:
To minimize I/O and to reduce the need to uncompress a page, at times the
buffer pool contains both the compressed and uncompressed form of a database
page. To make room for other required database pages, InnoDB may “evict”
from the buffer pool an uncompressed page, while leaving the compressed page
in memory. Or, if a page has not been accessed in a while, the compressed
form of the page may be written to disk, to free space for other data. Thus,
at any given time, the buffer pool may contain both the compressed and
uncompressed forms of the page, or only the compressed form of the page, or
neither.
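
As a back-of-envelope illustration of why that overhead is bounded, here is a
sketch; the 2500M figure is the buffer pool from Maria's my.cnf, while the
fraction of pages held in both forms is a made-up parameter, not an InnoDB
internal:

```python
# Rough estimate of buffer pool capacity when a fraction of pages is
# cached in both uncompressed (16 KB) and compressed (8 KB) form.
# The dual_fraction parameter is an illustrative assumption.

def effective_pages(pool_bytes, dual_fraction, page=16 * 1024, zpage=8 * 1024):
    """Number of distinct pages that fit if `dual_fraction` of them
    also keep a compressed copy in the pool."""
    cost_per_page = page + dual_fraction * zpage
    return int(pool_bytes // cost_per_page)

pool = 2500 * 1024 * 1024              # the 2500M buffer pool from the my.cnf
print(effective_pages(pool, 0.0))      # no compressed copies kept
print(effective_pages(pool, 1.0))      # every page kept in both forms
```

Even in the worst case (every page cached in both forms) the pool still holds
about two thirds as many pages, which fits the list's experience that
compression pays off when I/O, not memory, is the bottleneck.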

More details and benefits of the Barracuda file format can be found at
the URLs below, which cover the pros and cons of each file format:

http://dev.mysql.com/doc/refman/5.6/en/glossary.html#glos_antelope
http://dev.mysql.com/doc/innodb/1.1/en/glossary.html#glos_barracuda
http://www.mysqlperformanceblog.com/2008/04/23/real-life-use-case-for-barracuda-innodb-file-format/
http://dev.mysql.com/doc/innodb/1.1/en/innodb-other-changes-file-formats.html

I would go with Singer's suggestions in the What you want to do is part.

Thanks
Suresh Kuna


On Wed, Sep 14, 2011 at 7:21 AM, Singer X.J. Wang w...@singerwang.com wrote:

 Comments:
 1) There is no such thing as 15K RPM SSDs... SSDs are NON ROTATIONAL
 STORAGE, therefore RPMS make no sense..
 2) Upgrading to Barracuda file format isn't really worth it in this case;
 you're not going to get any real benefits. In your scenario I doubt InnoDB
 table compression will help, as it will significantly increase your memory
 requirements by keeping uncompressed and compressed copies in RAM.

 Questions:
 1) Why are you putting your MySQL data on the same volume as your Bacula
 backups? Bacula does large sequential I/O and MySQL will do random I/O based
 on the structure.

 What you want to do is:

 1) you have 5MB InnoDB log files; that's a major bottleneck. I would use at
 least 256MB or 512MB x 2 InnoDB log files.
 2) dump and import the database using innodb_file_per_table so that
 optimization will free up space.
 3) are you running Bacula on the server as well? If so, decrease the buffer
 pool to 1-2GB; if not, bump it up to 3GB, as you need some memory for
 bacula

 and 4, this is the most important one:
 How big is your MySQL data? It's not that big, I figure in the 80-100GB
 range.  Get yourself a pair of 240GB SSDs, mount them locally for MySQL.

 S





 On Tue, Sep 13, 2011 at 21:19, Suresh Kuna sureshkumar...@gmail.com wrote:

 I would recommend to go for a 15K rpm SSD raid-10 to keep the mysql data and
 add the Barracuda file format with innodb file per table settings, 3 to 4 GB
 of innodb buffer pool depending on the ratio of myisam vs. innodb in your db.
 Check the current stats and reduce the tmp and heap table sizes to lower
 values, and reduce the remaining buffers and caches as well.



 On Tue, Sep 13, 2011 at 9:06 PM, Maria Arrea maria_ar...@gmx.com wrote:

  Hello
 
   I have upgraded our backup server from mysql 5.0.77 to mysql 5.5.15. We
  are using bacula as backup software, and all the info from backups is
 stored
  in a mysql database. Today I have upgraded from mysql 5.0 to 5.5 using
 IUS
  repository RPMS and with mysql_upgrade procedure, no problem so far.
 This
  backup systems hold the bacula daemon, the mysql server and the backup
 of
  other 100 systems (Solaris/Linux/Windows)
 
   Our server has 6 GB of ram, 1 quad Intel Xeon E5520 and 46 TB of raid-6
  SATA disks (7200 rpm) connected to a Smart Array P812 controller  Red
 Hat
  Enterprise Linux 5.7 x64. Our mysql has dozens of millions of lines, and
 we
  are using InnoDB as storage engine for bacula internal data. We add
 hundreds
  of thousands of lines/day to our mysql (files are incrementally backed up
  daily from our 100 servers). So, we have 7-8 concurrent writes (to
  different lines, of course), and theoretically we only read from mysql
 when
  we restore from backup.
 
   Daily we launch a cron job that executes an optimize table in each
 table
  of our database to compact the database. It takes almost an hour. We are
  going to increase the memory of the server from 6 to 12 GB in a couple
 of
  weeks, and I will change my.cnf to reflect more memory. My actual my.cnf
 is
  attached below:
 
 
   These are my questions:
 
 
   - We have real slow storage (raid 6 SATA), but plenty CPU and ram .
 Should
  I enable innodb compression to make this mysql faster?
   - This system is IOPS-constrained for mysql (fine for backup, though).
  Should I add a SSD only to hold mysql data?
   - Any additional setting I should use to tune this mysql server?
 
 
 
   my.cnf content:
 
   [client]
   port = 3306
   socket = /var/lib/mysql/mysql.sock
 
 
   [mysqld]
   innodb_flush_method=O_DIRECT
   max_connections = 15
   wait_timeout = 86400
   port

Server/Client connection compression

2007-06-01 Thread Giorgio Zarrelli
Hi,

I saw that to enable server/client protocol compression I can start mysql with 
the -C option. 

Is there a configuration keyword to write in my.cnf to enable server/client 
protocol compression?

Thanks

Giorgio Zarrelli

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]



Re: Server/Client connection compression

2007-06-01 Thread Baron Schwartz

Hi,

Giorgio Zarrelli wrote:

Hi,

I saw that to enable server/client protocol compression I can start mysql with 
the -C option. 

Is there a configuration keyword to write in my.cnf to enable server/client 
protocol compression?


Yes.  In general, most command-line options can be written into the options files (and 
dashes and underscores are interchangeable by the way, so you will see people referring 
to both option-name=val and option_name=val).  For example, if I add a line


compress

to the [mysql] section in /home/baron/.my.cnf, and then connect and type 'status', I 
see a line in the output that says


Protocol:   Compressed

That line is not there unless compression is enabled.  I could add the same option to 
various sections in /etc/my.cnf as well; probably the best place to put it is in the 
[client] section.
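
So the fragment Baron describes, placed in the global options file, would look
like:

```
# /etc/my.cnf (or ~/.my.cnf) -- enables protocol compression for client tools
[client]
compress
```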


Cheers
Baron




Re: Table compression with write (append) support

2007-05-28 Thread Kevin Hunter
At 12:31a -0400 on 28 May 2007, Dan Nelson wrote:
 In the last episode (May 27), Yves Goergen said:
 I'm thinking about using a MySQL table to store an Apache access log
 and do statistics on it. Currently all access log files are stored as
 files and compressed by day. Older log files are compressed by month,
 with bzip2. This gives a very good compression ratio, since there's a
 lot of repetition in those files. If I store all that in a regular
 table, it would be several gigabytes large. So I'm looking for a way
 to compress the database table but still be able to append new rows.
 As the nature of a log file, it is not required to alter previous
 data. It could only be useful to delete older rows. Do you know
 something for that?
 
 You want the ARCHIVE storage engine.
 
 http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html

Huh.  This is the first I've heard of the archive engine.  Cool!

However, I'm curious how the compression offered by OPTIMIZE TABLE and
the zlib library would compare to denormalization of the log schema.  In
particular, I imagine a lot of the HTTP requests would be the same, so
you could create a table to store the requested URLs, and then have a
second table with the timestamp and foreign key relationship into the
first.  Depending on how wide the original rows are and how often
they're requested, I imagine you could get quite a savings.  Anything
else that's repeated as well?  IPs?  Return codes?
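
A minimal sketch of that denormalization (table and column names are invented
for illustration):

```sql
-- Dictionary of distinct request URLs:
CREATE TABLE request_url (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url VARCHAR(255) NOT NULL,
  UNIQUE KEY uk_url (url)
);

-- One narrow row per hit, referencing the dictionary:
CREATE TABLE access_hit (
  ts     DATETIME NOT NULL,
  url_id INT UNSIGNED NOT NULL,
  status SMALLINT UNSIGNED NOT NULL,
  KEY k_ts (ts),
  KEY k_url (url_id)
);
```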

Would be curious about the results if you were able to implement both.

Kevin




Re: Table compression with write (append) support

2007-05-28 Thread Yves Goergen
On 28.05.2007 09:06 CE(S)T, Kevin Hunter wrote:
 At 12:31a -0400 on 28 May 2007, Dan Nelson wrote:
 You want the ARCHIVE storage engine.

 http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html

Hm, it doesn't support deleting rows and it cannot use indexes. So doing
statistics on them (which can be a little more complex than counting
rows within a timespan, which is why I wanted to use an SQL database)
could get quite resource-demanding.

 In particular, I imagine a lot of the HTTP requests would be the
 same, so you could create a table to store the requested URLs, and
 then have a second table with the timestamp and foreign key
 relationship into the first.

Interesting idea. Inserting would be more work, to find the already
present dictionary rows. Also, URLs sometimes contain things like
session IDs. They're probably not of interest for my use but it's not
always easy to detect them for removal. I could also parse user agent
strings for easier evaluation, but that would cost me the ability to add
support for newer browsers at a later time. (Well, I could update the
database from the original access log files when I've updated the UA
parser.)

IP addresses (IPv4) and especially return codes (which can be mapped to
a 1-byte value) are probably not worth the reference. Data size values
are probably too widely distributed for this.

How large is a row reference? 4 bytes?
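
As a rough answer, with a 4-byte INT key the arithmetic works out like this;
all figures (hit count, URL length, distinct-URL count) are invented for
illustration:

```python
# Space saved by replacing a repeated URL string with a 4-byte foreign key.
# All input numbers are illustrative assumptions about an access log.

def savings(rows, avg_url_len, distinct_urls, key_bytes=4):
    """Bytes saved versus storing the full URL inline in every row."""
    before = rows * avg_url_len                            # URL in each row
    after = (rows * key_bytes                              # key per row
             + distinct_urls * (avg_url_len + key_bytes))  # dictionary table
    return before - after

# 10 million hits, 60-byte URLs, 50,000 distinct URLs:
print(savings(10_000_000, 60, 50_000))
```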

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Re: Table compression with write (append) support

2007-05-28 Thread Baron Schwartz

Yves Goergen wrote:

> On 28.05.2007 09:06 CE(S)T, Kevin Hunter wrote:
>
>> At 12:31a -0400 on 28 May 2007, Dan Nelson wrote:
>>
>>> You want the ARCHIVE storage engine.
>>>
>>> http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html
>
> Hm, it doesn't support deleting rows and it cannot use indexes. So doing
> statistics on them (which can be a little more complex than counting
> rows within a timespan, which is why I wanted to use an SQL database)
> could get quite resource demanding.


Another option might be to use compressed MyISAM tables, which you create with 
myisampack.  Suppose you create a new table every day, and after you start 
inserting into the new table, you compress yesterday's file.  Then you could use 
the MERGE storage engine to provide a view over all the tables as though they 
are one.
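
A rough sketch of what Baron describes (database path and table names are invented; the pack step itself runs from the shell, shown here as comments):

```sql
-- Once the day rolls over, pack yesterday's MyISAM table from the shell:
--   myisampack /var/lib/mysql/logs/access_20070527.MYI
--   myisamchk -rq /var/lib/mysql/logs/access_20070527.MYI
-- Then present all the daily tables as one through the MERGE engine:
CREATE TABLE access_all (
  hit_time DATETIME NOT NULL,
  line     VARCHAR(255) NOT NULL
) ENGINE=MERGE
  UNION = (access_20070526, access_20070527)
  INSERT_METHOD = LAST;  -- inserts go to the newest (still unpacked) table
```

The underlying tables must be structurally identical MyISAM tables, and the MERGE table has to be re-created (or ALTERed) each day to pick up the newest one.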


Baron




Re: Table compression with write (append) support

2007-05-28 Thread Kevin Hunter
At 5:45a -0400 on 28 May 2007, Yves Goergen wrote:
> On 28.05.2007 09:06 CE(S)T, Kevin Hunter wrote:
>> In particular, I imagine a lot of the HTTP requests would be the
>> same, so you could create a table to store the requested URLs, and
>> then have a second table with the timestamp and foreign key
>> relationship into the first.
>
> Interesting idea. Inserting would be more work to find the already
> present dictionary rows.

My guess is not /that/ much work, since you should only have a known and
relatively small set in this dictionary, it'd basically be cached the
whole time.  But, that's my guess.  Haven't tried it.  Practice and
theory . . .

> Also, URLs sometimes contain things like
> session IDs. They're probably not of interest for my use but it's not
> always easy to detect them for removal.

Really?  Why wouldn't it be easy to detect them?  You presumably know
what variable you're looking for in the URL string, and applying a
simple regex search-and-replace . . . ?

> IP addresses (IPv4) and especially return codes (which can be mapped to
> a 1-byte value) are probably not worth the reference. Data size values
> should be too distributed for this.

Well, presumably, you'd normalize that part of the table.  That is,
rather than include multiple foreign keys in your data rows, you'd
create a cartesian product of the return codes with the dictionary
table.  You'd have a slightly more bloated dictionary but, depending on
the number of requests the site(s) get(s), the aggregation would more
than make up for it.
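
A sketch of that, with invented names: the dictionary row becomes a (url, return code) combination, so the fact row still stores only one small reference:

```sql
-- One dictionary row per (url, return code) pair seen in the log.
CREATE TABLE request_dict (
  req_id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  url    VARCHAR(255) NOT NULL,
  status SMALLINT UNSIGNED NOT NULL,
  UNIQUE KEY (url, status)
);
```

The dictionary grows by at most (distinct URLs) x (distinct codes), but in practice most URLs only ever return a handful of codes.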

> I could also parse user agent
> strings for easier evaluation, but this takes me the possibility to add
> support for newer browsers at a later time. (Well, I could update the
> database from the original access log files when I've updated the UA
> parser.)

Same thought.  If you've only a known set of UA strings, you could
normalize them with the dictionary table as well.

> How large is a row reference? 4 bytes?

I don't know, I'm fairly new to MySQL.  I suppose it'd also depend on
the type of index.  Anyone more knowledgeable wanna pipe up?

Well.  Whatever method works for your needs, cool.  I'm going to check
out both MYISAMPACK and ARCHIVE.  I was unaware of those.  Thanks list!

Kevin




Re: Table compression with write (append) support

2007-05-28 Thread Yves Goergen
On 28.05.2007 18:34 CE(S)T, Kevin Hunter wrote:
> At 5:45a -0400 on 28 May 2007, Yves Goergen wrote:
>> Also, URLs sometimes contain things like
>> session IDs. They're probably not of interest for my use but it's not
>> always easy to detect them for removal.
>
> Really?  Why wouldn't it be easy to detect them?  You presumably know
> what variable you're looking for in the URL string, and applying a
> simple regex search-and-replace . . . ?

I don't control what applications run on that web server.

> Same thought.  If you've only a known set of UA strings, you could
> normalize them with the dictionary table as well.

Well, I don't know (in advance) what's all running around out there...

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Table compression with write (append) support

2007-05-27 Thread Yves Goergen
Hi,

I'm thinking about using a MySQL table to store an Apache access log and
do statistics on it. Currently all access log files are stored as files
and compressed by day. Older log files are compressed by month, with
bzip2. This gives a very good compression ratio, since there's a lot of
repetition in those files. If I store all that in a regular table, it
would be several gigabytes large. So I'm looking for a way to compress
the database table but still be able to append new rows. As the nature
of a log file, it is not required to alter previous data. It could only
be useful to delete older rows. Do you know something for that?

-- 
Yves Goergen LonelyPixel [EMAIL PROTECTED]
Visit my web laboratory at http://beta.unclassified.de




Re: Table compression with write (append) support

2007-05-27 Thread Dan Nelson
In the last episode (May 27), Yves Goergen said:
> I'm thinking about using a MySQL table to store an Apache access log
> and do statistics on it. Currently all access log files are stored as
> files and compressed by day. Older log files are compressed by month,
> with bzip2. This gives a very good compression ratio, since there's a
> lot of repetition in those files. If I store all that in a regular
> table, it would be several gigabytes large. So I'm looking for a way
> to compress the database table but still be able to append new rows.
> As the nature of a log file, it is not required to alter previous
> data. It could only be useful to delete older rows. Do you know
> something for that?

You want the ARCHIVE storage engine.

http://dev.mysql.com/doc/refman/5.0/en/archive-storage-engine.html
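
For reference, using it is just a matter of the ENGINE clause (table and column names here are invented); as of MySQL 5.0 the engine accepts INSERT and SELECT but no DELETE, UPDATE, or indexes:

```sql
CREATE TABLE access_log (
  hit_time DATETIME NOT NULL,
  line     VARCHAR(255) NOT NULL
) ENGINE=ARCHIVE;

-- Rows are zlib-compressed as they are inserted; queries are full scans.
INSERT INTO access_log VALUES (NOW(), 'GET /index.html HTTP/1.1 200');
SELECT COUNT(*) FROM access_log WHERE hit_time >= '2007-05-01';
```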

-- 
Dan Nelson
[EMAIL PROTECTED]




Ideas on Compression Protocol

2003-10-09 Thread Director General: NEFACOMP
Hi group,
I recently asked about Compression and security and got nice answers.

Now I have got a different question:
What are the disadvantages of using the client/server compression protocol?

Does it increase speed? Does it decrease speed? Does it overload the
server? The client?
Any ideas and/or thoughts are welcome.


Thanks,
__
NZEYIMANA Emery Fabrice
NEFA Computing Services, Inc.
P.O. Box 5078 Kigali
Office Phone: +250-51 11 06
Office Fax: +250-50 15 19
Mobile: +250-08517768
Email: [EMAIL PROTECTED]
http://www.nefacomp.net/


Re: Ideas on Compression Protocol

2003-10-09 Thread Director General: NEFACOMP
Thank you for the ideas. Very helpful.


Thanks
Emery
- Original Message -
From: Danny Haworth [EMAIL PROTECTED]
To: Director General: NEFACOMP [EMAIL PROTECTED]
Sent: Thursday, October 09, 2003 12:01
Subject: Re: Ideas on Compression Protocol


> We used compression on a project with about 90 simultaneous users.
> Overall it sped things up (especially since most users were on modem
> dialups). Load on the client wasn't noticeable, neither was load on the
> server.
>
> The server did have two 2GHz processors in it, though, so compression of
> 90 simultaneous streams shouldn't have been a problem =)
>
> On a standard 100Mb switched LAN, compression didn't make much of a
> difference, but there weren't any noticeable speed decreases either.
>
> HTH
>
> danny

> On Thu, 2003-10-09 at 09:39, Director General: NEFACOMP wrote:
>> Hi group,
>> I recently asked about Compression and security and got nice answers.
>>
>> Now I have got a different question:
>> What are the disadvantage of using that client/server Compression
>> protocol?
>>
>> Does it increase speed? Does it decrease speed? Does it overload the
>> server? The client?
>> Any ideas and/or thoughts are welcome.
>>
>>
>> Thanks,
>> __
>> NZEYIMANA Emery Fabrice
>> NEFA Computing Services, Inc.
>> P.O. Box 5078 Kigali
>> Office Phone: +250-51 11 06
>> Office Fax: +250-50 15 19
>> Mobile: +250-08517768
>> Email: [EMAIL PROTECTED]
>> http://www.nefacomp.net/









Compression: Security or Zipping?

2003-10-07 Thread Director General: NEFACOMP
Hi group,

I have read in the MySQL manual that the client/Server Compression protocol adds some 
security to the application.

Does anyone have more information on this?



Thanks,
__
NZEYIMANA Emery Fabrice
NEFA Computing Services, Inc.
P.O. Box 5078 Kigali
Office Phone: +250-51 11 06
Office Fax: +250-50 15 19
Mobile: +250-08517768
Email: [EMAIL PROTECTED]
http://www.nefacomp.net/


RE: Compression: Security or Zipping?

2003-10-07 Thread Greg_Cope
> Hi group,
>
> I have read in the MySQL manual that the client/Server
> Compression protocol adds some security to the application.
>
> Does anyone have more information on this?
>

It adds security by compressing the network traffic, which is more
security by obscurity: it might stop a casual observer sniffing network
traffic, but a dedicated person would just sniff and then uncompress.

Think of the similarity between a plain file and a compressed one - it
offers a similar level of protection.  If you know how to recreate all
the fragments and uncompress them, it offers little protection.

If client/server communications security is an issue, either use a VPN,
an ssh tunnel, or look at the MySQL SSL client-server features.
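
For example, the server can refuse unencrypted connections on a per-account basis (the user, database, and password below are invented; the server must have SSL support compiled in and configured):

```sql
-- This account can only connect over SSL, whatever its compression setting.
GRANT SELECT ON mydb.* TO 'appuser'@'%' IDENTIFIED BY 'secret'
  REQUIRE SSL;
```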

The above could be completely wrong..

Greg




Re: Compression: Security or Zipping?

2003-10-07 Thread Danny Haworth
Hi,

I think this is more of a Security by Obscurity approach. E.g.
compressed credit card details flying down the wire are less obvious
than their plaintext equivalent.

I guess there may also be a case of increased difficulty when trying to
decompress a single part of captured traffic, like you would get when
trying to decompress a 10k part of a large zip file.

hth

danny

On Tue, 2003-10-07 at 10:49, Director General: NEFACOMP wrote:
> Hi group,
>
> I have read in the MySQL manual that the client/Server Compression
> protocol adds some security to the application.
>
> Does anyone have more information on this?
>
>
> Thanks,
> __
> NZEYIMANA Emery Fabrice





RE: compression protocol

2002-12-18 Thread Dmitry Kosoy


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 10:55 AM
To: Dmitry Kosoy
Subject: Re: compression protocol 


Your message cannot be posted because it appears to be either spam or
simply off topic to our filter. To bypass the filter you must include
one of the following words in your message:

sql,query,queries,smallint

If you just reply to this message, and include the entire text of it in the
reply, your reply will go through. However, you should
first review the text of the message to make sure it has something to do
with MySQL. Just typing the word MySQL once will be sufficient, for example.

You have written the following:

Hi,

I want to use compression in the server/client protocol.
How do I define it in a JDBC connection?

Regards,
  Dmitry
This email and any files transmitted with it are confidential and intended
solely for the use of the individual or entity to whom they are addressed.
If you have received this email in error please notify us immediately and
delete this communication.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: compression protocol

2002-12-18 Thread Mark Matthews
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Dmitry Kosoy wrote:


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 10:55 AM
To: Dmitry Kosoy
Subject: Re: compression protocol 


Your message cannot be posted because it appears to be either spam or
simply off topic to our filter. To bypass the filter you must include
one of the following words in your message:

sql,query,queries,smallint

If you just reply to this message, and include the entire text of it in the
reply, your reply will go through. However, you should
first review the text of the message to make sure it has something to do
with MySQL. Just typing the word MySQL once will be sufficient, for example.

You have written the following:

Hi,

I want to use compression in the server/client protocol.
How do I define it in a JDBC connection?

Regards,
  Dmitry


The JDBC driver does not support it yet. It will be implemented in the 
3.1 series of drivers (3.0.3, the final BETA of the 3.0 series was 
released today).

	-mark

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.1.90 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQE+AHWutvXNTca6JD8RAsYUAJ9nWfGU+rm8kk17jthk7H9qPqowMQCgpK8/
6j9OuDD/36C2127SRhxbmfY=
=ihGC
-END PGP SIGNATURE-





compression

2002-02-12 Thread Victoria Reznichenko

Sommai,

Wednesday, February 06, 2002, 8:01:13 AM, you wrote:

SF> Hi,
SF> I need to know that MySQL has any compression method when it store
SF> data?  If the answer is Yes, How the different compression between MySQL
SF> and other tools (zip, gzip)?  I asked this question because I need to store
SF> some text file for future used (at least 1 Mbyte per day).  I was used
SF> Winzip in Windows to compress and keep in file server.  If MySQL could
SF> compress some byte of data I think it better than flat file.

myisampack is used to compress MyISAM tables; after packing, tables
become read-only. See: http://www.mysql.com/doc/m/y/myisampack.html

You can also read about the packed (compressed) table format at:
http://www.mysql.com/doc/C/o/Compressed_format.html

SF> Sommai




-- 
For technical support contracts, goto https://order.mysql.com/
This email is sponsored by Ensita.net http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Victoria Reznichenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.net
   ___/   www.mysql.com












compression

2002-02-05 Thread Sommai Fongnamthip

Hi,
I need to know whether MySQL has any compression method when it stores
data.  If the answer is yes, how does that compression differ from
other tools (zip, gzip)?  I ask this question because I need to store
some text files for future use (at least 1 MByte per day).  I have been
using WinZip on Windows to compress them and keep them on a file
server.  If MySQL could compress the data, I think it would be better
than a flat file.

Sommai






RE: MySQL compression?

2001-06-15 Thread Chris Bolt

Look up myisampack in the mysql manual at http://www.mysql.com/doc/. The
only drawback is you can't modify the table.

> I have a table that has massive amounts of text.  Just plain text, stuff
> that would compress REALLY well.  Does mysql have any sort of compression
> internally for the table data that it stores?  A simple gzip wouldn't add
> too much overhead to the system, and you could still have
> clear-text indexes.






Dinamic Compression/Decompression

2001-06-12 Thread Emiliano F Castejon (Castle John)


I would like to know if there is a way to use a compressed MySQL
database for both reading and writing (dynamic
compression/decompression). Using the myisampack utility I can only
create a read-only database. Is there any internal compression scheme
in MySQL? Disk space is an important factor in my work: every day
about 2 GBytes of data (IP protocol headers captured from live network
traffic) are added to the database.

Thanks for any help,
  Castejon.

-- 
___
 Emiliano F Castejon
(Castle John)

  [EMAIL PROTECTED]
   [EMAIL PROTECTED]

  INPE - National Institute of Space Research
   Computer Networks Security Area
  Sao Jose dos Campos - SP - Brazil
 http://www.inpe.br

  ICQ UIN: 27559902 (required authorization)

  http://www.castlejohn.org
®2001 By Castle John
___
AUTOPSIA NTA

  A new concept in Network Traffic Analysis
Coming soon ...

 http://www.castlejohn.org/autopsia
®2001 By Castle John
___
  General guidelines for security

  Do not assume anything
   Trust no-one,nothing
Nothing is secure
  Security is a trade-off with usability
 Paranoia is your friend
___





Re: Dinamic Compression/Decompression

2001-06-12 Thread Philip Mak

On Tue, 12 Jun 2001, Emiliano F Castejon (Castle John) wrote:

> I would like to know if there is a way to use a compressed MYSQL bank
> for read and write (dynamic compression/decompression).

I'm not sure if it is possible to do this natively in MySQL; I'll let
someone else answer that.

You could perform compression at the filesystem level independent of
MySQL; make a partition on your disk that is stored compressed (check your
operating system manuals to determine how to do this; I don't know), and
then MySQL would be able to read and write, and it would be compressed.
However, performance will suffer.

-Philip Mak ([EMAIL PROTECTED])






Replication and compression

2001-04-20 Thread Scott Vanderweyst

Is it possible to enable compression between replicating database servers?
I'm assuming that there is already compression code in place, because of
the need to link with the compression libraries on the client end
sometimes.

Compressing the replication connection has certain advantages where the
connection speed is slow, and the servers at either end are not being
stressed due to slow data transfer.

Scott V








Re: Should I turn on compression in MyODBC in LAN evironment?!?

2001-01-13 Thread Sinisa Milivojevic

"Apolinaras \"Apollo\" Sinkevicius" writes:
> I wonder, is there a performance gain if compression is turned on the
> client side?
> My set-up:
> Front end: M$Access97 via latest MyODBC
> Back end: MySQL 3.23.30 on RH7 with PIII300 128Ram
>
> LAN is 100BaseT Full Duplex switched.
>
> Thanx
  


No, you would not get much of a performance boost over such a fast
network unless you are inserting and retrieving huge rows, over 1 MB in
size.


Compression reaps huge gains over slow lines.

Regards,

Sinisa

    __ _   _  ___ ==  MySQL AB
 /*/\*\/\*\   /*/ \*\ /*/ \*\ |*| Sinisa Milivojevic
/*/ /*/ /*/   \*\_   |*|   |*||*| mailto:[EMAIL PROTECTED]
   /*/ /*/ /*/\*\/*/  \*\|*|   |*||*| Larnaka, Cyprus
  /*/ /*/  /*/\*\_/*/ \*\_/*/ |*|
  /*/^^^\*\^^^
 /*/ \*\Developers Team
