Re: [ANN] Mroonga 8.09 - Fast fulltext search for all languages on MySQL
Hi,

Sorry, the release announcement for Mroonga 8.09 contained wrong
information: MySQL 8 is not supported yet. Support for it is still
being worked on.

On 2018/11/29 14:12, Horimoto Yasuhiro wrote:
> Hi,
>
> Mroonga 8.09 has been released!
>
> Mroonga is a MySQL storage engine that supports fast fulltext search
> and geolocation search. It is CJK ready. It uses Groonga as a storage
> and fulltext search engine.
>
> Document:
> http://mroonga.org/docs/
>
> How to install: Install Guide
> http://mroonga.org/docs/install.html
>
> How to upgrade: Upgrade Guide
> http://mroonga.org/docs/upgrade.html
>
> Blog:
> http://mroonga.org/en/blog/2018/11/29/mroonga-8.09.html
>
> Changes:
> http://mroonga.org/docs/news.html#release-8.09
>
> Here are some topics in this release.
>
> * Supported Ubuntu 18.10 (Cosmic Cuttlefish).
> * Supported MariaDB 10.3.10.
> * Supported MariaDB 10.2.19.
> * Supported MariaDB 10.1.37.
> * Supported Percona Server 5.7.23-25.
> * Supported MariaDB 10.3.11.
> * Supported MySQL 5.6.42.
> * Supported MySQL 5.7.24.
> * Supported MySQL 8.
>
> Regards,

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
[ANN] Mroonga 8.09 - Fast fulltext search for all languages on MySQL
Hi,

Mroonga 8.09 has been released!

Mroonga is a MySQL storage engine that supports fast fulltext search
and geolocation search. It is CJK ready. It uses Groonga as a storage
and fulltext search engine.

Document:
http://mroonga.org/docs/

How to install: Install Guide
http://mroonga.org/docs/install.html

How to upgrade: Upgrade Guide
http://mroonga.org/docs/upgrade.html

Blog:
http://mroonga.org/en/blog/2018/11/29/mroonga-8.09.html

Changes:
http://mroonga.org/docs/news.html#release-8.09

Here are some topics in this release.

* Supported Ubuntu 18.10 (Cosmic Cuttlefish).
* Supported MariaDB 10.3.10.
* Supported MariaDB 10.2.19.
* Supported MariaDB 10.1.37.
* Supported Percona Server 5.7.23-25.
* Supported MariaDB 10.3.11.
* Supported MySQL 5.6.42.
* Supported MySQL 5.7.24.
* Supported MySQL 8.

Regards,
Re: Estimate mysqldump size
Ronan McGlue writes:

> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way that gives an estimate of the size of a mysqldump in
>> such a way that it would always be larger than the real size?
>>
>> So far, I have found:
>>
>> mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
>> information_schema.tables WHERE table_schema NOT IN
>> ('information_schema','performance_schema','mysql');"
>>
>> but the result may be smaller than the real size.
>
> In the above example, you also need to account for index_length, eg
>
> mysql> select round(SUM(data_length+index_length)/POWER(1024,2),1)
> Total_MB, round(SUM(data_length)/POWER(1024,2),1)
> data_MB, round(SUM(index_length)/POWER(1024,2),1) index_MB FROM
> information_schema.tables where TABLE_SCHEMA not in (
> "information_schema", "performance_schema", "mysql");
> +----------+---------+----------+
> | Total_MB | data_MB | index_MB |
> +----------+---------+----------+
> |   4546.0 |  4093.7 |    452.2 |
> +----------+---------+----------+
> 1 row in set (0.00 sec)

Thanks.

> However, this doesn't map 100% to the OS file size (if using InnoDB
> file per table) and will likely never be 100% accurate to what the OS
> reports, due to fragmentation etc.
>
>> I am writing a program that takes the result of mysqldump and pipes
>> it into a tar file.
>
> A typical global mysqldump (ie taken with -A) will be a single file.
> Why are you then wanting to pipe this to a tar archive?

The tar file will be part of an Amanda backup. On a full backup, it
should have the mysqldump, and on incremental backups it should have
the binary logs. Having everything in a tar file makes it very
consistent and easy to deal with in case of catastrophic failure (if
everything is lost except the tape, the backup can still be extracted
by hand on a live CD/single-user system, as it is all tar). Amanda
will also take care of the compression.
> It's also common for mysqldump to be compressed via a pipe due to the
> nature of the output file created (eg text files compress *very*
> well), to then be sent across the network, eg via ssh:
>
> mysqldump -u.. -p -A | gzip > schema.sql.gz
>
> Aside from your stated goal of piping to tar, if we can step back a
> level briefly - what are you trying to achieve here?

A plugin for Amanda. I think a commercial solution exists; I don't
need anything very fancy, so I am trying to come up with my own
solution.

Best regards,

Olivier

>> Tar file format has the size in the header, before the data, and if
>> the size of the dump is bigger than the size declared in the header,
>> tar does not like that (if the size of the dump is smaller than the
>> declared size, it can be padded with spaces).
>>
>> So, the estimate must be larger than the actual dump; how to achieve
>> that?
>
> It won't be anything other than an estimate, however it should still
> be reasonably close if you aren't doing a *lot* of DML on it.
>
> You could artificially inflate the expected size by, eg, multiplying
> by 1.1x or 1.2x, however there will always be an edge case table which
> will be greater still..
>
> Regards
>
> Ronan McGlue
>
> MySQL Support
>
>> Thanks in advance,
>>
>> Olivier
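[Editor's note: the padding trick discussed above can be sketched with
Python's standard tarfile module: declare an inflated size in the tar
header, stream the dump into the member, and fill the remainder with
spaces. This is a hypothetical illustration of the idea, not Amanda's
plugin API; the class and function names are invented, and the
"fail loudly if the estimate was too small" behaviour is an assumption
about what a backup tool should do.]

```python
import io
import tarfile


class PaddedStream(io.RawIOBase):
    """Present `stream` as exactly `size` bytes: its real contents
    followed by space padding.  If the stream turns out to be longer
    than the declared size, raise instead of silently truncating."""

    def __init__(self, stream, size):
        self.stream = stream
        self.remaining = size
        self.eof = False

    def readable(self):
        return True

    def read(self, n=-1):
        if n < 0 or n > self.remaining:
            n = self.remaining
        parts = []
        need = n
        while need and not self.eof:
            chunk = self.stream.read(need)
            if not chunk:
                self.eof = True
                break
            parts.append(chunk)
            need -= len(chunk)
        parts.append(b" " * need)  # pad if the stream ended early
        self.remaining -= n
        # Declared size fully consumed: the real data must be done too.
        if self.remaining == 0 and not self.eof and self.stream.read(1):
            raise ValueError("size estimate was too small for the dump")
        return b"".join(parts)


def add_padded_member(tar, name, stream, declared_size):
    """Write `stream` into `tar` under a header that declares
    `declared_size` bytes, padding the difference with spaces."""
    info = tarfile.TarInfo(name=name)
    info.size = declared_size
    tar.addfile(info, PaddedStream(stream, declared_size))
```

In a real plugin the stream would presumably be mysqldump's stdout
(e.g. `subprocess.Popen(...).stdout`) and `declared_size` the inflated
estimate; the trailing whitespace is harmless to the mysql client on
restore.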
Re: Estimate mysqldump size
Ronan McGlue writes:

> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way that gives an estimate of the size of a mysqldump in
>> such a way that it would always be larger than the real size?
>>
>> So far, I have found:
>>
>> mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
>> information_schema.tables WHERE table_schema NOT IN
>> ('information_schema','performance_schema','mysql');"
>>
>> but the result may be smaller than the real size.
>
> In the above example, you also need to account for index_length, eg

But I thought I had read that indexes are not saved by a mysqldump, but
recreated on a restore?

Thanks in advance,

Olivier

> mysql> select round(SUM(data_length+index_length)/POWER(1024,2),1)
> Total_MB, round(SUM(data_length)/POWER(1024,2),1)
> data_MB, round(SUM(index_length)/POWER(1024,2),1) index_MB FROM
> information_schema.tables where TABLE_SCHEMA not in (
> "information_schema", "performance_schema", "mysql");
> +----------+---------+----------+
> | Total_MB | data_MB | index_MB |
> +----------+---------+----------+
> |   4546.0 |  4093.7 |    452.2 |
> +----------+---------+----------+
> 1 row in set (0.00 sec)
>
> However, this doesn't map 100% to the OS file size (if using InnoDB
> file per table) and will likely never be 100% accurate to what the OS
> reports, due to fragmentation etc.
>
>> I am writing a program that takes the result of mysqldump and pipes
>> it into a tar file.
>
> A typical global mysqldump (ie taken with -A) will be a single file.
> Why are you then wanting to pipe this to a tar archive?
>
> It's also common for mysqldump to be compressed via a pipe due to the
> nature of the output file created (eg text files compress *very*
> well), to then be sent across the network, eg via ssh:
>
> mysqldump -u.. -p -A | gzip > schema.sql.gz
>
> Aside from your stated goal of piping to tar, if we can step back a
> level briefly - what are you trying to achieve here?
>> Tar file format has the size in the header, before the data, and if
>> the size of the dump is bigger than the size declared in the header,
>> tar does not like that (if the size of the dump is smaller than the
>> declared size, it can be padded with spaces).
>>
>> So, the estimate must be larger than the actual dump; how to achieve
>> that?
>
> It won't be anything other than an estimate, however it should still
> be reasonably close if you aren't doing a *lot* of DML on it.
>
> You could artificially inflate the expected size by, eg, multiplying
> by 1.1x or 1.2x, however there will always be an edge case table which
> will be greater still..
>
> Regards
>
> Ronan McGlue
>
> MySQL Support
>
>> Thanks in advance,
>>
>> Olivier
Re: Estimate mysqldump size
Am 28.11.18 um 10:00 schrieb Olivier:
> Is there a way that gives an estimate of the size of a mysqldump such a
> way that it would always be larger than the real size?

Keep in mind that a dump consists of SQL statements (CREATE TABLE,
INSERT, ...) that do not exist in that form in the data files, so its
size is only loosely related to the table sizes on disk.
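[Editor's note: to make that point concrete, every row comes back as a
text literal inside an INSERT statement, with quoting, escaping, commas
and the statement syntax around it. The toy formatter below is an
illustration of the shape of that overhead only; it is not mysqldump's
actual escaping, which also handles binary data, NUL bytes, dates, and
more.]

```python
def sql_literal(value):
    """Render a value roughly the way a dump renders a column value."""
    if value is None:
        return "NULL"
    if isinstance(value, (int, float)):
        return str(value)
    # Strings are single-quoted, with backslashes and quotes escaped.
    escaped = str(value).replace("\\", "\\\\").replace("'", "\\'")
    return "'%s'" % escaped


def insert_statement(table, rows):
    """Build one extended-insert style statement for `rows`."""
    values = ",".join(
        "(%s)" % ",".join(sql_literal(v) for v in row) for row in rows
    )
    return "INSERT INTO `%s` VALUES %s;" % (table, values)


stmt = insert_statement("t", [(1, "O'Brien"), (2, None)])
# The text form can differ from data_length in either direction: a
# BIGINT column stores 8 bytes but may dump as a 1-digit literal,
# while a quoted, escaped string grows past its stored length.
```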
Re: Estimate mysqldump size
Hi Olivier,

On 28/11/2018 8:00 pm, Olivier wrote:
> Hello,
>
> Is there a way that gives an estimate of the size of a mysqldump in
> such a way that it would always be larger than the real size?
>
> So far, I have found:
>
> mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
> information_schema.tables WHERE table_schema NOT IN
> ('information_schema','performance_schema','mysql');"
>
> but the result may be smaller than the real size.

In the above example, you also need to account for index_length, eg

mysql> select round(SUM(data_length+index_length)/POWER(1024,2),1)
Total_MB, round(SUM(data_length)/POWER(1024,2),1)
data_MB, round(SUM(index_length)/POWER(1024,2),1) index_MB FROM
information_schema.tables where TABLE_SCHEMA not in (
"information_schema", "performance_schema", "mysql");
+----------+---------+----------+
| Total_MB | data_MB | index_MB |
+----------+---------+----------+
|   4546.0 |  4093.7 |    452.2 |
+----------+---------+----------+
1 row in set (0.00 sec)

However, this doesn't map 100% to the OS file size (if using InnoDB
file per table) and will likely never be 100% accurate to what the OS
reports, due to fragmentation etc.

> I am writing a program that takes the result of mysqldump and pipes
> it into a tar file.

A typical global mysqldump (ie taken with -A) will be a single file.
Why are you then wanting to pipe this to a tar archive?

It's also common for mysqldump to be compressed via a pipe due to the
nature of the output file created (eg text files compress *very* well),
to then be sent across the network, eg via ssh:

mysqldump -u.. -p -A | gzip > schema.sql.gz

Aside from your stated goal of piping to tar, if we can step back a
level briefly - what are you trying to achieve here?

> Tar file format has the size in the header, before the data, and if
> the size of the dump is bigger than the size declared in the header,
> tar does not like that (if the size of the dump is smaller than the
> declared size, it can be padded with spaces).
>
> So, the estimate must be larger than the actual dump; how to achieve
> that?
It won't be anything other than an estimate, however it should still be
reasonably close if you aren't doing a *lot* of DML on it.

You could artificially inflate the expected size by, eg, multiplying by
1.1x or 1.2x, however there will always be an edge case table which
will be greater still..

Regards

Ronan McGlue

MySQL Support

> Thanks in advance,
>
> Olivier
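[Editor's note: the safety-factor suggestion above, combined with the
tar constraint from the rest of the thread, fits in a few lines: take
the data_length/index_length totals from information_schema, inflate
them by a configurable factor, and round up to a whole 512-byte tar
block. The 1.2 default and the block rounding are illustrative choices,
not anything Amanda or tar requires of the estimate itself.]

```python
TAR_BLOCK = 512  # tar stores member data in 512-byte blocks


def inflated_estimate(data_length, index_length, factor=1.2):
    """Over-estimate a dump's size from information_schema totals,
    rounded up to a whole tar block."""
    raw = int((data_length + index_length) * factor)
    blocks = -(-raw // TAR_BLOCK)  # ceiling division
    return blocks * TAR_BLOCK


# e.g. the totals from the query above (4093.7 MB data, 452.2 MB index):
est = inflated_estimate(4093.7 * 2**20, 452.2 * 2**20)
```

As the thread notes, no factor is guaranteed safe; whatever consumes
the estimate still has to detect the "estimate too small" case and fail
loudly rather than write a corrupt archive.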
Estimate mysqldump size
Hello,

Is there a way that gives an estimate of the size of a mysqldump in
such a way that it would always be larger than the real size?

So far, I have found:

mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
information_schema.tables WHERE table_schema NOT IN
('information_schema','performance_schema','mysql');"

but the result may be smaller than the real size.

I am writing a program that takes the result of mysqldump and pipes it
into a tar file. The tar file format has the size in the header, before
the data, and if the size of the dump is bigger than the size declared
in the header, tar does not like that (if the size of the dump is
smaller than the declared size, it can be padded with spaces).

So, the estimate must be larger than the actual dump; how to achieve
that?

Thanks in advance,

Olivier