Dear MySQL Users,
MySQL Cluster is the distributed, shared-nothing variant of MySQL.
This storage engine provides:
- In-Memory storage - Real-time performance (with optional
checkpointing to disk)
- Transparent Auto-Sharding - Read & write scalability
- Active-Active/Multi-Master geographic replication
- 99.999% High Availability with no single point of failure
and on-line maintenance
- NoSQL and SQL APIs (including C++, Java, http, Memcached
and JavaScript/Node.js)
MySQL Cluster 7.5.9 has been released and can be downloaded from
http://www.mysql.com/downloads/cluster/
where you will also find Quick Start guides to help you get your
first MySQL Cluster database up and running.
MySQL Cluster 7.5 is also available from our repository for Linux
platforms; see here for details:
http://dev.mysql.com/downloads/repo/
The release notes are available from
http://dev.mysql.com/doc/relnotes/mysql-cluster/7.5/en/index.html
MySQL Cluster enables users to meet the database challenges of next
generation web, cloud, and communications services with uncompromising
scalability, uptime and agility.
More details can be found at
http://www.mysql.com/products/cluster/
Enjoy!
Changes in MySQL NDB Cluster 7.5.9 (5.7.21-ndb-7.5.9) (2018-01-17,
General Availability)
MySQL NDB Cluster 7.5.9 is a new release of MySQL NDB Cluster
7.5, based on MySQL Server 5.7 and including features in
version 7.5 of the NDB
(http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster.html)
storage engine, as well as fixing recently discovered bugs in
previous NDB Cluster releases.
Obtaining MySQL NDB Cluster 7.5. MySQL NDB Cluster 7.5
source code and binaries can be obtained from
http://dev.mysql.com/downloads/cluster/.
For an overview of changes made in MySQL NDB Cluster 7.5, see
What is New in NDB Cluster 7.5
(http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-what-is-new-7-5.html).
This release also incorporates all bug fixes and changes made
in previous NDB Cluster releases, as well as all bug fixes
and feature changes which were added in mainline MySQL 5.7
through MySQL 5.7.21 (see Changes in MySQL 5.7.21 (Not yet
released, General Availability)
(http://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-21.html)).
Bugs Fixed
* NDB Replication: On an SQL node not being used for a
replication channel, with sql_log_bin=0
(http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_sql_log_bin)
it was possible, after creating and populating an NDB
table, for a table map event to be written to the binary
log for the new table with no corresponding row events.
This caused problems when the log was later used by a
slave cluster replicating from the mysqld on which the
table was created.
This is fixed by adding support for maintaining a
cumulative any_value bitmap for global checkpoint event
operations, representing the bits set consistently for all
rows of a given table in a given epoch, and by adding a
check for whether all operations (rows) for a given table
are marked as NOLOGGING; if they are, the table is not
added to the Table_map held by the binlog injector.
As part of this fix, the NDB API adds a new
getNextEventOpInEpoch3()
(http://dev.mysql.com/doc/ndbapi/en/ndb-ndb-getnexteventopinepoch3.html)
method, which makes it possible to retrieve the
cumulative any_value bitmap and thus obtain information
about any AnyValue received. (Bug #26333981)
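The accumulation logic described above can be illustrated with a small model. This is a hedged sketch only: the function name, the bit value, and the data layout are hypothetical, not NDB's actual C++ implementation, which lives in the binlog injector.

```python
# Simplified model of the fix: OR together the any_value words of all
# row operations for each table within one epoch, and add a table to
# the binlog table map only if at least one of its rows is logged.
# NOLOGGING_BIT is an illustrative flag, not NDB's real bit layout.

NOLOGGING_BIT = 0x80000000  # hypothetical "do not log this row" flag

def tables_to_binlog(epoch_ops):
    """epoch_ops: dict mapping table name -> list of per-row any_value
    words seen in one epoch. Returns the set of tables that should get
    a table map event in the binary log."""
    logged = set()
    for table, any_values in epoch_ops.items():
        cumulative = 0
        all_nologging = True
        for av in any_values:
            cumulative |= av  # cumulative any_value bitmap for the epoch
            if not (av & NOLOGGING_BIT):
                all_nologging = False
        # If every row operation is marked NOLOGGING, skip the table:
        # no table map event is written without row events to follow it.
        if any_values and not all_nologging:
            logged.add(table)
    return logged
```

For example, a table whose rows all carry the NOLOGGING flag in an epoch is excluded entirely, while a table with at least one logged row is included.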
* A query against the INFORMATION_SCHEMA.FILES
(http://dev.mysql.com/doc/refman/5.7/en/files-table.html)
table returned no results when it included an ORDER BY
clause. (Bug #26877788)
* During a restart, DBLQH loads redo log part metadata for
each redo log part it manages, from one or more redo log
files. Since each file has a limited capacity for
metadata, the number of files which must be consulted
depends on the size of the redo log part. These files are
opened, read, and closed sequentially, but the closing of
one file occurs concurrently with the opening of the
next.
In cases where closing of the file was slow, it was
possible for more than 4 files per redo log part to be
open concurrently; since these files were opened using
the OM_WRITE_BUFFER option, more than 4 chunks of write
buffer were allocated per part in such cases. The write
buffer pool is not unlimited; if all redo log parts were
in a similar state, the pool was exhausted, causing the
data node to shut down.
This issue is resolved by avoiding the use of
OM_WRITE_BUFFER during metadata reload, so that any
transient opening of more than 4 redo log files per log
file part no longer leads to failure of the data node.
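The exhaustion scenario can be sketched with a toy model. All sizes and names below are hypothetical assumptions for illustration; they are not NDB's actual pool configuration or API.

```python
# Illustrative model of the redo-log write-buffer exhaustion.
# Assume a pool provisioned for 4 redo log parts x 4 buffered files
# each; a slow close overlapping the next open can transiently push
# a part to 5 open buffered files.

POOL_CHUNKS = 4 * 4  # hypothetical pool capacity: 4 parts x 4 chunks

def chunks_needed(open_files_per_part, use_write_buffer):
    """Write-buffer chunks consumed, given the number of concurrently
    open redo log files in each part."""
    if not use_write_buffer:
        # The fix: metadata reload no longer opens files with
        # OM_WRITE_BUFFER, so no chunks are drawn from the pool.
        return 0
    return sum(open_files_per_part)

# Before the fix: one slow close per part -> 5 buffered files each,
# exceeding the pool and forcing the data node to shut down.
before = chunks_needed([5, 5, 5, 5], use_write_buffer=True)

# After the fix: the same transient extra opens consume no chunks.
after = chunks_needed([5, 5, 5, 5], use_write_buffer=False)
```

In this model, `before` (20 chunks) exceeds `POOL_CHUNKS` (16) while `after` is 0, mirroring why skipping OM_WRITE_BUFFER during metadata reload makes the transient extra opens harmless.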