Hi all,
On behalf of the SciPy development team I'm pleased to announce
the release candidate SciPy 1.2.0rc2. Please help us test out this
release candidate -- the 1.2.x series will be an LTS release and the
last to support Python 2.7.
Sources and binary wheels can be found at:
https://pypi.org/project/scipy/
and at:
https://github.com/scipy/scipy/releases/tag/v1.2.0rc2
One of a few ways to install the release candidate with pip:
pip install scipy==1.2.0rc2
=========================
SciPy 1.2.0 Release Notes
=========================
Note: SciPy 1.2.0 is not released yet!
SciPy 1.2.0 is the culmination of 6 months of hard work. It contains
many new features, numerous bug-fixes, improved test coverage and better
documentation. There have been a number of deprecations and API changes
in this release, which are documented below. All users are encouraged to
upgrade to this release, as there are a large number of bug-fixes and
optimizations. Before upgrading, we recommend that users check that
their own code does not use deprecated SciPy functionality (to do so,
run your code with ``python -Wd`` and check for occurrences of ``DeprecationWarning``).
Our development attention will now shift to bug-fix releases on the
1.2.x branch, and on adding new features on the master branch.
This release requires Python 2.7 or 3.4+ and NumPy 1.8.2 or greater.
Note: This will be the last SciPy release to support Python 2.7.
Consequently, the 1.2.x series will be a long term support (LTS)
release; we will backport bug fixes until 1 Jan 2020.
For running on PyPy, PyPy3 6.0+ and NumPy 1.15.0 are required.
Highlights of this release
--------------------------
- 1-D root finding improvements with a new solver, ``toms748``, and a new
unified interface, ``root_scalar``
- New ``dual_annealing`` optimization method that combines stochastic and
local deterministic searching
- A new optimization algorithm, ``shgo`` (simplicial homology global
  optimization), for derivative-free optimization problems
- A new category of quaternion-based transformations is available in
  `scipy.spatial.transform` (see the short example after this list)
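As a quick taste of the new module, here is a minimal sketch that builds a
rotation from Euler angles and reads it back as a quaternion (the angle and
axis are arbitrary)::

    from scipy.spatial.transform import Rotation

    # A 90-degree rotation about the z-axis, returned as a quaternion
    # in scalar-last (x, y, z, w) order.
    r = Rotation.from_euler('z', 90, degrees=True)
    print(r.as_quat())  # [0.         0.         0.70710678 0.70710678]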
New features
============
`scipy.ndimage` improvements
----------------------------
Proper spline coefficient calculations have been added for the ``mirror``,
``wrap``, and ``reflect`` modes of `scipy.ndimage.rotate`.
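For example, a minimal sketch (the array and angle are arbitrary)::

    import numpy as np
    from scipy import ndimage

    # Rotate a small image by 30 degrees, filling past-edge values
    # with the 'mirror' boundary mode.
    img = np.arange(25, dtype=float).reshape(5, 5)
    out = ndimage.rotate(img, 30, mode='mirror')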
`scipy.fftpack` improvements
----------------------------
DCT-IV, DST-IV, DCT-I, and DST-I orthonormalization are now supported in
`scipy.fftpack`.
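For instance, the orthonormalized DCT-IV is its own inverse, which the
following sketch checks (input values are arbitrary)::

    import numpy as np
    from scipy.fftpack import dct

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = dct(x, type=4, norm='ortho')  # DCT-IV, orthonormalized
    print(np.allclose(dct(y, type=4, norm='ortho'), x))  # True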
`scipy.interpolate` improvements
--------------------------------
`scipy.interpolate.pade` now accepts a new argument for the order of the
numerator.
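A minimal sketch, assuming the new argument is the ``n`` keyword giving the
numerator order (the Taylor coefficients below are those of exp(x))::

    from scipy.interpolate import pade

    an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]
    # Denominator of order 3, numerator of order 1.
    p, q = pade(an, 3, n=1)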
`scipy.cluster` improvements
----------------------------
`scipy.cluster.vq.kmeans2` gained a new initialization method, kmeans++.
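A minimal sketch, assuming the new method is selected with ``minit='++'``
(the data here are random and purely illustrative)::

    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.RandomState(0)
    data = rng.randn(100, 2)
    # kmeans++ seeding spreads the initial centroids apart.
    centroids, labels = kmeans2(data, 3, minit='++')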
`scipy.special` improvements
----------------------------
The function ``softmax`` was added to `scipy.special`.
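For example (input values are arbitrary)::

    import numpy as np
    from scipy.special import softmax

    s = softmax(np.array([1.0, 2.0, 3.0]))
    print(s.sum())  # 1.0; exponentials normalized to sum to one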
`scipy.optimize` improvements
-----------------------------
The one-dimensional nonlinear solvers have been given a unified interface,
`scipy.optimize.root_scalar`, similar to the `scipy.optimize.root` interface
for multi-dimensional solvers. ``scipy.optimize.root_scalar(f, bracket=[a, b],
method="brenth")`` is equivalent to ``scipy.optimize.brenth(f, a, b)``. If no
``method`` is specified, an appropriate one will be selected based upon the
bracket and the number of derivatives available.
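A minimal sketch (the cubic and the bracket are arbitrary)::

    from scipy.optimize import root_scalar

    # Only a bracket is given, so a suitable bracketing method is chosen.
    sol = root_scalar(lambda x: x**3 - 1, bracket=[0, 3])
    print(sol.root, sol.converged)  # 1.0 True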
The so-called Algorithm 748 of Alefeld, Potra and Shi for root-finding within
an enclosing interval has been added as `scipy.optimize.toms748`. This
provides guaranteed convergence to a root with a convergence rate of
approximately 1.65 per function evaluation (for sufficiently well-behaved
functions).
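For example (the function and interval are arbitrary; the interval must
bracket a sign change)::

    from scipy.optimize import toms748

    # [0, 3] encloses the root x = 2 of x**2 - 4.
    root = toms748(lambda x: x**2 - 4, 0, 3)
    print(root)  # 2.0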
``differential_evolution`` now has the ``updating`` and ``workers`` keywords.
The first chooses between continuous updating of the best solution vector (the
default) and updating once per generation; continuous updating can lead to
faster convergence. The ``workers`` keyword accepts an ``int`` or a map-like
callable, and parallelises the solver (with the side effect of updating once
per generation). Supplying an ``int`` evaluates the trial solutions in that
many parallel parts. Supplying a map-like callable allows other
parallelisation approaches (such as ``mpi4py`` or ``joblib``) to be used.
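A minimal sketch using the built-in Rosenbrock function (the bounds and
worker count are arbitrary; the ``__main__`` guard matters when
multiprocessing is used)::

    from scipy.optimize import differential_evolution, rosen

    if __name__ == '__main__':
        # Two processes evaluate trial solutions in parallel; parallel
        # runs update the population once per generation.
        result = differential_evolution(rosen, [(-5, 5)] * 2,
                                        updating='deferred', workers=2)
        print(result.x)  # close to the minimum at [1, 1]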
``dual_annealing`` (and ``shgo`` below) is a powerful new general-purpose
global optimization (GO) algorithm. ``dual_annealing`` uses two annealing
processes to accelerate convergence towards the global minimum of an objective
function. The first annealing process controls the stochastic Markov chain
search and the second controls the deterministic minimization, so dual
annealing is a hybrid method that takes advantage of both stochastic and local
deterministic searching in an efficient way.
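For example, a minimal sketch (the objective and bounds are illustrative
only)::

    import numpy as np
    from scipy.optimize import dual_annealing

    # A simple sphere function with its global minimum at the origin.
    func = lambda x: np.sum(x * x)
    ret = dual_annealing(func, bounds=[(-10, 10)] * 3, seed=1234)
    print(ret.x, ret.fun)  # near the origin, near 0.0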
``shgo`` (simplicial homology global optimization) is a similar algorithm
appropriate for solving black box and derivative-free optimization problems.
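A minimal sketch (the shifted quadratic is a stand-in for a black-box
objective)::

    from scipy.optimize import shgo

    res = shgo(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
               bounds=[(-5, 5), (-5, 5)])
    print(res.x)  # near [1, -2]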