Script 'mail_helper' called by obssrc Hello community, here is the log from the commit of package python-lmfit for openSUSE:Factory checked in at 2024-09-05 15:47:40 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Comparing /work/SRC/openSUSE:Factory/python-lmfit (Old) and /work/SRC/openSUSE:Factory/.python-lmfit.new.10096 (New) ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-lmfit" Thu Sep 5 15:47:40 2024 rev:7 rq:1198943 version:1.3.2 Changes: -------- --- /work/SRC/openSUSE:Factory/python-lmfit/python-lmfit.changes 2024-07-12 17:05:26.909894032 +0200 +++ /work/SRC/openSUSE:Factory/.python-lmfit.new.10096/python-lmfit.changes 2024-09-05 15:48:28.935296803 +0200 @@ -1,0 +2,15 @@ +Thu Sep 5 09:41:05 UTC 2024 - Ben Greiner <c...@bnavigator.de> + +- Update to 1.3.2 + * fix typo in restoring a _buildmodel dict (PR #957, Issue #956) + * fixes for Numpy2 support (PR #959, Issue #958) + * ensure that correct initial params are used when re-fitting a + ModelResult (PR #961, Issue #960) + * make sure that CompositeModels cannot have a prefix (PR #961, + Issue #954) + * now require asteval>=1.0 and uncertainties>=3.2.2 +- Fix requirements +- Drop support-numpy-2.patch +- Add lmfit-pr965-asteval.patch gh#lmfit/lmfit-py#965 + +------------------------------------------------------------------- Old: ---- lmfit-1.3.1.tar.gz support-numpy-2.patch New: ---- lmfit-1.3.2.tar.gz lmfit-pr965-asteval.patch ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Other differences: ------------------ ++++++ python-lmfit.spec ++++++ --- /var/tmp/diff_new_pack.adKX99/_old 2024-09-05 15:48:29.359314412 +0200 +++ /var/tmp/diff_new_pack.adKX99/_new 2024-09-05 15:48:29.363314578 +0200 @@ -17,14 +17,14 @@ Name: python-lmfit -Version: 1.3.1 +Version: 1.3.2 Release: 0 Summary: Least-Squares Minimization with Bounds and Constraints License: BSD-3-Clause AND MIT URL: https://lmfit.github.io/lmfit-py/ Source: https://files.pythonhosted.org/packages/source/l/lmfit/lmfit-%{version}.tar.gz -# PATCH-FIX-UPSTREAM gh#lmfit/lmfit-py#959 -Patch0: support-numpy-2.patch +# PATCH-FIX-UPSTREAM lmfit-pr965-asteval.patch gh#lmfit/lmfit-py#965 +Patch0: https://github.com/lmfit/lmfit-py/pull/965.patch#/lmfit-pr965-asteval.patch BuildRequires: %{python_module base >= 3.8} BuildRequires: %{python_module pip} BuildRequires: %{python_module setuptools_scm} @@ -32,24 +32,23 @@ BuildRequires: %{python_module wheel} BuildRequires: fdupes BuildRequires: python-rpm-macros -Requires: python-asteval >= 0.9.28 +Requires: python-asteval >= 1 Requires: python-dill >= 0.3.4 -Requires: python-numpy >= 1.23 -Requires: python-scipy >= 1.8 -Requires: python-uncertainties >= 3.1.4 +Requires: python-numpy >= 1.19 +Requires: python-scipy >= 1.6 +Requires: python-uncertainties >= 3.2.2 Recommends: python-emcee Recommends: python-matplotlib Recommends: python-pandas BuildArch: noarch # SECTION test requirements -BuildRequires: %{python_module asteval >= 0.9.28} +BuildRequires: %{python_module asteval >= 1} BuildRequires: %{python_module dill >= 0.3.4} BuildRequires: %{python_module flaky} -BuildRequires: %{python_module numpy >= 1.23} -BuildRequires: %{python_module pytest-cov} +BuildRequires: %{python_module numpy >= 1.19} BuildRequires: %{python_module pytest} -BuildRequires: %{python_module scipy >= 1.8} -BuildRequires: %{python_module uncertainties >= 3.1.4} +BuildRequires: %{python_module scipy >= 1.6} +BuildRequires: %{python_module uncertainties >= 3.2.2} # /SECTION %python_subpackages @@ -75,6 +74,7 @@ %prep %autosetup -p1 -n lmfit-%{version} sed -i -e '/^#!\//, 1d' lmfit/jsonutils.py +sed -i 's/--cov=lmfit --cov-report html//'
pyproject.toml %build %pyproject_wheel ++++++ lmfit-1.3.1.tar.gz -> lmfit-1.3.2.tar.gz ++++++ diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/.pre-commit-config.yaml new/lmfit-1.3.2/.pre-commit-config.yaml --- old/lmfit-1.3.1/.pre-commit-config.yaml 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/.pre-commit-config.yaml 2024-07-19 18:17:35.000000000 +0200 @@ -2,13 +2,13 @@ repos: - repo: https://github.com/asottile/pyupgrade - rev: v3.15.2 + rev: v3.16.0 hooks: - id: pyupgrade args: [--py38-plus] - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.5.0 + rev: v4.6.0 hooks: - id: check-ast - id: check-builtin-literals @@ -23,7 +23,7 @@ args: [--remove] - repo: https://github.com/PyCQA/flake8 - rev: 7.0.0 + rev: 7.1.0 hooks: - id: flake8 additional_dependencies: [flake8-deprecated, flake8-mutable, Flake8-pyproject] @@ -51,14 +51,14 @@ - id: python-check-blanket-noqa - repo: https://github.com/codespell-project/codespell - rev: v2.2.6 + rev: v2.3.0 hooks: - id: codespell files: '.py|.rst' exclude: 'doc/doc_examples_to_gallery.py|.ipynb' # escaped characters currently do not work correctly # so \nnumber is considered a spelling error.... - args: ["-L nnumber", "-L mone"] + args: ["-L nnumber", "-L mone", "-L assertIn", "-L efine",] - repo: https://github.com/asottile/yesqa rev: v1.5.0 diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/PKG-INFO new/lmfit-1.3.2/PKG-INFO --- old/lmfit-1.3.1/PKG-INFO 2024-04-19 18:02:50.684708600 +0200 +++ new/lmfit-1.3.2/PKG-INFO 2024-07-19 18:17:43.637402300 +0200 @@ -1,6 +1,6 @@ Metadata-Version: 2.1 Name: lmfit -Version: 1.3.1 +Version: 1.3.2 Summary: Least-Squares Minimization with Bounds and Constraints Author-email: LMFit Development Team <matt.newvi...@gmail.com> License: BSD-3 @@ -98,10 +98,10 @@ Description-Content-Type: text/x-rst License-File: LICENSE License-File: AUTHORS.txt -Requires-Dist: asteval>=0.9.28 +Requires-Dist: asteval>=1.0 Requires-Dist: numpy>=1.19 Requires-Dist: scipy>=1.6 -Requires-Dist: uncertainties>=3.1.4 +Requires-Dist: uncertainties>=3.2.2 Requires-Dist: dill>=0.3.4 Provides-Extra: dev Requires-Dist: build; extra == "dev" diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/azure-pipelines.yml new/lmfit-1.3.2/azure-pipelines.yml --- old/lmfit-1.3.1/azure-pipelines.yml 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/azure-pipelines.yml 2024-07-19 18:17:35.000000000 +0200 @@ -82,7 +82,7 @@ displayName: 'Install dependencies' - script: | python -m pip install --upgrade build pip wheel - python -m pip install asteval==0.9.28 numpy==1.23.0 scipy==1.8.0 uncertainties==3.1.4 + python -m pip install asteval==1.0 numpy==1.23.0 scipy==1.8.0 uncertainties==3.2.2 displayName: 'Install minimum required version of dependencies' - script: | python -m build @@ -256,7 +256,7 @@ displayName: 'Install build, pip, setuptools, wheel, pybind11, and cython' - script: | export PATH=/home/vsts/.local/bin:$PATH - export numpy_version=1.26.4 + export numpy_version=2.0.0 wget https://github.com/numpy/numpy/releases/download/v${numpy_version}/numpy-${numpy_version}.tar.gz tar xzvf numpy-${numpy_version}.tar.gz cd numpy-${numpy_version} @@ -269,7 +269,7 @@ displayName: 'Install pythran' - script: | export PATH=/home/vsts/.local/bin:$PATH - export scipy_version=1.13.0 + export scipy_version=1.14.0 wget 
https://github.com/scipy/scipy/releases/download/v${scipy_version}/scipy-${scipy_version}.tar.gz tar xzvf scipy-${scipy_version}.tar.gz cd scipy-${scipy_version} diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/doc/index.rst new/lmfit-1.3.2/doc/index.rst --- old/lmfit-1.3.1/doc/index.rst 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/doc/index.rst 2024-07-19 18:17:35.000000000 +0200 @@ -1,4 +1,4 @@ -.. lmfit documentation master file, +.. lmfit documentation master file Non-Linear Least-Squares Minimization and Curve-Fitting for Python ================================================================== @@ -63,4 +63,4 @@ bounds constraints whatsnew - examples/index + examples/index.rst diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/doc/installation.rst new/lmfit-1.3.2/doc/installation.rst --- old/lmfit-1.3.1/doc/installation.rst 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/doc/installation.rst 2024-07-19 18:17:35.000000000 +0200 @@ -38,8 +38,8 @@ Lmfit requires the following Python packages, with versions given: * `NumPy`_ version 1.23 or higher. * `SciPy`_ version 1.8 or higher. - * `asteval`_ version 0.9.28 or higher. - * `uncertainties`_ version 3.1.4 or higher. + * `asteval`_ version 1.0 or higher. + * `uncertainties`_ version 3.2.2 or higher. * `dill`_ version 0.3.4 or higher. All of these are readily available on PyPI, and are installed diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/doc/whatsnew.rst new/lmfit-1.3.2/doc/whatsnew.rst --- old/lmfit-1.3.1/doc/whatsnew.rst 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/doc/whatsnew.rst 2024-07-19 18:17:35.000000000 +0200 @@ -11,6 +11,26 @@ to be a comprehensive list of changes. For such a complete record, consult the `lmfit GitHub repository`_. + +.. _whatsnew_132_label: + +Version 1.3.2 Release Notes (July 19, 2024) +==================================================== + +Fixes: + +- fix typo in restoring a ``_buildmodel`` dict (PR #957, Issue #956) +- fixes for Numpy2 support (PR #959, Issue #958) +- ensure that correct initial params are used when re-fitting a ModeRresult (PR #961, Issue #960) +- make sure that CompositeModels cannot have a prefix (PR #961, Issue #954) + +Build, Maintenance: + +- update pre-commit hooks, adding codespell exceptions +- update to latest SciPy/NumPy versions, including dependency versions for NumPy 2. +- now require asteval>=1.0 and uncertainties>=3.2.2 + + .. 
_whatsnew_131_label: Version 1.3.1 Release Notes (April 19, 2024) diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit/conf_emcee.py new/lmfit-1.3.2/lmfit/conf_emcee.py --- old/lmfit-1.3.1/lmfit/conf_emcee.py 2023-12-31 17:26:04.000000000 +0100 +++ new/lmfit-1.3.2/lmfit/conf_emcee.py 1970-01-01 01:00:00.000000000 +0100 @@ -1,500 +0,0 @@ -#!/bin/env python -""" - -Wrapper for emcee - -""" - -import numpy as np -from .minimizer import _nan_policy - -# check for emcee -try: - import emcee - from emcee.autocorr import AutocorrError - HAS_EMCEE = int(emcee.__version__[0]) >= 3 -except ImportError: - HAS_EMCEE = False - -# check for pandas -try: - import pandas as pd - from pandas import isnull - HAS_PANDAS = True -except ImportError: - HAS_PANDAS = False - isnull = np.isnan - -def _lnprob(minimizer, theta, userfcn, params, var_names, bounds, userargs=(), - userkws=None, float_behavior='posterior', is_weighted=True, - nan_policy='raise'): - """Calculate the log-posterior probability. - - See the `Minimizer.emcee` method for more details. - - Parameters - ---------- - minimizer : minimizer - Minimizer instance - theta : sequence - Float parameter values (only those being varied). - userfcn : callable - User objective function. - params : Parameters - The entire set of Parameters. - var_names : list - The names of the parameters that are varying. - bounds : numpy.ndarray - Lower and upper bounds of parameters, with shape - ``(nvarys, 2)``. - userargs : tuple, optional - Extra positional arguments required for user objective function. - userkws : dict, optional - Extra keyword arguments required for user objective function. - float_behavior : {'posterior', 'chi2'}, optional - Specifies meaning of objective when it returns a float. Use - `'posterior'` if objective function returns a log-posterior - probability (default) or `'chi2'` if it returns a chi2 value. - is_weighted : bool, optional - If `userfcn` returns a vector of residuals then `is_weighted` - (default is True) specifies if the residuals have been weighted - by data uncertainties. - nan_policy : {'raise', 'propagate', 'omit'}, optional - Specifies action if `userfcn` returns NaN values. Use `'raise'` - (default) to raise a `ValueError`, `'propagate'` to use values - as-is, or `'omit'` to filter out the non-finite values. - - Returns - ------- - lnprob : float - Log posterior probability. - - """ - # the comparison has to be done on theta and bounds. DO NOT inject theta - # values into Parameters, then compare Parameters values to the bounds. - # Parameters values are clipped to stay within bounds. 
- if np.any(theta > bounds[:, 1]) or np.any(theta < bounds[:, 0]): - return -np.inf - for name, val in zip(var_names, theta): - params[name].value = val - userkwargs = {} - if userkws is not None: - userkwargs = userkws - # update the constraints - params.update_constraints() - # now calculate the log-likelihood - out = userfcn(params, *userargs, **userkwargs) - minimizer.result.nfev += 1 - if callable(minimizer.iter_cb): - abort = minimizer.iter_cb(params, minimizer.result.nfev, out, - *userargs, **userkwargs) - minimizer._abort = minimizer._abort or abort - if minimizer._abort: - minimizer.result.residual = out - minimizer._lastpos = theta - raise AbortFitException("fit aborted by user.") - else: - out = _nan_policy(np.asarray(out).ravel(), - nan_policy=minimizer.nan_policy) - lnprob = np.asarray(out).ravel() - if len(lnprob) == 0: - lnprob = np.array([-1.e100]) - if lnprob.size > 1: - # objective function returns a vector of residuals - if '__lnsigma' in params and not is_weighted: - # marginalise over a constant data uncertainty - __lnsigma = params['__lnsigma'].value - c = np.log(2 * np.pi) + 2 * __lnsigma - lnprob = -0.5 * np.sum((lnprob / np.exp(__lnsigma)) ** 2 + c) - else: - lnprob = -0.5 * (lnprob * lnprob).sum() - else: - # objective function returns a single value. - # use float_behaviour to figure out if the value is posterior or chi2 - if float_behavior == 'posterior': - pass - elif float_behavior == 'chi2': - lnprob *= -0.5 - else: - raise ValueError("float_behaviour must be either 'posterior' " - "or 'chi2' " + float_behavior) - return lnprob - -def emcee(minimizer, params=None, steps=1000, nwalkers=100, burn=0, thin=1, - ntemps=1, pos=None, reuse_sampler=False, workers=1, - float_behavior='posterior', is_weighted=True, seed=None, - progress=True, run_mcmc_kwargs={}): - r"""Bayesian sampling of the posterior distribution. - - The method uses the ``emcee`` Markov Chain Monte Carlo package and - assumes that the prior is Uniform. You need to have ``emcee`` - version 3 or newer installed to use this method. - - Parameters - ---------- - minimizer : minimizer - Minimizer instance - params : Parameters, optional - Parameters to use as starting point. If this is not specified - then the Parameters used to initialize the Minimizer object - are used. - steps : int, optional - How many samples you would like to draw from the posterior - distribution for each of the walkers? - nwalkers : int, optional - Should be set so :math:`nwalkers >> nvarys`, where ``nvarys`` - are the number of parameters being varied during the fit. - 'Walkers are the members of the ensemble. They are almost like - separate Metropolis-Hastings chains but, of course, the proposal - distribution for a given walker depends on the positions of all - the other walkers in the ensemble.' - from the `emcee` webpage. - burn : int, optional - Discard this many samples from the start of the sampling regime. - thin : int, optional - Only accept 1 in every `thin` samples. - ntemps : int, deprecated - ntemps has no effect. - pos : numpy.ndarray, optional - Specify the initial positions for the sampler, an ndarray of - shape ``(nwalkers, nvarys)``. You can also initialise using a - previous chain of the same `nwalkers` and ``nvarys``. Note that - ``nvarys`` may be one larger than you expect it to be if your - ``userfcn`` returns an array and ``is_weighted=False``. 
- reuse_sampler : bool, optional - Set to True if you have already run `emcee` with the - `Minimizer` instance and want to continue to draw from its - ``sampler`` (and so retain the chain history). If False, a - new sampler is created. The keywords `nwalkers`, `pos`, and - `params` will be ignored when this is set, as they will be set - by the existing sampler. - **Important**: the Parameters used to create the sampler must - not change in-between calls to `emcee`. Alteration of Parameters - would include changed ``min``, ``max``, ``vary`` and ``expr`` - attributes. This may happen, for example, if you use an altered - Parameters object and call the `minimize` method in-between - calls to `emcee`. - workers : Pool-like or int, optional - For parallelization of sampling. It can be any Pool-like object - with a map method that follows the same calling sequence as the - built-in `map` function. If int is given as the argument, then - a multiprocessing-based pool is spawned internally with the - corresponding number of parallel processes. 'mpi4py'-based - parallelization and 'joblib'-based parallelization pools can - also be used here. **Note**: because of multiprocessing - overhead it may only be worth parallelising if the objective - function is expensive to calculate, or if there are a large - number of objective evaluations per step - (``nwalkers * nvarys``). - float_behavior : str, optional - Meaning of float (scalar) output of objective function. Use - `'posterior'` if it returns a log-posterior probability or - `'chi2'` if it returns :math:`\chi^2`. See Notes for further - details. - is_weighted : bool, optional - Has your objective function been weighted by measurement - uncertainties? If ``is_weighted=True`` then your objective - function is assumed to return residuals that have been divided - by the true measurement uncertainty ``(data - model) / sigma``. - If ``is_weighted=False`` then the objective function is - assumed to return unweighted residuals, ``data - model``. In - this case `emcee` will employ a positive measurement - uncertainty during the sampling. This measurement uncertainty - will be present in the output params and output chain with the - name ``__lnsigma``. A side effect of this is that you cannot - use this parameter name yourself. - **Important**: this parameter only has any effect if your - objective function returns an array. If your objective function - returns a float, then this parameter is ignored. See Notes for - more details. - seed : int or numpy.random.RandomState, optional - If `seed` is an ``int``, a new `numpy.random.RandomState` - instance is used, seeded with `seed`. - If `seed` is already a `numpy.random.RandomState` instance, - then that `numpy.random.RandomState` instance is used. Specify - `seed` for repeatable minimizations. - progress : bool, optional - Print a progress bar to the console while running. - run_mcmc_kwargs : dict, optional - Additional (optional) keyword arguments that are passed to - ``emcee.EnsembleSampler.run_mcmc``. - - Returns - ------- - MinimizerResult - MinimizerResult object containing updated params, statistics, - etc. The updated params represent the median of the samples, - while the uncertainties are half the difference of the 15.87 - and 84.13 percentiles. The `MinimizerResult` contains a few - additional attributes: `chain` contain the samples and has - shape ``((steps - burn) // thin, nwalkers, nvarys)``. 
- `flatchain` is a `pandas.DataFrame` of the flattened chain, - that can be accessed with `result.flatchain[parname]`. - `lnprob` contains the log probability for each sample in - `chain`. The sample with the highest probability corresponds - to the maximum likelihood estimate. `acor` is an array - containing the auto-correlation time for each parameter if the - auto-correlation time can be computed from the chain. Finally, - `acceptance_fraction` (an array of the fraction of steps - accepted for each walker). - - Notes - ----- - This method samples the posterior distribution of the parameters - using Markov Chain Monte Carlo. It calculates the log-posterior - probability of the model parameters, `F`, given the data, `D`, - :math:`\ln p(F_{true} | D)`. This 'posterior probability' is - given by: - - .. math:: - - \ln p(F_{true} | D) \propto \ln p(D | F_{true}) + \ln p(F_{true}) - - where :math:`\ln p(D | F_{true})` is the 'log-likelihood' and - :math:`\ln p(F_{true})` is the 'log-prior'. The default log-prior - encodes prior information known about the model that the log-prior - probability is ``-numpy.inf`` (impossible) if any of the parameters - is outside its limits, and is zero if all the parameters are inside - their bounds (uniform prior). The log-likelihood function is [1]_: - - .. math:: - - \ln p(D|F_{true}) = -\frac{1}{2}\sum_n \left[\frac{(g_n(F_{true}) - D_n)^2}{s_n^2}+\ln (2\pi s_n^2)\right] - - The first term represents the residual (:math:`g` being the - generative model, :math:`D_n` the data and :math:`s_n` the - measurement uncertainty). This gives :math:`\chi^2` when summed - over all data points. The objective function may also return the - log-posterior probability, :math:`\ln p(F_{true} | D)`. Since the - default log-prior term is zero, the objective function can also - just return the log-likelihood, unless you wish to create a - non-uniform prior. - - If the objective function returns a float value, this is assumed - by default to be the log-posterior probability, (`float_behavior` - default is 'posterior'). If your objective function returns - :math:`\chi^2`, then you should use ``float_behavior='chi2'`` - instead. - - By default objective functions may return an ndarray of (possibly - weighted) residuals. In this case, use `is_weighted` to select - whether these are correctly weighted by measurement uncertainty. - Note that this ignores the second term above, so that to calculate - a correct log-posterior probability value your objective function - should return a float value. With ``is_weighted=False`` the data - uncertainty, `s_n`, will be treated as a nuisance parameter to be - marginalized out. This uses strictly positive uncertainty - (homoscedasticity) for each data point, - :math:`s_n = \exp(\rm{\_\_lnsigma})`. ``__lnsigma`` will be - present in `MinimizerResult.params`, as well as `Minimizer.chain` - and ``nvarys`` will be increased by one. - - References - ---------- - .. 
[1] https://emcee.readthedocs.io - - """ - if not HAS_EMCEE: - raise NotImplementedError('emcee version 3 is required.') - - if ntemps > 1: - msg = ("'ntemps' has no effect anymore, since the PTSampler was " - "removed from emcee version 3.") - raise DeprecationWarning(msg) - - tparams = params - # if you're reusing the sampler then nwalkers have to be - # determined from the previous sampling - if reuse_sampler: - if not hasattr(self, 'sampler') or not hasattr(self, '_lastpos'): - raise ValueError("You wanted to use an existing sampler, but " - "it hasn't been created yet") - if len(self._lastpos.shape) == 2: - nwalkers = self._lastpos.shape[0] - elif len(self._lastpos.shape) == 3: - nwalkers = self._lastpos.shape[1] - tparams = None - - result = self.prepare_fit(params=tparams) - params = result.params - - # check if the userfcn returns a vector of residuals - out = self.userfcn(params, *self.userargs, **self.userkws) - out = np.asarray(out).ravel() - if out.size > 1 and is_weighted is False and '__lnsigma' not in params: - # __lnsigma should already be in params if is_weighted was - # previously set to True. - params.add('__lnsigma', value=0.01, min=-np.inf, max=np.inf, - vary=True) - # have to re-prepare the fit - result = self.prepare_fit(params) - params = result.params - - result.method = 'emcee' - - # Removing internal parameter scaling. We could possibly keep it, - # but I don't know how this affects the emcee sampling. - bounds = [] - var_arr = np.zeros(len(result.var_names)) - i = 0 - for par in params: - param = params[par] - if param.expr is not None: - param.vary = False - if param.vary: - var_arr[i] = param.value - i += 1 - else: - # don't want to append bounds if they're not being varied. - continue - param.from_internal = lambda val: val - lb, ub = param.min, param.max - if lb is None or lb is np.nan: - lb = -np.inf - if ub is None or ub is np.nan: - ub = np.inf - bounds.append((lb, ub)) - bounds = np.array(bounds) - - self.nvarys = len(result.var_names) - - # set up multiprocessing options for the samplers - auto_pool = None - sampler_kwargs = {} - if isinstance(workers, int) and workers > 1: - auto_pool = multiprocessing.Pool(workers) - sampler_kwargs['pool'] = auto_pool - elif hasattr(workers, 'map'): - sampler_kwargs['pool'] = workers - - # function arguments for the log-probability functions - # these values are sent to the log-probability functions by the sampler. - lnprob_args = (self.userfcn, params, result.var_names, bounds) - lnprob_kwargs = {'is_weighted': is_weighted, - 'float_behavior': float_behavior, - 'userargs': self.userargs, - 'userkws': self.userkws, - 'nan_policy': self.nan_policy} - - sampler_kwargs['args'] = lnprob_args - sampler_kwargs['kwargs'] = lnprob_kwargs - - # set up the random number generator - rng = _make_random_gen(seed) - - # now initialise the samplers - if reuse_sampler: - if auto_pool is not None: - self.sampler.pool = auto_pool - - p0 = self._lastpos - if p0.shape[-1] != self.nvarys: - raise ValueError("You cannot reuse the sampler if the number " - "of varying parameters has changed") - - else: - p0 = 1 + rng.randn(nwalkers, self.nvarys) * 1.e-4 - p0 *= var_arr - sampler_kwargs.setdefault('pool', auto_pool) - self.sampler = emcee.EnsembleSampler(nwalkers, self.nvarys, - self._lnprob, **sampler_kwargs) - - # user supplies an initialisation position for the chain - # If you try to run the sampler with p0 of a wrong size then you'll get - # a ValueError. Note, you can't initialise with a position if you are - # reusing the sampler. 
- if pos is not None and not reuse_sampler: - tpos = np.asfarray(pos) - if p0.shape == tpos.shape: - pass - # trying to initialise with a previous chain - elif tpos.shape[-1] == self.nvarys: - tpos = tpos[-1] - else: - raise ValueError('pos should have shape (nwalkers, nvarys)') - p0 = tpos - - # if you specified a seed then you also need to seed the sampler - if seed is not None: - self.sampler.random_state = rng.get_state() - - if not isinstance(run_mcmc_kwargs, dict): - raise ValueError('run_mcmc_kwargs should be a dict of keyword arguments') - - # now do a production run, sampling all the time - try: - output = self.sampler.run_mcmc(p0, steps, progress=progress, **run_mcmc_kwargs) - self._lastpos = output.coords - except AbortFitException: - result.aborted = True - result.message = "Fit aborted by user callback. Could not estimate error-bars." - result.success = False - result.nfev = self.result.nfev - - # discard the burn samples and thin - chain = self.sampler.get_chain(thin=thin, discard=burn)[..., :, :] - lnprobability = self.sampler.get_log_prob(thin=thin, discard=burn)[..., :] - flatchain = chain.reshape((-1, self.nvarys)) - if not result.aborted: - quantiles = np.percentile(flatchain, [15.87, 50, 84.13], axis=0) - - for i, var_name in enumerate(result.var_names): - std_l, median, std_u = quantiles[:, i] - params[var_name].value = median - params[var_name].stderr = 0.5 * (std_u - std_l) - params[var_name].correl = {} - - params.update_constraints() - - # work out correlation coefficients - corrcoefs = np.corrcoef(flatchain.T) - - for i, var_name in enumerate(result.var_names): - for j, var_name2 in enumerate(result.var_names): - if i != j: - result.params[var_name].correl[var_name2] = corrcoefs[i, j] - - result.chain = np.copy(chain) - result.lnprob = np.copy(lnprobability) - result.errorbars = True - result.nvarys = len(result.var_names) - result.nfev = nwalkers*steps - - try: - result.acor = self.sampler.get_autocorr_time() - except AutocorrError as e: - print(str(e)) - result.acceptance_fraction = self.sampler.acceptance_fraction - - # Calculate the residual with the "best fit" parameters - out = self.userfcn(params, *self.userargs, **self.userkws) - result.residual = _nan_policy(out, nan_policy=self.nan_policy, - handle_inf=False) - - # If uncertainty was automatically estimated, weight the residual properly - if not is_weighted and result.residual.size > 1 and '__lnsigma' in params: - result.residual /= np.exp(params['__lnsigma'].value) - - # Calculate statistics for the two standard cases: - if isinstance(result.residual, np.ndarray) or (float_behavior == 'chi2'): - result._calculate_statistics() - - # Handle special case unique to emcee: - # This should eventually be moved into result._calculate_statistics. 
- elif float_behavior == 'posterior': - result.ndata = 1 - result.nfree = 1 - - # assuming prior prob = 1, this is true - _neg2_log_likel = -2*result.residual - - # assumes that residual is properly weighted, avoid overflowing np.exp() - result.chisqr = np.exp(min(650, _neg2_log_likel)) - - result.redchi = result.chisqr / result.nfree - result.aic = _neg2_log_likel + 2 * result.nvarys - result.bic = _neg2_log_likel + np.log(result.ndata) * result.nvarys - - if auto_pool is not None: - auto_pool.terminate() - - return result diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit/model.py new/lmfit-1.3.2/lmfit/model.py --- old/lmfit-1.3.1/lmfit/model.py 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/lmfit/model.py 2024-07-19 18:17:35.000000000 +0200 @@ -1255,10 +1255,14 @@ if 'nan_policy' not in kws: kws['nan_policy'] = self.left.nan_policy + # CompositeModel cannot have a prefix. + if 'prefix' in kws: + warnings.warn("CompositeModel ignores `prefix` argument") + kws['prefix'] = '' + def _tmp(self, *args, **kws): pass Model.__init__(self, _tmp, **kws) - for side in (left, right): prefix = side.prefix for basename, hint in side.param_hints.items(): @@ -1388,7 +1392,7 @@ func = left.get('funcdef', None) name = left.get('name', None) prefix = left.get('prefix', None) - ivars = left.get('indepedendent_vars', None) + ivars = left.get('independent_vars', None) pnames = left.get('param_root_names', None) phints = left.get('param_hints', None) nan_policy = left.get('nan_policy', None) @@ -1548,7 +1552,10 @@ if data is not None: self.data = data if params is not None: - self.init_params = params + self.init_params = deepcopy(params) + else: + self.init_params = deepcopy(self.params) + if weights is not None: self.weights = weights if method is not None: @@ -1559,8 +1566,8 @@ self.ci_out = None self.userargs = (self.data, self.weights) self.userkws.update(kwargs) - self.init_fit = self.model.eval(params=self.params, **self.userkws) - _ret = self.minimize(method=self.method) + self.init_fit = self.model.eval(params=self.init_params, **self.userkws) + _ret = self.minimize(method=self.method, params=self.init_params) self.model.post_fit(_ret) _ret.params.create_uvars(covar=_ret.covar) diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit/parameter.py new/lmfit-1.3.2/lmfit/parameter.py --- old/lmfit-1.3.1/lmfit/parameter.py 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/lmfit/parameter.py 2024-07-19 18:17:35.000000000 +0200 @@ -181,9 +181,8 @@ params = [self[k] for k in self] # find the symbols from _asteval.symtable, that need to be remembered. 
- sym_unique = self._asteval.user_defined_symbols() unique_symbols = {key: deepcopy(self._asteval.symtable[key]) - for key in sym_unique} + for key in self._asteval.user_defined_symbols()} return self.__class__, (), {'unique_symbols': unique_symbols, 'params': params} @@ -567,9 +566,8 @@ """ params = [p.__getstate__() for p in self.values()] - sym_unique = self._asteval.user_defined_symbols() unique_symbols = {key: encode4js(deepcopy(self._asteval.symtable[key])) - for key in sym_unique} + for key in self._asteval.user_defined_symbols()} return json.dumps({'unique_symbols': unique_symbols, 'params': params}, **kws) diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit/version.py new/lmfit-1.3.2/lmfit/version.py --- old/lmfit-1.3.1/lmfit/version.py 2024-04-19 18:02:50.000000000 +0200 +++ new/lmfit-1.3.2/lmfit/version.py 2024-07-19 18:17:43.000000000 +0200 @@ -12,5 +12,5 @@ __version_tuple__: VERSION_TUPLE version_tuple: VERSION_TUPLE -__version__ = version = '1.3.1' -__version_tuple__ = version_tuple = (1, 3, 1) +__version__ = version = '1.3.2' +__version_tuple__ = version_tuple = (1, 3, 2) diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit.egg-info/PKG-INFO new/lmfit-1.3.2/lmfit.egg-info/PKG-INFO --- old/lmfit-1.3.1/lmfit.egg-info/PKG-INFO 2024-04-19 18:02:50.000000000 +0200 +++ new/lmfit-1.3.2/lmfit.egg-info/PKG-INFO 2024-07-19 18:17:43.000000000 +0200 @@ -1,6 +1,6 @@ Metadata-Version: 2.1 Name: lmfit -Version: 1.3.1 +Version: 1.3.2 Summary: Least-Squares Minimization with Bounds and Constraints Author-email: LMFit Development Team <matt.newvi...@gmail.com> License: BSD-3 @@ -98,10 +98,10 @@ Description-Content-Type: text/x-rst License-File: LICENSE License-File: AUTHORS.txt -Requires-Dist: asteval>=0.9.28 +Requires-Dist: asteval>=1.0 Requires-Dist: numpy>=1.19 Requires-Dist: scipy>=1.6 -Requires-Dist: uncertainties>=3.1.4 +Requires-Dist: uncertainties>=3.2.2 Requires-Dist: dill>=0.3.4 Provides-Extra: dev Requires-Dist: build; extra == "dev" diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit.egg-info/SOURCES.txt new/lmfit-1.3.2/lmfit.egg-info/SOURCES.txt --- old/lmfit-1.3.1/lmfit.egg-info/SOURCES.txt 2024-04-19 18:02:50.000000000 +0200 +++ new/lmfit-1.3.2/lmfit.egg-info/SOURCES.txt 2024-07-19 18:17:43.000000000 +0200 @@ -135,7 +135,6 @@ examples/test_splinepeak.dat lmfit/__init__.py lmfit/_ampgo.py -lmfit/conf_emcee.py lmfit/confidence.py lmfit/jsonutils.py lmfit/lineshapes.py diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/lmfit.egg-info/requires.txt new/lmfit-1.3.2/lmfit.egg-info/requires.txt --- old/lmfit-1.3.1/lmfit.egg-info/requires.txt 2024-04-19 18:02:50.000000000 +0200 +++ new/lmfit-1.3.2/lmfit.egg-info/requires.txt 2024-07-19 18:17:43.000000000 +0200 @@ -1,7 +1,7 @@ -asteval>=0.9.28 +asteval>=1.0 numpy>=1.19 scipy>=1.6 -uncertainties>=3.1.4 +uncertainties>=3.2.2 dill>=0.3.4 [all] diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/pyproject.toml new/lmfit-1.3.2/pyproject.toml --- old/lmfit-1.3.1/pyproject.toml 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/pyproject.toml 2024-07-19 18:17:35.000000000 +0200 @@ -6,11 +6,11 @@ name = "lmfit" dynamic = ["version"] dependencies = [ - "asteval>=0.9.28", + "asteval>=1.0", "numpy>=1.19", "scipy>=1.6", - "uncertainties>=3.1.4", - "dill>=0.3.4" + 
"uncertainties>=3.2.2", + "dill>=0.3.4", ] requires-python = ">= 3.8" authors = [ @@ -110,7 +110,7 @@ [tool.rstcheck] report_level = "WARNING" ignore_substitutions = [ - "release" + "release", ] ignore_roles = [ "scipydoc", diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/lmfit-1.3.1/tests/test_model.py new/lmfit-1.3.2/tests/test_model.py --- old/lmfit-1.3.1/tests/test_model.py 2024-04-19 18:02:36.000000000 +0200 +++ new/lmfit-1.3.2/tests/test_model.py 2024-07-19 18:17:35.000000000 +0200 @@ -13,7 +13,7 @@ from lmfit import Model, Parameters, models from lmfit.lineshapes import gaussian, lorentzian, step, voigt from lmfit.model import get_reducer, propagate_err -from lmfit.models import GaussianModel, PseudoVoigtModel +from lmfit.models import GaussianModel, PseudoVoigtModel, QuadraticModel @pytest.fixture() @@ -900,7 +900,7 @@ yatan = stepmod2.eval(pars, x=x) assert (yatan-yline).std() > 0.1 - assert (yatan-yline).ptp() > 1.0 + assert np.ptp(yatan-yline) > 1.0 voigtmod = Model(voigt) assert 'x' in voigtmod.independent_vars @@ -1648,3 +1648,52 @@ assert result.nfev > 7 assert_allclose(result.values['c0'], 5.0, 0.02, 0.02, '', True) assert_allclose(result.values['c1'], 3.3, 0.02, 0.02, '', True) + + +def test_model_refitting(): + """Github #960""" + np.random.seed(0) + x = np.linspace(0, 100, 5001) + y = gaussian(x, amplitude=90, center=60, sigma=4) + 30 + 0.3*x - 0.0030*x*x + y += np.random.normal(size=5001, scale=0.5) + + model = GaussianModel(prefix='peak_') + QuadraticModel(prefix='bkg_') + + params = model.make_params(bkg_a=0, bkg_b=0, bkg_c=20, peak_amplitude=200, + peak_center=55, peak_sigma=10) + + result = model.fit(y, params, x=x, method='powell') + assert result.chisqr > 12000.0 + assert result.nfev > 500 + assert result.params['peak_amplitude'].value > 500 + assert result.params['peak_amplitude'].value < 5000 + assert result.params['peak_sigma'].value > 10 + assert result.params['peak_sigma'].value < 100 + + # now re-fit with LM + result.fit(y, x=x, method='leastsq') + + assert result.nfev > 25 + assert result.nfev < 200 + assert result.chisqr < 2000.0 + + assert result.params['peak_amplitude'].value > 85 + assert result.params['peak_amplitude'].value < 95 + assert result.params['peak_sigma'].value > 3 + assert result.params['peak_sigma'].value < 5 + + # and assert that the initial value are from the Powell result + assert result.init_values['peak_amplitude'] > 1500 + assert result.init_values['peak_sigma'] > 25 + + params = model.make_params(bkg_a=0, bkg_b=-.02, bkg_c=26, peak_amplitude=20, + peak_center=62, peak_sigma=3) + + # now re-fit with LM and these new params + result.fit(y, params, x=x, method='leastsq') + + # and assert that the initial value are from the Powell result + assert result.init_values['peak_amplitude'] > 19 + assert result.init_values['peak_amplitude'] < 21 + assert result.init_values['peak_sigma'] > 2 + assert result.init_values['peak_sigma'] < 4 ++++++ lmfit-pr965-asteval.patch ++++++ >From e62b1784e7516f543402c013cfd532d6003aa859 Mon Sep 17 00:00:00 2001 From: Matthew Newville <newvi...@cars.uchicago.edu> Date: Sun, 4 Aug 2024 20:34:44 -0500 Subject: [PATCH 1/9] BUG: fix typo in 'create_uvars' method Closes: #962 --- lmfit/parameter.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/lmfit/parameter.py b/lmfit/parameter.py index a7ec9a65..243d4227 100644 --- a/lmfit/parameter.py +++ b/lmfit/parameter.py @@ -515,9 +515,9 @@ def create_uvars(self, covar=None): vindex += 1 
vnames.append(par.name) vbest.append(par.value) - if getattr(par, 'sdterr', None) is None and covar is not None: + if getattr(par, 'stderr', None) is None and covar is not None: par.stderr = sqrt(covar[vindex, vindex]) - uvars[par.name] = ufloat(par.value, getattr(par, 'sdterr', 0.0)) + uvars[par.name] = ufloat(par.value, getattr(par, 'stderr', 0.0)) corr_uvars = None if covar is not None: >From 7fd4e42e84b3ab8f0bdc05274aa270d4ded765bf Mon Sep 17 00:00:00 2001 From: Matthew Newville <newvi...@cars.uchicago.edu> Date: Sun, 11 Aug 2024 12:40:21 -0500 Subject: [PATCH 2/9] MAINT: 'uncertainties' fails if 'stderr' is None, so set to zero --- lmfit/parameter.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/lmfit/parameter.py b/lmfit/parameter.py index 243d4227..cd6d9626 100644 --- a/lmfit/parameter.py +++ b/lmfit/parameter.py @@ -517,7 +517,10 @@ def create_uvars(self, covar=None): vbest.append(par.value) if getattr(par, 'stderr', None) is None and covar is not None: par.stderr = sqrt(covar[vindex, vindex]) - uvars[par.name] = ufloat(par.value, getattr(par, 'stderr', 0.0)) + stderr = getattr(par, 'stderr', 0.0) + if stderr is None: + stderr = 0.0 + uvars[par.name] = ufloat(par.value, stderr) corr_uvars = None if covar is not None: >From b812b4731805f9d85d717aff0ad34031c747d1d4 Mon Sep 17 00:00:00 2001 From: Matthew Newville <newvi...@cars.uchicago.edu> Date: Sun, 11 Aug 2024 12:44:30 -0500 Subject: [PATCH 3/9] MAINT: asteval no longer raises NameError to Python - so we suppress 'asteval' expections to stderr and look for them when creating parameters --- lmfit/parameter.py | 25 +++++++++++++++++++++++-- tests/test_parameters.py | 3 +-- 2 files changed, 24 insertions(+), 4 deletions(-) diff --git a/lmfit/parameter.py b/lmfit/parameter.py index cd6d9626..77ba882c 100644 --- a/lmfit/parameter.py +++ b/lmfit/parameter.py @@ -27,6 +27,20 @@ def check_ast_errors(expr_eval): expr_eval.raise_exception(None) +class Writer: + """Replace 'stdout' and 'stderr' for asteval.""" + def __init__(self, **kws): + self.messages = [] + for k, v in kws.items(): + setattr(self, k, v) + + def write(self, msg): + """Internal writer.""" + o = msg.strip() + if len(o) > 0: + self.messages.append(msg) + + def asteval_with_uncertainties(*vals, obj=None, pars=None, names=None, **kwargs): """Calculate object value, given values for variables. @@ -76,8 +90,9 @@ def __init__(self, usersyms=None): """ super().__init__(self) - - self._asteval = Interpreter() + self._ast_msgs = Writer() + self._asteval = Interpreter(writer=self._ast_msgs, + err_writer=self._ast_msgs) _syms = {} _syms.update(SCIPY_FUNCTIONS) @@ -86,6 +101,9 @@ def __init__(self, usersyms=None): for key, val in _syms.items(): self._asteval.symtable[key] = val + def _writer(self, msg): + self._asteval_msgs.append(msg) + def copy(self): """Parameters.copy() should always be a deepcopy.""" return self.__deepcopy__(None) @@ -433,6 +451,9 @@ def add(self, name, value=None, vary=True, min=-inf, max=inf, expr=None, self.__setitem__(name, Parameter(value=value, name=name, vary=vary, min=min, max=max, expr=expr, brute_step=brute_step)) + if len(self._asteval.error) > 0: + err = self._asteval.error[0] + raise err.exc(err.msg) def add_many(self, *parlist): """Add many parameters, using a sequence of tuples. 
diff --git a/tests/test_parameters.py b/tests/test_parameters.py index 7e12b1f0..998341e3 100644 --- a/tests/test_parameters.py +++ b/tests/test_parameters.py @@ -39,8 +39,7 @@ def assert_parameter_attributes(par, expected): def test_check_ast_errors(): """Assert that an exception is raised upon AST errors.""" pars = lmfit.Parameters() - - msg = r"at expr='<_?ast.Module object at" + msg = "name 'par2' is not defined" with pytest.raises(NameError, match=msg): pars.add('par1', expr='2.0*par2') >From 05fb78e8ebab8d3cc3360f3eb1ee852c8f4a7528 Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 10:50:54 -0400 Subject: [PATCH 4/9] DOC: tweak 'sphinx-gallery' settings --- doc/conf.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/conf.py b/doc/conf.py index 1a36156b..972b552b 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -167,7 +167,7 @@ 'examples_dirs': '../examples', 'gallery_dirs': 'examples', 'filename_pattern': r'(\\|/)documentation|(\\|/)example_', - 'ignore_pattern': r'(\\|/)doc_', + 'ignore_pattern': 'doc_', 'ignore_repr_types': r'matplotlib', 'image_srcset': ["3x"], } >From 07d65bf8ebcf013e7b47ce0c4930aa39d7cd2cc3 Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 10:51:28 -0400 Subject: [PATCH 5/9] MAINT: update pre-commit hooks --- .pre-commit-config.yaml | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index ee53e906..bad4bf3f 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -2,7 +2,7 @@ exclude: 'doc/conf.py' repos: - repo: https://github.com/asottile/pyupgrade - rev: v3.16.0 + rev: v3.17.0 hooks: - id: pyupgrade args: [--py38-plus] @@ -12,18 +12,18 @@ repos: hooks: - id: check-ast - id: check-builtin-literals + - id: check-docstring-first - id: check-case-conflict - id: check-merge-conflict - id: check-toml + - id: check-yaml - id: debug-statements - id: end-of-file-fixer - id: mixed-line-ending - id: trailing-whitespace - - id: fix-encoding-pragma - args: [--remove] - repo: https://github.com/PyCQA/flake8 - rev: 7.1.0 + rev: 7.1.1 hooks: - id: flake8 additional_dependencies: [flake8-deprecated, flake8-mutable, Flake8-pyproject] >From 805263ddfac4f877dfd2c4e834155bd274020e3d Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 10:53:22 -0400 Subject: [PATCH 6/9] CI: update to latest NumPy version --- azure-pipelines.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/azure-pipelines.yml b/azure-pipelines.yml index 314d8704..01bc9d6e 100644 --- a/azure-pipelines.yml +++ b/azure-pipelines.yml @@ -256,7 +256,7 @@ stages: displayName: 'Install build, pip, setuptools, wheel, pybind11, and cython' - script: | export PATH=/home/vsts/.local/bin:$PATH - export numpy_version=2.0.0 + export numpy_version=2.0.1 wget https://github.com/numpy/numpy/releases/download/v${numpy_version}/numpy-${numpy_version}.tar.gz tar xzvf numpy-${numpy_version}.tar.gz cd numpy-${numpy_version} >From 16f8cbd176ed5b9f5e1ac6a369c7bd75dbd5046a Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 12:39:05 -0400 Subject: [PATCH 7/9] BLD: remove redundant wheel dependency --- pyproject.toml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index e41e844b..9578466d 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,5 +1,5 @@ 
[build-system] -requires = ["setuptools>=45", "wheel", "setuptools_scm>=6.2"] +requires = ["setuptools>=45", "setuptools_scm>=6.2"] build-backend = "setuptools.build_meta" [project] >From d6810a558887956f598d58d9876be8fe96090d6d Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 18:06:34 -0400 Subject: [PATCH 8/9] DOC: update names of the documentation examples in Gallery - also rename the file doc_uvars_params.py to follow the usual naming conventions --- doc/doc_examples_to_gallery.py | 13 ++++++++----- doc/model.rst | 2 +- ...{doc_uvars_params.py => doc_parameters_uvars.py} | 4 ++-- 3 files changed, 11 insertions(+), 8 deletions(-) rename examples/{doc_uvars_params.py => doc_parameters_uvars.py} (96%) diff --git a/doc/doc_examples_to_gallery.py b/doc/doc_examples_to_gallery.py index 4cfeb5bc..49695ffd 100755 --- a/doc/doc_examples_to_gallery.py +++ b/doc/doc_examples_to_gallery.py @@ -5,7 +5,7 @@ - create a "documentation" directory within "examples" - add a README.txt file -- copy the examples from the documentation, bu remove the "doc_" from the +- copy the examples from the documentation, removing the "doc_" from the filename - add the required docstring to the files for proper rendering - copy the data files @@ -46,12 +46,15 @@ def copy_data_files(src_dir, dest_dir): ) for fn in files: + sname = fn.name[4:] + lmfit_class, *description = sname[:-3].split('_') + gallery_name = f"{lmfit_class.capitalize()} - {' '.join(description)}" script_text = fn.read_text() - gallery_file = examples_documentation_dir / fn.name[4:] - msg = "" # add optional message f - gallery_file.write_text(f'"""\n{fn.name}\n{"=" * len(fn.name)}\n\n' + gallery_file = examples_documentation_dir / sname + msg = "" # add optional message + gallery_file.write_text(f'"""\n{gallery_name}\n{"=" * len(gallery_name)}\n\n' f'{msg}\n"""\n{script_text}') # make sure the saved Models and ModelResult are available @@ -67,5 +70,5 @@ def copy_data_files(src_dir, dest_dir): os.chdir(doc_dir) -# # data files for the other Gallery examples +# data files for the other Gallery examples copy_data_files(examples_documentation_dir, doc_dir) diff --git a/doc/model.rst b/doc/model.rst index 5c8ae340..e5d6506a 100644 --- a/doc/model.rst +++ b/doc/model.rst @@ -1166,7 +1166,7 @@ that taking correlations between Parameters into account when performing calculations can have a noticeable influence on the resulting uncertainties. -.. jupyter-execute:: ../examples/doc_uvars_params.py +.. 
jupyter-execute:: ../examples/doc_parameters_uvars.py Note that the :meth:`Model.post_fit` does not need to be limited to this diff --git a/examples/doc_uvars_params.py b/examples/doc_parameters_uvars.py similarity index 96% rename from examples/doc_uvars_params.py rename to examples/doc_parameters_uvars.py index 124c3024..1c4f2da8 100644 --- a/examples/doc_uvars_params.py +++ b/examples/doc_parameters_uvars.py @@ -1,4 +1,4 @@ -# <examples/doc_uvars_params.py> +# <examples/doc_parameters_uvars.py> import numpy as np from lmfit.models import ExponentialModel, GaussianModel @@ -67,4 +67,4 @@ def post_fit(result): out = mod.fit(y, pars, x=x) print(out.fit_report(min_correl=0.5)) -# <end examples/doc_uvars_params.py> +# <end examples/doc_parameters_uvars.py> >From ff436c270d07433a7ae404fe76bc9c627b4edc3f Mon Sep 17 00:00:00 2001 From: reneeotten <reneeot...@users.noreply.github.com> Date: Fri, 16 Aug 2024 22:02:40 -0400 Subject: [PATCH 9/9] BLD: remove numexpr dependency (again) --- pyproject.toml | 1 - 1 file changed, 1 deletion(-) diff --git a/pyproject.toml b/pyproject.toml index 9578466d..cacdb8a4 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -59,7 +59,6 @@ doc = [ "matplotlib", "numdifftools", "pandas", - "numexpr", # note: Pandas appears to need numexpr to build our docs "Pillow", "pycairo;platform_system=='Windows'", "Sphinx",
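
For readers mapping the changelog entries above to behaviour, the following minimal sketch is adapted from the new test_model_refitting test shipped in lmfit-1.3.2 (see the tests/test_model.py hunk above). It is illustrative only and not part of the packaged sources; the synthetic data and parameter values are invented for the example.

import numpy as np
from lmfit.lineshapes import gaussian
from lmfit.models import GaussianModel, QuadraticModel

# synthetic data: a Gaussian peak on a quadratic background (made-up values)
np.random.seed(0)
x = np.linspace(0, 100, 5001)
y = gaussian(x, amplitude=90, center=60, sigma=4) + 30 + 0.3*x - 0.0030*x*x
y += np.random.normal(size=x.size, scale=0.5)

# per-component prefixes avoid parameter-name clashes; as of 1.3.2 a
# CompositeModel warns about and ignores a prefix passed to it directly (Issue #954)
model = GaussianModel(prefix='peak_') + QuadraticModel(prefix='bkg_')
params = model.make_params(bkg_a=0, bkg_b=0, bkg_c=20, peak_amplitude=200,
                           peak_center=55, peak_sigma=10)

# first fit with the Powell method
result = model.fit(y, params, x=x, method='powell')

# re-fit with Levenberg-Marquardt: since 1.3.2 the second fit starts from the
# parameters of the previous result (PR #961, Issue #960), as reflected in
# result.init_values
result.fit(y, x=x, method='leastsq')
print(result.init_values)
print(result.fit_report())

The same fix applies when fresh Parameters are passed to the re-fit: whatever is supplied (or, failing that, the previous result's parameters) becomes the documented starting point instead of stale initial values.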