Package: python3-joblib
Tags: fixed-upstream
Control: affects -1 scikit-learn statsmodels
Control: block 966426 by -1
Severity: serious
Justification: causes other packages to FTBFS
Our current version of joblib is incompatible with Python 3.9, which was
recently added to the list of supported Python versions (#966426). As a
result, at least scikit-learn and statsmodels fail their test suites.
Upstream fix: https://github.com/joblib/loky/pull/250
Traceback (from the scikit-learn test log):
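For context, the traceback below suggests the failure pattern: Python 3.9's multiprocessing/queues.py calls self._notempty._at_fork_reinit() in _reset(after_fork=True), assuming _notempty was created in __init__, while loky's _SafeQueue is rebuilt in the child via __setstate__ without ever running __init__. A minimal sketch of that pattern, with hypothetical stand-in classes (Queue39 and SafeQueueSketch are illustrations, not the real multiprocessing/loky code):

```python
import threading

class Queue39:
    """Stand-in for multiprocessing.queues.Queue on Python 3.9 (sketch)."""
    def __init__(self):
        # The condition is created only in __init__ -- never during unpickling.
        self._notempty = threading.Condition(threading.Lock())

    def _reset(self, after_fork=False):
        if after_fork:
            # 3.9 reinitializes the *existing* condition in place,
            # assuming __init__ already ran.
            self._notempty._at_fork_reinit()

    def _after_fork(self):
        self._reset(after_fork=True)

class SafeQueueSketch(Queue39):
    """Stand-in for loky's _SafeQueue: __setstate__ calls _after_fork()
    on an object that never went through __init__."""
    def __setstate__(self, state):
        self.__dict__.update(state)
        self._after_fork()  # fails: no _notempty attribute yet

# Simulate the child process unpickling the queue:
q = SafeQueueSketch.__new__(SafeQueueSketch)  # bypasses __init__, as pickle does
try:
    q.__setstate__({})
except AttributeError as exc:
    error = str(exc)
print(error)
```

The upstream fix linked above addresses this on the loky side.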
_______________________ test_multi_class_n_jobs[kernel3] _______________________
kernel = 1**2 * RBF(length_scale=1)

    @pytest.mark.parametrize('kernel', kernels)
    def test_multi_class_n_jobs(kernel):
        # Test that multi-class GPC produces identical results with n_jobs>1.
        gpc = GaussianProcessClassifier(kernel=kernel)
        gpc.fit(X, y_mc)
        gpc_2 = GaussianProcessClassifier(kernel=kernel, n_jobs=2)
>       gpc_2.fit(X, y_mc)
sklearn/gaussian_process/tests/test_gpc.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sklearn/gaussian_process/_gpc.py:657: in fit
    self.base_estimator_.fit(X, y)
sklearn/multiclass.py:241: in fit
    self.estimators_ = Parallel(n_jobs=self.n_jobs)(delayed(_fit_binary)(
/usr/lib/python3/dist-packages/joblib/parallel.py:1016: in __call__
    self.retrieve()
/usr/lib/python3/dist-packages/joblib/parallel.py:908: in retrieve
    self._output.extend(job.get(timeout=self.timeout))
/usr/lib/python3/dist-packages/joblib/_parallel_backends.py:554: in wrap_future_result
    return future.result(timeout=timeout)
/usr/lib/python3.9/concurrent/futures/_base.py:440: in result
    return self.__get_result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Future at 0x7fdd633df100 state=finished raised TerminatedWorkerError>

    def __get_result(self):
        if self._exception:
>           raise self._exception
E   joblib.externals.loky.process_executor.TerminatedWorkerError: A worker
E   process managed by the executor was unexpectedly terminated. This could
E   be caused by a segmentation fault while calling the function or by an
E   excessive memory usage causing the Operating System to kill the worker.
E   The exit codes of the workers are {EXIT(1)}
/usr/lib/python3.9/concurrent/futures/_base.py:389: TerminatedWorkerError
----------------------------- Captured stdout call -----------------------------
--------------------------------------------------------------------------------
LokyProcess-45 failed with traceback:
--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/joblib/externals/loky/backend/popen_loky_posix.py", line 195, in <module>
    process_obj = pickle.load(from_parent)
  File "/usr/lib/python3/dist-packages/joblib/externals/loky/backend/queues.py", line 75, in __setstate__
    self._after_fork()
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 69, in _after_fork
    self._reset(after_fork=True)
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 73, in _reset
    self._notempty._at_fork_reinit()
AttributeError: '_SafeQueue' object has no attribute '_notempty'
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
LokyProcess-46 failed with traceback:
--------------------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/joblib/externals/loky/backend/popen_loky_posix.py", line 195, in <module>
    process_obj = pickle.load(from_parent)
  File "/usr/lib/python3/dist-packages/joblib/externals/loky/backend/queues.py", line 75, in __setstate__
    self._after_fork()
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 69, in _after_fork
    self._reset(after_fork=True)
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 73, in _reset
    self._notempty._at_fork_reinit()
AttributeError: '_SafeQueue' object has no attribute '_notempty'
--------------------------------------------------------------------------------