Bug#893656: transition: theano 0.9 -> 1.0 - please update lasagne
lasagne - 3 test failures; fixed upstream by https://github.com/Lasagne/Lasagne/pull/836
  Latest upstream (37ca134; only packaging change was to drop remove-deprecated.patch) doesn't have these failures. There is a warning that cuda_convnet is no longer available with Theano 1.0, but that has another dependency (pybuild2) that was never in Debian anyway.

pyopencl - FTBFS for unrelated reasons (#893050); the theano-using code appears to be build/test scripts (pyopencl/compyte/gen*, test*) we never actually run.
  Builds after applying the patch in #893050; 7 tests fail or crash (in Python 2; 6 also in Python 3), but they do this with or without Theano.

sympy - build hangs at mkdir -p _build/logo (with mkdir using a full CPU core?!), probably unrelated as it also happens without *-theano installed.
  This hang is because faketime doesn't work well with cowbuilder --login, i.e. nothing to do with sympy itself. Running the testsuite with new Theano succeeds.

Hence, it appears that updating lasagne is the only change required to work with Theano 1.0, but testing on CUDA-capable hardware would still be preferable.

-- 
debian-science-maintainers mailing list
debian-science-maintainers@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/debian-science-maintainers
Bug#893729: sympy FTBFS: python3-distutils is now a separate package
Given that the errors suggest it tried and failed to find both setuptools and distutils, it probably already can use either (though I haven't tried to check this), if they are installed...

On 21/03/18 20:30, Ghislain Vaillant wrote:
> Another option could be to patch the build system to use setuptools
> instead of distutils as recommended by the PyPA?
>
> Le mer. 21 mars 2018 à 20:45, Rebecca N. Palmer <rebecca_pal...@zoho.com> a écrit :
>> Source: sympy
>> Severity: serious
>> Control: tags -1 patch
>> X-Debbugs-Cc: debian-pyt...@lists.debian.org
>>
>> python3-distutils has been moved out of python3.6 (as of 3.6.5~rc1-2),
>> so if you need it, please build-depend on it. (Or python3-setuptools,
>> given that this looks like it might prefer that.) (Has anyone checked
>> whether there are more of these?)
>>
>> dpkg-buildpackage: info: source package sympy
>> dpkg-buildpackage: info: source version 1.1.1-4
>> dpkg-buildpackage: info: source distribution unstable
>> dpkg-buildpackage: info: source changed by Yaroslav Halchenko <deb...@onerussian.com>
>> dpkg-buildpackage: info: host architecture amd64
>>  dpkg-source --before-build sympy-1.1.1
>>  fakeroot debian/rules clean
>> dh clean --with python2,python3 --buildsystem=pybuild
>>    debian/rules override_dh_auto_clean
>> make[1]: Entering directory '/home/rnpalmer/Debian/builds/stackbuild/sympy-1.1.1'
>> dh_auto_clean
>> I: pybuild base:217: python2.7 setup.py clean
>> /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
>>   warnings.warn(msg)
>> running clean
>> I: pybuild base:217: python3.6 setup.py clean
>> Traceback (most recent call last):
>>   File "setup.py", line 46, in <module>
>>     from setuptools import setup, Command
>> ModuleNotFoundError: No module named 'setuptools'
>>
>> During handling of the above exception, another exception occurred:
>>
>> Traceback (most recent call last):
>>   File "setup.py", line 49, in <module>
>>     from distutils.core import setup, Command
>> ModuleNotFoundError: No module named 'distutils'
>> E: pybuild pybuild:330: clean: plugin distutils failed with: exit code=1: python3.6 setup.py clean
>> dh_auto_clean: pybuild --clean -i python{version} -p 3.6 returned exit code 13
>> make[1]: *** [debian/rules:29: override_dh_auto_clean] Error 25
>> make[1]: Leaving directory '/home/rnpalmer/Debian/builds/stackbuild/sympy-1.1.1'
>> make: *** [debian/rules:10: clean] Error 2
>> dpkg-buildpackage: error: fakeroot debian/rules clean subprocess returned exit status 2
Bug#893729: sympy FTBFS: python3-distutils is now a separate package
Source: sympy
Severity: serious
Control: tags -1 patch
X-Debbugs-Cc: debian-pyt...@lists.debian.org

python3-distutils has been moved out of python3.6 (as of 3.6.5~rc1-2), so if you need it, please build-depend on it. (Or python3-setuptools, given that this looks like it might prefer that.) (Has anyone checked whether there are more of these?)

dpkg-buildpackage: info: source package sympy
dpkg-buildpackage: info: source version 1.1.1-4
dpkg-buildpackage: info: source distribution unstable
dpkg-buildpackage: info: source changed by Yaroslav Halchenko <deb...@onerussian.com>
dpkg-buildpackage: info: host architecture amd64
 dpkg-source --before-build sympy-1.1.1
 fakeroot debian/rules clean
dh clean --with python2,python3 --buildsystem=pybuild
   debian/rules override_dh_auto_clean
make[1]: Entering directory '/home/rnpalmer/Debian/builds/stackbuild/sympy-1.1.1'
dh_auto_clean
I: pybuild base:217: python2.7 setup.py clean
/usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'install_requires'
  warnings.warn(msg)
running clean
I: pybuild base:217: python3.6 setup.py clean
Traceback (most recent call last):
  File "setup.py", line 46, in <module>
    from setuptools import setup, Command
ModuleNotFoundError: No module named 'setuptools'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "setup.py", line 49, in <module>
    from distutils.core import setup, Command
ModuleNotFoundError: No module named 'distutils'
E: pybuild pybuild:330: clean: plugin distutils failed with: exit code=1: python3.6 setup.py clean
dh_auto_clean: pybuild --clean -i python{version} -p 3.6 returned exit code 13
make[1]: *** [debian/rules:29: override_dh_auto_clean] Error 25
make[1]: Leaving directory '/home/rnpalmer/Debian/builds/stackbuild/sympy-1.1.1'
make: *** [debian/rules:10: clean] Error 2
dpkg-buildpackage: error: fakeroot debian/rules clean subprocess returned exit status 2
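A minimal fix along the lines suggested above would be to declare the new package explicitly in debian/control. This is an illustrative sketch only: the surrounding Build-Depends entries are hypothetical, and the choice between python3-setuptools and python3-distutils is the maintainer's.

```
Build-Depends: debhelper (>= 9),
               dh-python,
               python-all,
               python3-all,
               python3-setuptools,
```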
Bug#893656: transition: theano 0.9 -> 1.0
Source: theano
Severity: wishlist

(Posting here rather than on debian-release as the formal transition process is rarely used, and its tools are mostly useless, for non-compiled cases.)

Theano 1.0 (currently in Salsa) contains some API-breaking changes: http://www.deeplearning.net/software/theano/NEWS.html

The packages in main that import theano are:

brian - not tested, FTBFS for unrelated reasons (#876920), not in testing
deepnano - not tested but looks already broken, not in testing
keras - build and debci tests pass; upstream say it shouldn't be a problem: https://github.com/keras-team/keras/issues/5209
lasagne - 3 test failures; fixed upstream by https://github.com/Lasagne/Lasagne/pull/836
pyopencl - FTBFS for unrelated reasons (#893050); the theano-using code appears to be build/test scripts (pyopencl/compyte/gen*, test*) we never actually run
sympy - build hangs at mkdir -p _build/logo (with mkdir using a full CPU core?!), probably unrelated as it also happens without *-theano installed

As one of the changes is removal of theano.sandbox.cuda (replaced by theano.gpuarray, with a different API) and my hardware doesn't support CUDA, I am unable to test this fully, and testing by others would be welcome.

Be aware that if python(3)-nose is installed (it wasn't for these tests), attempting to import theano.sandbox.cuda raises SkipTest (?!), not ImportError: https://salsa.debian.org/science-team/theano/raw/master/theano/sandbox/cuda/__init__.py
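Because of that SkipTest-instead-of-ImportError behaviour, an availability probe of the usual `try: import X / except ImportError:` form can be silently wrong. A minimal, hypothetical sketch of a more defensive probe (the module names below are only for illustration):

```python
def module_available(name):
    """Return True if `name` imports cleanly.

    Catching Exception rather than ImportError also covers modules that
    raise something unexpected on import -- such as theano.sandbox.cuda
    raising nose's SkipTest when nose happens to be installed.
    """
    try:
        __import__(name)
        return True
    except Exception:
        return False

print(module_available("json"))                    # stdlib module: True
print(module_available("no_such_module_xyzzy"))    # missing module: False
```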
Bug#882559: python-xarray FTBFS - GenericNetCDFDataTest.test_cross_engine_read_write_netcdf3 failed
Package: python3-xarray
Version: 0.9.2-1
Severity: serious
Control: tags -1 upstream

Two netcdf tests fail in current sid (see log below). This is known upstream as https://github.com/pydata/xarray/issues/1721 , according to which the actual problem is that scipy has been writing netcdf files with invalid padding for some time ( https://github.com/scipy/scipy/pull/8170 ), and netcdf 4.5 rejects such invalid files ( https://github.com/Unidata/netcdf-c/issues/657 ). netcdf have since reverted this change, and a patch has been posted for the original scipy bug, but given that neither of these is xarray's fault it might make most sense to temporarily disable these tests.

=== FAILURES ===
__ GenericNetCDFDataTest.test_cross_engine_read_write_netcdf3 __

    def test_cross_engine_read_write_netcdf3(self):
        data = create_test_data()
        valid_engines = set()
        if has_netCDF4:
            valid_engines.add('netcdf4')
        if has_scipy:
            valid_engines.add('scipy')
        for write_engine in valid_engines:
            for format in ['NETCDF3_CLASSIC', 'NETCDF3_64BIT']:
                with create_tmp_file() as tmp_file:
                    data.to_netcdf(tmp_file, format=format,
                                   engine=write_engine)
                    for read_engine in valid_engines:
                        with open_dataset(tmp_file,
>                                         engine=read_engine) as actual:

xarray/tests/test_backends.py:977:
xarray/backends/api.py:291: in open_dataset
    autoclose=autoclose)
xarray/backends/netCDF4_.py:210: in __init__
    self.ds = opener()
xarray/backends/netCDF4_.py:185: in _open_netcdf4_group
    ds = nc4.Dataset(filename, mode=mode, **kwargs)
netCDF4/_netCDF4.pyx:2015: in netCDF4._netCDF4.Dataset.__init__
E   OSError: [Errno -36] NetCDF: Invalid argument: b'/tmp/tmpmgjj7gx5/temp-88.nc'
netCDF4/_netCDF4.pyx:1636: OSError

___ GenericNetCDFDataTestAutocloseTrue.test_cross_engine_read_write_netcdf3 ___

(identical failure, ending in)
E   OSError: [Errno -36] NetCDF: Invalid argument: b'/tmp/tmpfc4acgiv/temp-93.nc'
netCDF4/_netCDF4.pyx:1636: OSError

(followed by a few more failures that look like #871208 - see there)
Bug#871208: python-xarray tests failures
Control: tags -1 upstream fixed-upstream

This appears to be 2 separate bugs, both triggered by using pandas 0.20 and fixed upstream.

groupby_bins:
https://github.com/pydata/xarray/issues/1386
https://github.com/pydata/xarray/pull/1390

test_sel: this is really a pandas bug (fixed in 0.21, but this version isn't in Debian yet) but has a workaround in xarray:
https://github.com/pandas-dev/pandas/issues/16896
https://github.com/pydata/xarray/pull/1479

Both fixes are in the 0.10 upstream release, but be aware that this is a mildly API-breaking release (http://xarray.pydata.org/en/latest/whats-new.html#breaking-changes).
Bug#878596: fixed in Alioth
Control: tags -1 patch

Ready for upload (the GPU tests again haven't been run, but this shouldn't touch those parts).
Bug#877419: [Help] Exclusion did not worked (Was: Bug#877419: Bug#877700: RM: pandas [arm64 armel armhf mips mips64el mipsel s390x] ...)
    raise nose.SkipTest("known failure of test_stata on non-little endian")
E   NameError: name 'nose' is not defined

You need an 'import nose' first, if the test doesn't already have one.
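A runnable illustration of the failure mode (nose itself isn't needed to see it): referencing `nose` without importing it raises NameError before SkipTest is ever constructed.

```python
# 'nose' is deliberately NOT imported here, mirroring the broken test
def skip_without_import():
    raise nose.SkipTest("known failure of test_stata on non-little endian")

try:
    skip_without_import()
except NameError as e:
    print("NameError:", e)   # this is what the test run reported
# the fix is simply an `import nose` at the top of the test module
```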
Bug#878596: theano: FTBFS on big-endian systems - test_pooling_with_tensor_vars fails
Package: python-theano
Version: 0.9.0+dfsg-1
Severity: serious

Suspect the problem is theano/theano/tensor/signal/pool.py:650, which effectively does int32 = *(int64 *)(pointer_to_some_int_type) - if some_int_type is int32, that works on little-endian but not big-endian.
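A minimal sketch of the hazard in plain Python (not Theano's actual generated C): reading the first four bytes of a native int64 as an int32 yields the value on little-endian, but the (usually zero) high half on big-endian.

```python
import struct
import sys

raw = struct.pack('=q', 42)               # native-endian int64 bytes
viewed = struct.unpack('=i', raw[:4])[0]  # first 4 bytes read as int32
# little-endian machines see the low half (42); big-endian see the high half (0)
print(sys.byteorder, viewed)

# the portable fix is to convert the value, never to reinterpret the bytes
assert struct.unpack('=i', struct.pack('=i', 42))[0] == 42
```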
Bug#877316: clblas: Crashes on single-precision-only hardware, due to double-precision literals
I intend to file this upstream after investigating further (with a patch if I can); the main purpose of this Debian bug is to explain why I can't fully test the theano package I recently pushed.
Bug#877316: clblas: Crashes on single-precision-only hardware, due to double-precision literals
Package: libclblas2
Version: 2.12-1
Control: tags -1 upstream
Control: affects -1 beignet-opencl-icd

Some clblas operations use '0.0' (a double-precision literal) not '0.0f' (a single-precision literal) even when processing single-precision arrays. This causes it to crash on GPUs that don't support double precision:

ASSERTION FAILED: sel.hasDoubleType()
  at file /build/beignet-1.3.1/backend/src/backend/gen_insn_selection.cpp, function void gbe::ConvertInstructionPattern::convertBetweenFloatDouble(gbe::Selection::Opaque&, const gbe::ir::ConvertInstruction&, bool&) const, line 6148

This particular 0.0 appears to have come from http://sources.debian.net/src/clblas/2.12-1/src/library/blas/AutoGemm/KernelOpenCL.py/#L368, but there may well be more. This issue also exists in upstream git.
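The promotion rule at work can be sketched with numpy (an analogy only: in OpenCL C, as in C, an unsuffixed floating literal has type double, so mixing one into single-precision arithmetic drags the whole expression to double - which fp32-only hardware cannot execute):

```python
import numpy as np

x32 = np.zeros(3, dtype=np.float32)
d = np.zeros(3, dtype=np.float64)   # stands in for an unsuffixed 0.0
f = np.zeros(3, dtype=np.float32)   # stands in for 0.0f

print((x32 + d).dtype)   # float64 -- the whole computation was promoted
print((x32 + f).dtype)   # float32 -- stays single precision
```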
Bug#835531: theano: fail of test_csr_correct_output_faster_than_scipy at times (FTBFS)
I agree that failing a build for a timing test such as this is inherently unreliable... so let's turn it off. (Tested only in sparse/tests/test_basic.py, not a full build.)

It may be that theano is slower because it's using a lowest-common-denominator BLAS while numpy does runtime processor detection, as recently discussed at https://lists.debian.org/debian-science/2017/02/msg00043.html , but now is not the time to be fixing that.

Description: Disable overly environment-dependent test
 Testing speed by wall-clock time is inherently unreliable on a shared
 machine such as Debian's buildds: don't let it fail the whole build.
Author: Rebecca N. Palmer <rebecca_pal...@zoho.com>
Bug-Debian: https://bugs.debian.org/835531
Forwarded: not-needed

diff --git a/theano/sparse/tests/test_basic.py b/theano/sparse/tests/test_basic.py
index 8c183b9..03d79f1 100644
--- a/theano/sparse/tests/test_basic.py
+++ b/theano/sparse/tests/test_basic.py
@@ -1209,8 +1209,8 @@ class test_structureddot(unittest.TestCase):
         overhead_tol = 0.002  # seconds
         overhead_rtol = 1.1  # times as long
         utt.assert_allclose(scipy_result, theano_result)
-        if (not theano.config.mode in ["DebugMode", "DEBUG_MODE"] and
-                theano.config.cxx):
+
+        if 0:  # (not theano.config.mode in ["DebugMode", "DEBUG_MODE"] and theano.config.cxx):
             self.assertFalse(
                 theano_time > overhead_rtol * scipy_time + overhead_tol,
                 (theano_time,
Bug#855102: (no subject)
Control: tags -1 patch

(Combined patch for both bugs as the changes are so close together, but you *can* do the obvious split if you only want to fix one.)

This has been tested only by running sparse/test_basic.py and #855102's example from the source tree, *not* a full build. This confirms that it does fix #855102, but I can't test for #831541 (due to several qemu bugs). The syntax for running a single test is

nosetests3 -v '/path/to/theano/theano/sparse/tests/test_basic.py':SamplingDotTester.test_op

I intend to send this upstream tomorrow.

Description: Fix invalid casts and negative stride handling
 Cast values, not pointers, from int64 to int32.
 Remember that first-in-index order (numpy) and first-in-memory-order
 (BLAS) are not always the same thing.
 Bump c_code_cache_version to make sure existing installs use the fixes.
Author: Rebecca N. Palmer <rebecca_pal...@zoho.com>
Bug-Debian: https://bugs.debian.org/855102 https://bugs.debian.org/831541
Forwarded: not yet

diff --git a/theano/sparse/opt.py b/theano/sparse/opt.py
index 6100405..d1c2b54 100644
--- a/theano/sparse/opt.py
+++ b/theano/sparse/opt.py
@@ -829,7 +829,11 @@ class UsmmCscDense(gof.Op):
         npy_intp Sind = PyArray_STRIDES(%(x_ind)s)[0] / PyArray_DESCR(%(x_ind)s)->elsize;
         npy_intp Sptr = PyArray_STRIDES(%(x_ptr)s)[0] / PyArray_DESCR(%(x_ptr)s)->elsize;
         npy_intp Sy = PyArray_STRIDES(%(y)s)[1] / PyArray_DESCR(%(y)s)->elsize;
-
+
+        // blas expects ints; convert here (rather than just making N etc ints) to avoid potential overflow in the negative-stride correction
+        int N32 = N;
+        int Sy32 = Sy;
+        int Szn32 = Szn;
 
         if (!(%(inplace)s))
         {
@@ -859,7 +863,7 @@ class UsmmCscDense(gof.Op):
                     if (Szn < 0)
                         z_row += (N - 1) * Szn;
 
-                    %(axpy)s((int*), (%(conv_type)s*), (%(conv_type)s*)y_row, (int*), (%(conv_type)s*)z_row, (int*));
+                    %(axpy)s(, (%(conv_type)s*), (%(conv_type)s*)y_row, , (%(conv_type)s*)z_row, );
                 }
             }
         }
@@ -868,7 +872,7 @@ class UsmmCscDense(gof.Op):
         return rval
 
     def c_code_cache_version(self):
-        return (1, blas.blas_header_version())
+        return (1, blas.blas_header_version(), 0xdeb1a)
 
 usmm_csc_dense = UsmmCscDense(inplace=False)
 usmm_csc_dense_inplace = UsmmCscDense(inplace=True)
@@ -1748,7 +1752,7 @@ class SamplingDotCSR(gof.Op):
         ])
 
     def c_code_cache_version(self):
-        return (2, blas.blas_header_version())
+        return (2, blas.blas_header_version(), 0xdeb1a)
 
     def c_support_code(self):
         return blas.blas_header_text()
@@ -1891,6 +1895,11 @@ PyErr_SetString(PyExc_NotImplementedError, "rank(y) != 2"); %(fail)s;}
         memcpy(Dzi, Dpi, PyArray_DIMS(%(p_ind)s)[0]*sizeof(dtype_%(p_ind)s));
         memcpy(Dzp, Dpp, PyArray_DIMS(%(p_ptr)s)[0]*sizeof(dtype_%(p_ptr)s));
+        // blas expects ints; convert here (rather than just making K etc ints) to avoid potential overflow in the negative-stride correction
+        int K32 = K;
+        int Sdx32 = Sdx;
+        int Sdy32 = Sdy;
+
         for (npy_int32 m = 0; m < M; ++m) {
             for (npy_int32 n_idx = Dpp[m * Sdpp]; n_idx < Dpp[(m+1)*Sdpp]; ++n_idx) {
                 const npy_int32 n = Dpi[n_idx * Sdpi]; // row index of non-null value for column K
@@ -1898,8 +1907,15 @@ PyErr_SetString(PyExc_NotImplementedError, "rank(y) != 2"); %(fail)s;}
                 const dtype_%(x)s* x_row = (dtype_%(x)s*)(PyArray_BYTES(%(x)s) + PyArray_STRIDES(%(x)s)[0] * m);
                 const dtype_%(y)s* y_col = (dtype_%(y)s*)(PyArray_BYTES(%(y)s) + PyArray_STRIDES(%(y)s)[0] * n);
+                // dot expects pointer to the beginning of memory arrays,
+                // so when the stride is negative, we need to get the
+                // last element
+                if (Sdx < 0)
+                    x_row += (K - 1) * Sdx;
+                if (Sdy < 0)
+                    y_col += (K - 1) * Sdy;
 
-                Dzd[n_idx * Sdzd] = Dpd[n_idx * Sdpd] * %(cdot)s((int*), (const %(conv_type)s*)x_row, (int*), (const %(conv_type)s*)y_col, (int*));
+                Dzd[n_idx * Sdzd] = Dpd[n_idx * Sdpd] * %(cdot)s(, (const %(conv_type)s*)x_row, , (const %(conv_type)s*)y_col, );
             }
         }
     }
diff --git a/theano/sparse/tests/test_basic.py b/theano/sparse/tests/test_basic.py
index 8c183b9..03d79f1 100644
--- a/theano/sparse/tests/test_basic.py
+++ b/theano/sparse/tests/test_basic.py
@@ -3085,6 +3085,20 @@ class SamplingDotTester(utt.InferShapeTester):
         assert tested.format == 'csr'
         assert tested.dtype == expected.dtype
 
+    def test_negative_stride(self):
+        f = theano.function(
+
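The negative-stride correction in the patch above (x_row += (K - 1) * Sdx) can be checked from numpy alone: a reversed view's data pointer is the last element in memory, and stepping (K - 1) negative strides back from it lands on the lowest address, which is what a BLAS-style routine expects to receive.

```python
import numpy as np

a = np.arange(5.0)   # contiguous float64 buffer, 8 bytes per element
r = a[::-1]          # reversed view: same memory, strides == (-8,)

# the view's data pointer is a's LAST element, not the buffer start
assert r.ctypes.data == a.ctypes.data + (a.size - 1) * a.itemsize

# the patch's correction: move back (K - 1) strides to the lowest address
lowest = r.ctypes.data + (r.size - 1) * r.strides[0]
assert lowest == a.ctypes.data
print("correction verified")
```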
Bug#855102: theano: SamplingDot broken on negative-stride arrays
Package: theano
Version: 0.8.2-3
Severity: important
Control: tags -1 upstream

Because Numpy arrays' data pointer is the first element in *index* order but BLAS expects the first element in *memory* order, simply calling a BLAS function as in SamplingDotCSR (https://sources.debian.net/src/theano/0.8.2-4/theano/sparse/opt.py/#L1902) doesn't work on negative-stride arrays:

import theano
import theano.sparse
import theano.tensor
import numpy as np
import scipy.sparse

x=[theano.tensor.matrix(),theano.tensor.matrix(),theano.sparse.csr_matrix()]
f=theano.function(x,theano.sparse.sampling_dot(*x))
m1=np.random.rand(3,5)
m2=np.random.rand(8,5)
m3=m2[::-1,::-1]  # negative strides
p=scipy.sparse.csr_matrix(np.zeros((3,8)))
p[1,2]=1
print(f(m1,m2,p)[1,2],np.dot(m1,m2.T)[1,2])  # equal
print(f(m1,m3,p)[1,2],np.dot(m1,m3.T)[1,2],f(m1,np.array(m3),p)[1,2])  # should be equal, but aren't

I expect to post a fix tonight (mostly a copy from UsmmCscDense, plus a c_code_cache_version bump).
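The mismatch can be seen directly in numpy, without Theano (a sketch of the underlying issue, not of the bug itself): a reversed view has negative strides, and its first-in-index element is the last element in memory.

```python
import numpy as np

m2 = np.arange(40, dtype=float).reshape(8, 5)
m3 = m2[::-1, ::-1]            # reversed view: same memory, negative strides
print(m2.strides, m3.strides)  # m3's strides are the negation of m2's

# m3's "first" element by index is m2's last element in memory, so m3's
# data pointer is NOT the lowest address of the buffer
assert m3[0, 0] == m2[-1, -1]

# np.array(m3) makes a fresh positive-stride copy, which is why the
# f(m1, np.array(m3), p) call in the example gives the right answer
m3c = np.ascontiguousarray(m3)
assert m3c.strides == (m3c.shape[1] * m3c.itemsize, m3c.itemsize)
```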
Bug#831541: theano: single GradientError and WrongValues in tests on s390x, ppc64 and sparc
Found the problem: the affected functions ( https://sources.debian.net/src/theano/0.8.2-4/theano/sparse/opt.py/#L862 , https://sources.debian.net/src/theano/0.8.2-4/theano/sparse/opt.py/#L1902 ) cast a pointer-to-intptr_t (64-bit) to a pointer-to-int (32-bit). This isn't just broken on big-endian systems, it's a strict aliasing violation *everywhere* (i.e. technically undefined behaviour with optimization on, which it is by default, though it appears to work in practice).

(I expect to post a patch tonight: the obvious fix has a potential overflow issue, and it also needs a c_code_cache_version change to make the fix be used in existing installs.)
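The overflow issue mentioned above - a 64-bit npy_intp can hold values that a 32-bit BLAS integer cannot - suggests converting the value with an explicit range check rather than casting the pointer. A hypothetical sketch of that shape of fix:

```python
def to_blas_int(v):
    """Narrow a 64-bit stride/length to a 32-bit BLAS int, loudly."""
    iv = int(v)
    if not -2**31 <= iv < 2**31:
        raise OverflowError("value does not fit in a 32-bit BLAS integer")
    return iv

print(to_blas_int(1000))     # a typical dimension: fine
try:
    to_blas_int(2**40)       # a pathologically large dimension
except OverflowError as e:
    print("rejected:", e)
```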
Bug#848764: (no subject)
Control: tags -1 patch

Three patches because this is logically three bugs, though it's filed as two upstream:
https://github.com/Theano/Theano/issues/5494
https://github.com/Theano/Theano/issues/5396

First patch taken from upstream https://github.com/Theano/Theano/commit/e8e01f4e0da83d038b244cd5dcec4f0d3f6c0777 by chinnadhurai. (I've only tried these tests, not a full build; I can't reproduce the other three failures reported in upstream 5396.)

--- a/theano/sparse/tests/test_sp2.py
+++ b/theano/sparse/tests/test_sp2.py
@@ -61,7 +61,7 @@ class PoissonTester(utt.InferShapeTester):
 
 
 class BinomialTester(utt.InferShapeTester):
-    n = tensor.scalar()
+    n = tensor.scalar(dtype='int64')
     p = tensor.scalar()
     shape = tensor.lvector()
     _n = 5

--- a/theano/tensor/tests/test_elemwise.py
+++ b/theano/tensor/tests/test_elemwise.py
@@ -414,7 +414,11 @@ class test_CAReduce(unittest_tools.InferShapeTester):
                     zv = numpy.bitwise_or.reduce(zv, axis)
             elif scalar_op == scalar.and_:
                 for axis in reversed(sorted(tosum)):
-                    zv = numpy.bitwise_and.reduce(zv, axis)
+                    if zv.shape[axis] == 0:
+                        # Theano and old numpy use +1 as 'AND of no elements', new numpy uses -1
+                        zv = numpy.abs(numpy.bitwise_and.reduce(zv, axis).astype('int8'))
+                    else:
+                        zv = numpy.bitwise_and.reduce(zv, axis)
             elif scalar_op == scalar.xor:
                 # There is no identity value for the xor function
                 # So we can't support shape of dimensions 0.

--- a/theano/tensor/tests/test_extra_ops.py
+++ b/theano/tensor/tests/test_extra_ops.py
@@ -135,7 +135,7 @@ class TestBinCountOp(utt.InferShapeTester):
     def test_bincountFn(self):
         w = T.vector('w')
         def ref(data, w=None, minlength=None):
-            size = data.max() + 1
+            size = int(data.max()) + 1
             if minlength:
                 size = max(size, minlength)
             if w is not None:
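The numpy behaviour change the second patch works around can be checked directly (under current numpy; older releases returned +1 here, which is why the patch maps both conventions to +1 via abs()):

```python
import numpy as np

empty = np.zeros((0,), dtype='int8')
# AND over no elements: the identity in numpy >= 1.12 is "all bits set",
# i.e. -1 for a signed type; Theano (and older numpy) expected +1
ident = np.bitwise_and.reduce(empty)
print(ident)

# the patch's normalisation accepts either convention
assert abs(int(np.int8(ident))) == 1
```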
Bug#831540: (no subject)
Control: forwarded -1 https://github.com/Theano/Theano/issues/5498
Control: tags -1 patch

I don't think this is a regression - it's Python 3 specific (numpy.array(list of longs, which this test uses on Python 2) = int64 array, but numpy.array(list of Python 3 ints) = native-size int array; see above link for longer discussion) and 0.8.2-2 appears to be the first time the tests were run with Python 3.

Fix (though I've only tried these particular tests, not a full build):

--- a/theano/tensor/tests/test_basic.py
+++ b/theano/tensor/tests/test_basic.py
@@ -6672,11 +6672,11 @@ class T_long_tensor(unittest.TestCase):
         assert scalar_ct.value == val
 
         vector_ct = constant([val, val])
-        assert vector_ct.dtype == 'int64'
+        assert vector_ct.dtype in ('int32','int64')
         assert numpy.all(vector_ct.value == val)
 
         matrix_ct = constant([[val, val]])
-        assert matrix_ct.dtype == 'int64'
+        assert matrix_ct.dtype in ('int32','int64')
         assert numpy.all(matrix_ct.value == val)
 
     def test_too_big(self):
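The platform dependence being allowed for here is easy to demonstrate (a sketch; the default dtype follows the platform's native integer size, hence the relaxed assertion):

```python
import numpy as np

a = np.array([1, 2, 3])   # a list of Python 3 ints
# default integer dtype is platform-dependent: int64 on 64-bit Linux,
# int32 on Windows and 32-bit systems -- so, like the patch, accept both
print(a.dtype)
assert a.dtype in ('int32', 'int64')
```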
Bug#737153: OpenCVModules.cmake not installed, causing visp FTBFS
libopencv-dev doesn't pull in the Java libraries; I don't know if the appropriate fix is that it should, or that the cmake script shouldn't be looking for them when building C(++).