Bug#1063631: [Debian-pan-maintainers] Bug#1063631: closing 1063631
I am working on it at the upstream level; I need a few more days. Cheers, Fred
Bug#1065724: epics-base: FTBFS on amd64: Tests failed
Here is an analysis of the FTBFS. On amd64, I have two failures during the tests:

Test Summary Report
---
testPVAServer.t (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
Files=6, Tests=129, 1 wallclock secs ( 0.05 usr 0.01 sys + 0.09 cusr 0.06 csys = 0.21 CPU)
Result: FAIL
---

and

Test Summary Report
---
testInetAddressUtils.t (Wstat: 0 Tests: 65 Failed: 0)
  TODO passed: 64
testChannelAccess.t (Wstat: 0 Tests: 152 Failed: 0)
  TODO passed: 45
testServerContext.t (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
Files=12, Tests=6381, 27 wallclock secs ( 0.38 usr 0.03 sys + 0.42 cusr 0.18 csys = 1.01 CPU)
Result: FAIL
---

On arm64, the first test seems to work:

testPVAServer.t ..
1..1
pvAccess Server v7.1.7
Active configuration (w/ defaults)
EPICS_PVAS_INTF_ADDR_LIST = 0.0.0.0:5075
EPICS_PVAS_BEACON_ADDR_LIST =
EPICS_PVAS_AUTO_BEACON_ADDR_LIST = YES
EPICS_PVAS_BEACON_PERIOD = 15
EPICS_PVAS_BROADCAST_PORT = 5076
EPICS_PVAS_SERVER_PORT = 5075
EPICS_PVAS_PROVIDER_NAMES = local
ok 1 - ctx.get()!=0
ok

I am wondering whether this is related to the network configuration.
On i386:

Test Summary Report
---
printfTest.t (Wstat: 256 (exited 1) Tests: 97 Failed: 1)
  Failed test: 70
  Non-zero exit status: 1
Files=22, Tests=5033, 29 wallclock secs ( 0.42 usr 0.04 sys + 1.74 cusr 0.34 csys = 2.54 CPU)
Result: FAIL
---

Test Summary Report
---
testInetAddressUtils.t (Wstat: 0 Tests: 65 Failed: 0)
  TODO passed: 64
testChannelAccess.t (Wstat: 0 Tests: 152 Failed: 0)
  TODO passed: 45
testServerContext.t (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
Files=12, Tests=6381, 26 wallclock secs ( 0.42 usr 0.02 sys + 0.46 cusr 0.20 csys = 1.10 CPU)
Result: FAIL
---

Test Summary Report
---
testPVAServer.t (Wstat: 0 Tests: 0 Failed: 0)
  Parse errors: No plan found in TAP output
Files=6, Tests=129, 0 wallclock secs ( 0.08 usr 0.02 sys + 0.10 cusr 0.04 csys = 0.24 CPU)
Result: FAIL
---

And there are a bunch of unsupported architectures:

/<>/src/tools/EpicsHostArch.pl: Architecture 'mips64el-linux-gnuabi64-thread-multi' not recognized
/<>/src/tools/EpicsHostArch.pl: Architecture 'powerpc64le-linux-gnu-thread-multi' not recognized
/<>/src/tools/EpicsHostArch.pl: Architecture 'riscv64-linux-gnu-thread-multi' not recognized
/<>/src/tools/EpicsHostArch.pl: Architecture 's390x-linux-gnu-thread-multi' not recognized
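A side note on the "not recognized" warnings: EpicsHostArch.pl simply lacks patterns for these GNU triplets, so the fix is to teach it the missing mappings upstream. Here is a rough illustration of the idea in Python; the EPICS arch names on the right are guesses for illustration only and must be checked against the EPICS sources before patching:

```python
# Hypothetical sketch (not the real EpicsHostArch.pl logic): map Perl
# archname strings to EPICS host-arch names.  The values below are
# guesses, to be verified against upstream's configure/os/ definitions.
EPICS_ARCH = {
    "mips64el-linux": "linux-mips64el",
    "powerpc64le-linux": "linux-ppc64le",
    "riscv64-linux": "linux-riscv64",
    "s390x-linux": "linux-s390x",
}

def epics_host_arch(archname: str) -> str:
    """Translate e.g. 'riscv64-linux-gnu-thread-multi' to an EPICS arch name."""
    for prefix, arch in EPICS_ARCH.items():
        if archname.startswith(prefix):
            return arch
    raise ValueError(f"Architecture {archname!r} not recognized")
```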
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
Setting POCL_WORK_GROUP_METHOD=cbs makes it work:

$ POCL_WORK_GROUP_METHOD=cbs python3 test.py
[SubCFG] Form SubCFGs in bsort_all
[SubCFG] Form SubCFGs in bsort_horizontal
[SubCFG] Form SubCFGs in bsort_vertical
[SubCFG] Form SubCFGs in bsort_book
[SubCFG] Form SubCFGs in bsort_file
[SubCFG] Form SubCFGs in medfilt2d
[SubCFG] Form SubCFGs in medfilt2d
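If exporting the variable in the shell is inconvenient (for example inside a test harness), the same workaround can be applied from the script itself, as long as it happens before pocl initializes its devices. A minimal sketch; the pyopencl import is commented out here to keep the snippet self-contained:

```python
import os

# pocl reads POCL_WORK_GROUP_METHOD when it initializes its devices, so
# the variable must be set before the first pyopencl/OpenCL call in the
# process.
os.environ.setdefault("POCL_WORK_GROUP_METHOD", "cbs")

# import pyopencl as cl  # only import after the variable is set
```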
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
With the latest version (NOT OK):

$ dpkg -l | grep pocl
ii  libpocl2-common        5.0-2.1  all    common files for the pocl library
ii  libpocl2t64:amd64      5.0-2.1  amd64  Portable Computing Language library
ii  pocl-opencl-icd:amd64  5.0-2.1  amd64  pocl ICD
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
Debian 12 (OK):

$ dpkg -l | grep pocl
ii  libpocl2:amd64         3.1-3+deb12u1  amd64  Portable Computing Language library
ii  libpocl2-common        3.1-3+deb12u1  all    common files for the pocl library
ii  pocl-opencl-icd:amd64  3.1-3+deb12u1  amd64  pocl ICD

unstable (NOT OK):

$ dpkg -l | grep pocl
ii  libpocl2:amd64         5.0-2  amd64  Portable Computing Language library
ii  libpocl2-common        5.0-2  all    common files for the pocl library
ii  pocl-opencl-icd:amd64  5.0-2  amd64  pocl ICD
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
On Debian 12 it works out of the box:

$ POCL_DEBUG=1 python3 test.py
[2024-03-11 10:05:31.837738936]POCL: in fn pocl_install_sigfpe_handler at line 229: | GENERAL | Installing SIGFPE handler...
[2024-03-11 10:05:31.868890390]POCL: in fn POclCreateCommandQueue at line 98: | GENERAL | Created Command Queue 3 (0x1ee13c0) on device 0
[2024-03-11 10:05:31.868917030]POCL: in fn POclCreateContext at line 227: | GENERAL | Created Context 2 (0x1ee0e40)
[2024-03-11 10:05:31.868966549]POCL: in fn POclCreateCommandQueue at line 98: | GENERAL | Created Command Queue 4 (0x1f31f10) on device 0
[2024-03-11 10:05:31.874596495]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s8_to_float (0x1fc5540)
[2024-03-11 10:05:31.874606285]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u8_to_float (0x1fc5610)
[2024-03-11 10:05:31.874617005]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s16_to_float (0x1fc5730)
[2024-03-11 10:05:31.874622275]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u16_to_float (0x1f81e70)
[2024-03-11 10:05:31.874632075]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u32_to_float (0x1f81fb0)
[2024-03-11 10:05:31.874638955]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s32_to_float (0x1f820f0)
[2024-03-11 10:05:31.874646635]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections (0x1f82230)
[2024-03-11 10:05:31.874654714]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections2 (0x1f82590)
[2024-03-11 10:05:31.874663744]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections3Poisson (0x1f82990)
[2024-03-11 10:05:31.874669284]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections3 (0x1f82d90)
[2024-03-11 10:05:31.874673814]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_all (0x201ded0)
[2024-03-11 10:05:31.874681154]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_horizontal (0x201e010)
[2024-03-11 10:05:31.874685604]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_vertical (0x201e150)
[2024-03-11 10:05:31.874691454]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_book (0x201e290)
[2024-03-11 10:05:31.874699564]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_file (0x201e3d0)
[2024-03-11 10:05:31.874709654]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel medfilt2d (0x201e510)
[2024-03-11 10:05:31.877001426]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s8_to_float (0x1fdf150)
[2024-03-11 10:05:31.877011365]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u8_to_float (0x20103f0)
[2024-03-11 10:05:31.877019735]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s16_to_float (0x1f60f90)
[2024-03-11 10:05:31.877025545]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u16_to_float (0x1f61060)
[2024-03-11 10:05:31.877030655]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel u32_to_float (0x1f5f1f0)
[2024-03-11 10:05:31.877038395]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel s32_to_float (0x1f5f310)
[2024-03-11 10:05:31.877043475]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections (0x1f60500)
[2024-03-11 10:05:31.877055965]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections2 (0x200efa0)
[2024-03-11 10:05:31.877061275]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections3Poisson (0x200f3a0)
[2024-03-11 10:05:31.877064514]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel corrections3 (0x200f7a0)
[2024-03-11 10:05:31.877071304]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_all (0x200fc20)
[2024-03-11 10:05:31.877079984]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_horizontal (0x200fd60)
[2024-03-11 10:05:31.877087744]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_vertical (0x1f613b0)
[2024-03-11 10:05:31.877094244]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_book (0x1f614f0)
[2024-03-11 10:05:31.877098614]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel bsort_file (0x1f61630)
[2024-03-11 10:05:31.877102884]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel medfilt2d (0x1f61770)
[2024-03-11 10:05:31.877723934]POCL: in fn POclCreateKernel at line 138: | GENERAL | Created Kernel medfilt2d (0x1f61e00)
[2024-03-11 10:05:31.878064028]POCL: in fn POclSetKernelArg at line 107: | GENERAL | Kernel me
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
We already had this warning message:

[2024-03-10 14:26:18.189651850]POCL: in fn void appendToProgramBuildLog(cl_program, unsigned int, std::string&) at line 111: | ERROR |
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:861:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:893:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:933:16: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:1266:26: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
[2024-03-10 14:26:18.195708864]POCL: in fn llvm::Module* getKernelLibrary(cl_device_id, PoclLLVMContextData*) at line 992: | LLVM | Using /lib/x86_64-linux-gnu/../../share/pocl/kernel-x86_64-pc-linux-gnu-sse41.bc as the built-in lib.
[2024-03-10 14:26:20.314065808]POCL: in fn int pocl_llvm_build_program(cl_program, unsigned int, cl_uint, _cl_program* const*, const char**, int) at line 756: | LLVM | Writing program.bc to /home/picca/.cache/pocl/kcache/LK/MONFDAKCFIMDEBOPEIHEOILBLCLBMGGNLPDID/program.bc.
/usr/lib/python3/dist-packages/pyopencl/cache.py:417: CompilerWarning: From-source build succeeded, but resulted in non-empty logs:
Build on succeeded, but said:
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:861:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:893:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:933:16: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_msXjLw.cl:1266:26: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
  prg.build(options_bytes, [devices[i] for i in to_be_built_indices])
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
Here is a log with POCL_DEBUG=all:

picca@cush:/tmp$ python3 test.py
[2024-03-10 14:22:19.462191847]POCL: in fn pocl_install_sigfpe_handler at line 265: | GENERAL | Installing SIGFPE handler...
[2024-03-10 14:22:19.475550217]POCL: in fn POclCreateCommandQueue at line 103: | GENERAL | Created Command Queue 3 (0x27d55b0) on device 0
[2024-03-10 14:22:19.475690904]POCL: in fn void pocl_llvm_create_context(cl_context) at line 592: | LLVM | Created context 2 (0x27d4960)
[2024-03-10 14:22:19.475732695]POCL: in fn POclCreateContext at line 232: | GENERAL | Created Context 2 (0x27d4960)
[2024-03-10 14:22:19.475822461]POCL: in fn POclRetainContext at line 32: | REFCOUNTS | Retain Context 2 (0x27d4960), Refcount: 2
[2024-03-10 14:22:19.475856682]POCL: in fn POclCreateCommandQueue at line 103: | GENERAL | Created Command Queue 4 (0x27d77b0) on device 0
[2024-03-10 14:22:19.492655607]POCL: in fn POclRetainContext at line 32: | REFCOUNTS | Retain Context 2 (0x27d4960), Refcount: 3
[2024-03-10 14:22:19.492795776]POCL: in fn compile_and_link_program at line 718: | LLVM | building program with options -I /usr/lib/python3/dist-packages/pyopencl/cl
[2024-03-10 14:22:19.492824004]POCL: in fn compile_and_link_program at line 755: | LLVM | building program for 1 devs with options -I /usr/lib/python3/dist-packages/pyopencl/cl
[2024-03-10 14:22:19.492847621]POCL: in fn compile_and_link_program at line 759: | LLVM | BUILDING for device: cpu
[2024-03-10 14:22:19.497354940]POCL: in fn POclRetainContext at line 32: | REFCOUNTS | Retain Context 2 (0x27d4960), Refcount: 4
[2024-03-10 14:22:19.497919687]POCL: in fn POclRetainContext at line 32: | REFCOUNTS | Retain Context 2 (0x27d4960), Refcount: 5
[2024-03-10 14:22:19.497963205]POCL: in fn POclCreateBuffer at line 292: |MEMORY | Created Buffer 6 (0x2801b90), MEM_HOST_PTR: (nil), device_ptrs[0]: (nil), SIZE 4, FLAGS 1
[2024-03-10 14:22:19.498110078]POCL: in fn pocl_driver_alloc_mem_obj at line 428: |MEMORY | Basic device ALLOC 0x27f7380 / size 4
[2024-03-10 14:22:19.498162446]POCL: in fn POclRetainCommandQueue at line 35: | REFCOUNTS | Retain Command Queue 4 (0x27d77b0), Refcount: 2
[2024-03-10 14:22:19.498187844]POCL: in fn pocl_create_event at line 527: |EVENTS | Created event 1 (0x27e4e60) Command write_buffer
[2024-03-10 14:22:19.498211789]POCL: in fn pocl_create_command_struct at line 648: |EVENTS | event pointer provided
[2024-03-10 14:22:19.498232543]POCL: in fn pocl_create_command_struct at line 668: |EVENTS | Created immediate command struct: CMD 0x27e4d50 (event 1 / 0x27e4e60, type: write_buffer)
[2024-03-10 14:22:19.498259772]POCL: in fn pocl_command_enqueue at line 1290: |EVENTS | In-order Q; adding event syncs
[2024-03-10 14:22:19.498280767]POCL: in fn pocl_command_enqueue at line 1335: |EVENTS | Pushed Event 1 to CQ 4.
[2024-03-10 14:22:19.498303076]POCL: in fn pocl_update_event_queued at line 2177: |EVENTS | Event queued: 1
[2024-03-10 14:22:19.498326609]POCL: in fn pocl_update_event_submitted at line 2197: |EVENTS | Event submitted: 1
[2024-03-10 14:22:19.498451579]POCL: in fn pocl_update_event_running_unlocked at line 2216: |EVENTS | Event running: 1
[2024-03-10 14:22:19.498484119]POCL: in fn pocl_update_event_finished at line 2368: |EVENTS | cpu: Command complete, event 1
[2024-03-10 14:22:19.498509038]POCL: in fn pocl_exec_command at line 343: |TIMING | >>>32.497 us Event Write Buffer
[2024-03-10 14:22:19.498531904]POCL: in fn POclReleaseMemObject at line 53: | REFCOUNTS | Release Memory Object 6 (0x2801b90), Refcount: 1
[2024-03-10 14:22:19.498562333]POCL: in fn POclReleaseEvent at line 39: | REFCOUNTS | Release Event 1 (0x27e4e60), Refcount: 2
[2024-03-10 14:22:19.498656679]POCL: in fn POclCreateKernel at line 133: | GENERAL | Created Kernel check_atomic32 (0x27f74c0)
[2024-03-10 14:22:19.501056049]POCL: in fn POclRetainContext at line 32: | REFCOUNTS | Retain Context 2 (0x27d4960), Refcount: 6
[2024-03-10 14:22:19.501139297]POCL: in fn POclReleaseContext at line 53: | REFCOUNTS | Release Context 2 (0x27d4960), Refcount: 5
[2024-03-10 14:22:19.503196833]POCL: in fn POclSetKernelArg at line 107: | GENERAL | Kernel check_atomic32 || SetArg idx 0 || int* || Local 0 || Size 8 || Value 0x7fff7b47ae20 || Pointer 0x2801b90 || *(uint32*)Value:0 || *(uint64*)Value:0 || Hex Value: 901B8002
[2024-03-10 14:22:19.503275428]POCL: in fn pocl_kernel_calc_wg_size at line 182: | GENERAL | Preparing kernel check_atomic32 with local size 32 x 1 x 1 group sizes 32 x 1 x 1...
[2024-03-10 14:22:19.503311773]POCL: in fn POclRetainCommandQueue at line 35: | REFCOUNTS | Retain Command Queue 4 (0x27d77b0), Refcount: 3
[2024-03-10 14:22:19.503350256]POCL: in fn pocl_create_event at line 527: |EVENTS | Created event 2 (0x27f8c
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
It seems that there is an error here:

[2024-03-10 14:22:19.550588408]POCL: in fn int pocl_llvm_build_program(cl_program, unsigned int, cl_uint, _cl_program* const*, const char**, int) at line 420: | LLVM | all build options: -Dcl_khr_int64 -DPOCL_DEVICE_ADDRESS_BITS=64 -D__USE_CLANG_OPENCL_C_H -xcl -Dinline= -I. -cl-kernel-arg-info -opaque-pointers -D NIMAGE=1 -I /usr/lib/python3/dist-packages/pyopencl/cl -D__ENDIAN_LITTLE__=1 -D__IMAGE_SUPPORT__=1 -DCL_DEVICE_MAX_GLOBAL_VARIABLE_SIZE=64000 -D__OPENCL_VERSION__=300 -cl-std=CL3.0 -D__OPENCL_C_VERSION__=300 -Dcl_khr_byte_addressable_store=1 -Dcl_khr_global_int32_base_atomics=1 -Dcl_khr_global_int32_extended_atomics=1 -Dcl_khr_local_int32_base_atomics=1 -Dcl_khr_local_int32_extended_atomics=1 -Dcl_khr_3d_image_writes=1 -Dcl_khr_command_buffer=1 -Dcl_pocl_pinned_buffers=1 -Dcl_khr_subgroups=1 -Dcl_intel_unified_shared_memory=1 -Dcl_khr_subgroup_ballot=1 -Dcl_khr_subgroup_shuffle=1 -Dcl_intel_subgroups=1 -Dcl_intel_required_subgroup_size=1 -Dcl_ext_float_atomics=1 -Dcl_khr_spir=1 -Dcl_khr_fp64=1 -Dcl_khr_int64_base_atomics=1 -Dcl_khr_int64_extended_atomics=1 -D__opencl_c_3d_image_writes=1 -D__opencl_c_images=1 -D__opencl_c_atomic_order_acq_rel=1 -D__opencl_c_atomic_order_seq_cst=1 -D__opencl_c_atomic_scope_device=1 -D__opencl_c_program_scope_global_variables=1 -D__opencl_c_generic_address_space=1 -D__opencl_c_subgroups=1 -D__opencl_c_atomic_scope_all_devices=1 -D__opencl_c_read_write_images=1 -D__opencl_c_fp64=1 -D__opencl_c_ext_fp32_global_atomic_add=1 -D__opencl_c_ext_fp32_local_atomic_add=1 -D__opencl_c_ext_fp32_global_atomic_min_max=1 -D__opencl_c_ext_fp32_local_atomic_min_max=1 -D__opencl_c_ext_fp64_global_atomic_add=1 -D__opencl_c_ext_fp64_local_atomic_add=1 -D__opencl_c_ext_fp64_global_atomic_min_max=1 -D__opencl_c_ext_fp64_local_atomic_min_max=1 -D__opencl_c_int64=1
-cl-ext=-all,+cl_khr_byte_addressable_store,+cl_khr_global_int32_base_atomics,+cl_khr_global_int32_extended_atomics,+cl_khr_local_int32_base_atomics,+cl_khr_local_int32_extended_atomics,+cl_khr_3d_image_writes,+cl_khr_command_buffer,+cl_pocl_pinned_buffers,+cl_khr_subgroups,+cl_intel_unified_shared_memory,+cl_khr_subgroup_ballot,+cl_khr_subgroup_shuffle,+cl_intel_subgroups,+cl_intel_required_subgroup_size,+cl_ext_float_atomics,+cl_khr_spir,+cl_khr_fp64,+cl_khr_int64_base_atomics,+cl_khr_int64_extended_atomics,+__opencl_c_3d_image_writes,+__opencl_c_images,+__opencl_c_atomic_order_acq_rel,+__opencl_c_atomic_order_seq_cst,+__opencl_c_atomic_scope_device,+__opencl_c_program_scope_global_variables,+__opencl_c_generic_address_space,+__opencl_c_subgroups,+__opencl_c_atomic_scope_all_devices,+__opencl_c_read_write_images,+__opencl_c_fp64,+__opencl_c_ext_fp32_global_atomic_add,+__opencl_c_ext_fp32_local_atomic_add,+__opencl_c_ext_fp32_global_atomic_min_max,+__opencl_c_ext_fp32_local_atomic_min_max,+__opencl_c_ext_fp64_global_atomic_add,+__opencl_c_ext_fp64_local_atomic_add,+__opencl_c_ext_fp64_global_atomic_min_max,+__opencl_c_ext_fp64_local_atomic_min_max,+__opencl_c_int64 -fno-builtin -triple=x86_64-pc-linux-gnu -target-cpu penryn 4 warnings generated. 
[2024-03-10 14:22:20.986369997]POCL: in fn void appendToProgramBuildLog(cl_program, unsigned int, std::string&) at line 111: | ERROR |
warning: /home/picca/.cache/pocl/kcache/tempfile_NcEztR.cl:861:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_NcEztR.cl:893:14: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_NcEztR.cl:933:16: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
warning: /home/picca/.cache/pocl/kcache/tempfile_NcEztR.cl:1266:26: AVX vector argument of type '__private float8' (vector of 8 'float' values) without 'avx' enabled changes the ABI
[2024-03-10 14:22:20.992890946]POCL: in fn llvm::Module* getKernelLibrary(cl_device_id, PoclLLVMContextData*) at line 992: | LLVM | Using /lib/x86_64-linux-gnu/../../share/pocl/kernel-x86_64-pc-linux-gnu-sse41.bc as the built-in lib.
[2024-03-10 14:22:23.151001890]POCL: in fn int pocl_llvm_build_program(cl_program, unsigned int, cl_uint, _cl_program* const*, const char**, int) at line 756: | LLVM | Writing program.bc to /home/picca/.cache/pocl/kcache/OO/KDMNEJOLAKKIBKBOIDNJJPAEHMJELJCBLMGBG/program.bc.
/usr/lib/python3/dist-packages/pyopencl/cache.py:417: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
  prg.build(options_bytes, [devices[i] for i in to_be_built_indices])

Let's export PYOPENCL_COMPILER_OUTPUT and retry.
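Following the hint in the CompilerWarning, the variable can be exported in the shell or set from Python before the build call; a trivial sketch:

```python
import os

# pyopencl prints the full build log (instead of the short
# CompilerWarning) when PYOPENCL_COMPILER_OUTPUT is set; it must be in
# the environment before prg.build() runs.
os.environ["PYOPENCL_COMPILER_OUTPUT"] = "1"
```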
Bug#1060318: Info received (silx: autopkgtest failure with Python 3.12)
Here is a small script which triggers the error (as quoted, the array was too small to reshape; a 100x100 image is used here so the script actually runs up to the OpenCL call):

from silx.image import medianfilter
import numpy

IMG = numpy.arange(10000.0).reshape(100, 100)
KERNEL = (1, 1)
res = medianfilter.medfilt2d(
    image=IMG,
    kernel_size=KERNEL,
    engine="opencl",
)
Bug#1060318: silx: autopkgtest failure with Python 3.12
In order to reproduce the bug, install python3-silx 2.0.0+dfsg-1, python3-pytest-xvfb and pocl-opencl-icd, then:

$ pytest --pyargs silx.image.test.test_medianfilter -v
=== test session starts ===
platform linux -- Python 3.11.8, pytest-8.0.2, pluggy-1.4.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/picca/debian/science-team/pyvkfft
plugins: anyio-4.2.0, dials-data-2.4.0, xvfb-3.0.0
collected 2 items
::TestMedianFilterEngines::testCppMedFilt2d PASSED [ 50%]
::TestMedianFilterEngines::testOpenCLMedFilt2d Abandon (Aborted)

The OpenCL test fails.
Bug#1060318: silx: autopkgtest failure with Python 3.12
With the silx 2.0.0 version the failure is located in the OpenCL part. Here is how to obtain the backtrace when running the median filter.

Build the package in the chroot and enter it once built:

$ dgit --gbp sbuild --finished-build-commands '%SBUILD_SHELL'

Then run this command to obtain the backtrace:

$ DEBUGINFOD_URLS="https://debuginfod.debian.net" PYTHONPATH=. gdb --args python3.11 -m pytest --pyargs silx silx/image/test/test_medianfilter.py

Here is the backtrace:

Thread 1 "python3.11" received signal SIGABRT, Aborted.
0x77d3516c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0  0x77d3516c in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x77ce7472 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x77cd14b2 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x77cd13d5 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x77ce03a2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#5  0x7344770d in pocl::Kernel::createParallelRegionBefore (this=, B=0x8980180) at ../llvmopencl/./lib/llvmopencl/Kernel.cc:129
#6  pocl::Kernel::getParallelRegions (this=, LI=..., ParallelRegions=0x7fff4860) at ../llvmopencl/./lib/llvmopencl/Kernel.cc:193
#7  0x7346ab82 in pocl::WorkitemLoopsImpl::processFunction (this=this@entry=0x7fff47c0, F=...) at ../llvmopencl/./lib/llvmopencl/WorkitemLoops.cc:445
#8  0x7346cb8d in pocl::WorkitemLoopsImpl::runOnFunction (this=0x7fff47c0, F=...) at ../llvmopencl/./lib/llvmopencl/WorkitemLoops.cc:183
#9  0x7346ecac in pocl::WorkitemLoops::run (this=, F=..., AM=...) at ../llvmopencl/./lib/llvmopencl/WorkitemLoops.cc:1490
#10 0x7346ede5 in llvm::detail::PassModel>::run(llvm::Function&, llvm::AnalysisManager&) (this=, IR=..., AM=...) at /usr/lib/llvm-16/include/llvm/IR/PassManagerInternal.h:89
#11 0x7fffe91d7579 in run () at llvm/include/llvm/IR/PassManager.h:517
#12 0x7fffeaeedb01 in llvm::detail::PassModel>, llvm::PreservedAnalyses, llvm::AnalysisManager>::run(llvm::Function&, llvm::AnalysisManager&) () at llvm/include/llvm/IR/PassManagerInternal.h:89
#13 0x7fffe91dade6 in run () at build-llvm/tools/clang/stage2-bins/llvm/lib/IR/PassManager.cpp:124
#14 0x7fffeaeed921 in llvm::detail::PassModel>::run(llvm::Module&, llvm::AnalysisManager&) () at llvm/include/llvm/IR/PassManagerInternal.h:89
#15 0x7348cd85 in llvm::PassManager>::run(llvm::Module&, llvm::AnalysisManager&) (AM=..., IR=..., this=0x7fff50b8) at /usr/include/c++/13/bits/unique_ptr.h:199
#16 PoCLModulePassManager::run (this=0x7fff4f98, Bitcode=...) at ./lib/CL/pocl_llvm_wg.cc:322
#17 0x73494b14 in TwoStagePoCLModulePassManager::run (Bitcode=..., this=0x7fff4e10) at ./lib/CL/pocl_llvm_wg.cc:386
#18 runKernelCompilerPasses (Device=Device@entry=0x151f050, Mod=...) at ./lib/CL/pocl_llvm_wg.cc:727
#19 0x73496302 in pocl_llvm_run_pocl_passes(llvm::Module*, _cl_command_run*, llvm::LLVMContext*, PoclLLVMContextData*, _cl_kernel*, _cl_device_id*, int) [clone .isra.0] (Bitcode=Bitcode@entry=0x16d6ef0, RunCommand=RunCommand@entry=0x7fffbd40, PoclCtx=PoclCtx@entry=0x157e340, Kernel=Kernel@entry=0x7fffbcb0, Device=Device@entry=0x151f050, Specialize=Specialize@entry=0, LLVMContext=) at ./lib/CL/pocl_llvm_wg.cc:1101
#20 0x7348ff32 in pocl_llvm_generate_workgroup_function_nowrite (DeviceI=DeviceI@entry=0, Device=Device@entry=0x151f050, Kernel=Kernel@entry=0x7fffbcb0, Command=Command@entry=0x7fffbd40, Output=Output@entry=0x7fff6548, Specialize=Specialize@entry=0) at ./lib/CL/pocl_llvm_wg.cc:1147
#21 0x73424b2f in llvm_codegen (output=output@entry=0x30d19f0 "/sbuild-nonexistent/.cache/pocl/kcache/BC/KCELIMKPIAEADDLPJHGMOMPOPMNFLCMCBIOCK/medfilt2d/0-0-0/medfilt2d.so", device_i=device_i@entry=0, kernel=kernel@entry=0x7fffbcb0, device=0x151f050, command=command@entry=0x7fffbd40, specialize=specialize@entry=0) at ./lib/CL/devices/common.c:137
#22 0x7342778e in pocl_check_kernel_disk_cache (command=command@entry=0x7fffbd40, specialized=specialized@entry=0) at ./lib/CL/devices/common.c:983
#23 0x73427e7a in pocl_check_kernel_dlhandle_cache (command=command@entry=0x7fffbd40, retain=retain@entry=0, specialize=specialize@entry=0) at ./lib/CL/devices/common.c:1108
#24 0x7fffe477fc3d in pocl_basic_compile_kernel (cmd=0x7fffbd40, kernel=0x7fffbcb0, device=, specialize=0) at ./lib/CL/devices/basic/basic.c:682
#25 0x7342c71f in pocl_driver_build_poclbinary (program=0x15a6170, device_i=) at ./lib/CL/devices/common_driver.c:969
#26 0x733f291e in get_binary_sizes (sizes=, program=) at ./lib/CL/clGetProgramInfo.c:54
#27 POclGetProgramInfo (program=0x15a6170, param_name=, param_value_size=, param_value=0x15116f0, param_value_size_ret=0x7fffbf70) at ./lib/CL/clGetProgramInfo.c:143
#28 0x736a46ae in pyopencl::program::get_info (this=0x
Bug#1041803: [Debian-pan-maintainers] Bug#1041803: hyperspy: FTBFS test_image fails
Neither the old nor the new hyperspy is compatible with imageio > 0.28. I have opened a bug report about the situation on the upstream git repository.
Bug#1026864: dmrgpp: flaky autopkgtest on amd64: times out
And here is a comment about this issue: https://github.com/g1257/dmrgpp/issues/38#issuecomment-1655740289
Bug#1041443: [Debian-pan-maintainers] Bug#1041443: pyfai_2023.5.0+dfsg1-3_all-buildd.changes REJECTED
> I am just the messenger here, if you disagree, please feel free to
> contact ftpmasters or lintian maintainers.

This was not a rant; I just wanted to understand what is going on :).

> Your package has been built successfully on (some) buildds, but then the
> binaries upload got rejected by dak, that's why they are still in
> "Uploaded" state. Overall it's just like if pyfai hasn't been built or
> fails to build from source.

So until a new upstream source is available with another timestamp, it will not be uploadable. In your opinion, should we discuss this on debian-devel?

Cheers, Fred
Bug#1041443: [Debian-pan-maintainers] Bug#1041443: pyfai_2023.5.0+dfsg1-3_all-buildd.changes REJECTED
I just checked: this date is in the upstream tar file https://files.pythonhosted.org/packages/54/84/ea12e176489b35c4610625ce56aa2a1d91ab235b0caa71846317bfd1192f/pyfai-2023.5.0.tar.gz
Bug#1041443: [Debian-pan-maintainers] Bug#1041443: pyfai_2023.5.0+dfsg1-3_all-buildd.changes REJECTED
OK, it seems that I generated an orig.tar.gz with this timestamp (Thu Jan 1 00:00:00 1970). I cannot remember which tool I used to generate this file: gbp import-orig --uscan or deb-new-upstream. Nevertheless, why is it a serious bug? Thanks, Frederic
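For future uploads, one way to spot such epoch timestamps before building is to scan the member mtimes of the orig tarball. A minimal sketch, independent of whichever tool produced the tarball (the path in the comment is a placeholder):

```python
import tarfile

def epoch_members(path):
    """Return the names of tar members whose mtime is the Unix epoch (0)."""
    with tarfile.open(path) as tar:
        return [m.name for m in tar.getmembers() if m.mtime == 0]

# Example usage (placeholder path):
# print(epoch_members("../pyfai_2023.5.0+dfsg1.orig.tar.gz"))
```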
Bug#1024859: change in the extension importation with 3.11
There is a fix from upstream around enum:
https://github.com/boostorg/python/commit/a218babc8daee904a83f550fb66e5cb3f1cb3013

    Fix enum_type_object type on Python 3.11

    The enum_type_object type inherits from PyLong_Type which is not tracked
    by the GC. Instances doesn't have to be tracked by the GC: remove the
    Py_TPFLAGS_HAVE_GC flag.

    The Python C API documentation says: "To create a container type, the
    tp_flags field of the type object must include the Py_TPFLAGS_HAVE_GC
    and provide an implementation of the tp_traverse handler."
    https://docs.python.org/dev/c-api/gcsupport.html

    The new exception was introduced in Python 3.11 by: python/cpython#88429

Any opinion?
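The GC rationale in the commit message can be seen from plain Python: int (PyLong) instances cannot participate in reference cycles, so CPython does not track them in the cyclic garbage collector, while containers are tracked. This is only an illustration of the rationale, not part of the boost.python fix itself:

```python
import gc

# PyLong instances cannot form reference cycles, so the cycle collector
# never needs to visit them.
print(gc.is_tracked(10**100))    # ints are not GC-tracked
print(gc.is_tracked([1, 2, 3]))  # containers are GC-tracked
```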
Bug#1024859: change in the extension importation with 3.11
In order to debug this, I started gdb, set a breakpoint in init_module_scitbx_linalg_ext, then a catch throw, and I ended up with this backtrace:

Catchpoint 2 (exception thrown), 0x770a90a1 in __cxxabiv1::__cxa_throw (obj=0xb542e0, tinfo=0x772d8200 , dest=0x772c1290 ) at ../../../../src/libstdc++-v3/libsupc++/eh_throw.cc:81
81 ../../../../src/libstdc++-v3/libsupc++/eh_throw.cc: Directory not empty.
(gdb) bt
#0  0x770a90a1 in __cxxabiv1::__cxa_throw (obj=0xb542e0, tinfo=0x772d8200 , dest=0x772c1290 ) at ../../../../src/libstdc++-v3/libsupc++/eh_throw.cc:81
#1  0x772ad089 in boost::python::throw_error_already_set () at libs/python/src/errors.cpp:61
#2  0x772b6f05 in boost::python::objects::(anonymous namespace)::new_enum_type (doc=0x0, name=0x7743ddf9 "bidiagonal_matrix_kind") at libs/python/src/object/enum.cpp:169
#3  boost::python::objects::enum_base::enum_base (this=this@entry=0x7fffcee0, name=name@entry=0x7743ddf9 "bidiagonal_matrix_kind", to_python=to_python@entry=0x7741f720 ::to_python(void const*)>, convertible=convertible@entry=0x77422e50 ::convertible_from_python(_object*)>, construct=construct@entry=0x7741fb60 ::construct(_object*, boost::python::converter::rvalue_from_python_stage1_data*)>, id=..., doc=0x0) at libs/python/src/object/enum.cpp:204
#4  0x774203cb in boost::python::enum_::enum_ (this=0x7fffcee0, name=0x7743ddf9 "bidiagonal_matrix_kind", doc=0x0) at /usr/include/boost/python/enum.hpp:45
#5  0x77428330 in scitbx::matrix::boost_python::bidiagonal_matrix_svd_decomposition_wrapper::wrap (name=name@entry=0x7743dbd0 "svd_decomposition_of_bidiagonal_matrix") at ./scitbx/linalg/boost_python/svd.cpp:19
#6  0x7741f6b0 in scitbx::matrix::boost_python::wrap_svd () at ./scitbx/linalg/boost_python/svd.cpp:66
#7  0x773f8aa3 in scitbx::matrix::boost_python::(anonymous namespace)::init_module () at ./scitbx/linalg/boost_python/linalg_ext.cpp:19
#8  0x772c13e3 in boost::function0::operator() (this=0x7fffd2b0) at ./boost/function/function_template.hpp:763
#9  boost::python::handle_exception_impl (f=...) at libs/python/src/errors.cpp:25
#10 0x772c1b69 in boost::python::handle_exception (f=) at ./boost/function/function_template.hpp:635
#11 boost::python::detail::(anonymous namespace)::init_module_in_scope (init_function=0x773f8ac0 , m=) at libs/python/src/module.cpp:24
#12 boost::python::detail::init_module (moduledef=..., init_function=0x773f8ac0 ) at libs/python/src/module.cpp:43

Not crystal clear to me :)
Bug#1013158: facet-analyser: vtk[6,7] removal
Hello Anton, I have just pushed a few dependencies into the -dev package in the salsa repo. I did not update the changelog. Cheers, Fred
Bug#1013158: facet-analyser: vtk[6,7] removal
Hello Anton, I tried to check out paraview in order to add the -dev dependencies, but I got this message (output translated from French):

$ git clone https://salsa.debian.org/science-team/paraview
Cloning into 'paraview'...
remote: Enumerating objects: 175624, done.
remote: Counting objects: 100% (78929/78929), done.
remote: Compressing objects: 100% (38687/38687), done.
remote: Total 175624 (delta 47039), reused 65625 (delta 39190), pack-reused 96695
Receiving objects: 100% (175624/175624), 246.21 MiB | 12.11 MiB/s, done.
Resolving deltas: 100% (109096/109096), done.
[attr]our-c-style whitespace=tab-in-indent,-blank-at-eol format.clang-format is not allowed: ThirdParty/QtTesting/vtkqttesting/.gitattributes:8
[attr]our-c-style whitespace=tab-in-indent,-blank-at-eol format.clang-format=9 is not allowed: ThirdParty/catalyst/vtkcatalyst/catalyst/.gitattributes:4
[attr]our-c-style whitespace=tab-in-indent,-blank-at-eol format.clang-format=8 is not allowed: VTK/.gitattributes:10
[attr]our-c-style whitespace=tab-in-indent format.clang-format=9 is not allowed: VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/.gitattributes:2
Updating files: 100% (30828/30828), done.
[attr]our-c-style whitespace=tab-in-indent,-blank-at-eol format.clang-format=8 is not allowed: VTK/.gitattributes:10
[attr]our-c-style whitespace=tab-in-indent format.clang-format=9 is not allowed: VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/.gitattributes:2
Downloading VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/data/README.md (643 B)
Error downloading object: VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/data/README.md (b30a14a): Smudge error: Error downloading VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/data/README.md (b30a14a308f64c6fc2969e2b959d79dacdc5affda1d1c0e24f8e176304147146): [b30a14a308f64c6fc2969e2b959d79dacdc5affda1d1c0e24f8e176304147146] Object does not exist on the server or you don't have permissions to access it: [404] Object does not exist on the server or you don't have permissions to access it
Errors logged to /home/experiences/instrumentation/picca/debian/science-team/paraview/.git/lfs/logs/20221101T101535.441130442.log
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: VTK/ThirdParty/vtkm/vtkvtkm/vtk-m/data/README.md: smudge filter 'lfs' failed
warning: Clone succeeded, but checkout failed. You can inspect what was checked out with 'git status' and retry with 'git restore --source=HEAD :/'
Bug#1016598: [Debian-pan-maintainers] Bug#1016598: binoculars: vtk[6, 7] removal
Hello François, thanks a lot. I removed the NMU number and released a -2 package (uploaded). Thanks for your contribution to Debian. Fred
Bug#1008119: [Debian-pan-maintainers] Bug#1008119: Bug#1008119: src:pyfai: fails to migrate to testing for too long: autopkgtest regression
It seems that it is failing now: https://ci.debian.net/packages/p/pyfai/ I am on 0.21.2, but I do not know if it solves this mask issue. Cheers Fred
Bug#1003061: [Debian-pan-maintainers] Bug#1003061: bug 1003061: dmrgpp: autopkgtest failure on armhf: segmentation fault
Hello Paul, just for info, I have already reported this issue here https://github.com/g1257/dmrgpp/issues/38 cheers Fred.
Bug#1001168: [Debian-pan-maintainers] Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py
Would it not be better to use the DEB_*_MAINT_APPEND variables in order to deal with this issue?
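The variables in question are the dpkg-buildflags maintainer hooks. A minimal debian/rules sketch; which concrete variable (CFLAGS here) and which flag to append (-fno-lto, picked up from the LTO discussion in this bug) are only illustrations:

```make
#!/usr/bin/make -f
# Sketch: let dpkg-buildflags append a maintainer flag for every build.
# The exact variable (DEB_CFLAGS_MAINT_APPEND vs LDFLAGS, etc.) and the
# flag itself are assumptions for illustration.
export DEB_CFLAGS_MAINT_APPEND = -fno-lto

%:
	dh $@
```

dpkg-buildflags merges the appended flags into the values that dh exports, so the change applies to the whole build without patching upstream.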
Bug#1003037:
It seems that this is an issue in gcc, as observed when compiling tensorflow: https://zenn.dev/nbo/scraps/8f1505e365d961
Bug#1001168: Info received (Bug#1001168: Info received (Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py))
Built with gcc-11 and -fno-lto, it does not work:

(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$ ../../../test.py
Segmentation fault
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$ PYTHONPATH=. ../../../test.py
Segmentation fault
Bug#1001168: Info received (Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py)
I tested matplotlib built with numpy 1.17, 1.19 and 1.21; each time I got the segfault. Another difference was the gcc compiler, so I switched to gcc-10:

(sid_mips64el-dchroot)picca@eller:~/matplotlib$ CC=gcc-10 python3 setup.py build

It failed with this error:

lto1: fatal error: bytecode stream in file 'build/temp.linux-mips64-3.9/matplotlib.backends._backend_agg/extern/agg24-svn/src/agg_bezier_arc.o' generated with LTO version 9.4 instead of the expected 11.2

So I deactivated lto with this:

CFLAGS="-fno-lto" CC=gcc-10 python3 setup.py build

In the end, it seems that it does not segfault :)

(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$ ../../../test.py
Segmentation fault
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$ PYTHONPATH=. ../../../test.py
(sid_mips64el-dchroot)picca@eller:~/matplotlib/build/lib.linux-mips64-3.9$ ls
matplotlib mpl_toolkits pylab.py toto.png

Cheers
Bug#1001168: Info received (Bug#1001168: Info received ())
If I run in the sid chroot, but with the binaries built on bullseye, it works:

(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$ rm toto.png
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$ python3 test.py
(sid_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$ ls
matplotlib mpl_toolkits pylab.py test.py toto.png
Bug#1001168: Info received ()
Here, there are no errors during the build of numpy 1.19.5:

= 10892 passed, 83 skipped, 108 deselected, 19 xfailed, 2 xpassed, 2 warnings in 1658.41s (0:27:38) =

but 109 errors for numpy 1.21...

= 14045 passed, 397 skipped, 1253 deselected, 20 xfailed, 2 xpassed, 2 warnings, 109 errors in 869.47s (0:14:29) =
Bug#1001168: Info received ()
I investigated a bit more; it seems that covers is wrong. In a bullseye chroot it works:

$ python3 ./test.py
(bullseye_mips64el-dchroot)picca@eller:~/matplotlib-3.5.0/build/lib.linux-mips64-3.9$ ls
matplotlib mpl_toolkits pylab.py test.py toto.png

I found that the test started failing between the 3.3.4-2 and 3.3.4-2+b1 rebuilds. This binNMU was about python3.10 support, but at the same time numpy changed from python3-numpy (= 1:1.19.5-1) to python3-numpy (= 1:1.21.4-2). I think there is a non-negligible possibility that this bug is triggered by the new numpy.
Bug#1001168: full python backtrace and print locals
the full python backtrace #8 #14 Frame 0x120debd80, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py, line 2888, in draw (self=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, _supylabel=None, _align_label_groups={'x': , 'y': }, _g...(truncated) #20 Frame 0xfff448f230, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py, line 50, in draw_wrapper (artist=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, _supylabel=None, _align_label_groups={'x': , 'y': , _axes=, _axes=<...>, figure=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, 
_supylabel=None, _align_label_group...(truncated) 'matplotlib.ticker.Locator') #33 Frame 0xfff448f040, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py, line 50, in draw_wrapper (artist=, _axes=, _axes=<...>, figure=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, _supylabel=None, _align_...(truncated) return draw(artist, renderer) #40 Frame 0xfff46217c0, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/axis.py, line 1419, in draw (self=, _axes=, _axes=<...>, figure=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, _supylabel=None, _align_label_grou...(truncated) elif not visible: # something false-like but not None #47 Frame 0xfff461a230, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py, line 50, in draw_wrapper (artist=, _axes=, _axes=<...>, figure=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, 
_mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=None, _snap=None, _sketch=None, _path_effects=[], _sticky_edges=<_XYPair at remote 0xfff51bacc0>, _in_layout=True, _suptitle=None, _supxlabel=None, _supylabel=None, _align_...(truncated) return draw(artist, renderer) #54 Frame 0xfff447a040, for file /home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/image.py, line 388, in _draw_list_compositing_images (artists=[, _axes=, _axes=<...>, figure=, figure=<...>, _transform=None, _transformSet=False, _visible=True, _animated=False, _alpha=None, clipbox=None, _clippath=None, _clipon=True, _label='', _picker=None, _rasterized=False, _agg_filter=None, _mouseover=False, _callbacks=, callbacks={}, _cid_gen=, _func_cid_map={}, _pickled_cids=set()) at remote 0xfff7a3be20>, _remove_method=None, _url=None, _gid=N
Bug#1001168:
Here is the py-bt output:

(gdb) py-bt
Traceback (most recent call first):
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/lines.py", line 2888, in draw
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py", line 50, in draw_wrapper
    return draw(artist, renderer)
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/axis.py", line 555, in draw
    'matplotlib.ticker.Locator')
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py", line 50, in draw_wrapper
    return draw(artist, renderer)
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/axis.py", line 1419, in draw
    elif not visible:  # something false-like but not None
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/artist.py", line 50, in draw_wrapper
    return draw(artist, renderer)
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/image.py", line 388, in _draw_list_compositing_images
    extra_height = (out_height - out_height_base) / out_height_base
  File "/home/picca/matplotlib-3.5.0/build/lib.linux-mips64-3.9/matplotlib/axes/_base.py", line 4106, in draw
Python Exception 'ascii' codec can't decode byte 0xc2 in position 2280: ordinal not in range(128):
Error occurred in Python: 'ascii' codec can't decode byte 0xc2 in position 2280: ordinal not in range(128)
Bug#1001168: Info received (Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py)
I can confirm that the bullseye matplotlib does not produce a segfault
Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py
This small script triggers the segfault:

#!/usr/bin/env python3

import matplotlib
import matplotlib.pyplot as plt

plt.figure()
plt.title("foo")
plt.savefig("toto.png")
Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py
Bug reports are already filed on matplotlib: #1000774 and #1000435. I will try to see if this is identical...
Bug#1001168: hkl: FTBFS on mipsel: FAIL: trajectory.py
Here the backtrace on mips64el #0 agg::pixfmt_alpha_blend_rgba, agg::order_rgba>, agg::row_accessor >::blend_solid_hspan(int, int, unsigned int, agg::rgba8T const&, unsigned char const*) (covers=0x100 , c=..., len=, y=166, x=, this=) at extern/agg24-svn/include/agg_color_rgba.h:395 #1 agg::renderer_base, agg::order_rgba>, agg::row_accessor > >::blend_solid_hspan(int, int, int, agg::rgba8T const&, unsigned char const*) (covers=, c=..., len=, y=166, x=, this=0x12123abf8) at extern/agg24-svn/include/agg_renderer_base.h:294 #2 agg::render_scanline_aa_solid::embedded_scanline, agg::renderer_base, agg::order_rgba>, agg::row_accessor > >, agg::rgba8T >(agg::serialized_scanlines_adaptor_aa::embedded_scanline const&, agg::renderer_base, agg::order_rgba>, agg::row_accessor > >&, agg::rgba8T const&) (color=..., ren=..., sl=...) at extern/agg24-svn/include/agg_renderer_scanline.h:40 #3 agg::renderer_scanline_aa_solid, agg::order_rgba>, agg::row_accessor > > >::render::embedded_scanline>(agg::serialized_scanlines_adaptor_aa::embedded_scanline const&) (sl=..., this=0x12123ac10) at extern/agg24-svn/include/agg_renderer_scanline.h:130 #4 agg::render_scanlines, agg::serialized_scanlines_adaptor_aa::embedded_scanline, agg::renderer_scanline_aa_solid, agg::order_rgba>, agg::row_accessor > > > >(agg::serialized_scanlines_adaptor_aa&, agg::serialized_scanlines_adaptor_aa::embedded_scanline&, agg::renderer_scanline_aa_solid, agg::order_rgba>, agg::row_accessor > > >&) [clone .part.0] [clone .lto_priv.0] (ras=..., sl=..., ren=...) at extern/agg24-svn/include/agg_renderer_scanline.h:446 #5 0x00fff49a367c in agg::render_scanlines, agg::serialized_scanlines_adaptor_aa::embedded_scanline, agg::renderer_scanline_aa_solid, agg::order_rgba>, agg::row_accessor > > > >(agg::serialized_scanlines_adaptor_aa&, agg::serialized_scanlines_adaptor_aa::embedded_scanline&, agg::renderer_scanline_aa_solid, agg::order_rgba>, agg::row_accessor > > >&) (ren=..., sl=..., ras=...) 
at extern/agg24-svn/include/agg_renderer_scanline.h:440 #6 RendererAgg::draw_markers(GCAgg&, py::PathIterator&, agg::trans_affine&, py::PathIterator&, agg::trans_affine&, agg::rgba) (color=..., trans=..., path=..., marker_trans=..., marker_path=..., gc=..., this=0x12123aaa0) at src/_backend_agg.h:658 #7 PyRendererAgg_draw_markers(PyRendererAgg*, _object*) (self=, args=) at src/_backend_agg_wrapper.cpp:285 #8 0x0001202e36a0 in cfunction_call (func=0xfff11f08b0, args=, kwargs=) at ../Objects/methodobject.c:552 #9 0x00012003891c in _PyObject_MakeTpCall (tstate=0x1205a97c0, callable=0xfff11f08b0, args=, nargs=, keywords=0x0) at ../Objects/call.c:191 #10 0x00012002841c in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775814, args=0x121281590, callable=0xfff11f08b0, tstate=) at ../Include/cpython/abstract.h:116 #11 _PyObject_VectorcallTstate (kwnames=0x0, nargsf=9223372036854775814, args=0x121281590, callable=0xfff11f08b0, tstate=) at ../Include/cpython/abstract.h:103 #12 PyObject_Vectorcall (kwnames=0x0, nargsf=9223372036854775814, args=0x121281590, callable=0xfff11f08b0) at ../Include/cpython/abstract.h:127 #13 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at ../Python/ceval.c:5075 #14 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at ../Python/ceval.c:3487 #15 0x00012001d898 in _PyEval_EvalFrame (throwflag=0, f=0x121281330, tstate=0x1205a97c0) at ../Include/internal/pycore_ceval.h:40 #16 function_code_fastcall (tstate=0x1205a97c0, co=, args=0xfff120cd78, nargs=2, globals=) at ../Objects/call.c:330 #17 0x00012002535c in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, args=0xfff120cd68, callable=0xfff520a310, tstate=0x1205a97c0) at ../Include/cpython/abstract.h:118 #18 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0xfff120cd68, callable=0xfff520a310) at ../Include/cpython/abstract.h:127 #19 call_function (kwnames=0x0, oparg=, pp_stack=, tstate=) at ../Python/ceval.c:5075 #20 _PyEval_EvalFrameDefault (tstate=, f=, throwflag=) at 
../Python/ceval.c:3518 #21 0x000120107de0 in _PyEval_EvalFrame (throwflag=0, f=0xfff120cbe0, tstate=0x1205a97c0) at ../Include/internal/pycore_ceval.h:40 #22 _PyEval_EvalCode (tstate=0x1205a97c0, _co=0xfff5231500, globals=, locals=, args=, argcount=2, kwnames=0x0, kwargs=0xfff1069720, kwcount=0, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0xfff51fafa0, name=0xfff7953bb0, qualname=0xfff5264730) at ../Python/ceval.c:4327 #23 0x000120039e20 in _PyFunction_Vectorcall (func=, stack=, nargsf=, kwnames=) at ../Objects/call.c:396 #24 0x0001200262dc in _PyObject_VectorcallTstate (kwnames=0x0, nargsf=, args=0xfff1069710, callable=0xfff520a3a0, tstate=0x1205a97c0) at ../Include/cpython/abstract.h:118 #25 PyObject_Vectorcall (kwnames=0x0, nargsf=, args=0xfff1069710, callable=0xfff520a3a0) at ../Include/cpython/abstract.h:127 #26 call_function (kwnames=0x0, oparg=, p
Bug#976735: spyder needs source upload for Python 3.9 transition
> Strangely enough, I've already done that ;-)

My bad. Cheers Fred
Bug#976735: spyder needs source upload for Python 3.9 transition
> I have a package of Spyder 4 waiting to upload, but it requires five
> packages to be accepted into unstable from NEW first (pyls-server,
> pyls-black, pyls-spyder, abydos, textdistance); once that happens, the
> rest of the packages are almost ready to go.

Maybe you can contact the ftpmaster team and request a review of these packages, in order to avoid the spyder removal. Cheers
Bug#976952: [Help] Re: lmfit-py: FTBFS on ppc64el (arch:all-only src pkg): dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13
Ok, in that case, I think that a comment in the d/rules file is enough to keep in mind that we have this issue on ppc64el.
Bug#976952: [Help] Re: lmfit-py: FTBFS on ppc64el (arch:all-only src pkg): dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13
> Well, the test is obviously broken and upstream currently can't be bothered
> to fix it on non-x86 targets. He will certainly have to do it at some point
> given that ARM64 is replacing more and more x86_64 systems, but I wouldn't
> bother, personally.

So what is the best solution in order to have lmfit-py in bullseye?
Bug#976952: [Help] Re: lmfit-py: FTBFS on ppc64el (arch:all-only src pkg): dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13
> Yes, good catch. The spec file for the openSUSE package has this [1]:

So it does not fit with our policy: do not hide problems ;) The problem is that I do not have enough time to investigate on a porter box...
Bug#976952: [Help] Re: lmfit-py: FTBFS on ppc64el (arch:all-only src pkg): dh_auto_test: error: pybuild --test --test-pytest -i python{version} -p 3.9 returned exit code 13
Hello, looking at the openSUSE log, I can find this:

[ 93s] + pytest-3.8 --ignore=_build.python2 --ignore=_build.python3 --ignore=_build.pypy3 -v -k 'not speed and not (test_model_nan_policy or test_shgo_scipy_vs_lmfit_2)'
[ 97s] = test session starts ==

Does it mean that the test failing on Debian is skipped during the build on OBS?

=== FAILURES ===
_ TestUserDefiniedModel.test_model_nan_policy __

cheers Frederic
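If we did decide to mirror the openSUSE deselection (policy question aside), a sketch for d/rules using pybuild's documented PYBUILD_TEST_ARGS variable; the -k expression copies the OBS one:

```make
# Sketch: deselect the failing tests during the build, as openSUSE does.
# Whether hiding the failure is acceptable is exactly the open question
# in this bug.
export PYBUILD_TEST_ARGS = -k "not (test_model_nan_policy or test_shgo_scipy_vs_lmfit_2)"
```

pybuild passes these arguments to pytest for each supported Python version.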
Bug#936609: cblas / gsl hint needed (Was: Bug#936609: Ported ghmm to Python3 but issues with clapack)
Hello Andreas, I just built ghmm by removing --with-gsl. It seems that the gsl implementation of blas conflicts with the one provided in atlas, so --enable-gsl + --enable-atlas seems wrong...

+--+
| Summary |
+--+
Build Architecture: amd64
Build Type: binary
Build-Space: 39004
Build-Time: 31
Distribution: UNRELEASED
Host Architecture: amd64
Install-Time: 14
Job: /tmp/ghmm_0.9~rc3-3.dsc
Machine Architecture: amd64
Package: ghmm
Package-Time: 49
Source-Version: 0.9~rc3-3
Space: 39004
Status: successful
Version: 0.9~rc3-3
Bug#957430: closing 957430
close 957430 6.5.1-3
thanks
Bug#950094:
I tried a new build and ended up with this error:

gpgv: unknown type of key resource 'trustedkeys.kbx'
gpgv: keyblock resource '/tmp/dpkg-verify-sig.Wwlhs1jL/trustedkeys.kbx': General error
gpgv: Signature made Mon Dec 16 20:17:19 2019 UTC
gpgv: using RSA key E8FC295C86B8D7C049F97BA7A35DAFFBAD29E8DE
gpgv: Can't check signature: No public key
dpkg-source: warning: failed to verify signature on ./ipywidgets_6.0.0-6.dsc
dpkg-source: info: extracting ipywidgets in /<>
dpkg-source: info: unpacking ipywidgets_6.0.0.orig.tar.gz
dpkg-source: info: unpacking ipywidgets_6.0.0-6.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying 0001-Unconditionally-import-setuptools-to-pick-up-depende.patch
dpkg-source: info: applying 0002-Don-t-build-extension.js-in-widgetsnbextension-setup.patch
dpkg-source: info: applying 0003-Use-local-MathJax.patch
dpkg-source: info: applying 0004-Tweak-package.json-so-the-upstream-build-works-in-De.patch
dpkg-source: info: applying 0005-Import-specific-jupyterlab-service-types-so-we-only-.patch
dpkg-source: info: applying 0006-tsconfig-es2015-iterable.patch
dpkg-source: error: pathname '/<>/debian/fakewebpack-unpacked/html2canvas' points outside source root (to '/usr/lib/nodejs/html2canvas')
E: FAILED [dpkg-source died]
Bug#945467: Fix available (fwd)
You can also look at the CI, now that it works :) https://salsa.debian.org/science-team/veusz/pipelines/137494 Cheers Frederic
Bug#954352: pymca: Missing dependency "python3-scipy", probably in "python3-silx"
A workaround for now is to install it by hand:

apt install python3-scipy

reassign -1 silx
thanks
Bug#936924: libsvm: Python2 removal in sid/bullseye - reopen 936924
Hello, if it is like my ufo-core package, this could be due to a script file with a shebang using python instead of python3. Cheers Fred
Bug#938743: ufo-core: Python2 removal in sid/bullseye - reopen 938743
Maybe this is due to this:

picca@cush:~/Debian/ufo-core/ufo-core/bin$ rgrep python *
ufo-mkfilter.in:#!/usr/bin/python
ufo-prof:#!/usr/bin/env python

I will replace python -> python3 and see what is going on.
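A minimal sketch of the replacement; the file content below is a stand-in, only the file name comes from the rgrep output above:

```shell
# Create a stand-in for one of the scripts above, then rewrite its shebang
# from python to python3 on the first line only.
printf '#!/usr/bin/env python\nprint("hello")\n' > ufo-prof
sed -i '1s/python$/python3/' ufo-prof
head -n 1 ufo-prof
```

The `1s` address limits the substitution to the shebang line, so any other occurrence of "python" in the script body is left alone.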
Bug#938743: ufo-core: Python2 removal in sid/bullseye - reopen 938743
Hello Sandro, this is strange because I have this in the control file:

Package: libufo-bin
Architecture: any
Depends: ${misc:Depends}, ${python3:Depends}, ${shlibs:Depends}
Suggests: ufo-core-doc
Description: Library for high-performance, GPU-based computing - tools
 The UFO data processing framework is a C library suited to build general
 purpose streams data processing on heterogeneous architectures such as
 CPUs, GPUs or clusters. It is extensively used at the Karlsruhe Institute
 of Technology for Ultra-fast X-ray Imaging (radiography, tomography and
 laminography).
 .
 A gobject-instrospection binding is also provided to write scripts or
 user interfaces.
 .
 This package contains binaries to run JSON descriptions of task graphs.

Package: libufo-data

So it seems that this python dependency comes from ${python3:Depends}. Is that normal? Cheers Fred
Bug#943786: lmfit-py: failing tests with python3.8
Hello Andreas, I will try to find some time during these holidays in order to upload the 1.0.0 version :) Cheers Fred, and happy new year.

From: debian-science-maintainers [debian-science-maintainers-bounces+picca=synchrotron-soleil...@alioth-lists.debian.net] on behalf of Andreas Tille [andr...@an3as.eu]
Sent: Sunday, 22 December 2019 10:48
To: PICCA Frederic-Emmanuel
Cc: 943...@bugs.debian.org; MARIE Alexandre
Subject: Bug#943786: lmfit-py: failing tests with python3.8

On Sun, Dec 22, 2019 at 07:54:23AM +, PICCA Frederic-Emmanuel wrote:
> Hello Andreas, in fact we were waiting for the packaging of ipywidgets 7.x;
> the jupyter-sphinx extension expected by lmfit-py requires a newer version
> of ipywidgets.
>
> So maybe the best solution for now is to not produce the documentation
> until this dependency is ok.

Everything is fine for me. Just push a change that builds and upload (or ping me for sponsoring). I was just wondering about fixes in Git that were not uploaded.

Kind regards
Andreas.

--
http://fam-tille.de

--
debian-science-maintainers mailing list
debian-science-maintain...@alioth-lists.debian.net
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/debian-science-maintainers
Bug#943786: lmfit-py: failing tests with python3.8
Hello Andreas, in fact we were waiting for the packaging of ipywidgets 7.x; the jupyter-sphinx extension expected by lmfit-py requires a newer version of ipywidgets. So maybe the best solution for now is to not produce the documentation until this dependency is ok. Cheers Frederic
Bug#946422: silx: autopkgtest regression: pocl error
Looking in picca@sixs7:~/Debian/silx/silx/silx/opencl/test/test_addition.py:

    def setUp(self):
        if ocl is None:
            return
        self.shape = 4096
        self.data = numpy.random.random(self.shape).astype(numpy.float32)
        self.d_array_img = pyopencl.array.to_device(self.queue, self.data)
        self.d_array_5 = pyopencl.array.zeros_like(self.d_array_img) - 5
        self.program = pyopencl.Program(self.ctx, get_opencl_code("addition")).build()

I found that commenting this line

        # self.d_array_5 = pyopencl.array.zeros_like(self.d_array_img) - 5

removes the pocl issue. I removed everything from the unit test:

    @unittest.skipUnless(ocl, "pyopencl is missing")
    def test_add(self):
        self.assertTrue(True)

That means only the setUp and the tearDown are run.

With the line uncommented:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident module'.
note: missing symbols in the kernel binary might be reported as 'file not found' errors.
Aborted

With the line commented:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
.Maximum valid workgroup size 0 on device
--
Ran 1 test in 0.013s

OK
Test suite succeeded

If I do not import silx.io beforehand, there is no issue with or without the commented line:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
.Maximum valid workgroup size 0 on device
--
Ran 1 test in 0.021s

OK
Test suite succeeded

So what is going on when executing this line???

        self.d_array_5 = pyopencl.array.zeros_like(self.d_array_img) - 5
Bug#946422: silx: autopkgtest regression: pocl error
I decided to concentrate on one opencl test (addition), so I deactivated all the other tests by commenting them out in silx/opencl/__init__.py.

If I do not import silx.io, this test works:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
.Maximum valid workgroup size 2048 on device
--
Ran 1 test in 0.030s

OK

If I add silx.io, it fails:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident module'.
note: missing symbols in the kernel binary might be reported as 'file not found' errors.
Aborted

So this test is a good candidate for investigating this issue.
Bug#946422: silx: autopkgtest regression: pocl error
With the silx.io import I have this (sid_amd64-dchroot)picca@barriere:~$ PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident module'. note: missing symbols in the kernel binary might be reported as 'file not found' errors. without, I have this (sid_amd64-dchroot)picca@barriere:~$ PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v .Maximum valid workgroup size 2048 on device FThe gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. .The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. .The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. ../home/picca/silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build/silx/opencl/test/test_linalg.py:69: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. 
In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  gradient[slice_all] = np.diff(img, axis=d)
4096
...7 warnings generated.
/usr/lib/python3/dist-packages/pyopencl/__init__.py:235: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
  "to see more.", CompilerWarning)
..7 warnings generated.
..
==
FAIL: test_medfilt (silx.opencl.test.test_medfilt.TestMedianFilter)
--
Traceback (most recent call last):
  File "/home/picca/silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build/silx/opencl/test/test_medfilt.py", line 115, in test_medfilt
    self.assertEqual(r.error, 0, 'Results are correct')
AssertionError: 3.4028235e+38 != 0 : Results are correct
--
Ran 217 tests in 175.525s

FAILED (failures=1, skipped=48)

I will run it with PYOPENCL_COMPILER_OUTPUT=1 to check that compiler warning:

(sid_amd64-dchroot)picca@barriere:~$ PYOPENCL_COMPILER_OUTPUT=1 PYTHONPATH=silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build python3 test.py -v
.Maximum valid workgroup size 2048 on device
FThe gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
.The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft.
The gpyfft module was not found.
The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. .The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. The gpyfft module was not found. The Fourier transforms will be done on CPU. For more performances, it is advised to install gpyfft. ../home/picca/silx-0.11.0+dfsg/.pybuild/cpython3_3.7_silx/build/silx/opencl/test/test_linalg.py:69: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be
Bug#946422: silx: autopkgtest regression: pocl error
Not better:

test cpp engine for medfilt2d ... ok
testOpenCLMedFilt2d (silx.image.test.test_medianfilter.TestMedianFilterEngines)
test cpp engine for medfilt2d ... pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident module'.
note: missing symbols in the kernel binary might be reported as 'file not found' errors.

The next step will be to dig into the code and find the culprit.
Bug#946422: silx: autopkgtest regression: pocl error
It seems that this test does not PASS:

@unittest.skipUnless(ocl, "PyOpenCl is missing")
def testOpenCLMedFilt2d(self):
    """test cpp engine for medfilt2d"""
    res = medianfilter.medfilt2d(
        image=TestMedianFilterEngines.IMG,
        kernel_size=TestMedianFilterEngines.KERNEL,
        engine='opencl')
    self.assertTrue(numpy.array_equal(res, TestMedianFilterEngines.IMG))

testOpenCLMedFilt2d (silx.image.test.test_medianfilter.TestMedianFilterEngines)
test cpp engine for medfilt2d ... pocl error: lt_dlopen("(null)") or lt_dlsym() failed with 'can't close resident module'.
note: missing symbols in the kernel binary might be reported as 'file not found' errors.
Aborted
E: pybuild pybuild:341: test: plugin custom failed with: exit code=134: env PYTHONPATH=/home/picca/silx-0.11.0+dfsg/.pybuild/cpython3_3.8_silx/build WITH_QT_TEST=False xvfb-run -a --server-args="-screen 0 1024x768x24" python3.8 run_tests.py -vv --installed
dh_auto_test: pybuild --test -i python{version} -p "3.8 3.7" -s custom "--test-args=env PYTHONPATH={build_dir} WITH_QT_TEST=False xvfb-run -a --server-args=\"-screen 0 1024x768x24\" {interpreter} run_tests.py -vv --installed" returned exit code 13
make[1]: *** [debian/rules:70: override_dh_auto_test] Error 255
make[1]: Leaving directory '/home/picca/silx-0.11.0+dfsg'
make: *** [debian/rules:27: build] Error 2

The code of medfilt2d is here:

def medfilt2d(image, kernel_size=3, engine='cpp'):
    """Apply a median filter on an image.

    This median filter is using a 'nearest' padding for values
    past the array edges. If you want more padding options or
    functionalities for the median filter (conditional filter
    for example) please have a look at
    :mod:`silx.math.medianfilter`.

    :param numpy.ndarray image: the 2D array for which we want to apply
        the median filter.
    :param kernel_size: the dimension of the kernel. Kernel size must be
        odd. If a scalar is given, then it is used as the size in both
        dimension. Default: (3, 3)
    :type kernel_size: A int or a list of 2 int (kernel_height, kernel_width)
    :param engine: the type of implementation to use.
        Valid values are: 'cpp' (default) and 'opencl'
    :returns: the array with the median value for each pixel.

    .. note:: if the opencl implementation is requested but
        is not present or fails, the cpp implementation is called.
    """
    if engine not in MEDFILT_ENGINES:
        err = 'silx doesn\'t have an implementation for the requested engine: '
        err += '%s' % engine
        raise ValueError(err)

    if len(image.shape) is not 2:
        raise ValueError('medfilt2d deals with arrays of dimension 2 only')

    if engine == 'cpp':
        return medianfilter_cpp.medfilt(data=image,
                                        kernel_size=kernel_size,
                                        conditional=False)
    elif engine == 'opencl':
        if medfilt_opencl is None:
            wrn = 'opencl median filter not available. '
            wrn += 'Launching cpp implementation.'
            _logger.warning(wrn)
            # instead call the cpp implementation
            return medianfilter_cpp.medfilt(data=image,
                                            kernel_size=kernel_size,
                                            conditional=False)
        else:
            try:
                medianfilter = medfilt_opencl.MedianFilter2D(image.shape,
                                                             devicetype="gpu")
                res = medianfilter.medfilt2d(image, kernel_size)
            except(RuntimeError, MemoryError, ImportError):
                wrn = 'Exception occured in opencl median filter. '
                wrn += 'To get more information see debug log.'
                wrn += 'Launching cpp implementation.'
                _logger.warning(wrn)
                _logger.debug("median filter - openCL implementation issue.",
                              exc_info=True)
                # instead call the cpp implementation
                res = medianfilter_cpp.medfilt(data=image,
                                               kernel_size=kernel_size,
                                               conditional=False)
    return res

In our case we have engine = 'opencl' and no warning message, so medfilt_opencl should not be None. It comes from here:

from silx.opencl import medfilt as medfilt_opencl

In this code we have:

:param devicetype: type of device, can be "CPU", "GPU", "ACC" or "ALL"

So let's do a first test by replacing gpu by cpu, to see if it changes something during the test.
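A note on what the failing assertion checks: numpy.array_equal(res, IMG) can only hold for inputs that the median filter leaves unchanged (for example a constant image, or any image when kernel_size is 1). A pure-NumPy sketch of a median filter with 'nearest' padding follows — a hypothetical reference implementation for illustration only, not silx's C++/OpenCL code:

```python
import numpy as np

def medfilt2d_ref(image, kernel_size=3):
    """Reference 2D median filter with 'nearest' (edge) padding.

    Hypothetical helper for illustration -- NOT silx's implementation,
    but it computes the same mathematical result for odd kernel sizes.
    """
    k = kernel_size // 2
    padded = np.pad(image, k, mode="edge")   # 'nearest' padding
    out = np.empty_like(image)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            # Window centered on (i, j) in the original image.
            out[i, j] = np.median(padded[i:i + kernel_size,
                                         j:j + kernel_size])
    return out

# A constant image is a fixed point of any median filter: exactly the
# kind of input for which the test's array_equal assertion can pass.
img = np.full((8, 8), 5.0)
assert np.array_equal(medfilt2d_ref(img), img)
```

With kernel_size=1 the filter is the identity for any image, which is another case where the assertion would hold.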
Bug#938116:
Use salsa-ci, python-qtconsole FTBFS due to pyzmq https://salsa.debian.org/python-team/modules/python-qtconsole/-/jobs/435758
Bug#942533: sardana: needs a source-only upload.
Hello

> Package: sardana
> Version: 3.0.0a+3.f4f89e+dfsg-1
> Severity: serious
> The release team have decreed that non-buildd binaries cannot migrate to
> testing. Please make a source-only upload so your package can migrate.

OK, but this package comes from NEW, so it would be nice if the NEW -> unstable process could be a source-only upload. It is not possible to upload without the binary while in NEW, so this process necessitates two uploads, if I understand correctly.
Bug#938550: spyder: Python2 removal in sid/bullseye
> I didn't notice it, so wasn't planning to add it. spyder_kernels
> imports without complaining, and spyder seems to start fine anyway.
> Where does it come to notice?

I do not know, but on Windows it is optional. So maybe this is not a big issue.

Fred
Bug#938550: spyder: Python2 removal in sid/bullseye
It seems that wurlitzer, which is a dependency of spyder-kernels, is missing. Did you plan to add it?

Cheers
Bug#938550: spyder: Python2 removal in sid/bullseye
Hello

> Hi Frédéric, I prepared spyder (and spyder-kernels) for python2 removal.
> The removal of cloudpickle forces us to do it earlier than we otherwise
> might have.

No problem for me :), the faster we get rid of Python2, the better.

> With spyder, it made sense to me to keep spyder as the main binary
> package, relegating spyder3 to a transitional dependency package. I set
> /usr/bin/spyder as a symlink to spyder3 (manpage likewise). Let me know
> if you're happy with that and we can go ahead with the upload.
> Otherwise I can swap it to keep spyder3 as the binary package.

I think that you should set the section to oldlibs for spyder3, since it is a transitional package. Then you can upload.

Cheers

Fred
Bug#940130: pymca: testsuite failure (segfault)
Hello,

this is a problem due to a bug in python-numpy which is already solved in python-numpy 1.16.5:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933056

Cheers

From: debian-science-maintainers [debian-science-maintainers-bounces+picca=synchrotron-soleil...@alioth-lists.debian.net] on behalf of Gianfranco Costamagna [locutusofb...@debian.org]
Sent: Thursday 12 September 2019 21:37
To: sub...@bugs.debian.org
Subject: Bug#940130: pymca: testsuite failure (segfault)

Source: pymca
Version: 5.5.1+dfsg-1
Severity: serious

Hello, looks like the package autopkgtest is failing...

testFastFitEdfMap (PyMcaBatchTest.testPyMcaBatch) ... Segmentation fault

https://ci.debian.net/packages/p/pymca/testing/amd64

can you please have a look?

--
debian-science-maintainers mailing list
debian-science-maintain...@alioth-lists.debian.net
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/debian-science-maintainers
Bug#918852: ufo-filters: FTBFS with Sphinx 1.8: cannot import name 'Directive' from 'sphinx.directives'
Thanks a lot, both of you. I could not manage to find enough time these days for this package...

Cheers

Fred
Bug#918807: taurus: diff for NMU version 4.0.3+dfsg-1.1
Upstream just packaged the latest taurus, so I think that you can defer your upload now. Thanks a lot for your help.

Frederic
Bug#914140: pytango FTBFS with boost 1.67
Hello Adrian

If I look at the current boost 1.67, I find this in the boost python packages:

https://packages.debian.org/sid/amd64/libboost-python1.67.0/filelist
https://packages.debian.org/sid/amd64/libboost-python1.67-dev/filelist

We can find these:

/usr/lib/x86_64-linux-gnu/libboost_python3-py36.a
/usr/lib/x86_64-linux-gnu/libboost_python3-py36.so

but only for the python3.6 version, not for 2.7 and 3.7. Previously, with boost 1.62,

https://packages.debian.org/sid/amd64/libboost-python1.62-dev/filelist

all python versions had these -pyXY.so files. Is this an intended change in the new boost_python packaging, or a mistake? This -pyXY logic was coded in the pytango setup.py, in order to deal with boost_python.

Cheers

Fred
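For illustration, the -pyXY naming scheme described above can be sketched like this (a hypothetical helper, not pytango's actual setup.py logic):

```python
import sys

def boost_python_libname(version=None):
    """Build the -pyXY-suffixed boost_python linker name described above.

    Hypothetical illustration of the naming scheme behind files like
    libboost_python3-py36.so; NOT the real pytango setup.py code.
    """
    if version is None:
        version = sys.version_info[:2]
    major, minor = version
    # boost 1.62-era packages shipped boost_python (py2) and
    # boost_python3 (py3) variants, each suffixed with -pyXY.
    prefix = "boost_python3" if major == 3 else "boost_python"
    return "%s-py%d%d" % (prefix, major, minor)

print(boost_python_libname((3, 6)))   # -> boost_python3-py36
```

A build script would pass this name to the linker (-lboost_python3-py36), which is why dropping the -pyXY variants for 2.7 and 3.7 breaks the pytango build.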
Bug#903218: python3-opengl: fails to install with python3.7 installed
Looking at the Fedora project: they renamed async -> async_

https://koji.fedoraproject.org/koji/buildinfo?buildID=1097515
Bug#903218:
In code search I found another package affected by this problem, which seems to embed pyOpenGL:

https://codesearch.debian.net/search?q=OpenGL.raw.GL.SGIX.async&perpkg=1

Cheers
Bug#903218:
Looking at the Fedora project: they renamed async -> async_

https://koji.fedoraproject.org/koji/buildinfo?buildID=1097515
Bug#904262: python-fabio builds for the default python3 version, but tests with all supported versions.
> your autopkg tests loops over all *supported* python versions, but you only
> build the extension for the *default* python3 version. Try build-depending on
> python3-all-dev instead and see that you have extensions built for both 3.6 and
> 3.7. Building in unstable, of course.

But I already build-depend on python3-all-dev???

https://sources.debian.org/src/python-fabio/0.6.0+dfsg-1/debian/control/

I think that this is due to the python3.7 transition, which is not over for python-fabio. Right?
Bug#904262: python-fabio builds for the default python3 version, but tests with all supported versions.
Hello Matthias,

I do not understand this bug report. I use pybuild, so fabio should be built for all python3 versions. It is now FTBFS due to a problem with the cython package, already reported as #903909.

Cheers

Frederic
Bug#876739: pyfai FTBFS: help2man: can't get `--help' info from /tmp/check_calib_0hk8odnh
This problem was due to this python-fabio changelog entry:

python-fabio (0.5.0+dfsg-2) unstable; urgency=medium

  * d/control
    - python-qt4 -> python3-pyqt4-dbg (Closes: #876288)

Now that python-fabio is fixed, it is ok to close this bug.

Thanks

Frederic
Bug#861736:
Here the error message:

~/Debian/nexus/bugs$ ./bug.py
Traceback (most recent call last):
  File "./bug.py", line 15, in <module>
    f.flush()
  File "/usr/lib/python2.7/dist-packages/nxs/napi.py", line 397, in flush
    raise NeXusError, "Could not flush NeXus file %s"%(self.filename)
nxs.napi.NeXusError: Could not flush NeXus file /tmp/foo.h5
Bug#861736:
It seems that the fix is not enough; this test fails at the flush:

import nxs

f = nxs.open("/tmp/foo.h5", "w5")
f.makegroup('entry', 'NXentry')
f.opengroup('entry')
f.makegroup('g', 'NXcollection')
f.opengroup('g', 'NXcollection')
f.makedata('d', 'float64', shape=(1,))
f.opendata('d')
f.putdata(1.23)
f.closedata()
f.closegroup()
f.flush()
f.close()
Bug#861736:
Here, after rebuilding hdf5 in debug mode:

:~/Debian/nexus$ ./bug.py
H5get_libversion(majnum=0xbf8a5b04, minnum=0xbf8a5b08, relnum=0xbf8a5b0c) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5open() = SUCCEED;
H5Pcreate(cls=8 (genprop class)) = 18 (genprop list);
H5Pget_cache(plist=18 (genprop list), mdc_nelmts=0xbf8a5af8, rdcc_nslots=0xbf8a5afc, rdcc_nbytes=0xbf8a5b00, rdcc_w0=0xbf8a5b10) = SUCCEED;
H5Pset_cache(plist=18 (genprop list), mdc_nelmts=0, rdcc_nslots=521, rdcc_nbytes=1024000, rdcc_w0=0.75) = SUCCEED;
H5Pset_fclose_degree(plist=18 (genprop list), degree=H5F_CLOSE_STRONG) = SUCCEED;
H5check_version(majnum=1, minnum=10, relnum=0) = SUCCEED;
H5open() = SUCCEED;
H5Fcreate(filename=0x82517ef8, flags=2, fcpl=H5P_DEFAULT, fapl=18 (genprop list)) = 0 (file);
H5Pclose(plist=18 (genprop list)) = SUCCEED;
ERROR: cannot open file: filenamenxs.h5
Traceback (most recent call last):
  File "./bug.py", line 5, in <module>
    e.save("filenamenxs.h5", 'w5')
  File "/usr/lib/python2.7/dist-packages/nxs/tree.py", line 868, in save
    file = NeXusTree(filename, format)
  File "/usr/lib/python2.7/dist-packages/nxs/napi.py", line 320, in __init__
    raise NeXusError, "Could not %s %s"%(op,filename)
nxs.napi.NeXusError: Could not create filenamenxs.h5
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a6954, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6898) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a689c, client_data=0xbf8a68a0) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6858) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a685c, client_data=0xbf8a6860) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6858) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a685c, client_data=0xbf8a6860) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6888) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a688c, client_data=0xbf8a6890) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6888) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a688c, client_data=0xbf8a6890) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6888) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a688c, client_data=0xbf8a6890) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6898) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a689c, client_data=0xbf8a68a0) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6868) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a686c, client_data=0xbf8a6870) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6868) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a686c, client_data=0xbf8a6870) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eauto_is_v2(estack=H5P_DEFAULT, is_stack=0xbf8a6868) = SUCCEED;
H5Eget_auto2(estack=H5P_DEFAULT, func=0xbf8a686c, client_data=0xbf8a6870) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
H5Eset_auto2(estack=H5P_DEFAULT, func=NULL, client_data=NULL) = SUCCEED;
Bug#861736:
Activating the NXError reporting, we got:

filenamenxs.h5 5
ERROR: cannot open file: filenamenxs.h5
0

and looking for this error message, we found it in the napi5.c file:

NXstatus NX5open(CONSTCHAR *filename, NXaccess am, NXhandle* pHandle)
{
  hid_t attr1, aid1, aid2, iVID;
  pNexusFile5 pNew = NULL;
  char pBuffer[512];
  char *time_buffer = NULL;
  char version_nr[10];
  unsigned int vers_major, vers_minor, vers_release, am1;
  hid_t fapl = -1;
  int mdc_nelmts;
  size_t rdcc_nelmts;
  size_t rdcc_nbytes;
  double rdcc_w0;
  unsigned hdf5_majnum, hdf5_minnum, hdf5_relnum;

  *pHandle = NULL;
  if (H5get_libversion(&hdf5_majnum, &hdf5_minnum, &hdf5_relnum) < 0) {
    NXReportError("ERROR: cannot determine HDF5 library version");
    return NX_ERROR;
  }
  if (hdf5_majnum == 1 && hdf5_minnum < 8) {
    NXReportError("ERROR: HDF5 library 1.8.0 or higher required");
    return NX_ERROR;
  }
  /* mask of any options for now */
  am = (NXaccess)(am & NXACCMASK_REMOVEFLAGS);
  /* turn off the automatic HDF error handling */
  H5Eset_auto(H5E_DEFAULT, NULL, NULL);
#ifdef USE_FTIME
  struct timeb timeb_struct;
#endif
  pNew = (pNexusFile5) malloc (sizeof (NexusFile5));
  if (!pNew) {
    NXReportError("ERROR: not enough memory to create file structure");
    return NX_ERROR;
  }
  memset (pNew, 0, sizeof (NexusFile5));
  /* start HDF5 interface */
  if (am == NXACC_CREATE5) {
    fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pget_cache(fapl, &mdc_nelmts, &rdcc_nelmts, &rdcc_nbytes, &rdcc_w0);
    rdcc_nbytes = (size_t)nx_cacheSize;
    H5Pset_cache(fapl, mdc_nelmts, rdcc_nelmts, rdcc_nbytes, rdcc_w0);
    H5Pset_fclose_degree(fapl, H5F_CLOSE_STRONG);
    am1 = H5F_ACC_TRUNC;
    pNew->iFID = H5Fcreate (filename, am1, H5P_DEFAULT, fapl);
  } else {
    if (am == NXACC_READ) {
      am1 = H5F_ACC_RDONLY;
    } else {
      am1 = H5F_ACC_RDWR;
    }
    fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fclose_degree(fapl, H5F_CLOSE_STRONG);
    pNew->iFID = H5Fopen (filename, am1, fapl);
  }
  if (fapl != -1) {
    H5Pclose(fapl);
  }
  if (pNew->iFID <= 0) {
    sprintf (pBuffer, "ERROR: cannot open file: %s", filename);
    NXReportError( pBuffer);
    free (pNew);
    return NX_ERROR;
  }
Bug#861736:
Here the code of this method:

static NXstatus NXinternalopen(CONSTCHAR *userfilename, NXaccess am,
                               pFileStack fileStack);

/*--*/

NXstatus NXopen(CONSTCHAR *userfilename, NXaccess am, NXhandle *gHandle)
{
  int status;
  pFileStack fileStack = NULL;

  *gHandle = NULL;
  fileStack = makeFileStack();
  if (fileStack == NULL) {
    NXReportError("ERROR: no memory to create filestack");
    return NX_ERROR;
  }
  status = NXinternalopen(userfilename, am, fileStack);
  if (status == NX_OK) {
    *gHandle = fileStack;
  }
  return status;
}

So let's see the internal open:

static NXstatus NXinternalopen(CONSTCHAR *userfilename, NXaccess am, pFileStack fileStack)
{
  return LOCKED_CALL(NXinternalopenImpl(userfilename, am, fileStack));
}

/*---*/

static NXstatus NXinternalopenImpl(CONSTCHAR *userfilename, NXaccess am, pFileStack fileStack)
{
  int hdf_type = 0;
  int iRet = 0;
  NXhandle hdf5_handle = NULL;
  pNexusFunction fHandle = NULL;
  NXstatus retstat = NX_ERROR;
  char error[1024];
  char *filename = NULL;
  int my_am = (am & NXACCMASK_REMOVEFLAGS);

  /* configure fortify
     iFortifyScope = Fortify_EnterScope();
     Fortify_CheckAllMemory(); */

  /* allocate data */
  fHandle = (pNexusFunction)malloc(sizeof(NexusFunction));
  if (fHandle == NULL) {
    NXReportError("ERROR: no memory to create Function structure");
    return NX_ERROR;
  }
  memset(fHandle, 0, sizeof(NexusFunction)); /* so any functions we miss are NULL */

  /* test the strip flag. Elimnate it for the rest of the tests to work */
  fHandle->stripFlag = 1;
  if (am & NXACC_NOSTRIP) {
    fHandle->stripFlag = 0;
    am = (NXaccess)(am & ~NXACC_NOSTRIP);
  }
  fHandle->checkNameSyntax = 0;
  if (am & NXACC_CHECKNAMESYNTAX) {
    fHandle->checkNameSyntax = 1;
    am = (NXaccess)(am & ~NXACC_CHECKNAMESYNTAX);
  }

  if (my_am == NXACC_CREATE) {
    /* HDF4 will be used ! */
    hdf_type = 1;
    filename = strdup(userfilename);
  } else if (my_am == NXACC_CREATE4) {
    /* HDF4 will be used ! */
    hdf_type = 1;
    filename = strdup(userfilename);
  } else if (my_am == NXACC_CREATE5) {
    /* HDF5 will be used ! */
    hdf_type = 2;
    filename = strdup(userfilename);
  } else if (my_am == NXACC_CREATEXML) {
    /* XML will be used ! */
    hdf_type = 3;
    filename = strdup(userfilename);
  } else {
    filename = locateNexusFileInPath((char *)userfilename);
    if (filename == NULL) {
      NXReportError("Out of memory in NeXus-API");
      free(fHandle);
      return NX_ERROR;
    }
    /* check file type hdf4/hdf5/XML for reading */
    iRet = determineFileType(filename);
    if (iRet < 0) {
      snprintf(error, 1023, "failed to open %s for reading", filename);
      NXReportError(error);
      free(filename);
      return NX_ERROR;
    }
    if (iRet == 0) {
      snprintf(error, 1023, "failed to determine filetype for %s ", filename);
      NXReportError(error);
      free(filename);
      free(fHandle);
      return NX_ERROR;
    }
    hdf_type = iRet;
  }
  if (filename == NULL) {
    NXReportError("Out of memory in NeXus-API");
    return NX_ERROR;
  }

  if (hdf_type == 1) {
    /* HDF4 type */
#ifdef HDF4
    NXhandle hdf4_handle = NULL;
    retstat = NX4open((const char *)filename, am, &hdf4_handle);
    if (retstat != NX_OK) {
      free(fHandle);
      free(filename);
      return retstat;
    }
    fHandle->pNexusData = hdf4_handle;
    NX4assignFunctions(fHandle);
    pushFileStack(fileStack, fHandle, filename);
#else
    NXReportError("ERROR: Attempt to create HDF4 file when not linked with HDF4");
    retstat = NX_ERROR;
#endif /* HDF4 */
    free(filename);
    return retstat;
  } else if (hdf_type == 2) {
    /* HDF5 type */
#ifdef HDF5
    retstat = NX5open(filename, am, &hdf5_handle);
    if (retstat != NX_OK) {
      free(fHandle);
      free(filename);
      return retstat;
    }
    fHandle->pNexusData = hdf5_handle;
    NX5assignFunctions(fHandle);
    pushFileStack(fileStack, fHandle, filename);
#else
    NXReportError("ERROR: Attempt to create HDF5 file when not linked with HDF5");
    retstat = NX_ERROR;
#endif /* HDF5 */
    free(filename);
    return retstat;
  } else if (hdf_type == 3) {
    /* XML type */
#ifdef NXXML
    NXhandle xmlHandle = NULL;
    retstat = NXXopen(filename, am, &xmlHandle);
    if (retstat != NX_OK) {
      free(fHandle);
      free(filename);
      return retstat;
    }
    fHandle->pNexusData = xmlHandle;
    NXXassignFunctions(fHandle);
    pushFileStack(fileSta
Bug#861736:
In the napi.h file we saw this:

#define CONCAT(__a,__b) __a##__b   /* token concatenation */
#ifdef __VMS
#define MANGLE(__arg) __arg
#else
#define MANGLE(__arg) CONCAT(__arg,_)
#endif

#define NXopen MANGLE(nxiopen)

/**
 * Open a NeXus file.
 * NXopen honours full path file names. But it also searches
 * for files in all the paths given in the NX_LOAD_PATH environment variable.
 * NX_LOAD_PATH is supposed to hold a list of path string separated by the platform
 * specific path separator. For unix this is the : , for DOS the ; . Please note
 * that crashing on an open NeXus file will result in corrupted data. Only after a NXclose
 * or a NXflush will the data file be valid.
 * \param filename The name of the file to open
 * \param access_method The file access method. This can be:
 * \li NXACC_READ read access
 * \li NXACC_RDWR read write access
 * \li NXACC_CREATE, NXACC_CREATE4 create a new HDF-4 NeXus file
 * \li NXACC_CREATE5 create a new HDF-5 NeXus file
 * \li NXACC_CREATEXML create an XML NeXus file.
 * see #NXaccess_mode
 * Support for HDF-4 is deprecated.
 * \param pHandle A file handle which will be initialized upon successful completion of NXopen.
 * \return NX_OK on success, NX_ERROR in the case of an error.
 * \ingroup c_init
 */
extern NXstatus NXopen(CONSTCHAR * filename, NXaccess access_method, NXhandle* pHandle);

So we need to check in this method what is going on.
Bug#861736:
Let's instrument the code:

print filename, mode, _ref(self.handle)
status = nxlib.nxiopen_(filename,mode,_ref(self.handle))
print status

$ python bug.py
filenamenxs.h5 5
0
Bug#861736:
Hello,

here the napi code which causes some trouble:

# Convert open mode from string to integer and check it is valid
if mode in _nxopen_mode:
    mode = _nxopen_mode[mode]
if mode not in _nxopen_mode.values():
    raise ValueError, "Invalid open mode %s",str(mode)

self.filename, self.mode = filename, mode
self.handle = c_void_p(None)
self._path = []
self._indata = False
status = nxlib.nxiopen_(filename,mode,_ref(self.handle))
if status == ERROR:
    if mode in [ACC_READ, ACC_RDWR]:
        op = 'open'
    else:
        op = 'create'
    raise NeXusError, "Could not %s %s"%(op,filename)

So it seems that the nxlib.nxiopen_ method returns an error.
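The mode-string-to-constant conversion at the top of that snippet can be sketched standalone. The constant values below are hypothetical stand-ins, not necessarily napi's real values; the point is only to show that a valid mode like 'w5' passes the check, so the failure must come from the C call itself:

```python
# Hypothetical stand-ins for the napi access-mode constants; the real
# values live in nxs.napi. This only mirrors the validation logic above.
ACC_READ, ACC_RDWR, ACC_CREATE5 = 1, 2, 5

_nxopen_mode = {'r': ACC_READ, 'rw': ACC_RDWR, 'w5': ACC_CREATE5}

def convert_mode(mode):
    """Convert an open mode from string to integer and check it is valid."""
    if mode in _nxopen_mode:
        mode = _nxopen_mode[mode]
    if mode not in _nxopen_mode.values():
        raise ValueError("Invalid open mode %s" % str(mode))
    return mode

print(convert_mode('w5'))   # -> 5
```

So 'w5' is converted to a valid integer mode and the ValueError branch is never taken; the ERROR status can only be produced by nxlib.nxiopen_.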
Bug#848137: [Dbconfig-common-devel] Bug#848137: problem with the upgrade of tango-db
> Ehm, yes. :)

So I just tested an upgrade from jessie to sid of tango-db, and it works :))) Now I have only one concern, about the dump. Since we had a failure with the dump when it ran as user, we discovered that our procedures were wrong and necessitated the dbadmin grants in order to work. Would it be possible to display the error if the dump fails with the user grants? That way we would see during the upgrade that something was wrong, and have this interesting information. (From my point of view, but you can disagree :)

Fred
Bug#848137: [Dbconfig-common-devel] Bug#848137: problem with the upgrade of tango-db
Hello Paul

> Officially, no, because the documentation says: "If files exist in both
> data and scripts, they will both be executed in an unspecified order."
> However, the current behavior of dbconfig-common is to first run the
> script and then run the admin code and then run the user code. So you
> should be fine (but please test) and I'll make sure this behavior
> doesn't change in stretch.

Reading this, I think that I do not need the scripts part in order to fix tango-db. I just need to get rid of the procedures in the dbadmin part, and then the user scripts will be called :). Agreed?

Thanks

Fred
Bug#848137: [Dbconfig-common-devel] Bug#848137: problem with the upgrade of tango-db
Hello Paul

> I really hope I can upload this weekend. I have code that I believe does
> what I want. I am in the process of testing it.

Thanks a lot.

> [...]
> What I meant, instead of the mysql code that runs as user, run a script
> for the upgrade (they are run with database administrator credentials)
> and in that script do two things: call the DROP PROCEDURES... and then
> use the user credentials to run the normal script.
> Apart from this repair, do you see more use cases? The problem is that
> you would need nearly all the logic that is now in dbc_go for this to
> work. What I am considering is if I could guarantee the order of
> script/user mysql/admin mysql (or the last two reversed). I guess that
> if I would guarantee the script to always come first, it would be easier
> to solve the tango-db issue at hand (which was originally created by
> dbconfig-common).

OK, so if I understand correctly: in my tango-db postinst, I will add a script

/usr/share/dbconfig-common/scripts/PACKAGE/upgrade/DBTYPE/VERSION

which does the fix (drop all the procedures), and then dbconfig-common will run my normal

/usr/share/dbconfig-common/data/PACKAGE/upgrade/DBTYPE/VERSION

script. Right? I have more than one version of 'normal' upgrade scripts (7.2.6, 8.0.5, 8.2.0, 9.2.5), so I need to add the same number of fix scripts in order to be sure that the database is fixed starting from the first upgrade (7.2.6 -> 8.0.5, etc.). I just need to be sure that the database dump is done after the fix, except if you run the dump as dbadmin, but in that case I would have a dump of the non-fixed database.

Cheers

Frederic
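For clarity, the resulting layout sketched in this message would look like this (a hypothetical tree using PACKAGE=tango-db and DBTYPE=mysql with the versions listed above; per the quoted text, the scripts/ entries run with database administrator credentials before the data/ entries):

```text
/usr/share/dbconfig-common/
|-- scripts/tango-db/upgrade/mysql/
|   |-- 7.2.6        # fix script: DROP PROCEDURE ... (run as dbadmin)
|   |-- 8.0.5
|   |-- 8.2.0
|   `-- 9.2.5
`-- data/tango-db/upgrade/mysql/
    |-- 7.2.6        # normal upgrade SQL (run as the tango user)
    |-- 8.0.5
    |-- 8.2.0
    `-- 9.2.5
```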
Bug#848137: [Dbconfig-common-devel] Bug#848137: problem with the upgrade of tango-db
Hello Paul,

> Once I fixed 850190,

Do you think that you will fix this bug before next week, in order to leave me enough time to fix tango and upload it?

> I believe that ought to work, although that is still a hack. I was
> thinking of doing the "DROP PROCEDURE IF EXISTS *" calls with the
> administrator credentials and the rest of the upgrade script with the
> proper tango credentials. But probably your solution is easier to
> implement.

I am thinking about this, and I agree that it would be nice to put the fix in the postinst, just before the dbc_go which does the upgrade. Can you give me an example of how to execute this DROP PROCEDURE IF EXISTS * with the dbadmin rights in this postinst? I am wondering if dbconfig-common could provide something in order to execute an sql script whenever the maintainer requests it:

dbc_dbuser=dbadmin
dbc_custom_script=
dbc_go tango "runscript"

which would run the script with the right database configuration extracted from the configuration phase.

Cheers

Fred
Bug#848137: [Dbconfig-common-devel] Bug#848137: problem with the upgrade of tango-db
Hello,

I discussed with the tango-db upstream and he found that this one line fixes the problem, when run before doing the tango-db upgrade:

UPDATE mysql.proc SET Definer='tango@localhost' WHERE Db='tango';

Ideally it should be something like:

UPDATE mysql.proc SET Definer='xxx' WHERE Db='yyy';

where xxx is the dbuser and yyy the database name. It is true that for now my package works only if the database name is tango; this is a limitation, but I do not want to mix this into this bug report. So, can you help me write the right snippet at the right place in the debian scripts? Or maybe I should just put the upgrade script of tango-db 9.2.5 into the dbadmin part, with this fix at the end, in order to have something consistent for the next upgrade (tango 10).

Thanks for your help

Cheers

Frederic
Bug#848137: [Dbconfig-common-devel] Bug#848137: RE:Bug#848137: Info received (problem with the upgrade of tango-db)
Hello,

> I am suspecting that this commit may be related to the current behavior:
> https://anonscm.debian.org/cgit/collab-maint/dbconfig-common.git/commit/?id=acdb99d61abfff54630c4cfba6e4452357a83fb9
> I believe I implemented there that the drop of the database is performed
> with the user privileges instead of the dbadmin privileges because I
> believed one should always have the rights to drop the db. Apparently I
> was wrong. We may need to clone or reassign this bug to dbconfig, but
> I'm not sure yet if there aren't more things, or if tango-db should work
> around the issue (which may be created by buggy dbconfig-common behavior
> of the past).

I cannot give an educated guess whether the current logic of dbconfig-common is good or not; I do not have enough knowledge of SQL/MySQL/PostgreSQL etc. This problem could affect other packages using dbconfig-common. I agree also that I must fix the wrongly created procedures/tables due to the previous dbconfig-common behaviour. Can you help me in this process, in order to produce the right snippet to put in my package (preinst?) script? I need to change the owner of the procedures from root@localhost to tango@localhost. Which kind of script should I add to my debian scripts?

Thanks for your help

Frederic
Bug#848137: RE:Bug#848137: Info received (problem with the upgrade of tango-db)
> I am not sure that I follow what you are doing, but if you need the code
> to be run with the dbadmin privileges, you should put the code in:
> /usr/share/dbconfig-common/data/PACKAGE/upgrade-dbadmin/DBTYPE/VERSION
> instead of in:
> /usr/share/dbconfig-common/data/PACKAGE/upgrade/DBTYPE/VERSION

Hello Paul

I just try to use dbconfig-common like I did before (create the tango database, populate it with a few values and create some procedures). What is strange to me is that with dbconfig-common (jessie version) I end up with procedures owned by root@localhost, but when I use the dbconfig-common of stretch, the procedures are owned by tango@localhost (which is right). I always used the non-dbadmin script in order to configure my database. This is why I do not understand why the mysqldump does not work anymore. Did the dump change between the jessie and stretch dbconfig-common? By change I mean: in jessie it is run as root, but on stretch it is run as tango?

Cheers

Fred
Bug#848137: Info received (problem with the upgrade of tango-db)
Thanks to Reynald:

1) On jessie, with the tango account:

mysql> use tango;
mysql> show create procedure class_att_prop\G

I got "Create Procedure": NULL

But if I use the root account (mysqladmin):

CREATE DEFINER=`root`@`localhost` PROCEDURE `class_att_prop`(IN class_name VARCHAR(255), INOUT res_str BLOB)

which shows clearly that on jessie the procedures were created with the admin account.

2) On stretch, with the tango account:

CREATE DEFINER=`tango`@`localhost` PROCEDURE `class_att_prop`(IN class_name VARCHAR(255), INOUT res_str MEDIUMBLOB)

which shows that the procedures were created with the right tango account. Now the question is: how can we fix this?
Bug#848137: problem with the upgrade of tango-db
Hello,

I would like to discuss this bug [1]. I tried to reproduce the piuparts scenario in a virtual machine (gnome-boxes), installed in 3 steps:

jessie base system
mysql-server (I need a working database)
tango-db (daemon)

It works: I have a running tango-db daemon (ps aux | grep tango). Then I replace jessie by stretch in the sources.list:

apt-get update
apt-get install mysql-server (5.5 -> 5.6)

tango-db is still working. Now I install the new default-mysql-server:

apt-get install default-mysql-server (5.6 -> mariadb 10.1)

the tango-db daemon is still working. Now I upgrade tango-db:

apt-get install tango-db (tango8 -> tango9)

and during this upgrade I have the privileges problem that you can see in the bug report. I would like to know how to debug this problem. I tried to export dbc_debug=1, but I got no real information about the failing mysqldump. I would say that I know almost nothing about SQL, but I can learn. What is strange from my point of view is that I created the tables and populated them only with dbconfig-common, so I find it strange that the dump cannot work. Maybe this is a compatibility problem between mysql and mariadb, because the database was first created with mysql 5.5 and the upgrade is done via mariadb. Are you aware of privilege problems due to mariadb?

Thanks for your help

Fred

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=848137

please CC me, I am not on the mailing list
Bug#811973: closed by Picca Frédéric-Emmanuel (Bug#811973: fixed in ssm 1.4.0-1~exp1)
No, I do not have access to my computer until 3 January. If you want to NMU, go ahead. Cheers

From: Adrian Bunk [b...@stusta.de]
Sent: Wednesday, 21 December 2016 16:57
To: 811...@bugs.debian.org; Picca Frédéric-Emmanuel
Subject: Re: Bug#811973 closed by Picca Frédéric-Emmanuel (Bug#811973: fixed in ssm 1.4.0-1~exp1)

Picca, can you also upload a fix/workaround for #811973 to unstable? Thanks Adrian

--
"Is there not promise of rain?" Ling Tan asked suddenly out of the darkness. There had been need of rain for many days. "Only a promise," Lao Er said. Pearl S. Buck - Dragon Seed
Bug#844479: RE:Bug#844479: zeromq3: zeromq 4.2.0 breaks tango
I uploaded tango 9.2.5~rc3+dfsg1-1 into Debian unstable. I think that once it has migrated into testing, it will be OK to close this bug. Thanks Fred
Bug#844479: RE:Bug#844479: zeromq3: zeromq 4.2.0 breaks tango
Yes, I am working on this with upstream :)) So don't worry, I will tell you when it is OK. Cheers Fred
Bug#848137: tango-db: fails to upgrade from 'jessie': mysqldump: tango has insufficent privileges to SHOW CREATE PROCEDURE `class_att_prop`!
Hello Andreas,

> In jessie, tango-db used mysql-server-5.5 (via mysql-server).
> The upgrade of tango-db was performed after mysql-server had been upgraded
> to mariadb-server-10.0 (via default-mysql-server) and was started again.

Do you know whether the mariadb server was running during the upgrade of tango-db? tango-db needs a running server in order to work. My problem is that tango-db provides a daemon which requires a running mysql/mariadb server in order to be installed. BUT I do not know how to express, via dependencies, that a mysql/mariadb server must be running, especially when that server runs on a different computer than the one where the tango-db package is installed. I need to support both scenarios:

- tango-db + mysql on the same server (maybe a Pre-Depends)
- tango-db and the mysql/mariadb server on different computers

Cheers Fred
Bug#830399: Info received (Bug#830399: Info received (python-jedi: FTBFS: dh_auto_test: pybuild --test -i python{version} -p 2.7 returned exit code 13))
Hello, any news about this issue? Cheers Fred
Bug#844479: zeromq3: zeromq 4.2.0 breaks tango
Hello, I just opened a bug for tango: https://github.com/tango-controls/cppTango/issues/312 What is the deadline for deciding whether or not to upload zeromq 4.2.0 into Debian testing? This would also leave some time to check whether 4.2.0 has other side effects on depending software. Thanks Fred
Bug#844479: zeromq3: zeromq 4.2.0 breaks tango
Hello Luca,

> This is very unfortunate, but as explained on the mailing list, this
> behaviour was an unintentional internal side effect. I didn't quite
> realise it was there, and so most other devs.

I understand; I just wanted to point out that the synchrotron community invested a lot of effort in providing a tango stack in Debian Stretch (tango -> pytango -> spyder -> taurus -> hkl -> sardana). It would be a big failure for us if Debian stretch were released without tango.

> How much work would it be to change tango to avoid relying on aligned
> internal recv buffers?

I spoke with the tango upstream, and they told me that this change is not that trivial. It is unfortunate that this comes so late in the release cycle of Debian; I think they will not have the time to make this change before the 5th of February. I will keep you informed if something moves on their side. BUT I beg the zeromq3 maintainers to stick with 4.1.5 for Stretch.

Thanks a lot for considering,
Frederic
Bug#830399: Info received (Bug#830399: Info received (python-jedi: FTBFS: dh_auto_test: pybuild --test -i python{version} -p 2.7 returned exit code 13))
Looking at the upstream repository, it seems that there are plenty of py3 fixes since the last release 0.9.0, so maybe it would be better not to run the unit tests for python3 for now. Another solution is to take the HEAD of python-jedi, as explained by the upstream [1], and see if it passes the unit tests. What is your opinion?

[1] https://github.com/davidhalter/jedi/issues/808
Bug#830399: Info received (python-jedi: FTBFS: dh_auto_test: pybuild --test -i python{version} -p 2.7 returned exit code 13)
I just applied this patch and the import test PASSES. I took only a part of the upstream patch, but now I get this:

I: pybuild base:184: cd /<>/.pybuild/pythonX.Y_3.5/build; python3.5 -m pytest test
= test session starts ==
platform linux -- Python 3.5.2+, pytest-3.0.3, py-1.4.31, pluggy-0.4.0
rootdir: /<>, inifile: pytest.ini
collected 257 items / 1 errors
ERRORS
ERROR collecting .pybuild/pythonX.Y_3.5/build/test/test_integration.py
test/conftest.py:59: in pytest_generate_tests
    cases = list(run.collect_dir_tests(base_dir, test_files))
test/run.py:293: in collect_dir_tests
    source = open(path).read()
/usr/lib/python3.5/encodings/ascii.py:26: in decode
    return codecs.ascii_decode(input, self.errors)[0]
E UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 331: ordinal not in range(128)
!!! Interrupted: 1 errors during collection
=== 1 error in 4.38 seconds

The patch (its DEP-3 header is still the generated template):

Description: TODO: Put a short summary on the line above and replace this
 paragraph with a longer explanation of this change. Complete the
 meta-information with other relevant fields (see below for details).
 To make it easier, the information below has been extracted from the
 changelog. Adjust it or drop it.
 .
 python-jedi (0.9.0-1) unstable; urgency=medium
 .
   * New upstream release
   * debian/watch: use pypi.debian.net redirector
Author: Piotr Ożarowski

---
The information above should follow the Patch Tagging Guidelines, please
checkout http://dep.debian.net/deps/dep3/ to learn about the format.
Here are templates for supplementary fields that you might want to add:

Origin: , Bug:
Bug-Debian: https://bugs.debian.org/
Bug-Ubuntu: https://launchpad.net/bugs/
Forwarded:
Reviewed-By:
Last-Update: 2016-11-16

--- python-jedi-0.9.0.orig/test/test_integration_import.py
+++ python-jedi-0.9.0/test/test_integration_import.py
@@ -18,22 +18,22 @@ def test_goto_definition_on_import():
 def test_complete_on_empty_import():
     assert Script("from datetime import").completions()[0].name == 'import'
     # should just list the files in the directory
-    assert 10 < len(Script("from .", path='').completions()) < 30
+    assert 10 < len(Script("from .", path='whatever.py').completions()) < 30
     # Global import
-    assert len(Script("from . import", 1, 5, '').completions()) > 30
+    assert len(Script("from . import", 1, 5, 'whatever.py').completions()) > 30
     # relative import
-    assert 10 < len(Script("from . import", 1, 6, '').completions()) < 30
+    assert 10 < len(Script("from . import", 1, 6, 'whatever.py').completions()) < 30
     # Global import
-    assert len(Script("from . import classes", 1, 5, '').completions()) > 30
+    assert len(Script("from . import classes", 1, 5, 'whatever.py').completions()) > 30
     # relative import
-    assert 10 < len(Script("from . import classes", 1, 6, '').completions()) < 30
+    assert 10 < len(Script("from . import classes", 1, 6, 'whatever.py').completions()) < 30
     wanted = set(['ImportError', 'import', 'ImportWarning'])
     assert set([c.name for c in Script("import").completions()]) == wanted
     if not is_py26:
         # python 2.6 doesn't always come with a library `import*`.
-        assert len(Script("import import", path='').completions()) > 0
+        assert len(Script("import import", path='whatever.py').completions()) > 0
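For reference, the collection error above happens because `open(path)` without an explicit encoding uses the locale's preferred encoding, which in a minimal build chroot falls back to ASCII, while the jedi test files are UTF-8. A minimal sketch of the failure and the usual fix (an explicit `encoding=` argument, which is presumably what test/run.py needs); the temp file here is just an illustration:

```python
import codecs
import os
import tempfile

# Write a small UTF-8 file containing a non-ASCII character: 'é' encodes
# to the bytes 0xc3 0xa9, matching the 0xc3 byte in the traceback above.
fd, path = tempfile.mkstemp(suffix=".py")
with os.fdopen(fd, "wb") as f:
    f.write("# café\n".encode("utf-8"))

# Decoding those bytes as ASCII raises UnicodeDecodeError, exactly as in
# the pytest traceback (the ascii codec calls codecs.ascii_decode):
raw = open(path, "rb").read()
try:
    codecs.ascii_decode(raw)
    bad = None
except UnicodeDecodeError as err:
    bad = raw[err.start]
    print("ascii decode failed on byte", hex(bad))

# The fix: open the file with an explicit UTF-8 encoding, which succeeds
# regardless of the build environment's locale:
text = open(path, encoding="utf-8").read()
print(text.strip())

os.remove(path)
```

This suggests either patching test/run.py to pass an encoding, or exporting a UTF-8 locale (e.g. LC_ALL=C.UTF-8) during the build.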
Bug#841600: pyfai: FTBFS: Tests failures
The problem was in scipy, #840264. Now it is fixed.