Bug#1050239: linux-image-6.1.0-11-amd64 breaks usermode networking for Windows VM in Gnome Boxes

2023-09-17 Thread Stijn Segers

Hi Salvatore,


On Wednesday 30 August 2023 at 15:03:37 +02:00, Salvatore Bonaccorso wrote:

Control: tags -1 + moreinfo

On Tue, Aug 22, 2023 at 03:45:08PM +0200, Stijn Segers wrote:

 Package: linux-image-6.1.0-11-amd64
 Version: 6.1.38-4

 Using kernel linux-image-6.1.0-11-amd64, my Windows 10 VM loses network
 connectivity. Linux VMs still work (tested with an Xubuntu 23.04 and 22.04
 LTS VM). Rolling back to linux-image-6.1.0-10-amd64 makes the Windows VM
 connect to the network again.

 Tested in Gnome Boxes on Debian Bookworm; used virt-manager as well to start
 the Windows VM to make sure it was not a Gnome Boxes issue.

 Network configuration snippet of the Windows 10 VM:

 [libvirt <interface> XML stripped by the list archive]


You are not giving information on the underlying host, but can you
specify whether this is a regression from 6.1.38-2?



Apologies, the host is an AMD Ryzen 7 5800X 8-core processor. It is indeed
a regression from 6.1.38-2.




While not exactly the same, there were regressions from the AMD
Inception fixes; can you verify whether #1043585 is the same issue and
whether the upstream changes fix yours?


I installed 6.1.52-1 and the Windows VM has network again, so this 
seems solved.


Grazie mille!

Stijn



Regards,
Salvatore




Bug#1050239: linux-image-6.1.0-11-amd64 breaks usermode networking for Windows VM in Gnome Boxes

2023-08-22 Thread Stijn Segers

Package: linux-image-6.1.0-11-amd64
Version: 6.1.38-4

Using kernel linux-image-6.1.0-11-amd64, my Windows 10 VM loses network 
connectivity. Linux VMs still work (tested with an Xubuntu 23.04 and 
22.04 LTS VM). Rolling back to linux-image-6.1.0-10-amd64 makes the 
Windows VM connect to the network again.


Tested in Gnome Boxes on Debian Bookworm; used virt-manager as well to 
start the Windows VM to make sure it was not a Gnome Boxes issue.


Network configuration snippet of the Windows 10 VM:

[libvirt <interface> XML stripped by the list archive; only the fragment
function='0x0'/> survives]



Thank you

Stijn



Bug#1021317: salt-master: Fails to start with "zmq.error.ZMQError: Invalid argument"

2022-12-18 Thread Stijn Segers

Hi!

Applying the fix linked in https://github.com/saltstack/salt/pull/62119
resolves the ZMQ error, but unmasks another one: the salt.transport.base
module is missing. That module is not part of 3004.1 (or .2); it was
introduced with version 3005. Pulling the base.py file from
https://github.com/saltstack/salt/tree/v3005/salt/transport doesn't solve
it, since Salt will then start complaining about a missing
'ZeroMQPubServerChannel' attribute.


I've checked the Debian package and it contains a lot of patches, so it
looks like the Debian version is heavily modified (diffing the upstream
3004.1 zeromq.py code against the Debian package's zeromq.py confirms this).






dec 15 22:32:25 localhost salt-master[69709]: self.master.start()
dec 15 22:32:25 localhost salt-master[69709]: File "/usr/lib/python3/dist-packages/salt/master.py", line 711, in start
dec 15 22:32:25 localhost salt-master[69709]: chan = salt.transport.server.PubServerChannel.factory(opts)
dec 15 22:32:25 localhost salt-master[69709]: File "/usr/lib/python3/dist-packages/salt/transport/server.py", line 76, in factory
dec 15 22:32:25 localhost salt-master[69709]: import salt.transport.zeromq
dec 15 22:32:25 localhost salt-master[69709]: File "/usr/lib/python3/dist-packages/salt/transport/zeromq.py", line 102, in <module>
dec 15 22:32:25 localhost salt-master[69709]: class PublishClient(salt.transport.base.PublishClient):
dec 15 22:32:25 localhost salt-master[69709]: AttributeError: module 'salt.transport' has no attribute 'base'
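That AttributeError is the generic Python symptom of touching a package submodule that nothing has imported yet, which is also why dropping base.py into the tree is not sufficient on its own. A minimal standalone illustration (hypothetical package name, nothing Salt-specific):

```python
import os
import sys
import tempfile

# A submodule only becomes an attribute of its package once something
# imports it, even if the .py file is present on disk.
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "transportpkg")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
open(os.path.join(pkg_dir, "base.py"), "w").close()  # present, never imported

sys.path.insert(0, tmp)
import transportpkg

try:
    transportpkg.base          # AttributeError: nothing has imported it yet
    found_before_import = True
except AttributeError:
    found_before_import = False

import transportpkg.base       # the explicit import binds the attribute
found_after_import = hasattr(transportpkg, "base")

print(found_before_import, found_after_import)  # False True
```

In Salt's case the 3004.x zeromq.py simply never expects a base submodule, so the backported 3005 files reference names the rest of the 3004 tree does not provide.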


Cheers



Bug#801084: Bluez-firmware package on Buster does not fix this.

2020-01-19 Thread Stijn Segers

Hi,

I'd like to add that installing the bluez-firmware package on an up-to-date
Buster installation (10.2) does not fix it, just as Malvin points out. My
hardware is looking for an A1 firmware instead of A0, but the issue is the
same.


Dmesg:

$ sudo dmesg | grep BCM20702A1
[4.457749] Bluetooth: hci0: BCM20702A1 (001.002.014) build
[4.458102] bluetooth hci0: firmware: failed to load brcm/BCM20702A1-0a5c-21e8.hcd (-2)
[4.458192] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-0a5c-21e8.hcd failed with error -2
[4.458194] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-0a5c-21e8.hcd not found



bluez-firmware package contents:

$ dpkg -L bluez-firmware
/lib
/lib/firmware
/lib/firmware/BCM2033-FW.bin
/lib/firmware/BCM2033-MD.hex
/lib/firmware/STLC2500_R4_00_03.ptc
/lib/firmware/STLC2500_R4_00_06.ssf
/lib/firmware/STLC2500_R4_02_02_WLAN.ssf
/lib/firmware/STLC2500_R4_02_04.ptc
/usr
/usr/share
/usr/share/doc
/usr/share/doc/bluez-firmware
/usr/share/doc/bluez-firmware/BCM-LEGAL.txt
/usr/share/doc/bluez-firmware/README.Debian
/usr/share/doc/bluez-firmware/changelog.Debian.gz
/usr/share/doc/bluez-firmware/changelog.gz
/usr/share/doc/bluez-firmware/copyright


Following the instructions on
http://plugable.com/2014/06/23/plugable-usb-bluetooth-adapter-solving-hfphsp-profile-issues-on-linux
fixed this issue for me as well.


Thank you

Stijn



Bug#918227: xtables-addons-common: GeoIPCountryCSV.zip ERROR 404: Not Found

2019-06-22 Thread Stijn Segers
Hi,

I bumped into this issue myself. I have been able to use the newer 
GeoIP2-database [1] by updating manually:

$ cd /tmp/ && wget 
https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip
$ sudo /usr/lib/xtables-addons/xt_geoip_build -D /usr/share/xt_geoip/ *csv

Could the package be updated? It's missing from Buster but somehow available in 
Sid.

Thank you!

Stijn

[1] https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip

Bug#725884: hdparm: Standby (spindown) timeout option (-S) has no effect

2019-03-10 Thread Stijn Segers
Hi,

I recently experienced the same issue here. Everything was fine until Debian
Stretch, but I recently upgraded to Debian Buster (pre-release). I previously
just set the spindown time and that used to work fine. hdparm -Y on the devices
(using the plain /dev/sd? paths) still worked, but automatic spindown was
somehow broken. I tried enabling APM (setting it to 127), but that didn't
change a thing. I then switched to /dev/disk/by-id/ paths; that didn't work
either.

What I have noticed, however, is that spindown_time (-S) settings up to 59
work; anything from 60 (5 minutes) and up won't. Hope this helps.
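For reference, the -S value is an encoded scale rather than a number of seconds or minutes. A small sketch of the mapping as documented in hdparm(8) (an illustrative helper of my own, not part of hdparm):

```python
def hdparm_standby_timeout(value: int):
    """Translate an `hdparm -S` value into a timeout in seconds,
    per the encoding described in hdparm(8)."""
    if value == 0:
        return None                      # standby timeouts disabled
    if 1 <= value <= 240:
        return value * 5                 # multiples of 5 seconds (5 s .. 20 min)
    if 241 <= value <= 251:
        return (value - 240) * 30 * 60   # units of 30 minutes (30 min .. 5.5 h)
    if value == 252:
        return 21 * 60                   # special case: 21 minutes
    if value == 255:
        return 21 * 60 + 15              # special case: 21 minutes 15 seconds
    raise ValueError(f"vendor-defined or unsupported -S value: {value}")

# The boundary from the report: 59 -> 295 s, 60 -> exactly 5 minutes.
print(hdparm_standby_timeout(59), hdparm_standby_timeout(60))
```

Note that -S interacts with APM: -B levels of 128 and above do not permit spindown at all, which is why setting APM to 127 (the highest level that still allows it) was a sensible thing to try.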

Cheers

Stijn

Bug#919231: Salt-master unable to access directories

2019-02-07 Thread Stijn Segers
Thanks, that workaround indeed fixes it.

Sent from ProtonMail Mobile

 Original message 
On 6 Feb 2019 19:19, Benjamin Drung wrote:

> reassign 919231 systemd 240-5
> retitle 919231 CacheDirectory/StateDirectory does not change owner/group
> thanks
>
> Hi Stijn,
>
> your bug description was enough for me to reproduce this misbehavior
> and tracked it down to systemd not behaving like the documentation
> describes:
>
> StateDirectory=, CacheDirectory=
> Except in case of ConfigurationDirectory=, the innermost specified
> directories will be owned by the user and group specified in User=
> and Group=. If the specified directories already exist and their
> owning user or group do not match the configured ones, all files
> and directories below the specified directories as well as the
> directories themselves will have their file ownership recursively
> changed to match what is configured. As an optimization, if the
> specified directories are already owned by the right user and
> group, files and directories below of them are left as-is, even
> if they do not match what is requested.
>
> The salt-master systemd service is configured to use
> /var/lib/salt/pki/master and /var/cache/salt/master as state and cache
> directory. salt should change the ownership, but it does not. Steps to
> reproduce:
>
> Take a minimal Debian 9 installation and:
>
> ```
> root@debian:~# apt install salt-master
> root@debian:~# sed -i 's/stretch/buster/g' /etc/apt/sources.list
> root@debian:~# apt upgrade
> [...]
> Setting up salt-master (2018.3.3+dfsg1-2) ...
> Installing new version of config file /etc/salt/master ...
> Job for salt-master.service failed because the control process exited
> with error code.
> See "systemctl status salt-master.service" and "journalctl -xe" for
> details.
> invoke-rc.d: initscript salt-master, action "restart" failed.
> ● salt-master.service - The Salt Master Server
> Loaded: loaded (/lib/systemd/system/salt-master.service; enabled;
> vendor preset: enabled)
> Active: failed (Result: exit-code) since Wed 2019-02-06 16:16:37
> UTC; 8ms ago
> Docs: man:salt-master(1)
> file:///usr/share/doc/salt/html/contents.html
> https://docs.saltstack.com/en/latest/contents.html
> Process: 31417 ExecStart=/usr/bin/salt-master (code=exited,
> status=13)
> Main PID: 31417 (code=exited, status=13)
>
> Feb 06 16:16:37 debian systemd[1]: Starting The Salt Master Server...
> Feb 06 16:16:37 debian salt-master[31417]: Failed to create directory
> path "/var/lib/salt/pki/master/minions" - [Errno 13] Permission denied:
> '/var/lib/salt/pki/master/minions'
> Feb 06 16:16:37 debian systemd[1]: salt-master.service: Main process
> exited, code=exited, status=13/n/a
> Feb 06 16:16:37 debian systemd[1]: salt-master.service: Failed with
> result 'exit-code'.
> Feb 06 16:16:37 debian systemd[1]: Failed to start The Salt Master
> Server.
> dpkg: error processing package salt-master (--configure):
> installed salt-master package post-installation script subprocess
> returned error exit status 1
> [...]
> ```
>
> Instead of doing an upgrade test, you can just do the test on testing
> by stopping salt-master, changing the permission to root and starting
> salt-master.
>
> ```
> root@debian:~# systemctl cat salt-master.service
> # /lib/systemd/system/salt-master.service
> [Unit]
> Description=The Salt Master Server
> Documentation=man:salt-master(1)
> file:///usr/share/doc/salt/html/contents.html
> https://docs.saltstack.com/en/latest/contents.html
> After=network.target
>
> [Service]
> LimitNOFILE=10
> Type=notify
> NotifyAccess=all
> ExecStart=/usr/bin/salt-master
> User=salt
> Group=salt
> CacheDirectory=salt/master
> RuntimeDirectory=salt
> StateDirectory=salt/pki/master
>
> [Install]
> WantedBy=multi-user.target
> root@debian:~# ls -ld /var/lib/salt /var/lib/salt/pki
> /var/lib/salt/pki/master
> drwxr-xr-x 3 salt salt 4096 Feb 6 16:16 /var/lib/salt
> drwxr-xr-x 3 root root 4096 Feb 6 16:16 /var/lib/salt/pki
> drwx------ 7 root root 4096 Feb 6 16:10 /var/lib/salt/pki/master
> root@debian:~# ls -ld /var/cache/salt /var/cache/salt/master
> drwxr-xr-x 3 root root 4096 Feb 6 16:10 /var/cache/salt
> drwxr-xr-x 8 root root 4096 Feb 6 16:11 /var/cache/salt/master
> root@debian:~# dpkg -l | grep systemd | sed 's/ \+amd64 .*$//'
> ii libnss-systemd:amd64 240-5
> ii libpam-systemd:amd64 240-5
> ii libsystemd0:amd64 240-5
> ii python-systemd 234-2+b1
> ii python3-systemd 234-2+b1
> ii systemd 240-5
> ii systemd-sysv 240-5
> ```
>
> The workaround is to manually change the owner/group to salt:
>
> root@debian:~# chown -R salt:salt /var/lib/salt/pki/master 
> /var/cache/salt/master
> root@debian:~# systemctl start salt-master
>
> --
> Benjamin Drung
> System Developer
> Debian & Ubuntu Developer
>
> 1&1 IONOS Cloud GmbH | Greifswalder Str. 207 | 10405 Berlin | Germany
> E-mail: benjamin.dr...@cloud.ionos.com | Web: www.ionos.de
>
> Head Office: Berlin, Germany
> District Court Berlin Charlottenburg, 

Bug#919231: Salt-master unable to access directories

2019-02-02 Thread Stijn Segers
Hi Benjamin,

Can you tell me what info you need? Anything I should check?

Salt-master and systemd package info:
$ apt policy systemd
systemd:
  Installed: 240-4
  Candidate: 240-4
  Version table:
 240-5 50
 50 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages
*** 240-4 450
450 http://cdn-fastly.deb.debian.org/debian buster/main amd64 Packages
450 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages
100 /var/lib/dpkg/status
$ apt policy salt-master
salt-master:
  Installed: 2018.3.3+dfsg1-2
  Candidate: 2018.3.3+dfsg1-2
  Version table:
*** 2018.3.3+dfsg1-2 450
450 http://cdn-fastly.deb.debian.org/debian buster/main amd64 Packages
450 http://cdn-fastly.deb.debian.org/debian testing/main amd64 Packages
 50 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages
100 /var/lib/dpkg/status

Bug#919231: salt-master: Upgrade Stretch -> Buster: permission denied on certain files/directories

2019-01-13 Thread Stijn Segers
Package: salt-master
Version: 2018.3.3+dfsg1-2
Severity: important

Dear Maintainer,

Upgrading salt-master from its Stretch version to Buster (whole system was
upgraded) breaks the Salt master.

Symptoms:

E: Sub-process /usr/bin/dpkg returned an error code (1)

[...]

Job for salt-master.service failed because the control process exited with error code.
See "systemctl status salt-master.service" and "journalctl -xe" for details.
invoke-rc.d: initscript salt-master, action "restart" failed.
● salt-master.service - The Salt Master Server
   Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2019-01-13 22:43:04 CET; 6ms ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltstack.com/en/latest/contents.html
  Process: 14194 ExecStart=/usr/bin/salt-master (code=exited, status=1/FAILURE)
 Main PID: 14194 (code=exited, status=1/FAILURE)

jan 13 22:43:04 icarus salt-master[14194]:   File "/usr/lib/python3/dist-packages/salt/daemons/masterapi.py", line 237, in access_keys
jan 13 22:43:04 icarus salt-master[14194]: key = mk_key(opts, user)
jan 13 22:43:04 icarus salt-master[14194]:   File "/usr/lib/python3/dist-packages/salt/daemons/masterapi.py", line 206, in mk_key
jan 13 22:43:04 icarus salt-master[14194]: with salt.utils.files.fopen(keyfile, 'w+') as fp_:
jan 13 22:43:04 icarus salt-master[14194]:   File "/usr/lib/python3/dist-packages/salt/utils/files.py", line 387, in fopen
jan 13 22:43:04 icarus salt-master[14194]: f_handle = open(*args, **kwargs)  # pylint: disable=resource-leakage
jan 13 22:43:04 icarus salt-master[14194]: PermissionError: [Errno 13] Permission denied: '/var/cache/salt/master/.salt_key'
jan 13 22:43:04 icarus systemd[1]: salt-master.service: Main process exited, code=exited, status=1/FAILURE
jan 13 22:43:04 icarus systemd[1]: salt-master.service: Failed with result 'exit-code'.
jan 13 22:43:04 icarus systemd[1]: Failed to start The Salt Master Server.


It turns out renaming /var/cache/salt works around this: a new /var/cache/salt
directory gets created and the .salt_key gets generated (it does not exist on
a Stretch installation; there is a .root_key, though).

After overwriting the contents of the new /var/cache/salt/ directory with what
was in the old one (and keeping the .salt_key), the Salt service starts, but
still seems unable to access (existing) directories:

jan 13 22:48:37 icarus salt-master[16017]: Traceback (most recent call last):
jan 13 22:48:37 icarus salt-master[16017]:   File "/usr/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
jan 13 22:48:37 icarus salt-master[16017]: self.run()
jan 13 22:48:37 icarus salt-master[16017]:   File "/usr/lib/python3/dist-packages/salt/utils/process.py", line 750, in _run
jan 13 22:48:37 icarus salt-master[16017]: return self._original_run()
jan 13 22:48:37 icarus salt-master[16017]:   File "/usr/lib/python3/dist-packages/salt/master.py", line 234, in run
jan 13 22:48:37 icarus salt-master[16017]: salt.utils.verify.check_max_open_files(self.opts)
jan 13 22:48:37 icarus salt-master[16017]:   File "/usr/lib/python3/dist-packages/salt/utils/verify.py", line 429, in check_max_open_files
jan 13 22:48:37 icarus salt-master[16017]: accepted_count = len(os.listdir(accepted_keys_dir))
jan 13 22:48:37 icarus salt-master[16017]: PermissionError: [Errno 13] Permission denied: '/var/lib/salt/pki/master/minions'


This directory is 700, but when I chmod it to 755 (which I suppose is bad
practice; I presume it's 700 for a valid reason) and restart
the Salt service, the permissions are reset to 700:

$ ls -lh /var/lib/salt/pki/master/ | grep minions
drwx------ 2  755 root 4,0K dec 29 16:21 minions

Let me know if you need more information. This was a clean upgrade from Stretch
(no bits and pieces).

Thank you

Stijn Segers



-- System Information:
Debian Release: buster/sid
  APT prefers testing
  APT policy: (450, 'testing'), (50, 'unstable')
Architecture: amd64 (x86_64)

Kernel: Linux 4.19.0-1-amd64 (SMP w/4 CPU cores)
Locale: LANG=nl_BE.UTF-8, LC_CTYPE=nl_BE.UTF-8 (charmap=UTF-8), 
LANGUAGE=nl_BE.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages salt-master depends on:
ii  adduser  3.118
ii  lsb-base 10.2018112800
ii  python3  3.7.1-3
ii  python3-crypto   2.6.1-9+b1
ii  python3-systemd  234-2+b1
ii  python3-zmq  17.1.2-1
ii  salt-common  2018.3.3+dfsg1-2

Versions of packages salt-master recommends:
ii  python3-pygit2  0.27.3-1

salt-master suggests no packages.

-- Configuration Files:
/etc/salt/master changed [not included]

-- no debconf information


Bug#851513: Build failing

2017-01-28 Thread Stijn Segers
I applied all four patches to the zfs-dkms 0.6.5.8-3 package, but compiling 
fails (SPL built fine after I patched it):

CC [M] /var/lib/dkms/zfs/0.6.5.8/build/module/zfs/zpl_export.o
/var/lib/dkms/zfs/0.6.5.8/build/module/zfs/zpl_ctldir.c:424:13: error: initialization from incompatible pointer type [-Werror=incompatible-pointer-types]
.rename = zpl_snapdir_rename,
^~
/var/lib/dkms/zfs/0.6.5.8/build/module/zfs/zpl_ctldir.c:424:13: note: (near initialization for ‘zpl_ops_snapdir.rename’)
cc1: some warnings being treated as errors
/usr/src/linux-headers-4.9.0-1-common/scripts/Makefile.build:298: recipe for target '/var/lib/dkms/zfs/0.6.5.8/build/module/zfs/zpl_ctldir.o' failed
make[7]: *** [/var/lib/dkms/zfs/0.6.5.8/build/module/zfs/zpl_ctldir.o] Error 1

These are stretch installs as well (have three with ZFS, all the same error).

Thanks

Stijn

Bug#847018: zfs-dkms: fails to build against kernel version 4.8.0-2-amd64

2016-12-27 Thread Stijn Segers

Hi,

The only modification needed is actually in zpl_inode.c. You can pull in
this patch [1], apply it in /usr/src/zfs-0.6.5.8/module/zfs, and then run
$ sudo dpkg-reconfigure zfs-dkms.

That should fix it in the meantime.

Cheers

Stijn

[1]: http://ix.io/1Ot5



Bug#836972: All gnome packages should be linked by version dependencies

2016-09-12 Thread Stijn Segers
I've been bitten by this bug as well; I can confirm that upgrading to the
3.21.9x packages from unstable fixes the backlight control / "no info on
screen found" problems.


Cheers

Stijn

On Sat, 10 Sep 2016 21:53:36 +0200 Petr Pulc  wrote:
> Just to make things clear, at least the core packages should be all
> version-linked:
>
> gnome-color-manager
> gnome-control-center
> gnome-settings-daemon
> gnome-shell
>
> Otherwise, key components such as backlight dimming, will not work.



Bug#772628: Similar issue

2016-05-07 Thread Stijn Segers
I have been bitten by this too; I run Debian on a BeagleBone Black with 2 GB
eMMC storage, and it would be really neat if I could save some space by using
compressed modules.

In light of all the ARM and other non-x86 devices Debian supports nowadays,
this is a valid issue. Since implementing support is pretty trivial, I don't
see why this is being ignored.

Thanks for making this happen!

Stijn

Bug#815125: Boot failure with Debian linux 4.4.2 package

2016-03-08 Thread Stijn Segers

Hi guys,

I can confirm the latest kernel package works:

linux-image-4.4.0-1-amd64:amd64 4.4.4-1

Boots fine here.

Thank you!

Stijn Segers




Bug#815125: Boot failure with Debian linux 4.4.2 package

2016-02-28 Thread Stijn Segers

Hi,

I would like to add that this bug bit me too. I am on a Dell XPS 13 9350
(Skylake, late-2015 model). So far, all 4.4.0 kernel packages from
Unstable hang at boot (i.e. 4.4.2-1 through -3). Booting in UEFI mode,
CSM disabled.

I had built my own kernel based on a 4.4.0 config from an earlier
Experimental build (RC8) and updated it on every patch-level release;
it still works fine (on 4.4.3 now).

The efi=noruntime boot argument does the trick for me; the 4.4.2 test
package Ben uploaded, however, still hangs, just like the 4.5-rc4
package from Experimental.
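For anyone else needing it, the efi=noruntime workaround can be made persistent through GRUB on a stock Debian install (a sketch; your existing GRUB_CMDLINE_LINUX_DEFAULT contents may differ from the "quiet" default shown here):

```shell
# /etc/default/grub -- append efi=noruntime to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet efi=noruntime"
```

Then regenerate the boot configuration with `sudo update-grub` and reboot.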


Can test/provide more info if needed.

Cheers

Stijn Segers

On Tue, 23 Feb 2016 20:17:03 +0100 Alexis Murzeau <amub...@gmail.com> 
wrote:

> Hi,
>
> I have the same traceback as Zdravko in Message #121 (NULL pointer
> dereference at RIP=0x81063682, __change_page_attr_set_clr+0x242).

>
> If I add "efi=old_map" parameter to kernel cmdline, the kernel boots
> fine.
>
> Also, this might help Norbert to have a traceback printed: using
> "quiet earlyprintk=efi,keep" kernel cmdline options will print only
> the traceback (so it should be faster to get the kernel to the crash
> traceback, if the cause is really a crash).
>
>
> After compiling several kernels, I narrowed down the crash to the
> patch "x86-efi-build-our-own-page-table-structures.patch".
>
> Without this patch, the kernel boots fine (dmesg output in attachment
> dmesg_linux-4.4.2_[...].txt)
>
> With this patch, I get the crash in efi_call (photo of
> "earlyprintk=efi,keep" output in attachment
> traceback_linux-4.4.2_with_patch_[...].jpg).
>
> I also added 4 printk to add information before the crash when
> calling efi_call_phys with efi_phys.set_virtual_address_map (see also
> "additionnal_printk.diff" in attachment). (Not sure if it can help)
>
> When I added these printk calls, the traceback stops at efi_call
> (__change_page_attr_set_clr isn't in the traceback anymore), but RIP
> is still the same as without these changes.
>
> See also the traceback I get in attachment
> traceback_linux-4.4.2-3_unmodified.jpg with the current 4.4 kernel
> (version 4.4.2-3 unmodified)
>
> Also, let me know if a new bug should be opened for this.
>
> Thanks,
> Alexis Murzeau