Re: Raspbian: After update from buster to bookworm, X11Forwarding in ssh connection stopped working

2023-08-09 Thread B.M.
On Montag, 7. August 2023 16:33:26 CEST you wrote:
> On Montag, 7. August 2023 15:19:49 CEST you wrote:
> > Dear all,
> >
> > I just dist-upgraded my Raspberry Pi from buster to bookworm, and while
> >
> > ssh -Y...
> >
> > worked like a charm in before the update and I could start any X11 program
> > over ssh, it doesn't work anymore since then. Executing
> >
> > ssh -Y -C -l myUser otherHostname.local -v
> >
> > I get
> >
> > ...
> > debug1: Requesting X11 forwarding with authentication spoofing.
> > debug1: Sending environment.
> > debug1: channel 0: setting env LANG = "en_US.UTF-8"
> > debug1: channel 0: setting env LC_MONETARY = "de_CH.UTF-8"
> > debug1: channel 0: setting env LC_MEASUREMENT = "de_CH.UTF-8"
> > debug1: channel 0: setting env LC_TIME = "de_CH.UTF-8"
> > debug1: channel 0: setting env LC_ALL = ""
> > debug1: channel 0: setting env LC_COLLATE = "C"
> > debug1: channel 0: setting env LC_NUMERIC = "de_CH.UTF-8"
> > X11 forwarding request failed on channel 0
> > ...
> >
> > From /etc/ssh/sshd_config on the server:
> >
> > AddressFamily inet
> > X11Forwarding yes
> > X11UseLocalhost no
> >
> > Interestingly, when connecting for the first time I got a warning:
> > WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
> > and I did just
> > ssh-keygen -f "/home/xxx/.ssh/known_hosts" -R "otherHostname"
> > which I did.
> >
> > xauth is installed on the server.
> >
> > What can be the reason, that I cannot use X11 forwarding anymore?
> >
> > Thank you.
> >
> > Best,
> > Bernd
>
> Sorry, correction: I didn't upgrade from buster to bookworm but from
> bullseye.

Just for the record: I was able to solve the problem, and the cause was
somewhere else entirely.

It's a headless Raspberry Pi running Raspbian with full SD card encryption.
Because of that, dropbear is used as a small SSH server during boot (built
into the initramfs); later the regular OpenSSH server takes over. After the
update, dropbear was still running and my connections were going to
dropbear, not sshd. Disabling dropbear therefore solved the problem - my
sshd configuration was perfectly fine.
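For anyone hitting the same thing: one way to see which server answered is to look at the SSH protocol banner, since dropbear and OpenSSH identify themselves there. A minimal sketch (the banner strings are typical examples, not captured from my machine):

```shell
#!/bin/sh
# Classify an SSH protocol banner; dropbear's initramfs server and the
# regular OpenSSH daemon announce different software strings.
classify_banner() {
    case "$1" in
        SSH-2.0-dropbear*) echo dropbear ;;
        SSH-2.0-OpenSSH*)  echo openssh ;;
        *)                 echo unknown ;;
    esac
}

# On a live system you would fetch the banner with something like:
#   nc otherHostname.local 22 | head -n 1
classify_banner "SSH-2.0-dropbear_2022.83"
classify_banner "SSH-2.0-OpenSSH_9.2p1"
```

If the banner says dropbear after the system has fully booted, you're talking to the wrong daemon.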




Re: Raspbian: After update from buster to bookworm, X11Forwarding in ssh connection stopped working

2023-08-07 Thread B.M.
On Montag, 7. August 2023 15:19:49 CEST you wrote:
> Dear all,
>
> I just dist-upgraded my Raspberry Pi from buster to bookworm, and while
>
> ssh -Y...
>
> worked like a charm in before the update and I could start any X11 program
> over ssh, it doesn't work anymore since then. Executing
>
> ssh -Y -C -l myUser otherHostname.local -v
>
> I get
>
> ...
> debug1: Requesting X11 forwarding with authentication spoofing.
> debug1: Sending environment.
> debug1: channel 0: setting env LANG = "en_US.UTF-8"
> debug1: channel 0: setting env LC_MONETARY = "de_CH.UTF-8"
> debug1: channel 0: setting env LC_MEASUREMENT = "de_CH.UTF-8"
> debug1: channel 0: setting env LC_TIME = "de_CH.UTF-8"
> debug1: channel 0: setting env LC_ALL = ""
> debug1: channel 0: setting env LC_COLLATE = "C"
> debug1: channel 0: setting env LC_NUMERIC = "de_CH.UTF-8"
> X11 forwarding request failed on channel 0
> ...
>
> From /etc/ssh/sshd_config on the server:
>
> AddressFamily inet
> X11Forwarding yes
> X11UseLocalhost no
>
> Interestingly, when connecting for the first time I got a warning:
> WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
> and I did just
> ssh-keygen -f "/home/xxx/.ssh/known_hosts" -R "otherHostname"
> which I did.
>
> xauth is installed on the server.
>
> What can be the reason, that I cannot use X11 forwarding anymore?
>
> Thank you.
>
> Best,
> Bernd

Sorry, correction: I didn't upgrade from buster to bookworm but from bullseye.




Raspbian: After update from buster to bookworm, X11Forwarding in ssh connection stopped working

2023-08-07 Thread B.M.
Dear all,

I just dist-upgraded my Raspberry Pi from buster to bookworm, and while

ssh -Y...

worked like a charm before the update and I could start any X11 program
over ssh, it no longer works. Executing

ssh -Y -C -l myUser otherHostname.local -v

I get

...
debug1: Requesting X11 forwarding with authentication spoofing.
debug1: Sending environment.
debug1: channel 0: setting env LANG = "en_US.UTF-8"
debug1: channel 0: setting env LC_MONETARY = "de_CH.UTF-8"
debug1: channel 0: setting env LC_MEASUREMENT = "de_CH.UTF-8"
debug1: channel 0: setting env LC_TIME = "de_CH.UTF-8"
debug1: channel 0: setting env LC_ALL = ""
debug1: channel 0: setting env LC_COLLATE = "C"
debug1: channel 0: setting env LC_NUMERIC = "de_CH.UTF-8"
X11 forwarding request failed on channel 0
...

From /etc/ssh/sshd_config on the server:

AddressFamily inet
X11Forwarding yes
X11UseLocalhost no

Interestingly, when connecting for the first time I got a warning:
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
which I resolved with:
ssh-keygen -f "/home/xxx/.ssh/known_hosts" -R "otherHostname"

xauth is installed on the server.

What could be the reason that I cannot use X11 forwarding anymore?

Thank you.

Best,
Bernd




Re: How: Require root password instead of user password for GUI programs

2023-04-07 Thread B.M.
On Thu, 2023-04-06 at 11:04 -0400, Jeffrey Walton wrote:
> On Thu, Apr 6, 2023 at 8:36 AM B.M.  wrote:
> > 
> > I configured my system such that some users are in group sudo, but
> > they are
> > asked for the root password instead of just their user password by
> > creating a
> > file within /etc/sudoers.d/ with the line:
> > 
> >  Defaults rootpw
> > 
> > This is working just fine, but for graphical applications it
> > doesn't work: e.g.
> > to start synaptic I get a password prompt requiring my user
> > password, not the
> > root password.
> > 
> > How can I configure my system such that entering the root password
> > is also
> > required in these cases?
> > 
> > (Maybe there is something with polkit, but I couldn't figure out
> > myself...)
> 
> May be helpful:
> https://askubuntu.com/questions/1199006/how-to-let-polkit-request-root-password-instead-users-password
> 
> And possibly
> https://askubuntu.com/questions/1246661/defaults-rootpw-for-the-gui-password-prompt
> 
> Jeff

Thank you for your ideas.

In fact those solutions seem to be a bit outdated - I found out that the
following is needed, as documented in the Arch Wiki.

PolicyKit was replaced by polkit (at least in current Debian Testing), and
the "old" solution of setting AdminIdentities no longer works. Instead one
has to add a file /etc/polkit-1/rules.d/50-default.rules as follows:

polkit.addAdminRule(function(action, subject) {
    return ["unix-user:0"];
});

in order to require root credentials for admin tasks (if sudo is
installed).
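A sketch of installing that rule file. The path would be /etc/polkit-1/rules.d on a real system (as root); the snippet below writes to a temp directory instead so it can run unprivileged:

```shell
#!/bin/sh
# Write the admin rule shown above. RULES_DIR would be
# /etc/polkit-1/rules.d on a real system; a temp dir keeps this sketch
# runnable without root.
RULES_DIR="${RULES_DIR:-$(mktemp -d)}"
cat > "$RULES_DIR/50-default.rules" <<'EOF'
polkit.addAdminRule(function(action, subject) {
    return ["unix-user:0"];
});
EOF
echo "wrote $RULES_DIR/50-default.rules"
```

polkit picks rules up by filename order, so the 50- prefix matters if other rules files exist.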

I hope someone finds this useful.

Best,
Bernd



How: Require root password instead of user password for GUI programs

2023-04-06 Thread B.M.
Hi,

I configured my system such that some users are in group sudo, but they are
asked for the root password instead of just their user password by creating a
file within /etc/sudoers.d/ with the line:

 Defaults rootpw

This is working just fine, but for graphical applications it doesn't work: e.g.
to start synaptic I get a password prompt requiring my user password, not the
root password.
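As an aside, a cautious way to add such a sudoers drop-in is to validate it with visudo first, so a typo can't lock sudo out (the file name rootpw is just an example):

```shell
# Validate the drop-in before installing it; visudo -c -f checks the
# syntax without touching the live configuration. Run as root.
printf 'Defaults rootpw\n' > /tmp/rootpw
visudo -c -f /tmp/rootpw && install -m 0440 /tmp/rootpw /etc/sudoers.d/rootpw
```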

How can I configure my system such that entering the root password is also
required in these cases?

(Maybe there is something with polkit, but I couldn't figure it out myself...)

Thank you.

Have a nice day,
Bernd




Re: Higher power consumption in Debian than in Ubuntu - ASPM disabled instead of enabled for 2 modules (lspci), but why?

2022-12-21 Thread B.M.
On Mittwoch, 21. Dezember 2022 18:37:20 CET you wrote:
> Dear all,
>
> Comparing power consumption with powertop for a brand new Dell laptop I
> found out that Debian is consuming about 6 Watts while Ubuntu is consuming
> 3 Watts (idle). Comparing configs I just found out that lspci -vv reports
> ASPM enabled 6 x for Ubuntu and 4 x for Debian. The differences are:
>
> Ubuntu:
> Non-Volatile memory controller
> ASPM L1 Enabled
> Kernel driver in use: nvme
>
> PCI bridge: Intel Corp 12th Gen
> ASPM L1 Enabled
> Kernel driver in use: pcieport
>
> Debian:
> Non-Volatile memory controller
> ASPM Disabled
> Kernel driver in use: nvme
>
> PCI bridge: Intel Corp 12th Gen
> ASPM Disabled
> Kernel driver in use: pcieport
>
> So the exact same driver is been used and I didn't find any config within
> /etc/ modprobe.d or /etc/modules or /etc/modules-load.d with respect to
> these modules.
>
> Do you have an idea why this happens (same computer, so BIOS settings should
> have no impact)?
>
> Thank you very much.
>
> Best regards,
> Bernd

OK, problem solved: with the boot parameter pcie_aspm=force I get the very
same 3 W with Debian as with Ubuntu :-)
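For anyone wanting to do the same: a sketch of adding the parameter to GRUB's default command line. It operates on a scratch copy; on a real system you would edit /etc/default/grub and run update-grub afterwards. Note that pcie_aspm=force can misbehave on hardware that doesn't properly advertise ASPM, so use it with care:

```shell
#!/bin/sh
# Append pcie_aspm=force to GRUB_CMDLINE_LINUX_DEFAULT. A scratch file
# stands in for /etc/default/grub so the sketch runs anywhere.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$f"
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 pcie_aspm=force"/' "$f"
cat "$f"
# On the real file, follow up with: update-grub
```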

Best,
Bernd




Higher power consumption in Debian than in Ubuntu - ASPM disabled instead of enabled for 2 modules (lspci), but why?

2022-12-21 Thread B.M.
Dear all,

Comparing power consumption with powertop on a brand new Dell laptop, I
found that Debian consumes about 6 Watts while Ubuntu consumes 3 Watts
(idle). Comparing configs, I found that lspci -vv reports ASPM enabled
6 times for Ubuntu but only 4 times for Debian. The differences are:

Ubuntu:
Non-Volatile memory controller
ASPM L1 Enabled
Kernel driver in use: nvme

PCI bridge: Intel Corp 12th Gen
ASPM L1 Enabled
Kernel driver in use: pcieport

Debian:
Non-Volatile memory controller
ASPM Disabled
Kernel driver in use: nvme

PCI bridge: Intel Corp 12th Gen
ASPM Disabled
Kernel driver in use: pcieport

So the exact same driver is being used, and I didn't find any config
within /etc/modprobe.d, /etc/modules or /etc/modules-load.d relating to
these modules.
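The per-device comparison above can be scripted. A sketch that counts ASPM states in lspci-style output - the sample text is inlined so it runs anywhere; on a real system you would pipe in `lspci -vv` (run as root to see the link capability details):

```shell
#!/bin/sh
# Count ASPM link states the way the Ubuntu/Debian comparison above was
# done by eye. The sample lines mimic `lspci -vv` output.
sample='Non-Volatile memory controller
LnkCtl: ASPM Disabled; RCB 64 bytes
PCI bridge: Intel Corp 12th Gen
LnkCtl: ASPM L1 Enabled; RCB 64 bytes'
enabled=$(printf '%s\n' "$sample" | grep -c 'ASPM L1 Enabled')
disabled=$(printf '%s\n' "$sample" | grep -c 'ASPM Disabled')
echo "enabled=$enabled disabled=$disabled"
```

Running the same count on both systems makes the difference visible at a glance.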

Do you have an idea why this happens (same computer, so BIOS settings should
have no impact)?

Thank you very much.

Best regards,
Bernd




Dell Precision 3570 - Debian instead of Ubuntu - follow-up: why is Debian power consumption so much higher?

2022-12-19 Thread B.M.
Hi,

I've got a brand new Dell Precision 3570 laptop with Ubuntu 20.04 LTS
pre-installed. After figuring out the recovery partition and tools, I
installed Debian Testing (Bookworm) side-by-side (using a live medium
doesn't really work, because it's based on Stable, which doesn't yet
support the keyboard/touchpad well).

Based on powertop, the energy consumption of Ubuntu after booting - in a
Gnome-on-Wayland session running nothing but a terminal - is about 2.79 -
3.37 Watts, with an average of 3.08 W (over 15 measurements in a row).

In Debian Testing (Bookworm), also Gnome on Wayland, fresh boot, terminal
running powertop, I get about 5.0 Watts, so ~60% higher. This is after
installing tlp (which came preinstalled on Ubuntu) - before that it was
around 8 - 10 W.

On Debian I also compared the output of tlp-stat and could align some
settings afterwards (I added tlp config files and some boot parameters:
workqueue.power_efficient=1, i915.i915_enable_fbc=1 and
i915.i915_enable_dc=1). There doesn't seem to be much difference anymore,
but the higher power consumption remains. (There's only an SSD inside, no
spinning discs, and screen brightness is set to the minimum in both cases.
Bluetooth is deactivated in Gnome settings.)
Measuring in a TTY on Debian, after logging out of the Gnome session, I get
4.9 W as well.

Any ideas what I could do to get Debian to be as power efficient as Ubuntu?

Thank you very much.

Best,
Bernd
--- TLP 1.5.0 

+++ Configured Settings:
defaults.conf L0004: TLP_ENABLE="1"
defaults.conf L0005: TLP_WARN_LEVEL="3"
/etc/tlp.d/50-estar-default.conf L0038: TLP_PERSISTENT_DEFAULT="1"
defaults.conf L0007: DISK_IDLE_SECS_ON_AC="0"
defaults.conf L0008: DISK_IDLE_SECS_ON_BAT="2"
defaults.conf L0009: MAX_LOST_WORK_SECS_ON_AC="15"
defaults.conf L0010: MAX_LOST_WORK_SECS_ON_BAT="60"
defaults.conf L0011: CPU_ENERGY_PERF_POLICY_ON_AC="balance_performance"
defaults.conf L0012: CPU_ENERGY_PERF_POLICY_ON_BAT="balance_power"
defaults.conf L0013: SCHED_POWERSAVE_ON_AC="0"
defaults.conf L0014: SCHED_POWERSAVE_ON_BAT="1"
defaults.conf L0015: NMI_WATCHDOG="0"
/etc/tlp.d/50-estar-default.conf L0150: DISK_DEVICES="mmcblk0 nvme0n1 sda sdb sdc"
/etc/tlp.d/70-hard-disk-spin-down.conf L0002: DISK_APM_LEVEL_ON_AC="1 1 1 1 1"
/etc/tlp.d/70-hard-disk-spin-down.conf L0003: DISK_APM_LEVEL_ON_BAT="1 1 1 1 1"
defaults.conf L0019: DISK_APM_CLASS_DENYLIST="usb ieee1394"
defaults.conf L0020: DISK_IOSCHED="keep keep"
defaults.conf L0021: SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance"
defaults.conf L0022: SATA_LINKPWR_ON_BAT="med_power_with_dipm min_power"
defaults.conf L0023: AHCI_RUNTIME_PM_ON_AC="on"
defaults.conf L0024: AHCI_RUNTIME_PM_ON_BAT="auto"
defaults.conf L0025: AHCI_RUNTIME_PM_TIMEOUT="15"
defaults.conf L0026: PCIE_ASPM_ON_AC="default"
defaults.conf L0027: PCIE_ASPM_ON_BAT="default"
defaults.conf L0028: RADEON_DPM_PERF_LEVEL_ON_AC="auto"
defaults.conf L0029: RADEON_DPM_PERF_LEVEL_ON_BAT="auto"
defaults.conf L0030: RADEON_POWER_PROFILE_ON_AC="default"
defaults.conf L0031: RADEON_POWER_PROFILE_ON_BAT="default"
defaults.conf L0032: WIFI_PWR_ON_AC="off"
defaults.conf L0033: WIFI_PWR_ON_BAT="on"
/etc/tlp.d/50-estar-default.conf L0270: WOL_DISABLE="N"
defaults.conf L0035: SOUND_POWER_SAVE_ON_AC="1"
defaults.conf L0036: SOUND_POWER_SAVE_ON_BAT="1"
defaults.conf L0037: SOUND_POWER_SAVE_CONTROLLER="Y"
defaults.conf L0038: BAY_POWEROFF_ON_AC="0"
defaults.conf L0039: BAY_POWEROFF_ON_BAT="0"
defaults.conf L0040: BAY_DEVICE="sr0"
defaults.conf L0041: RUNTIME_PM_ON_AC="on"
defaults.conf L0042: RUNTIME_PM_ON_BAT="auto"
/etc/tlp.d/61-oem-blacklist.conf L0003: RUNTIME_PM_DRIVER_DENYLIST="igb igc"
defaults.conf L0044: USB_AUTOSUSPEND="1"
defaults.conf L0045: USB_EXCLUDE_AUDIO="1"
defaults.conf L0046: USB_EXCLUDE_BTUSB="0"
defaults.conf L0047: USB_EXCLUDE_PHONE="0"
defaults.conf L0048: USB_EXCLUDE_PRINTER="1"
defaults.conf L0049: USB_EXCLUDE_WWAN="0"
defaults.conf L0050: USB_AUTOSUSPEND_DISABLE_ON_SHUTDOWN="0"
defaults.conf L0051: RESTORE_DEVICE_STATE_ON_STARTUP="0"
defaults.conf L0052: RESTORE_THRESHOLDS_ON_BAT="0"
defaults.conf L0053: NATACPI_ENABLE="1"
defaults.conf L0054: TPACPI_ENABLE="1"
defaults.conf L0055: TPSMAPI_ENABLE="1"
/etc/tlp.d/50-estar-default.conf L0032: TLP_DEFAULT_MODE="BAT"
/etc/tlp.d/61-oem-blacklist.conf L0002: USB_DENYLIST="0bda:8153"
/etc/tlp.d/70-hard-disk-spin-down.conf L0005: DISK_SPINDOWN_TIMEOUT_ON_AC="1 1 1 1 1"
/etc/tlp.d/70-hard-disk-spin-down.conf L0006: DISK_SPINDOWN_TIMEOUT_ON_BAT="1 1 1 1 1"

+++ System Info
System = Dell Inc.  Precision 3570
BIOS   = 1.7.0
OS Release = Debian GNU/Linux bookworm/sid
Kernel = 6.0.0-6-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.0.12-1 (2022-12-09) x86_64
/proc/cmdline  = BOOT_IMAGE=/boot/vmlinuz-6.0.0-6-amd64 root=UUID=54338065-7093-4572-b1d8-a60452d250e6 ro workqueue.power_efficient=1 i915.i915_enable_fbc=1 quiet
Init system= systemd v252 (252.2-2)
Boot mode  = UEFI

+++ TLP Statu

Re: Dell Precision 3570 - Debian instead of Ubuntu

2022-12-16 Thread B.M.
On Fri, 2022-12-16 at 18:12 +0100, B.M. wrote:
> Hi,
> 
> The new laptop just arrived and I had a first look what the people
> did
> at Dell or Canonical:
> 
> After switching it on the first time, I was asked to enter
> / configure
> WLAN, username, password, hostname, keyboard layout, time zone. It
> also
> let me create a recovery USB stick. After a reboot I now could just
> use
> it. But of course I had a look behind the scenes...:
> 
> It came with Ubuntu 20.04.4 LTS and Gnome 3.36.8 running on X11.
> 
> The internal disk is formatted as
> - 891 MB EFI mounted at /boot/efi
> - 8.6 GB FAT not mounted, disks reports it as "Microsoft Reserved"
> - 503 GB Ext4 Root Partition
> 
> The USB stick I used as recovery medium got formatted as
> - 4 GB 0x00 (Bootable) ISO 9660 mounted at
> /media//DellRecoveryMedia
> - 4.1 MB EFI FAT not mounted
> - and some GB free space
> 
> After installing Synaptic I found out that there are some more
> Origins
> with installed packages as listed here:
> local/main (dell.archive.canonical.com):
>   oem-fix-misc-cnl-tlp-estar-conf, 5.0.3.4, maintainer
> commercial.engineering(at)canonical.com:
>     This package carrys agressive policy to pass energy-star, and
> also
> some blacklist for problematic devices
>   oem-somerville-factory-meta, 20.04ubuntu9, maintainer
> commercial.engineering(at)canonical.com:
>     It installs packages needed to support this hardware fully.
>   oem-somerville-factory-paras-35-meta, 20.04ubuntu3, maintainer
> commercial.engineering(at)canonical.com:
>     It installs packages needed to support this hardware fully.
> (factory)
>   oem-somerville-meta, 20.04ubuntu9, maintainer
> commercial.engineering(at)canonical.com:
>     This is a metapackage for Somerville platform. It installs
> packages
> needed to support this hardware fully.
>   oem-somerville-paras-35-meta, 20.04ubuntu3, maintainer
> commercial.engineering(at)canonical.com:
>     This is a metapackage for Somerville Paras-35 platform. It
> installs
> packages needed to support this hardware fully.
>   oem-somerville-partner-archive-keyring, 20.04ubuntu2, maintainer
> commercial.engineering(at)canonical.com:
>     Somerville project keyring.
> local/universe (dell.archive.canonical.com):
>   tlp, 1.3.1-2, maintainer ubuntu-devel-discuss(at)lists.ubuntu.com:
>     Save battery power on laptops
>     (Description)
>   tlp-rdw, 1.3.1-2, maintainer ubuntu-devel-
> discuss(at)lists.ubuntu.com:
>     Radio device wizard
>     (Description)
> focal/universe (dell.archive.canonical.com):
>   tlp, 1.3.1-2, maintainer ubuntu-devel-discuss(at)lists.ubuntu.com:
>   tlp-rdw, 1.3.1-2, maintainer ubuntu-devel-
> discuss(at)lists.ubuntu.com:
> stable/main (dl.google.com): 
>   google-chrome-stable is installed
> 
> Now my thoughts are:
> - Chrome not necessary...
> - tlp, tlp-rdw: also in Debian Testing (Bookworm), but with higher
> version number (1.5.0-1), but also with a different maintainer
> (raphael.hal...@gmail.com)
>     oem-fix-misc-cnl-tlp-estar-conf: does it make sense to keep this
> package?
>     oem-somerville-*: could make sense to keep...
> 
> --> Can I re-install these packages after installing Debian Testing
> by
> adding and enabling these Dell repositories?
> 
> Under /etc/apt/trusted.gpg.d/ there are some files that might be
> corresponding to these repos and maybe I should keep them:
>   ubuntu-keyring-2008-oem.gpg
>   ubuntu-keyring-2008-oem.key.gpg
>   ubuntu-keyring-2020-oem.gpg
>   
> and under /etc/apt/sources.list.d/ there are:
>   focal-oem.list
>   oem-somerville-paras-35-meta.list
>   
> But: as far as I remember, one should not mix Debian packages /repos
> with Ubuntu packages / repos, but I might work.
> 
> What do you think?
> 
> (Of course I could just keep it as it is, but I'd prefer having a
> Debian-only setup in our family across all devices.)
> 
> (Fun fact also mentioned: on the Dell website for the laptop there is
> a
> Q&A section where is stated that the laptop comes without any OS pre-
> installed but one could install Ubuntu on it while when asking their
> chat the answer was that it's not allowed to sell computers without
> any
> OS installed (at least here in Switzerland)...)
> 
> Have a nice day,
> Bernd
> 
> 
> PS: Please add me CC since I'm currently not subscribed to this list.
> Thanks.
> 

Hi again,

Looking a bit further, it seems that these packages contain no more than
three config files within /etc/tlp.d/, plus the tlp and tlp-rdw packages
themselves. Does someone have an idea what it means that these packages
have higher version numbers in Debian Testing but a different maintainer
(see my last mail)? Going back through the Debian changelog, there has
been no change of maintainer, so do I have to assume the packages are not
identical? Should/could I compare their config files?

Thank you and have a nice day,
Bernd



Re: Dell Precision 3570 - Debian instead of Ubuntu

2022-12-16 Thread B.M.
Hi,

The new laptop just arrived and I had a first look at what the people at
Dell or Canonical did:

After switching it on for the first time, I was asked to enter/configure
WLAN, username, password, hostname, keyboard layout and time zone. It also
let me create a recovery USB stick. After a reboot I could just use it.
But of course I had a look behind the scenes...:

It came with Ubuntu 20.04.4 LTS and Gnome 3.36.8 running on X11.

The internal disk is formatted as
- 891 MB EFI mounted at /boot/efi
- 8.6 GB FAT not mounted, disks reports it as "Microsoft Reserved"
- 503 GB Ext4 Root Partition

The USB stick I used as recovery medium got formatted as
- 4 GB 0x00 (Bootable) ISO 9660 mounted at
/media//DellRecoveryMedia
- 4.1 MB EFI FAT not mounted
- and some GB free space

After installing Synaptic I found that there are some additional Origins
with installed packages, as listed here:
local/main (dell.archive.canonical.com):
  oem-fix-misc-cnl-tlp-estar-conf, 5.0.3.4, maintainer
commercial.engineering(at)canonical.com:
This package carrys agressive policy to pass energy-star, and also
some blacklist for problematic devices
  oem-somerville-factory-meta, 20.04ubuntu9, maintainer
commercial.engineering(at)canonical.com:
It installs packages needed to support this hardware fully.
  oem-somerville-factory-paras-35-meta, 20.04ubuntu3, maintainer
commercial.engineering(at)canonical.com:
It installs packages needed to support this hardware fully.
(factory)
  oem-somerville-meta, 20.04ubuntu9, maintainer
commercial.engineering(at)canonical.com:
This is a metapackage for Somerville platform. It installs packages
needed to support this hardware fully.
  oem-somerville-paras-35-meta, 20.04ubuntu3, maintainer
commercial.engineering(at)canonical.com:
This is a metapackage for Somerville Paras-35 platform. It installs
packages needed to support this hardware fully.
  oem-somerville-partner-archive-keyring, 20.04ubuntu2, maintainer
commercial.engineering(at)canonical.com:
Somerville project keyring.
local/universe (dell.archive.canonical.com):
  tlp, 1.3.1-2, maintainer ubuntu-devel-discuss(at)lists.ubuntu.com:
Save battery power on laptops
(Description)
  tlp-rdw, 1.3.1-2, maintainer ubuntu-devel-
discuss(at)lists.ubuntu.com:
Radio device wizard
(Description)
focal/universe (dell.archive.canonical.com):
  tlp, 1.3.1-2, maintainer ubuntu-devel-discuss(at)lists.ubuntu.com:
  tlp-rdw, 1.3.1-2, maintainer ubuntu-devel-
discuss(at)lists.ubuntu.com:
stable/main (dl.google.com): 
  google-chrome-stable is installed

Now my thoughts are:
- Chrome not necessary...
- tlp, tlp-rdw: also in Debian Testing (Bookworm), but with a higher
version number (1.5.0-1) and a different maintainer
(raphael.hal...@gmail.com)
oem-fix-misc-cnl-tlp-estar-conf: does it make sense to keep this
package?
oem-somerville-*: could make sense to keep...

--> Can I re-install these packages after installing Debian Testing by
adding and enabling these Dell repositories?

Under /etc/apt/trusted.gpg.d/ there are some files that might be
corresponding to these repos and maybe I should keep them:
  ubuntu-keyring-2008-oem.gpg
  ubuntu-keyring-2008-oem.key.gpg
  ubuntu-keyring-2020-oem.gpg
  
and under /etc/apt/sources.list.d/ there are:
  focal-oem.list
  oem-somerville-paras-35-meta.list
  
But: as far as I remember, one should not mix Debian packages/repos
with Ubuntu packages/repos - though it might work.

What do you think?

(Of course I could just keep it as it is, but I'd prefer having a
Debian-only setup in our family across all devices.)

(Fun fact: on the Dell website for this laptop there is a Q&A section
stating that the laptop comes without any OS pre-installed but that one
could install Ubuntu on it - while their chat told me it's not allowed to
sell computers without any OS installed, at least here in Switzerland...)

Have a nice day,
Bernd


PS: Please add me CC since I'm currently not subscribed to this list.
Thanks.


On Mon, 2022-11-28 at 10:04 +0100, B.M. wrote:
> Hi,
> 
> I'm going to buy a Dell Precision 3570 laptop in the next couple of
> weeks. 
> Since it's a Build Your Own device, I can order it with Ubuntu 20.04
> LTS pre-
> installed instead of paying for an never used Win 11 :-)
> 
> Since all our other computers are happily running Debian, I'd like to
> replace 
> this Ubuntu by Debian Testing (later Bookworm). I've already decided
> to run it 
> on a single btrfs partition and learn something about subvolumes... I
> assume 
> the machine should work well - but who knows? How would you proceed?
> 
> a) leave it running Ubuntu forever
> b) replace Ubuntu by Debian, fiddling around issues if there are any
> later
> c) resize the partition, install Debian side-by-side, check than if
> anything 
> works as expected
> d) a

Dell Precision 3570 - Debian instead of Ubuntu

2022-11-28 Thread B.M.
Hi,

I'm going to buy a Dell Precision 3570 laptop in the next couple of weeks.
Since it's a Build Your Own device, I can order it with Ubuntu 20.04 LTS
pre-installed instead of paying for a never-used Win 11 :-)

Since all our other computers are happily running Debian, I'd like to replace
this Ubuntu by Debian Testing (later Bookworm). I've already decided to run it
on a single btrfs partition and learn something about subvolumes... I assume
the machine should work well - but who knows? How would you proceed?

a) leave it running Ubuntu forever
b) replace Ubuntu by Debian, fiddling around issues if there are any later
c) resize the partition, install Debian side-by-side, then check whether
everything works as expected
d) analyze the installed system (how?) to find out any special configs etc.
before replacing Ubuntu by Debian
e) other...

Thank you for your ideas.

Have a nice day,
Bernd

PS: Please cc me, since I'm not regularly subscribed to the list




Re: Network manager: activating VPN in GNOME remote session doesn't work, but in KDE remote session (xrdp)

2022-08-09 Thread B.M.
Hmm, there seems to be a difference between networkmanager in KDE and in Gnome:

Situation is, there are two users, A and B.

B defined this VPN connection and wants to start it.

If only B is logged in (directly, not remotely), she can start the VPN in both
KDE and Gnome.

If B is logged in remotely, she can start it only if it's a KDE session, not
in Gnome.

A cannot start the VPN: in a KDE session, networkmanager asks for a password
(which A doesn't know); in a Gnome session, no password dialog is shown at all
(which cannot work, obviously...).

I already tried to store the password in the VPN config file at
/etc/NetworkManager/system-connections/..., followed by
  service network-manager restart
but it didn't work:

...
[vpn]
...
password-flags=0
...
[vpn-secrets]
password=PASSWORD
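An alternative to hand-editing the keyfile would be letting NetworkManager store the secret itself via nmcli, which rewrites the file with correct permissions (connection name and password below are placeholders):

```shell
# Store the VPN password in the connection profile via nmcli
# ("MyVPN" and PASSWORD are placeholders).
nmcli connection modify "MyVPN" vpn.secrets "password=PASSWORD"
nmcli connection up "MyVPN"
```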

Any ideas?

Thank you.

Best,
Bernd


On Tuesday, August 9, 2022 11:09:44 AM CEST B.M. wrote:
> nmcli con up  doesn't work either: nothing happens except the
> three dots where the VPN icon is shown and after 90 seconds a timeout
> message appears in the terminal window; so exactly the same behaviour :-(
>
> Bernd
>
> On Monday, August 8, 2022 3:03:17 PM CEST Harald Dunkel wrote:
> > Hi BM
> >
> > if your VPN is IPsec, then you might want to examine charon's output via
> > journalctl. Probably openvpn, wireguard and others can be found there,
> > too.
> >
> > Another thing to try is to establish the VPN connection using nmcli in a
> > terminal window, e.g.
> >
> > nmcli con up "VPN name"
> >
> > Maybe you get a usable error message this way.
> >
> >
> > Regards
> >
> > Harri






Re: Network manager: activating VPN in GNOME remote session doesn't work, but in KDE remote session (xrdp)

2022-08-09 Thread B.M.
nmcli con up <connection name> doesn't work either: nothing happens except
the three dots where the VPN icon is shown, and after 90 seconds a timeout
message appears in the terminal window; so exactly the same behaviour :-(

Bernd


On Monday, August 8, 2022 3:03:17 PM CEST Harald Dunkel wrote:
> Hi BM
>
> if your VPN is IPsec, then you might want to examine charon's output via
> journalctl. Probably openvpn, wireguard and others can be found there, too.
>
> Another thing to try is to establish the VPN connection using nmcli in a
> terminal window, e.g.
>
>   nmcli con up "VPN name"
>
> Maybe you get a usable error message this way.
>
>
> Regards
>
> Harri






Network manager: activating VPN in GNOME remote session doesn't work, but in KDE remote session (xrdp)

2022-08-08 Thread B.M.
Hi

I'm encountering a somewhat strange problem:

A user logs into another computer via xrdp, starting either KDE Plasma or
GNOME 3. Then she wants to connect to a VPN (openvpn) by clicking Tray Icon
-> Networks -> <VPN name> -> Connect (in case of KDE) or <system menu>
-> VPN Off -> Connect (in case of Gnome).

In a KDE session, connecting to the VPN works just fine.
In a Gnome session, the VPN icon shows as connecting (the three dots)
"forever", while the VPN menu entry looks as if it had connected
successfully (there is an entry to disconnect).

Syslog shows the same two VPN-related lines in both cases, but while in the
KDE case more lines follow, in the Gnome case it seems to hang - no more
lines appear. The first two lines are:

MYHOSTNAME NetworkManager[1169]:   [1659945658.8171]
vpn[0x55b54f2be990,ede1c8a5-4428-4610-bf27-dc2bc040e876,""]:
starting openvpn
MYHOSTNAME NetworkManager[1169]:   [1659945658.8175] audit:
op="connection-activate" uuid="ede1c8a5-4428-4610-bf27-dc2bc040e876"
name="" pid=20182 uid=1002 result="success"

If the user is logged into the computer directly, she can activate the VPN
connection and it works just fine. It fails only in a remote session, and
only with Gnome.

Debian Testing with Gnome 42, KDE Plasma/Frameworks also the latest.

Does anyone have an idea what's happening here or why the connection is not
established correctly?

Thank you very much!

Best,
Bernd

PS: Please add me as CC since I'm currently not subscribed to the list.




Re: Problem mounting encrypted blu-ray disc or image

2022-07-25 Thread B.M.
Hello again

First of all, I have now tested all my BD backup discs: there are no
problems with #1 (2017) - #12 (05/2018) or with #14 - #17 (2019 - 05/2020).
#13 from 03/2019 fails.

#1 to #10 are all from 05/2017; they're the very first BD backups, and I
assume I used some manual workflow back then. They also contain only
pictures, not my current larger set of files.
pictures, not my current larger set of files.

Then there's #11/#12 (2018), #14/#15 (01/2020) as well as #16/#17 (05/2020),
with each of these pairs being completely ok.

Afterwards (#18 - #22) there is a pattern:
#18 (disc 1 of 2 for 01/2021) fails
#19 (disc 2 of 2 for 01/2021) is ok
#20 (disc 1 of 2 for 07/2021) fails
#21 (disc 2 of 2 for 07/2021) is ok
#22 is disc 1 of "NA" for 06/2022: I noticed the problem and didn't
continue...

I use git for my script, but only since 2020; there is no change to my
IMGSIZE setting in the git log, so this cannot explain why #11 - #17 are ok
while #18 starts failing. #13 from 2019 is kind of an outlier.

I'm going to try a new full (not incremental) backup, spanning multiple
discs, in the near future and test my script thoroughly... but before that
I'll add an option which allows modifying the "last backup date" value
stored in the extended attributes of the filesystem for all backed-up
folders. That's the part I really like: I can see very easily in a file
manager (Dolphin in this case) which folders are backed up and when the
last backup was done.
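The extended-attribute bookkeeping looks roughly like this - a fragment, since it needs the attr package and an xattr-capable filesystem, and the attribute name user.backup_date is my illustration, not necessarily the script's actual name:

```shell
# Record and read back a "last backup date" on a folder
# (user.backup_date and /path/to/folder are hypothetical).
setfattr -n user.backup_date -v "2022-07-25" /path/to/folder
getfattr --only-values -n user.backup_date /path/to/folder
```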

Best,
Bernd


On Montag, 11. Juli 2022 10:10:51 CEST you wrote:
> Hi,
>
> B.M. wrote:
> > Do I understand correctly, you say that this Pioneer drive doesn't work
> > well with Verbatim BD-RE, i.e. their rewriteable BDs.
>
> Yes. The problem is with the high reading speed of the drive and with
> a physical flaw of Verbatim BD-RE (CMCMAG/CN2/0).
> The flaw is that there are letters engraved in the transparent area around
> the inner hole, which reach to the thickened ring around the hole.
> This ring is obviously essential for physical stability and the letters
> weaken it enough so that 10 to 20 full read runs on my Pioneer BDR-209
> are enough to produce a radial crack at the hole. This crack grows towards
> the rim in a few more full speed reads. As soon as the dye is reached, the
> medium is unreadable.
>
> Writing is no problem, because it happens at most at 2.0x speed.
> Older Verbatim BD-RE (VERBAT/IM0/0) are no problem. But one cannot buy
> them any more.
> Reading the new Verbatim BD-RE media is no problem on Optiarc BD RW
> BD-5300S, LG BD-RE BH16NS40, and ASUS BW-16D1HT. And of course not with the
> old LG drives like BD-RE GGW-H20L which read (and write) BD-RE at 2.3x
> speed.
> > Since I only use BD-R, it
> > doesn't matter for me and my use case, but thank you nevertheless.
>
> I tested about 50 reads with RITEK/BR3/0 and Verbatim CMCMAG/BA5/0
> BD-R media. No problems. (And no engraved letters to see around the
> inner hole.)
> The media which you inspected by dvd+rw-mediainfo are CMCMAG/BA5
> (dunno what dvd+rw-mediainfo did to the "/0" part of the name).
>
>
> I am still curious whether the decryption problems are caused by
> not closing the /dev/mapper device.
>
>
> Have a nice day :)
>
> Thomas






Re: Problem mounting encrypted blu-ray disc or image

2022-07-11 Thread B.M.
> Good questions. Make some experiments. :))
> At least the manual intervention is a good suspect because it occurs exactly
> when you get undecryptable images.

Will do later.

> I see in your script:
>
> umount /mnt/BDbackup
> cryptsetup luksClose /dev/mapper/BDbackup
> losetup -d $IMGLOOP
>
>
> #
> # Step 5: Burn to BD-R
> #
>
> and would expect that the three lines are there for a reason.

Well, I'm a thorough guy ;-) If I do losetup, luksOpen, mount before copying
files, I do umount, luksClose, losetup -d afterwards as well.

> Do i understand correctly that the overflow happens in line 173
> with the tar run ?
>
>  tar cf - -C "`dirname "$line"`" "`basename "$line"`" | plzip >
> "$zipfilename1"

Exactly.

> If so: What happens next ? Does the script abort without cleaning up ?
> (I.e. no unmounting, closing, and de-looping by the script ?)

It tries the remaining folders, all of which immediately fail because the disk
is full, and then it waits for input at step 4; i.e. there is no automatic
cleanup, but if I continue (which I do) it performs the cleanup before burning.
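A trap-based sketch that would make that teardown unconditional, so a "filesystem full" abort cannot skip it. The real commands (umount, luksClose, losetup -d) are replaced by echo placeholders here so the pattern runs anywhere; the function name is mine:

```shell
#!/bin/bash
# Hang the cleanup on an EXIT trap inside a subshell: it runs whether
# the copy step succeeds or fails.
backup_run() (
  trap 'echo "umount /mnt/BDbackup"
        echo "cryptsetup luksClose /dev/mapper/BDbackup"
        echo "losetup -d $IMGLOOP"' EXIT
  echo "copying files..."
  false   # simulate tar/plzip failing on a full filesystem
)
IMGLOOP=/dev/loop0   # placeholder value for the demo
backup_run || true   # cleanup commands are printed even though the copy "failed"
```

In the real script the trap body would contain the actual commands, and the burn step would only run if the subshell exited successfully.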


> > >   dvd+rw-mediainfo /dev/dvd
> >
> > INQUIRY:[PIONEER ][BD-RW   BDR-209D][1.30]
>
> That's the killer of Verbatim BD-RE. (If you buy BD-RE, then take any other
> brand, which will probably be made by Ritek.)

Do I understand correctly, you say that this Pioneer drive doesn't work well
with Verbatim BD-RE, i.e. their rewriteable BDs. Since I only use BD-R, it
doesn't matter for me and my use case, but thank you nevertheless.

> There would still be 100 MB free for another session.
> (But don't mess with good backups which are not intended as multi-session.)

For backup reasons, I use each BD disc once: no overwriting, no multi-session,
just write and forget. (OK, I should have tested them for readability
afterwards - all of them, not just a random sample. Lesson learned.) ;-)

Best,
Bernd




Re: Problem mounting encrypted blu-ray disc or image

2022-07-10 Thread B.M.
> No
>   cryptsetup luksClose /dev/mapper/BDbackup
> between remove and burn ?

To be honest, I cannot say for sure, so maybe yes. But what would be the
implication? The filesystem inside is already unmounted; does cryptsetup
luksClose modify anything within the image?

> Andy Polyakov decided to format BD-R by default. Possibly because he used
> an operating system (IIRC, Solaris) which did not expect that BD-R can be
> used for multi-session. So its mount program followed the volume descriptors
> starting at block 16 rather than at 16 blocks after the start of the
> youngest session.
> Whatever, growisofs by default wants to update the volume descriptors at
> block 16 of the BD-R and for this uses BD-R Pseudo-Overwrite formatting.
> This special feature uses the Defect Management to replace old written
> blocks by newly written blocks.
>
> Formatted BD-R cause the drive to perform Defect Management when writing.
> This means half write speed at best, heavy clonking with smaller write
> quality problems, and often miserable failure on media which work well
> unformatted.

Ah, I remember - some years ago, before I started using BD, I had a look at
their specification.

> That's why i riddle why your burns do not fail in the end.
> What do you get from a run of
>
>   dvd+rw-mediainfo /dev/dvd

INQUIRY:[PIONEER ][BD-RW   BDR-209D][1.30]
GET [CURRENT] CONFIGURATION:
 Mounted Media: 41h, BD-R SRM+POW
 Media ID:  CMCMAG/BA5
 Current Write Speed:   12.0x4495=53940KB/s
 Write Speed #0:12.0x4495=53940KB/s
 Write Speed #1:10.0x4495=44950KB/s
 Write Speed #2:8.0x4495=35960KB/s
 Write Speed #3:6.0x4495=26970KB/s
 Write Speed #4:4.0x4495=17980KB/s
 Write Speed #5:2.0x4495=8990KB/s
 Speed Descriptor#0:00/12088319 R@12.0x4495=53940KB/s W@12.0x4495=53940KB/s
 Speed Descriptor#1:00/12088319 R@10.0x4495=44950KB/s W@10.0x4495=44950KB/s
 Speed Descriptor#2:00/12088319 R@8.0x4495=35960KB/s W@8.0x4495=35960KB/s
 Speed Descriptor#3:00/12088319 R@6.0x4495=26970KB/s W@6.0x4495=26970KB/s
 Speed Descriptor#4:00/12088319 R@4.0x4495=17980KB/s W@4.0x4495=17980KB/s
 Speed Descriptor#5:00/12088319 R@2.0x4495=8990KB/s W@2.0x4495=8990KB/s
POW RESOURCES INFORMATION:
 Remaining Replacements:16843296
 Remaining Map Entries: 0
 Remaining Updates: 0
READ DISC INFORMATION:
 Disc status:   appendable
 Number of Sessions:1
 State of Last Session: incomplete
 "Next" Track:  1
 Number of Tracks:  2
READ TRACK INFORMATION[#1]:
 Track State:   partial incremental
 Track Start Address:   0*2KB
 Free Blocks:   0*2KB
 Track Size:12032000*2KB
READ TRACK INFORMATION[#2]:
 Track State:   invisible incremental
 Track Start Address:   12032000*2KB
 Next Writable Address: 12032000*2KB
 Free Blocks:   56320*2KB
 Track Size:56320*2KB
FABRICATED TOC:
 Track#1  : 14@0
 Track#AA : 14@12088320
 Multi-session Info:#1@0
READ CAPACITY:  12088320*2048=24756879360

While for a readable disc I get:

INQUIRY:[PIONEER ][BD-RW   BDR-209D][1.30]
GET [CURRENT] CONFIGURATION:
 Mounted Media: 41h, BD-R SRM+POW
 Media ID:  CMCMAG/BA5
 Current Write Speed:   12.0x4495=53940KB/s
 Write Speed #0:12.0x4495=53940KB/s
 Write Speed #1:10.0x4495=44950KB/s
 Write Speed #2:8.0x4495=35960KB/s
 Write Speed #3:6.0x4495=26970KB/s
 Write Speed #4:4.0x4495=17980KB/s
 Write Speed #5:2.0x4495=8990KB/s
 Speed Descriptor#0:00/12088319 R@12.0x4495=53940KB/s W@12.0x4495=53940KB/s
 Speed Descriptor#1:00/12088319 R@10.0x4495=44950KB/s W@10.0x4495=44950KB/s
 Speed Descriptor#2:00/12088319 R@8.0x4495=35960KB/s W@8.0x4495=35960KB/s
 Speed Descriptor#3:00/12088319 R@6.0x4495=26970KB/s W@6.0x4495=26970KB/s
 Speed Descriptor#4:00/12088319 R@4.0x4495=17980KB/s W@4.0x4495=17980KB/s
 Speed Descriptor#5:00/12088319 R@2.0x4495=8990KB/s W@2.0x4495=8990KB/s
POW RESOURCES INFORMATION:
 Remaining Replacements:16843296
 Remaining Map Entries: 0
 Remaining Updates: 0
READ DISC INFORMATION:
 Disc status:   appendable
 Number of Sessions:1
 State of Last Session: incomplete
 "Next" Track:  1
 Number of Tracks:  2
READ TRACK INFORMATION[#1]:
 Track State:   partial incremental
 Track Start Address:   0*2KB
 Free Blocks:   0*2KB
 Track Size:12032000*2KB
READ TRACK INFORMATION[#2]:
 Track State:   invisible incremental
 Track Start Address:   12032000*2KB
 Next Writable Address: 12032000*2KB
 Free Blocks:   56320*2KB
 Track Size:56320*2KB
FABRICATED TOC:
 Track#1  : 14@0
 Track#AA : 14@12088320
 Multi-session Info:#1@0
READ CAPACITY:  12088320*2048=24756879360

> Your way of creating a big image has the disadvantage of needing
> extra disk space. Cool would be to write directly to 

Re: Problem mounting encrypted blu-ray disc or image

2022-07-09 Thread B.M.
> > > A UDF filesystem image is supposed to bear at its start 32 KiB of zeros.
>
> B.M. wrote:
> > This is indeed the case:
> > [...]
> > For a readable disk, this look like you said: Only zeros.
>
> So it looks like at least a part of the problem is decryption.

Agreed

> > > If UDF does not work even unencrypted,
> >
> > Why should UDF not work correctly without encryption?
>
> It's improbable, i confess.
> But for now we are hunting an unexplainable problem. So we have to divide
> the situation in order to narrow the set of suspects.
>
> Verifying that your procedure with two UDF images is not the culprit would
> help even if the result is boringly ok, as we expect. (Or we are in for
> a surprise ...)

I don't have two UDF images.
In my script I create a file, put an encrypted UDF filesystem into it and start
writing compressed files into it. Unfortunately it can happen (and has happened
in the past) that the filesystem gets filled up completely.

Beside that, I use a fully encrypted system with several partitions...
Extract from df -h:

FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/sdb2_crypt 28G   23G  3.0G  89% /
/dev/sdb1 447M  202M  221M  48% /boot
/dev/mapper/var_crypt  27G   18G  8.4G  68% /var
/dev/mapper/vraid1-home   1.8T  1.5T  251G  86% /home
/dev/mapper/BDbackup  6.5M  6.5M  2.0K 100% /mnt/BDbackup

(I create the image file as /home/TMP_BKP/backup.img just because that's where
I have enough available space.)

> After the boring outcome you have the unencrypted images to make the next
> step, namely to create /dev/mapper/BDbackup with a new empty image file
> as base, to copy the images into it (e.g. by dd), and to close it.
> Then try whether the two encrypted image files can be properly opened
> as /dev/mapper/BDbackup and show mountable UDF filesystems.
>
> > it's not only the burned disc which is not readable/mountable, it's
> > also the image I created before burning.
>
> So we can exclude growisofs as culprit.
>
> > Might it be possible, that when my UDF filesystem gets filled completely,
> > the encryption get damaged?
>
> That would be a bad bug in the device-mapper code and also such a mishap
> is hard to imagine. The UDF driver is supposed not to write outside its
> filesystem data range. That range would be at most as large as the payload
> of the device mapping.

Doesn't look like that - I tried the following several times:
create a (much smaller) image file, put an encrypted filesystem in it, fill it
completely with either cp or dd, unmount it, close and re-open it with
cryptsetup, then check /dev/mapper/BDbackup: no problems, only hex zeros, and

> > Multi-disc backups are not
> > handled by my script, I have to intervene manually.
>
> That's always a potential source of problems.

> Do i get it right, that your script copies files into the mounted UDF
> and gets a "filesystem full" error ?
>
> What exactly are you doing next ?
> (From where to where are you moving the surplus files ?
> Does the first /dev/mapper device stay open while you create the encrypted
> device for the second UDF filesystem ? Anything i don't think of ... ?)

If you want you can have a look at my script, I attached it to this mail...

Basically, I use extended attributes (user.xdg.tags) to mark which folders have
to be backed up, and I write the last backup date into user.xdg.comment.
Comparing file timestamps with these backup dates allows for incremental
backups.
Then, for each folder which should be backed up, I use tar and plzip, writing
into BKPDIR="/mnt/BDbackup".

"Filesystem full" is not handled at all. Typically when this happens it's quite
late, i.e. most folders are already backed up, and I do the following:
- remove the last lz-file (I never checked whether it is corrupted)
- burn the image
- reset user.xdg.comment manually for the folders not yet backed up
- execute the script again and burn the second image created this way
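Instead of discarding the last lz-file untested, its integrity could be checked: plzip's test mode decompresses to nowhere and reports damage. A guarded sketch (the helper name is mine, and it degrades gracefully where plzip is not installed):

```shell
#!/bin/bash
# Check an lz-file without extracting it; plzip -t exits non-zero on
# a truncated or corrupt stream.
check_lz() {
  if ! command -v plzip >/dev/null 2>&1; then
    echo "SKIP: plzip not installed"
  elif plzip -t "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "DAMAGED: $1"
  fi
}

f=$(mktemp --suffix=.lz)
check_lz "$f"   # an empty file is not a valid lz stream
```

Running this over every file in /mnt/BDbackup before burning would also catch an archive cut short by the "filesystem full" condition.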

Since this is quite ugly, I try to prevent it by moving very large lz-files
from /mnt/BDbackup to a temporary location outside of /mnt/BDbackup while the
script is running. When the "create lz-files" part of my script has finished, I
check whether there is sufficient space to move the large files back to /mnt/
BDbackup. If yes, I do this; if not, I leave them outside, burn the first disc,
then create a second image manually, put the large files into the empty
filesystem and burn this disc as well. Not perfect at all, I know, but it's
working... and I do this only every 3 or 6 months. Besides, it's just a second
kind of backup in addition to bi-weekly backups on external, also encrypted
HDDs. (I think with these two kinds of backups I'm doing e

Re: Problem mounting encrypted blu-ray disc or image

2022-07-07 Thread B.M.
> > file "$IMGFILE"
> > LUKS encrypted file, ver 2 [, , sha256] UUID:
> > 835847ff-2cb3-4c6d-aa04-d3b79010a2d3
> So it did not stay unencrypted by mistake.
> (I assume this is one of the unreadable images.)

It looks like this for both the readable and the unreadable discs.

> > mount -t udf -o novrs /dev/mapper/BDbackup /mnt/BDbackup
> > [62614.207920] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor
> > found [62614.207922] UDF-fs: Scanning with blocksize 2048 failed
> > So now I'm stuck again, but maybe one little step later...
> 
> Yeah. Reading the anchor is a little bit further in the procedure.
> But already the missing VRS is a clear indication that the image or disc
> does not get properly decrypted when being mounted for reading.
> The VRS was there when it was mounted for writing. Later it's gone.
> 
> A UDF filesystem image is supposed to bear at its start 32 KiB of zeros.
> Have a look with a hex dumper or editor at /dev/mapper/BDbackup.
> If you see something heavily non-zero, then decryption is the main
> suspect.

This is indeed the case:

9F AC 31 11  1B EA FC 5D  28 A7 41 4E  12 B6 DA D1 | .¬1..êü](§AN.¶ÚÑ
AE 29 C2 30  ED 7D 1E 75  80 2A 1E 3D  4A 45 1C 6F | ®)Â0í}.u.*.=JE.o
78 0C 78 F1  6F 6F FB 62  A6 79 E5 50  CA 67 9F 6E | x.xñooûb¦yåPÊg.n
69 C2 86 C0  36 40 A8 62  2C F5 15 0F  83 79 B8 46 | iÂ.À6@¨b,õ...y¸F
DF 38 E7 33  0D 2D C9 59  20 4C AF 06  B1 37 80 B2 | ß8ç3.-ÉY L¯.±7.²
D8 D3 00 61  69 07 2B 4B  1D 64 20 92  4A B9 72 29 | ØÓ.ai.+K.d .J¹r)
66 65 A8 FE  F0 BF D1 1F  AC 48 2E 7B  65 42 CB 69 | fe¨þð¿Ñ.¬H.{eBËi
9B DA EC 7E  55 F3 F3 08  82 F5 A9 0F  DB D2 BD 6D | .Úì~Uóó..õ©.ÛÒ½m
2B BC 00 F5  A2 68 A2 CF  18 11 77 49  05 18 B1 18 | +¼.õ¢h¢Ï..wI..±.
C1 18 E5 CB  48 F3 C6 FF  E5 85 C3 E5  60 F9 01 81 | Á.åËHóÆÿå.Ãå`ù..
96 DA B0 44  07 A4 E6 8D  99 E0 A4 F5  6F 1F F8 2E | .Ú°D.¤æ..à¤õo.ø.
36 B4 80 19  11 1F C3 93  0A EA BC 3B  09 D7 B2 D4 | 6´Ã..ê¼;.ײÔ

For a readable disc, this looks like you said: only zeros.
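That "32 KiB of zeros" property makes for a quick scripted check of whether a mapping decrypted correctly, without eyeballing a hex dump. A sketch using GNU cmp (the helper name is mine; run it against /dev/mapper/BDbackup after luksOpen):

```shell
#!/bin/bash
# A correctly decrypted UDF image must begin with 32 KiB of zeros;
# anything else means the wrong key or a damaged header.
looks_decrypted() {
  cmp -s -n 32768 "$1" /dev/zero \
    && echo "zeros: decryption looks OK" \
    || echo "non-zero start: decryption suspect"
}

good=$(mktemp); dd if=/dev/zero of="$good" bs=1024 count=32 2>/dev/null
bad=$(mktemp);  printf 'garbage' > "$bad"
looks_decrypted "$good"   # prints: zeros: decryption looks OK
looks_decrypted "$bad"    # prints: non-zero start: decryption suspect
```

This only tests the very start of the image, so it detects the wrong-key case but not damage further in.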


> 
> > Thanks again for any hints...
> 
> As said, i would try whether UDF works fine without encryption.
> If yes, i would try whether dd-ing an unencryptedly populated UDF image
> into /dev/mapper/BDbackup yields images which are more reliably readable.
> 
> If UDF does not work even unencrypted, then i'd consider ext2 or ISO 9660
> as alternatives.
> (ext2 would be treated like UDF. For ISO 9660 i'd propose xorriso and
>  directing its output to the not mounted /dev/mapper/BDbackup.)

Why should UDF not work correctly without encryption?

I have an idea what might be the root cause of my problems:
As I mentioned earlier, from the small sample of discs I checked it seems that
if I burned two discs for a backup session instead of one (too much data for
one disc), the first one is unreadable but the second one is readable.
For the first discs it might be that, during the execution of my script, files
get copied until the filesystem is full. Multi-disc backups are not handled by
my script; I have to intervene manually. I never expected this to harm my
process, moved some backup files manually and created another image which I
burned on a second disc. So my question is basically:

Might it be possible that when my UDF filesystem gets filled completely, the
encryption gets damaged? Or is my filesystem too large?

# Parameter:
[...]
IMGSIZE=24064000K
# There is an old comment in my script at this line, saying:
# let's try that: 24064000K
# 24438784K according to dvd+rw-mediainfo but creates at
# least sometimes INVALID ADDRESS FOR WRITE;
# alternative according to internet research: 23500M
IMGFILE=/home/TMP_BKP/backup.img
IMGLOOP=`losetup -f`

[...]

# Prepare loopback device:
echo "Preparing loopback device..."
touch $IMGFILE
truncate -s $IMGSIZE $IMGFILE
losetup $IMGLOOP $IMGFILE
echo "Creating encryption, filesystem and mounting:"
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
mkudffs -b 2048 -m bdr --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup
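One small hardening of the loop-device setup above: the pair `losetup -f` followed by `losetup $IMGLOOP $IMGFILE` can race, because another process may grab the same free device between the two calls. util-linux can do both steps atomically. A sketch (requires root, like the original script; the function name is mine):

```shell
#!/bin/bash
# Replace the racy pair
#   IMGLOOP=`losetup -f`
#   losetup $IMGLOOP $IMGFILE
# with one atomic call that finds a free device, binds it, and prints
# the device it actually used.
attach_image() {
  losetup --find --show "$1"
}
# Usage in the script would be:  IMGLOOP=$(attach_image "$IMGFILE")

# Demo (unprivileged): just confirm this losetup supports the flags.
losetup --help 2>&1 | grep -q -- '--show' && echo "losetup supports --find --show"
```

Thomas already noted this race is unlikely to explain the decryption problem, but closing it costs nothing.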

But: it's not only the burned disc that is not readable/mountable; it's also
the image I created before burning.

Thank you once again.

Best,
Bernd




Re: Problem mounting encrypted blu-ray disc or image

2022-07-05 Thread B.M.
On Montag, 4. Juli 2022 19:51:57 CEST Thomas Schmitt wrote:
> Hi,
>
> B.M. wrote that dmesg reports:
> > UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
>
> That's a very early stage of UDF recognition.
> Given that you were able to copy files into that UDF image by help of
> the Linux kernel driver, i deem it improbable that the properly decrypted
> UDF format would be in such a bad shape.
>
> So it looks like decryption uses a wrong key when you mount it again.
>
> Consider to exercise the procedure without encryption to make sure
> that the resulting $IMGFILE are recognizable UDF and contain the files
> which you expect. Just to be sure.
>
> > my last backups consist of two discs each, and I cannot
> > mount the first one but I can mount the second one for each of them
>
> Hard to explain.
>
> I see a theoretical race condition in the sequence of
>   IMGLOOP=`losetup -f`
> and
>   losetup $IMGLOOP $IMGFILE
> but cannot make up a situation where this would lead to silent failure
> to encrypt.
>
> What does
>   file "$IMGFILE"
> say ?
>
> Consider to use --verbose and/or --debug with the two runs of
> cryptsetup luksOpen. Maybe you see a reason why they are at odds.
>
>
> Have a nice day :)
>
> Thomas

Well, I can provide you with:

file "$IMGFILE"
LUKS encrypted file, ver 2 [, , sha256] UUID: 835847ff-2cb3-4c6d-aa04-
d3b79010a2d3

and I also compared

cryptsetup luksOpen -r --verbose --debug /dev/dvd BDbackup

for two discs, one mounting without any problem, one with the above mentioned
problem. Differences are only Checksums of the LUKS header, DM-UUIDs and udev
cookie values - as expected, I would say.

I also tried mounting again, and here is once again output from dmesg:

mount -t udf /dev/mapper/BDbackup /mnt/BDbackup

[62606.932713] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932717] UDF-fs: Scanning with blocksize 512 failed
[62606.932860] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932862] UDF-fs: Scanning with blocksize 1024 failed
[62606.932982] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.932984] UDF-fs: Scanning with blocksize 2048 failed
[62606.933111] UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
[62606.933113] UDF-fs: Scanning with blocksize 4096 failed

and if I skip VRS (from man mount: Ignore the Volume Recognition Sequence and
attempt to mount anyway.):

mount -t udf -o novrs /dev/mapper/BDbackup /mnt/BDbackup

[62614.207353] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207358] UDF-fs: Scanning with blocksize 512 failed
[62614.207667] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207670] UDF-fs: Scanning with blocksize 1024 failed
[62614.207920] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.207922] UDF-fs: Scanning with blocksize 2048 failed
[62614.208202] UDF-fs: warning (device dm-10): udf_load_vrs: No anchor found
[62614.208204] UDF-fs: Scanning with blocksize 4096 failed
[62614.208205] UDF-fs: warning (device dm-10): udf_fill_super: No partition
found (1)

So now I'm stuck again, but maybe one little step later...


Reply to David Christensen's comment:

Thank you for your suggestions; well, most of them are not new to me and I'm
following them already, especially as my solution is basically a bash script
which I also keep in git ;-) Years ago, when I started with this solution, I
tested it and could restore all files successfully. Since the script is
unchanged, I didn't expect this kind of problem now.

Encrypting each file separately would be another way, but for me it has always
been nice to just decrypt the whole disc once, on the CLI, the same way as I do
with my external hard disks and so on...

My use case - by the way, this might be of interest for others as well - is
backing up all the typical family stuff: files, images, mails and so on. I do
bi-weekly backups alternating on several encrypted HDDs, stored offsite (at my
office desk). Additionally I started writing encrypted BD discs, just to have
incremental read-only backups, created every 3 - 6 months and stored offsite in
the office as well...


Thanks again for any hints...

(Please add me to your reply in CC as I'm currently not subscribed to the list
anymore.)

Bernd




Problem mounting encrypted blu-ray disc or image

2022-07-04 Thread B.M.
Hello

I have been creating encrypted backups on blu-ray discs with a bash script for
some years now, but now I have encountered a problem mounting some of these
discs (but not all of them - in fact, my last backups consist of two discs
each, and for each of them I cannot mount the first disc but can mount the
second one - which seems strange...). It's not date-related (and they are not
too old).

In detail, I use the following commands:

IMGFILE=/home/TMP_BKP/backup.img
IMGSIZE=24064000K
IMGLOOP=`losetup -f`

touch $IMGFILE
truncate -s $IMGSIZE $IMGFILE
losetup $IMGLOOP $IMGFILE
cryptsetup luksFormat --cipher aes-xts-plain64 $IMGLOOP
cryptsetup luksOpen $IMGLOOP BDbackup
mkudffs -b 2048 --label $1 /dev/mapper/BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup

... then I create my compressed backup files ...

umount /mnt/BDbackup
cryptsetup luksClose /dev/mapper/BDbackup
losetup -d $IMGLOOP

growisofs -dvd-compat -Z /dev/dvd=$IMGFILE; eject
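After the growisofs step, the burn could be verified by reading the disc back and comparing it byte-for-byte against the image; limiting the comparison to the image's size skips the unwritten tail of the medium. A hedged sketch (hypothetical helper; in real use you would call it as `verify_burn /dev/dvd "$IMGFILE"`, and a temp-file copy stands in for the disc here so the demo runs anywhere):

```shell
#!/bin/bash
# Compare a burned medium against its source image, reading only as
# many bytes as the image contains.
verify_burn() {   # verify_burn <device> <imagefile>
  local bytes
  bytes=$(stat -c %s "$2")
  cmp -s -n "$bytes" "$1" "$2" && echo "verify OK" || echo "verify FAILED"
}

img=$(mktemp);  head -c 4096 /dev/urandom > "$img"
copy=$(mktemp); cp "$img" "$copy"    # stands in for the burned disc
verify_burn "$copy" "$img"           # prints: verify OK
```

Routinely running this right after each burn would have caught the unreadable discs in this thread years earlier, at a point where the data could still have been re-burned.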


In order to mount the disc, I use:

cryptsetup luksOpen -r /dev/dvd BDbackup
mount -t udf /dev/mapper/BDbackup /mnt/BDbackup


Unfortunately, this now fails for some of my discs and also for the last image
file I created (not yet deleted...):

mount: /mnt/BDbackup: wrong fs type, bad option, bad superblock on /dev/mapper/BDbackup, missing codepage or helper program, or other error.

And dmesg shows:

UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 2048 failed
UDF-fs: warning (device dm-10): udf_load_vrs: No VRS found
UDF-fs: Scanning with blocksize 4096 failed


Any ideas what may happen here?

Thank you.

Best,
Bernd




Program behaves differently depending on how started

2019-03-21 Thread B.M.
Dear all,

I have a problem with a program that sometimes works well and sometimes
doesn't, depending on how I start it.

If I start kdenlive (video editing) by clicking on a file in Dolphin, it works
as expected, i.e. it can play music files as well as video clips, including
both video and audio.

If I start kdenlive by clicking on a desktop icon (KDE), via the start menu or
by just typing kdenlive in a shell, it plays video clips well but cannot play
music files (I cannot hear them).

I already checked using ps: both times /usr/bin/kdenlive is shown, both times
running inside firejail; but starting it in a shell circumventing firejail
doesn't change anything.

So there has to be a difference in "process context", right? How can I check
that? Unfortunately I'm completely stumped in this case.

Are there any ways to compare two running programs with respect to what they
can do and can access...? Any idea how to proceed?
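One concrete way to compare such "process context" is to diff the two processes' environments via /proc; differences in variables like DBUS_SESSION_BUS_ADDRESS, XDG_*, or PULSE_* are the usual audio suspects for "works from the file manager but not from the menu". A sketch (the PIDs in the comment are examples; the demo line runs on the current shell):

```shell
#!/bin/bash
# Dump a process's environment, one VAR=value per line, sorted so two
# dumps can be diffed directly.
dump_env() { tr '\0' '\n' < "/proc/$1/environ" | sort; }

# With the two kdenlive PIDs from ps, e.g.:
#   diff <(dump_env 1234) <(dump_env 5678)

dump_env $$ | head -n 3   # demo on the current shell
```

Beyond the environment, /proc/PID/cgroup and the open file descriptors in /proc/PID/fd can be compared the same way.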

Thanks for ideas and pointers,
Bernd

Re: inotifywait in bash script: space

2017-12-03 Thread B.M.
Dear all,

thank you for your inputs - indeed, I had to add "eval" to my script to get it 
working.

Best,
Bernd
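For the record, eval works here, but a bash array avoids re-parsing the command string entirely: each element stays one argument, spaces included. A sketch (printf stands in for inotifywait so the demo runs without it installed):

```shell
#!/bin/bash
# Store the command as an array instead of a string; "${cmd[@]}" expands
# each element as exactly one argument, so no eval and no re-quoting.
MYDIR="/tmp/test dir"
cmd=(printf '%s\n' "$MYDIR")   # real use: cmd=(inotifywait -qqr -e create "$MYDIR")
out=$("${cmd[@]}")
echo "$out"                    # prints: /tmp/test dir
```

With the string-plus-eval approach, quoting has to be escaped by hand and re-parsed; with the array, the path with spaces never gets word-split in the first place.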


On Samstag, 2. Dezember 2017 17:10:10 CET you wrote:
> Dear all,
> 
> not a Debian specific question, but I hope to get an answer here...
> 
> I try to use inotifywait in a bash script to block until a file in a
> directory gets changed. Then the script should continue. That's working
> well unless the path I hand over to inotifywait contains spaces.
> 
> I already tried to escape them like so:
> 
>   MYCOMMAND = "inotifywait  -qqr -e create \"$MYDIR\""
> 
> where MYDIR is something like "/tmp/test dir" but although echo shows the
> command correctly and putting the very same command in a shell just works,
> as part of my script it doesn't work. (I don't want to get paths back from
> inotifywait.)
> 
> Thanks for any pointers towards clarification and help.
> 
> Best,
> Bernd




inotifywait in bash script: space

2017-12-02 Thread B.M.
Dear all,

not a Debian specific question, but I hope to get an answer here...

I try to use inotifywait in a bash script to block until a file in a directory 
gets changed. Then the script should continue. That's working well unless the 
path I hand over to inotifywait contains spaces.

I already tried to escape them like so:

  MYCOMMAND = "inotifywait  -qqr -e create \"$MYDIR\""

where MYDIR is something like "/tmp/test dir" but although echo shows the 
command correctly and putting the very same command in a shell just works, as 
part of my script it doesn't work. (I don't want to get paths back from 
inotifywait.)

Thanks for any pointers towards clarification and help.

Best,
Bernd



Digikam shared library error

2017-05-08 Thread B.M.
Hi,

I've just upgraded two installations from jessie to stretch. Both have digikam
installed. The first installation runs digikam without any problem, but on the
second one I get a shared library error:

digikam: error while loading shared libraries: libgphoto2_port.so.10: cannot
open shared object file: No such file or directory

Interestingly, on both systems synaptic says that digikam-private-libs depends
on libgphoto2-port12 (>= 2.5.10), which is installed on both, and no
libgphoto2-port10 is installed on either machine - neither the one with the
running nor the one with the not-running digikam. How is that possible?

I already tried checks, reinstalls of digikam and its dependencies, cruft and
so on, but I can't find out what the root of the problem really is. Any hints
on how to solve this are welcome...
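A way to narrow this down is to ldd every digikam binary and plugin and see which file still links against the stale soname; often it's a leftover plugin or a locally built library rather than the main binary. A sketch (the digikam paths in the comment are examples for an amd64 system; the demo line uses /bin/sh so it runs anywhere with glibc):

```shell
#!/bin/bash
# Print every given file whose dynamic dependencies mention a soname.
find_stale() {   # find_stale <soname> <file...>
  local f
  for f in "${@:2}"; do
    ldd "$f" 2>/dev/null | grep -q "$1" && echo "$f"
  done
}

# Real use would be something like:
#   find_stale libgphoto2_port.so.10 /usr/bin/digikam \
#       /usr/lib/x86_64-linux-gnu/qt5/plugins/*.so

find_stale libc.so /bin/sh   # demo: /bin/sh links against libc
```

Whatever file turns up can then be traced to its package with `dpkg -S`, or identified as unpackaged cruft if dpkg doesn't know it.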

Thank you,
Bernd