Re: Debian 11 installer crashed and reboot

2021-08-20 Thread Chuck Zmudzinski

On 8/21/2021 12:08 AM, Chuck Zmudzinski wrote:

On 8/19/2021 3:11 PM, Andy Smith wrote:

Hi Chuck,

On Tue, Aug 17, 2021 at 08:04:43AM -0400, Chuck Zmudzinski wrote:

After some testing of the Debian 11 installer on Xen
(using the debian-11.0.0-amd64-netinst.iso), I find that
this image only supports installation into a Xen PV guest;
the guest always crashes and reboots when booting with either
BIOS or OVMF into a Xen HVM guest.

Could you report this to Debian's Xen team as a bug? Perhaps it is
as simple as needing different kernel options in the netinst
installer kernel given that the full install works under HVM?


Hi Andy,

That is exactly what I think the problem is. I will look into it and
see if there is a simple solution like adding the correct kernel
configuration options and kernel modules to the kernel and ramdisk
on the default debian installer iso. The generic default amd64 iso
should work in Xen HVM guests.


The Debian Xen team is very under-resourced for human help and it
has been a long time since they have managed to keep the version in
stable to a recent and supported one upstream. If you run a Xen dom0
on Debian I think really you need to be building your own packages
or using the Debian Xen team's packages from sid.

The stable packaged 4.11 hypervisor is out of even security support
upstream so it's not really suitable for production use.


With bullseye released, that is oldstable now. Xen 4.14 is on bullseye
and I think Xen 4.15 is the latest release upstream, so it is not
too far behind now. I presume Xen 4.14 is still getting security patches
upstream, but I cannot find a good explanation of Xen's support
cycle on their website and I don't know if Debian can expect upstream
to support 4.14 until bullseye becomes oldstable in two years or so.


I found this page: https://xenbits.xen.org/docs/unstable/support-matrix.html

Xen 4.14 will have security support until 2023-07-24, which should cover
most of the time bullseye will be the stable version.

Chuck



I don't
think the Debian Xen team would recommend using it but would instead
suggest using their newer package that's in sid (on stable) and
test/report bugs against that. But let's get this reported.


If/when I find the solution, I will post a bug report and see if the Xen
team can get the solution into the installer media for future bullseye
point releases. I already know the same problem exists on Xen 4.14
on bullseye, and AFAIK even sid has not bumped to 4.15 yet.


I'm not skilled enough in Debian package building to help the team
but I do still report bugs sometimes; for production use I am
building packages from newer upstream source.

For this problem I can't help as I don't run HVM guests (only PV and
PVH).


I always had difficulty with pygrub/pvgrub on PV domains, and using
bullseye's Xen-4.14 version, I could not boot the debian installer iso
with pygrub but had to extract the xen-enabled kernel and ramdisk to
Dom0 and boot them from within Dom0. I think this is another Xen
bug in the Xen-4.14 package that I also will investigate next week.
Probably some tweaks to the pygrub script are needed there.

I have always wanted to try out PVH domains, but have not done so
yet.


The Debian Xen team mailing list is at:
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-xen-devel


I will do some testing on this next week. I do want to help the Xen team
make Debian more Xen-friendly and will report these bugs, hopefully
sometime next week.

Cheers,

Chuck


Cheers,
Andy







Re: Debian 11 installer crashed and reboot

2021-08-20 Thread Chuck Zmudzinski

On 8/21/2021 12:42 AM, John Mok wrote:

Hi Chuck,

As a quick fix to make the installer media work, how can I remake my
own installation ISO by recompiling the kernel and initramfs?


Can you give me some directions?


Not yet regarding what kernel options are needed, because
I have not done any tests yet to see which kernel configuration
options or modules might be missing that are needed by a
Xen HVM guest.

A quick solution would not even require any recompilation
of a kernel, since we know the kernel and ramdisk of
an installed bullseye system does work in a Xen HVM guest.

Therefore...

if you can build an iso that is the same as the installation
iso in every way except that it boots using the default Debian kernel
and ramdisk on the final installed system instead of the stripped
down kernels/ramdisks on the installation media, you should get an
iso that can boot the debian installer in a Xen HVM guest.
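
For the adventurous, the mechanics of such a remaster might look roughly
like this (a sketch only, untested; the install.amd/ paths match the
amd64 netinst layout, the kernel version is just an example, and whether
the installed system's initrd actually reaches the installer is exactly
what would need testing):

  # unpack the original image (bsdtar can read ISO9660)
  mkdir iso
  bsdtar -C iso -xf debian-11.0.0-amd64-netinst.iso
  chmod -R u+w iso
  # swap in a kernel/initrd known to boot under Xen HVM
  cp /boot/vmlinuz-5.10.0-8-amd64    iso/install.amd/vmlinuz
  cp /boot/initrd.img-5.10.0-8-amd64 iso/install.amd/initrd.gz
  # rebuild a BIOS-bootable image with xorriso
  xorriso -as mkisofs -r -J -o netinst-remaster.iso \
    -b isolinux/isolinux.bin -c isolinux/boot.cat \
    -no-emul-boot -boot-load-size 4 -boot-info-table iso/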

I am not familiar with the process that the debian installer team
uses to generate the installation ISOs, but if I wanted to learn
I would start by looking at

https://wiki.debian.org/DebianInstaller/CheckOut 



and follow the instructions in the wiki to check out the source code
for the debian installer; from there you might be able to learn
how to build and modify a Debian installation ISO.

Regards,

Chuck




Thanks a lot.

John Mok


On Sat, Aug 21, 2021, 12:08 Chuck Zmudzinski wrote:


On 8/19/2021 3:11 PM, Andy Smith wrote:
> Hi Chuck,
>
> On Tue, Aug 17, 2021 at 08:04:43AM -0400, Chuck Zmudzinski wrote:
>> After some testing of the Debian 11 installer on Xen
>> (using the debian-11.0.0-amd64-netinst.iso), I find that
>> this image only supports installation into a Xen PV guest;
>> the guest always crashes and reboots when booting with either
>> BIOS or OVMF into a Xen HVM guest.
> Could you report this to Debian's Xen team as a bug? Perhaps it is
> as simple as needing different kernel options in the netinst
> installer kernel given that the full install works under HVM?
Hi Andy,

That is exactly what I think the problem is. I will look into it
and see if there is a simple solution like adding the correct kernel
configuration options and kernel modules to the kernel and ramdisk
on the default debian installer iso. The generic default amd64 iso
should work in Xen HVM guests.
>
> The Debian Xen team is very under-resourced for human help and it
> has been a long time since they have managed to keep the version in
> stable to a recent and supported one upstream. If you run a Xen dom0
> on Debian I think really you need to be building your own packages
> or using the Debian Xen team's packages from sid.
>
> The stable packaged 4.11 hypervisor is out of even security support
> upstream so it's not really suitable for production use.
With bullseye released, that is oldstable now. Xen 4.14 is on bullseye
and I think Xen 4.15 is the latest release upstream, so it is not
too far behind now. I presume Xen 4.14 is still getting security patches
upstream, but I cannot find a good explanation of Xen's support
cycle on their website and I don't know if Debian can expect upstream
to support 4.14 until bullseye becomes oldstable in two years or so.
> I don't
> think the Debian Xen team would recommend using it but would instead
> suggest using their newer package that's in sid (on stable) and
> test/report bugs against that. But let's get this reported.
If/when I find the solution, I will post a bug report and see if the Xen
team can get the solution into the installer media for future bullseye
point releases. I already know the same problem exists on Xen 4.14
on bullseye, and AFAIK even sid has not bumped to 4.15 yet.
>
> I'm not skilled enough in Debian package building to help the team
> but I do still report bugs sometimes; for production use I am
> building packages from newer upstream source.
>
> For this problem I can't help as I don't run HVM guests (only PV and
> PVH).
I always had difficulty with pygrub/pvgrub on PV domains, and using
bullseye's Xen-4.14 version, I could not boot the debian installer iso
with pygrub but had to extract the xen-enabled kernel and ramdisk to
Dom0 and boot them from within Dom0. I think this is another Xen
bug in the Xen-4.14 package that I also will investigate next week.
Probably some tweaks to the pygrub script are needed there.

I have always wanted to try out PVH domains, but have not done so
yet.
>
> The Debian Xen team mailing list is at:
>
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-xen-devel

I will do some testing on this next week. I do want to help the Xen team
make Debian more Xen-friendly and will report these bugs, hopefully
sometime next week.

Re: Debian 11 installer crashed and reboot

2021-08-20 Thread John Mok
Hi Chuck,

As a quick fix to make the installer media work, how can I remake my own
installation ISO by recompiling the kernel and initramfs?

Can you give me some directions?

Thanks a lot.

John Mok


On Sat, Aug 21, 2021, 12:08 Chuck Zmudzinski wrote:

> On 8/19/2021 3:11 PM, Andy Smith wrote:
> > Hi Chuck,
> >
> > On Tue, Aug 17, 2021 at 08:04:43AM -0400, Chuck Zmudzinski wrote:
> >> After some testing of the Debian 11 installer on Xen
> >> (using the debian-11.0.0-amd64-netinst.iso), I find that
> >> this image only supports installation into a Xen PV guest;
> >> the guest always crashes and reboots when booting with either
> >> BIOS or OVMF into a Xen HVM guest.
> > Could you report this to Debian's Xen team as a bug? Perhaps it is
> > as simple as needing different kernel options in the netinst
> > installer kernel given that the full install works under HVM?
> Hi Andy,
>
> That is exactly what I think the problem is. I will look into it and see if
> there is a simple solution like adding the correct kernel configuration
> options and kernel modules to the kernel and ramdisk on the default
> debian installer iso. The generic default amd64 iso should work in
> Xen HVM guests.
> >
> > The Debian Xen team is very under-resourced for human help and it
> > has been a long time since they have managed to keep the version in
> > stable to a recent and supported one upstream. If you run a Xen dom0
> > on Debian I think really you need to be building your own packages
> > or using the Debian Xen team's packages from sid.
> >
> > The stable packaged 4.11 hypervisor is out of even security support
> > upstream so it's not really suitable for production use.
> With bullseye released, that is oldstable now. Xen 4.14 is on bullseye
> and I think Xen 4.15 is the latest release upstream, so it is not
> too far behind now. I presume Xen 4.14 is still getting security patches
> upstream, but I cannot find a good explanation of Xen's support
> cycle on their website and I don't know if Debian can expect upstream
> to support 4.14 until bullseye becomes oldstable in two years or so.
> > I don't
> > think the Debian Xen team would recommend using it but would instead
> > suggest using their newer package that's in sid (on stable) and
> > test/report bugs against that. But let's get this reported.
> If/when I find the solution, I will post a bug report and see if the Xen
> team can get the solution into the installer media for future bullseye
> point releases. I already know the same problem exists on Xen 4.14
> on bullseye, and AFAIK even sid has not bumped to 4.15 yet.
> >
> > I'm not skilled enough in Debian package building to help the team
> > but I do still report bugs sometimes; for production use I am
> > building packages from newer upstream source.
> >
> > For this problem I can't help as I don't run HVM guests (only PV and
> > PVH).
> I always had difficulty with pygrub/pvgrub on PV domains, and using
> bullseye's Xen-4.14 version, I could not boot the debian installer iso
> with pygrub but had to extract the xen-enabled kernel and ramdisk to
> Dom0 and boot them from within Dom0. I think this is another Xen
> bug in the Xen-4.14 package that I also will investigate next week.
> Probably some tweaks to the pygrub script are needed there.
>
> I have always wanted to try out PVH domains, but have not done so
> yet.
> >
> > The Debian Xen team mailing list is at:
> > https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-xen-devel
> I will do some testing on this next week. I do want to help the Xen team
> make Debian more Xen-friendly and will report these bugs, hopefully
> sometime next week.
>
> Cheers,
>
> Chuck
> >
> > Cheers,
> > Andy
> >
>
>


Re: Failed to detect SSD when installing Bullseye

2021-08-20 Thread Bjørn Mork
Nikolay Velyurov writes:

> I tried to install Bullseye on my MacBookAir6,1 but the SSD is not detected:
>
> ACPI: SSDT 0x8CD7E000 00010B (v01 APPLE  SataAhci 1000
> INTL 20100915)
> libata version 3.00 loaded.
> ahci :04:00.0: version 3.0
> ahci :04:00.0: AHCI 0001. 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
> ahci :04:00.0: flags: 64bit ncq sntf led pio slum part
> scsi host0: ahci
> ata1: SATA max UDMA/133 abar m512@0xb070 port 0xb0700100 irq 54
> DMAR: DRHD: handling fault status reg 2
> DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
> fffe [fault reason 02] Present bit in context entry is clear

try booting with intel_iommu=off on the kernel command line
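
(For anyone unsure where that goes: at the installer's boot menu, press
Tab on the "Install" entry, or 'e' in the UEFI/GRUB menu, and append the
option to the line that loads the kernel. On an installed system it can
be made permanent -- a sketch, assuming GRUB:)

  # in /etc/default/grub:
  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"
  # then regenerate the configuration:
  update-grub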



Bjørn



Re: Debian 11 installer crashed and reboot

2021-08-20 Thread Chuck Zmudzinski

On 8/19/2021 3:11 PM, Andy Smith wrote:

Hi Chuck,

On Tue, Aug 17, 2021 at 08:04:43AM -0400, Chuck Zmudzinski wrote:

After some testing of the Debian 11 installer on Xen
(using the debian-11.0.0-amd64-netinst.iso), I find that
this image only supports installation into a Xen PV guest;
the guest always crashes and reboots when booting with either
BIOS or OVMF into a Xen HVM guest.

Could you report this to Debian's Xen team as a bug? Perhaps it is
as simple as needing different kernel options in the netinst
installer kernel given that the full install works under HVM?

Hi Andy,

That is exactly what I think the problem is. I will look into it and see if
there is a simple solution like adding the correct kernel configuration
options and kernel modules to the kernel and ramdisk on the default
debian installer iso. The generic default amd64 iso should work in
Xen HVM guests.


The Debian Xen team is very under-resourced for human help and it
has been a long time since they have managed to keep the version in
stable to a recent and supported one upstream. If you run a Xen dom0
on Debian I think really you need to be building your own packages
or using the Debian Xen team's packages from sid.

The stable packaged 4.11 hypervisor is out of even security support
upstream so it's not really suitable for production use.

With bullseye released, that is oldstable now. Xen 4.14 is on bullseye
and I think Xen 4.15 is the latest release upstream, so it is not
too far behind now. I presume Xen 4.14 is still getting security patches
upstream, but I cannot find a good explanation of Xen's support
cycle on their website and I don't know if Debian can expect upstream
to support 4.14 until bullseye becomes oldstable in two years or so.

I don't
think the Debian Xen team would recommend using it but would instead
suggest using their newer package that's in sid (on stable) and
test/report bugs against that. But let's get this reported.

If/when I find the solution, I will post a bug report and see if the Xen
team can get the solution into the installer media for future bullseye
point releases. I already know the same problem exists on Xen 4.14
on bullseye, and AFAIK even sid has not bumped to 4.15 yet.


I'm not skilled enough in Debian package building to help the team
but I do still report bugs sometimes; for production use I am
building packages from newer upstream source.

For this problem I can't help as I don't run HVM guests (only PV and
PVH).

I always had difficulty with pygrub/pvgrub on PV domains, and using
bullseye's Xen-4.14 version, I could not boot the debian installer iso
with pygrub but had to extract the xen-enabled kernel and ramdisk to
Dom0 and boot them from within Dom0. I think this is another Xen
bug in the Xen-4.14 package that I also will investigate next week.
Probably some tweaks to the pygrub script are needed there.

I have always wanted to try out PVH domains, but have not done so
yet.


The Debian Xen team mailing list is at:
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-xen-devel

I will do some testing on this next week. I do want to help the Xen team
make Debian more Xen-friendly and will report these bugs, hopefully
sometime next week.

Cheers,

Chuck


Cheers,
Andy





Re: Respect for newbies and new comers [ was : moderators, I would appreciate if you could interfere ]

2021-08-20 Thread Polyna-Maude Racicot-Summerside


On 2021-08-20 5:22 p.m., Polyna-Maude Racicot-Summerside wrote:
> 
> 
> On 2021-08-19 5:33 p.m., Brian wrote:
>> On Thu 19 Aug 2021 at 14:45:16 -0400, Polyna-Maude Racicot-Summerside wrote:
>>
>>>
>>>
>>> On 2021-08-19 2:18 p.m., Thomas Schmitt wrote:
 Hi,

 Steve McIntyre wrote:
>>> This mailing list, like all
>>> Debian-hosted mailing lists, is subject to both the Debian mailing
>>> list Code of Conduct and the main Debian Code of Conduct:
>>> [...]
>>>   Inappropriate behaviour on the list may lead to warnings; repeated bad
>>>  behaviour may lead to temporary or permanent bans for offenders.
>>>
>>> If you're trying to label that as "politically correct" then I think
>>> you may need to change your expectations. The "principles of open
>>> source" do not include a free pass to be abusive to others.

 Pierre-Elliott Bécue wrote:
>> [...] Steve's message is pretty explicit, so I guess
>> the message he is trying to pass here is quite clear.

 Polyna-Maude Racicot-Summerside wrote:
> Great for him, cheer if you feel like it.

 Well then:

 Three cheers for Steve McIntyre, former Debian Project Leader, carer of
 Debian boot capabilities including the unavoidable cooperation with
 Microsoft to get Debian working on Secure Boot EFI, and one of the most
 influential users of my own contribution to the free software world.

   https://qa.debian.org/developer.php?email=93sam%40debian.org
OMG ! That's a truckload of contribution !

   https://wiki.debian.org/SteveMcIntyre


 Also he is one of the people who were addressed by the previous title
 of this thread: "moderators, I would appreciate if you could interfere".
 So giving him back-talk in respect to list rules and conventions seems
 somewhat like asking for a demonstration of his authority.

>>> I misunderstood the message from Steve and have sent him an apology (in
>>> private).
>>>
>>> It wasn't my goal to provoke anyone here.
>>>
>>> And I'd like to add, now that I know that he's been the one that got the
>>> hard job of working with Microsoft. Thank you Steve for your hard work
>>> and be assured that I'm sure you had a hard time convincing Microsoft that
>>> it would be good that they help us so we can get secure boot. I remember
>>> using secure boot in 2009 so it's been a long time since all this
>>> started and at that time they were not much into open source.
>>>
>>> Thank you Steve and sorry for taking your limited time.
>>
>> An illustration of "collapse of stout party" comes to mind here :).
>>
> Thank you
> 

-- 
Polyna-Maude R.-Summerside
-Be smart, Be wise, Support opensource development



OpenPGP_signature
Description: OpenPGP digital signature


Re: nvme SSD and poor performance

2021-08-20 Thread David Christensen

On 8/17/21 2:54 AM, Pierre Willaime wrote:
> I have an NVMe SSD (CAZ-82512-Q11 NVMe LITEON 512GB) on debian stable
> (bullseye now).
>
> For a long time, I have suffered poor I/O performance, which slows down
> a lot of tasks (apt upgrade when unpacking, for example).


On 8/20/21 1:50 AM, Pierre Willaime wrote:

Thanks all.

I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).

But I do not think my issue is trim related after all. 



I agree.


I always have a
lot of I/O activity from jbd2, even just after booting and even when
the computer is doing nothing for hours.


Here is an extended log of iotop where you can see the abnormal jbd2
activity: https://pastebin.com/eyGcGdUz



Was your command something like this?

# iotop -o -k -t -q -q -q > iotop.out


I cannot (yet) find what process is generating this activity. I tried
killing a lot of the jobs seen in the atop output, with no results.



Analyzing the first ten minutes' worth of data with an improvised Perl
script:


index of field 'read_bw'
 1256840.19 firefox-esr
   77926.07 apt
     316.08 perl
      22.74 dpkg
      15.27 [kworker/6:0-events]
index of field 'write_bw'
  220512.79 thunderbird
   29613.87 firefox-esr
   23221.20 dpkg
   15211.66 [jbd2/nvme0n1p2-]
    5529.57 [dpkg]
    4148.09 systemd-journald
    1699.13 perl
     533.28 mandb
     507.15 apt
     145.61 rsyslogd
     131.59 atop
     115.77 syncthing
      46.35 xfce4-terminal
      38.60 smartd
      15.48 Xorg
      15.44 NetworkManager
       7.64 bash
index of field 'swap_percent'
index of field 'io_wait_percent'
   12427.58 [jbd2/nvme0n1p2-]
    1334.15 firefox-esr
     568.91 dpkg
     385.31 thunderbird
     293.57 mandb
      99.99 syncthing
      64.82 [kworker/13:3-events_freezable_power_]
      63.86 smartd
      55.12 [kworker/u32:3+flush-259:0]
      38.64 [kworker/u32:2-flush-259:0]
      25.27 [kworker/u32:3-events_unbound]
      23.13 [kworker/4:0-events_freezable_power_]
      22.68 [kworker/u32:2-events_unbound]
      21.55 apt
      12.51 [kworker/u32:1-ext4-rsv-conversion]
       9.87 [kworker/u32:2-ext4-rsv-conversion]
       9.63 [kworker/9:1-events]
       8.90 [kworker/u32:1-flush-259:0]
       8.58 perl
       8.11 [kworker/9:1-events_freezable_power_]
       5.85 [dpkg]
       4.33 [kworker/u32:3-ext4-rsv-conversion]
       4.26 NetworkManager
       3.57 [kworker/13:3-mm_percpu_wq]
       2.85 [kworker/9:1-mm_percpu_wq]
       2.71 [kworker/4:0-mm_percpu_wq]
       0.40 [kworker/13:3-events] [kworker/4:0-events]
       0.38 [kworker/6:1-events]
       0.36 [kworker/9:1-rcu_gp]
       0.30 [kworker/u32:3-flush-259:0]
       0.26 [kworker/6:0-events_freezable_power_]
       0.16 systemd-journald
       0.03 [kworker/6:0-events]


It appears:

- firefox-esr used the most read bandwidth -- 1256840.19 K/s total

- thunderbird used the most write bandwidth -- 220512.79 K/s total

- No processes were swapping.

- jbd2/nvme0n1p2- waited the longest for I/O -- 12427.58 % total


Both apt(8) and dpkg(1) were also running and using a small amount of 
I/O.  While I may leave Firefox and Thunderbird running when installing 
a package or two, I shut them down for updates and upgrades.  Was the 
iotop data collected during a long-running upgrade?



AIUI the jbd2/nvme0n1p2 I/O corresponds to the bottom half of the kernel 
(e.g. device driver interface, DDI) in response to I/O via the top half 
of the kernel (e.g. application programming interface, API).  The way to 
reduce jbd2 I/O is to reduce application I/O.
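
For reference, an improvised aggregation along those lines can even be
done with awk alone (a sketch, assuming `iotop -o -k -t` batch output,
where field 5 is the read rate, field 7 the write rate, each followed by
a "K/s" unit field, and the command name starts at field 13; column
positions vary between iotop versions):

  awk '$6 == "K/s" && $8 == "K/s" {
         read[$13]  += $5    # accumulate read rate per command name
         write[$13] += $7    # accumulate write rate per command name
       }
       END {
         for (c in write)
           printf "%14.2f %14.2f  %s\n", read[c], write[c], c
       }' iotop.out | sort -k2,2nr | head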



> On 8/17/21 7:07 AM, Dan Ritter wrote:
>> I don't think you have a significant performance problem, but
>> you are definitely feeling some pain -- so can you tell us more
>> about what feels slow? Does it happen during the ordinary course
>> of the day?

Programs are slow to start. Sometimes there is a delay when I type
(letters are displayed a few seconds after typing). Apt unpack takes
forever (5 hours to unpack packages when upgrading to bullseye).


The computer is a recent Dell Precision desktop with an i9-9900 CPU, an
NVIDIA GP107GL [Quadro P400] (plus the GPU integrated into the CPU). The
NVMe SSD is supposed to be a decent one. This desktop is nevertheless a
lot slower than my (more basic) laptop.


Complete system info: https://pastebin.com/zaGVEpae



That's a good workstation.  :-)


Firefox and Thunderbird are habitual trouble-makers on my Debian 
desktops.  I run Xfce with the CPU Graph panel applet.  If I leave 
Firefox or Thunderbird open long enough, eventually I will see a core 
pegged at 100% and the CPU fan will spin up.  Both applications keep 
working in this state; but a swirling toilet bowl mouse pointer in 
Thunderbird is a danger sign -- I have lost e-mail messages when moving 
a message produced that symptom.  The only cure is to close the 
offending program(s) and implement add

Re: bullseye: systemd-networkd-wait-online timeouts

2021-08-20 Thread Jochen Spieker
Andy Smith:
> On Wed, Aug 18, 2021 at 09:36:30PM +0200, Jochen Spieker wrote:
>
>> Aug 18 10:59:20 h2907737 systemd-networkd-wait-online[936688]: Event loop 
>> failed: Connection timed out
>> Aug 18 10:59:20 h2907737 apt-helper[936686]: E: Sub-process 
>> /lib/systemd/systemd-networkd-wait-online returned an error code (1)
>> 
>> For some reason systemd does not (fully?) recognize that the system is
>> online.
> 
> I don't know yet about bullseye but in buster systemd does assume
> that you are online when using ifupdown unless you enable the
> "ifupdown-wait-online" service, which actually waits for every
> interface marked auto to be ready before allowing "network-online"
> target to be reached.
> 
> Is that service enabled? If you disable it, what happens? I expect
> systemd to assume it is online.

Sounds like a good idea! But:

# systemctl status ifupdown-wait-online.service
● ifupdown-wait-online.service - Wait for network to be configured by ifupdown
 Loaded: loaded (/lib/systemd/system/ifupdown-wait-online.service; enabled; 
vendor preset: enabled)
 Active: active (exited) since Sun 2021-08-15 00:16:58 CEST; 5 days ago
   Main PID: 79 (code=exited, status=0/SUCCESS)
  Tasks: 0 (limit: 105)
 Memory: 0B
 CGroup: /system.slice/ifupdown-wait-online.service

Aug 15 00:16:58 .stratoserver.net systemd[1]: Finished Wait for network 
to be configured by ifupdown.
Warning: journal has been rotated since unit was started, output may be 
incomplete.

# systemctl disable --now ifupdown-wait-online.service 
Removed 
/etc/systemd/system/network-online.target.wants/ifupdown-wait-online.service.

# /lib/systemd/systemd-networkd-wait-online  --timeout=2 --any
Event loop failed: Connection timed out


> Also are you absolutely sure that you aren't using systemd-networkd
> and/or NetworkManager and there isn't any .link files for systemd
> anywhere, that it's purely ifupdown?

The package network-manager is not installed, /etc/systemd/network is
empty and /etc/systemd/networkd.conf only contains comments, so I think
I am sure.

> May be worth asking your hosting provider as well because if they
> are mandating how your /etc/network/interfaces looks (and its use)
> then they need to know how to make it play nice with systemd in
> Debian 11.

That is probably the right approach. Thanks for your suggestions!

J.
-- 
No-one appears to be able to help me.
[Agree]   [Disagree]
 



signature.asc
Description: PGP signature


Re: Reading of release notes (was Re: Still on stretch, getting ready for bullseye)

2021-08-20 Thread songbird
piorunz wrote:
...
> New install would change network interface name anyway.

  net.ifnames=0 works for me in that regard.
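
(For anyone wondering where that option goes -- a sketch, assuming GRUB
is the bootloader:)

  # in /etc/default/grub:
  GRUB_CMDLINE_LINUX="net.ifnames=0"
  # then regenerate the config and reboot:
  update-grub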


  songbird



Re: Re: Updating kernels impossible when /boot is getting full

2021-08-20 Thread Ilkka Huotari
> Large unneeded files can be deleted with "rm".

This is what I ended up doing. Version 25 was currently in use and I
rm'ed all the old versions' files.

That worked, but I needed to do the same when version 31 appeared.
Fortunately the new version will come into use, so rm will work again.

Maybe apt-get upgrade could use some magic to get around the space
limitation issue. Or maybe something else could be done.

At install time it would be good to suggest a good size for the boot
partition. I did search a bit but ended up creating too small a partition;
maybe the info I found was too old.
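
(For what it's worth, the usual package-manager route avoids most of the
hand-editing -- a sketch; the version name below is only an example:)

  dpkg -l 'linux-image-*' | grep '^ii'   # list installed kernel packages
  apt autoremove --purge                 # drop kernels apt no longer considers needed
  apt purge linux-image-5.10.0-8-amd64   # or purge one specific old version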

Thanks all!

Ilkka


Re: Respect for newbies and new comers [ was : moderators, I would appreciate if you could interfere ]

2021-08-20 Thread Polyna-Maude Racicot-Summerside


On 2021-08-19 5:33 p.m., Brian wrote:
> On Thu 19 Aug 2021 at 14:45:16 -0400, Polyna-Maude Racicot-Summerside wrote:
> 
>>
>>
>> On 2021-08-19 2:18 p.m., Thomas Schmitt wrote:
>>> Hi,
>>>
>>> Steve McIntyre wrote:
>> This mailing list, like all
>> Debian-hosted mailing lists, is subject to both the Debian mailing
>> list Code of Conduct and the main Debian Code of Conduct:
>> [...]
>>   Inappropriate behaviour on the list may lead to warnings; repeated bad
>>  behaviour may lead to temporary or permanent bans for offenders.
>>
>> If you're trying to label that as "politically correct" then I think
>> you may need to change your expectations. The "principles of open
>> source" do not include a free pass to be abusive to others.
>>>
>>> Pierre-Elliott Bécue wrote:
> [...] Steve's message is pretty explicit, so I guess
> the message he is trying to pass here is quite clear.
>>>
>>> Polyna-Maude Racicot-Summerside wrote:
 Great for him, cheer if you feel like it.
>>>
>>> Well then:
>>>
>>> Three cheers for Steve McIntyre, former Debian Project Leader, carer of
>>> Debian boot capabilities including the unavoidable cooperation with
>>> Microsoft to get Debian working on Secure Boot EFI, and one of the most
>>> influential users of my own contribution to the free software world.
>>>
>>>   https://qa.debian.org/developer.php?email=93sam%40debian.org
>>>   https://wiki.debian.org/SteveMcIntyre
>>>
>>>
>>> Also he is one of the people who were addressed by the previous title
>>> of this thread: "moderators, I would appreciate if you could interfere".
>>> So giving him back-talk in respect to list rules and conventions seems
>>> somewhat like asking for a demonstration of his authority.
>>>
>> I misunderstood the message from Steve and have sent him an apology (in
>> private).
>>
>> It wasn't my goal to provoke anyone here.
>>
>> And I'd like to add, now that I know that he's been the one that got the
>> hard job of working with Microsoft. Thank you Steve for your hard work
>> and be assured that I'm sure you had a hard time convincing Microsoft that
>> it would be good that they help us so we can get secure boot. I remember
>> using secure boot in 2009 so it's been a long time since all this
>> started and at that time they were not much into open source.
>>
>> Thank you Steve and sorry for taking your limited time.
> 
> An illustration of "collapse of stout party" comes to mind here :).
> 
Thank you

-- 
Polyna-Maude R.-Summerside
-Be smart, Be wise, Support opensource development



OpenPGP_signature
Description: OpenPGP digital signature


Failed to detect SSD when installing Bullseye

2021-08-20 Thread Nikolay Velyurov
Hi all,

I tried to install Bullseye on my MacBookAir6,1 but the SSD is not detected:

ACPI: SSDT 0x8CD7E000 00010B (v01 APPLE  SataAhci 1000
INTL 20100915)
libata version 3.00 loaded.
ahci :04:00.0: version 3.0
ahci :04:00.0: AHCI 0001. 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
ahci :04:00.0: flags: 64bit ncq sntf led pio slum part
scsi host0: ahci
ata1: SATA max UDMA/133 abar m512@0xb070 port 0xb0700100 irq 54
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
ata1: limiting SATA link speed to 3.0 Gbps
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
DMAR: DRHD: handling fault status reg 3
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1.00: failed to IDENTIFY (INIT_DEV_PARAMS failed, err_mask=0x80)
DMAR: DRHD: handling fault status reg 2
DMAR: [DMA Write] Request device [04:00.1] PASID  fault addr
fffe [fault reason 02] Present bit in context entry is clear
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)

Buster and earlier releases work just well:

ACPI: SSDT 0x8CD7E000 00010B (v01 APPLE  SataAhci 1000
INTL 20100915)
libata version 3.00 loaded.
ahci :04:00.0: version 3.0
ahci :04:00.0: AHCI 0001. 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
ahci :04:00.0: flags: 64bit ncq sntf led pio slum part
scsi host0: ahci
ata1: SATA max UDMA/133 abar m512@0xb070 port 0xb0700100 irq 49
ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
ata1.00: unexpected _GTF length (8)
ata1.00: ATA-8: APPLE SSD TS0128F, 109R0219, max UDMA/100
ata1.00: 236978176 sectors, multi 0: LBA48 NCQ (depth 32)
ata1.00: unexpected _GTF length (8)
ata1.00: configured for UDMA/100
scsi 0:0:0:0: Direct-Access ATA  APPLE SSD TS0128 0219 PQ: 0 ANSI: 5
sd 0:0:0:0: [sda] 236978176 512-byte logical blocks: (121 GB/113 GiB)
sd 0:0:0:0: [sda] 4096-byte physical blocks
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't
support DPO or FUA
 sda: sda1 sda2 sda3 sda4 sda5 sda6 sda7
sd 0:0:0:0: [sda] Attached SCSI disk

Can anyone help me please?

Regards,
Nikolay



Re: Relatively boring bullseye upgrade reports

2021-08-20 Thread James B
Another for the list,

Dell/WYSE Zx0 box, with AMD G-T56N CPU and 8 GB flash drive. Used for offsite
playback of my home media via DWService and as a gateway to my home
network. Installation worked perfectly - no issues at all.
-- 
  James B
  portoteache...@fastmail.com

On Fri, 20 Aug '21, at 10:22, Reco wrote:
> hc2: Samsung Exynos 5422-based board, Odroid HC2
> Currently stores backups.
> 
> Nothing to report, the upgrade went smoothly.
> 
> Reco
> 
> 



Re: Still on stretch, getting ready for bullseye

2021-08-20 Thread Curt
On 2021-08-18, mick crane wrote:
>> 
>> Not sure I see how that analogy applies.
>> 
>> If the 'water' is the knowledge of upgrading Debian releases as present
>> in the official release notes, then the chemical analysis is the advice
>> on upgrading from this list? In that case, it seems the horse is in
>> danger of eating the chemical analysis of the water, rather than
>> drinking the water.
>> 
>> The reverse would make more sense, the water being the advice from this
>> list, and the release notes being the chemical analysis.
>
> you are right, I was being a bit flippant.
> Maybe more the horse is in an important race and just needs to know 
> where the water hole is.

I think in the case of certain people the horse is rather high and they
should get down from it.



Re: nvme SSD and poor performance

2021-08-20 Thread piorunz

On 20/08/2021 18:11, Marco Möller wrote:

browser.sessionstore.interval
Storing the session is not the only data written to the disk. If it
were only this data, then indeed setting a higher interval would be
sufficient. But there is much more going on. Especially the cache seems
to be a cause of the extremely high I/O of Firefox. That's why I finally
decided to go for the psd tool, and this tool did the trick for me.


That's why you can tune the disk cache in Firefox: disable it, make it
smaller and so on. By definition the disk cache for a browser is a hot
cache where everything lands and is taken from again instead of being
downloaded from the internet. On my PC I have /home on RAID10 made of 4
spinning drives, so all programs can cache and thrash on the storage all
day. If you, or anyone else in this topic, have /home on SSD and/or
Firefox/TB data churn is a concern, you could disable the Fx disk cache
altogether.
I am glad the solution you have chosen, that psd tool (I don't know it),
works for you.
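
For reference, the about:config preferences involved are these (values
are examples, not recommendations):

  browser.cache.disk.enable     false    # turn the on-disk cache off entirely
  browser.cache.memory.enable   true     # keep caching in RAM instead
  browser.cache.disk.capacity   262144   # or just cap the disk cache size, in KiB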

--

With kindest regards, piorunz.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: nvme SSD and poor performance

2021-08-20 Thread Alexander V. Makartsev

On 20.08.2021 13:50, Pierre Willaime wrote:


Programs are slow to start. Sometimes there is a delay when I type
(letters are displayed a few seconds after typing). Apt unpack takes
forever (5 hours to unpack packages when upgrading to bullseye).


That looks abnormal to me. Have you tried updating the SSD firmware¹ and
BIOS for your Dell PC?
LiteOn's storage branch was bought by KIOXIA (Toshiba), so I can't find
any official information about the technical specifications of the SSD
you have, but just by looking at its photos, I assume it is an average
one, with external cache and TLC NAND chips.
The performance of the NVMe looks like it is working at a slower x2 speed
instead of utilizing the full x4 lanes. But it could also be its normal
working state, and delays/slowdowns could come from parts unrelated to
the NVMe, like SATA HDDs.

A "smartctl" output for all storage devices could be useful.

The computer is a recent Dell Precision desktop with an i9-9900 CPU,
an NVIDIA GP107GL [Quadro P400] (plus the GPU integrated into the CPU).
The NVMe SSD is supposed to be a decent one. This desktop is nevertheless
a lot slower than my (more basic) laptop.


Complete system info: https://pastebin.com/zaGVEpae




[1] 
https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=0n4c4


--
With kindest regards, Alexander.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: nvme SSD and poor performance

2021-08-20 Thread Tixy
On Fri, 2021-08-20 at 17:48 +0200, Christian Britz wrote:
> 
> On 20.08.21 at 10:50, Pierre Willaime wrote:
> > 
> > I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).
> > 
> Interesting. I am almost 100% certain that it was enabled for me by 
> default on bullseye. Maybe that behaviour changed during the release 
> process.

It's enabled on my PC which was a fresh install in December using a
Bullseye release candidate installer.

-- 
Tixy



Re: Issues with Bullseye

2021-08-20 Thread Hans
On Friday, 20 August 2021, at 18:55:44 CEST, Brian wrote:
Yeah, looks like we woke a sleeping dog up. :)

Best

Hans

> On Sun 15 Aug 2021 at 16:51:19 +0200, Hans wrote:
> > On Sunday, 15 August 2021, at 16:36:05 CEST, Brian wrote:
> > Yes, you are very right!
> > 
> > > There isn't any such thing as being "too critical" when it comes to
> > > technical matters :).
> 
> Other may have been listening in to us [1]. One never knows what a
> short remark leads to :).
> 
>  [1] https://lists.debian.org/debian-devel/2021/08/msg00269.html



signature.asc
Description: This is a digitally signed message part.


Re: nvme SSD and poor performance

2021-08-20 Thread Linux-Fan

Pierre Willaime writes:


Thanks all.

I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).


You're welcome :)

But I do not think my issue is trim related after all. I always have a lot
of I/O activity from jbd2, even just after booting and even when the
computer is doing nothing for hours.


Here is an extended log of iotop where you can see the abnormal jbd2
activity: https://pastebin.com/eyGcGdUz


According to that, a lot of firefox-esr and dpkg and some thunderbird
processes are active. Is there still a high intensity of I/O operations
when all Firefox and Thunderbird instances are closed and no system
upgrade is running?


When testing with iotop here, options `-d 10 -P` seemed to help getting a  
steadier and less cluttered view. Still, filtering your iotop output for  
Firefox, Thunderbird and DPKG respectively seems to be quite revealing:


| $ grep firefox-esr eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:51  3363 be/4 pierre      0.00 K/s  1811.89 K/s  0.00 % 17.64 % firefox-esr [mozStorage #3]
| 10:39:58  5117 be/4 pierre      0.00 K/s  1112.59 K/s  0.00 %  0.37 % firefox-esr [IndexedDB #14]
| 10:41:55  3363 be/4 pierre      0.00 K/s  6823.06 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:41:55  3305 be/4 pierre   1469.88 K/s     0.00 K/s  0.00 % 60.57 % firefox-esr [QuotaManager IO]
| 10:41:55  3363 be/4 pierre   6869.74 K/s  6684.07 K/s  0.00 % 31.96 % firefox-esr [mozStorage #3]
| 10:41:56  6752 be/4 pierre   2517.19 K/s     0.00 K/s  0.00 % 99.99 % firefox-esr [Indexed~Mnt #13]
| 10:41:56  6755 be/4 pierre  31114.18 K/s     0.00 K/s  0.00 % 99.58 % firefox-esr [Indexed~Mnt #16]
| 10:41:56  3363 be/4 pierre   9153.40 K/s     0.00 K/s  0.00 % 87.06 % firefox-esr [mozStorage #3]
| 10:41:57  6755 be/4 pierre 249206.18 K/s     0.00 K/s  0.00 % 59.01 % firefox-esr [Indexed~Mnt #16]
| 10:41:57  6755 be/4 pierre 251353.11 K/s     0.00 K/s  0.00 % 66.02 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre 273621.58 K/s     0.00 K/s  0.00 % 59.51 % firefox-esr [Indexed~Mnt #16]
| 10:41:58  6755 be/4 pierre  51639.70 K/s     0.00 K/s  0.00 % 94.90 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 113869.64 K/s     0.00 K/s  0.00 % 79.03 % firefox-esr [Indexed~Mnt #16]
| 10:41:59  6755 be/4 pierre 259549.09 K/s     0.00 K/s  0.00 % 56.99 % firefox-esr [Indexed~Mnt #16]
| 10:44:41  3265 be/4 pierre   1196.21 K/s     0.00 K/s  0.00 % 20.89 % firefox-esr
| 10:44:41  3289 be/4 pierre   3813.36 K/s   935.22 K/s  0.00 %  4.59 % firefox-esr [Cache2 I/O]
| 10:44:53  3363 be/4 pierre      0.00 K/s  1176.90 K/s  0.00 %  0.00 % firefox-esr [mozStorage #3]
| 10:49:28  3363 be/4 pierre      0.00 K/s  1403.16 K/s  0.00 %  0.43 % firefox-esr [mozStorage #3]

So there are incredible amounts of data being read by Firefox (gigabytes in
a few minutes)? Does this load show up in atop's or iotop's summarizing
lines at the beginning of the respective screens?


| $ grep thunderbird eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:43  2846 be/4 pierre  0.00 K/s  1360.19 K/s  0.00 % 15.51 % thunderbird [mozStorage #1]
| 10:39:49  2873 be/4 pierre  0.00 K/s  4753.74 K/s  0.00 %  0.00 % thunderbird [mozStorage #6]
| 10:39:49  2875 be/4 pierre  0.00 K/s 19217.56 K/s  0.00 %  0.00 % thunderbird [mozStorage #7]
| 10:39:50  2883 be/4 pierre  0.00 K/s 18014.56 K/s  0.00 % 29.39 % thunderbird [mozStorage #8]
| 10:39:50  2883 be/4 pierre  0.00 K/s  3305.94 K/s  0.00 % 27.28 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s 61950.19 K/s  0.00 % 63.11 % thunderbird [mozStorage #8]
| 10:39:51  2883 be/4 pierre  0.00 K/s 41572.77 K/s  0.00 % 27.19 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s 20961.20 K/s  0.00 % 65.02 % thunderbird [mozStorage #8]
| 10:39:52  2883 be/4 pierre  0.00 K/s 43345.16 K/s  0.00 %  0.19 % thunderbird [mozStorage #8]
| 10:42:27  2846 be/4 pierre  0.00 K/s  1189.63 K/s  0.00 %  0.45 % thunderbird [mozStorage #1]
| 10:42:33  2846 be/4 pierre  0.00 K/s  1058.52 K/s  0.00 %  0.31 % thunderbird [mozStorage #1]
| 10:47:27  2846 be/4 pierre  0.00 K/s  2113.53 K/s  0.00 %  0.66 % thunderbird [mozStorage #1]

Thunderbird seems to write a lot here. This would average at ~18 MiB/s of writing  
and hence explain why the SSD is loaded continuously. Again: Does it match the  
data reported by atop? [I am not experienced in reading iotop output, hence  
I might interpret the data wrongly].


By comparison, dpkg looks rather harmless:

| $ grep dpkg eyGcGdUz | grep -E '[0-9]{4,}.[0-9]{2} K/s'
| 10:38:25  4506 be/4 root  0.00 K/s 4553.67 K/s  0.00 %  0.26 % dpkg --status-fd 23 --no-triggers --unpack --auto-deconfigure --force-remove-protected --recursive /tmp/apt-dpkg-install-E69bfZ
| 10:38:33  4506 be/4 root  7.73 K/s 4173.77 K/s  0.00 %  1.52 % dpkg --status-fd 23 --no-triggers --unpack --auto-deconfigure --force-remove-protected --recursive /tmp/apt-dpkg-install-E69bfZ

Re: nvme SSD and poor performance

2021-08-20 Thread Marco Möller

On 20.08.21 18:22, piorunz wrote:

On 17/08/2021 15:03, Marco Möller wrote:

I have no experience with SSDs, but having run my Debian desktop from a USB
memory stick for years, please allow me to share information which
supports Linux-Fan's suggestion to also investigate whether there is
extraordinary I/O taking place that maybe could be avoided:
In the past I found extreme(!) I/O to be produced by Firefox, when it is
writing its cache, and when it is writing its session restore
information. These writes, in my observation, occur all the time, kind of
nonstop! I could get it very satisfactorily reduced by applying a tool
called "Profile Sync Daemon" (psd) from the package "profile-sync-daemon".
While it is packaged for Debian, as a starting point to study its
documentation I suggest looking it up in the Arch Linux Wiki.


The Firefox setting which determines how often session data is written to
disk is:
browser.sessionstore.interval

The default setting is a value of 15000 = 15 seconds, not that bad. But still
I changed that to 10 minutes (a value of 600000). If I lose an open tab
once in a year when Firefox crashes, so be it. To be honest, I haven't
seen Fx crash in a year, or more.

--

With kindest regards, piorunz.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



browser.sessionstore.interval
Storing the session is not the only data written to the disk. If it
were only this data, then indeed setting a higher interval would be
sufficient. But there is much more going on. Especially the cache seems
to be a cause of the extremely high I/O of Firefox. That's why I finally
decided to go for the psd tool, and this tool did the trick for me. Well, I
assume there were good reasons for someone to program psd; probably the
author also noticed that Firefox parameters alone are often not enough to
get the I/O caused by Firefox on the disk significantly reduced.


I am curious to see what's producing the high jbd2 traffic on the system
of Pierre, whether it can be stopped, and whether it can finally be
confirmed as the cause of the trouble.


Unfortunately I did not take good notes on how I searched for what was
causing the traffic on my system; I only remember having monitored my
system with iotop and inotifywait.


---
Always stay in good spirits!
Marco



Re: Issues with Bullseye

2021-08-20 Thread Brian
On Sun 15 Aug 2021 at 16:51:19 +0200, Hans wrote:

> On Sunday, 15 August 2021, at 16:36:05 CEST, Brian wrote:
> Yes, you are very right!
> > 
> > There isn't any such thing as being "too critical" when it comes to
> > technical matters :).

Other may have been listening in to us [1]. One never knows what a
short remark leads to :).

 [1] https://lists.debian.org/debian-devel/2021/08/msg00269.html

-- 
Brian.



Re: nvme SSD and poor performance

2021-08-20 Thread piorunz

On 17/08/2021 15:03, Marco Möller wrote:

I have no experience with SSDs, but having run my Debian desktop from a USB
memory stick for years, please allow me to share information which
supports Linux-Fan's suggestion to also investigate whether there is
extraordinary I/O taking place that maybe could be avoided:
In the past I found extreme(!) I/O to be produced by Firefox, when it is
writing its cache, and when it is writing its session restore
information. These writes, in my observation, occur all the time, kind of
nonstop! I could get it very satisfactorily reduced by applying a tool
called "Profile Sync Daemon" (psd) from the package "profile-sync-daemon".
While it is packaged for Debian, as a starting point to study its
documentation I suggest looking it up in the Arch Linux Wiki.


The Firefox setting which determines how often session data is written to
disk is:
browser.sessionstore.interval

The default setting is a value of 15000 = 15 seconds, not that bad. But still
I changed that to 10 minutes (a value of 600000). If I lose an open tab
once in a year when Firefox crashes, so be it. To be honest, I haven't
seen Fx crash in a year, or more.
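
(If you want that to survive profile migrations, the same preference can
go into user.js in the profile directory -- a sketch; the path is the
usual default:)

  // ~/.mozilla/firefox/<profile>/user.js
  user_pref("browser.sessionstore.interval", 600000);  // write session data every 10 minutes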

--

With kindest regards, piorunz.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: nvme SSD and poor performance

2021-08-20 Thread Christian Britz




On 20.08.21 at 10:50, Pierre Willaime wrote:


I activated `# systemctl enable fstrim.timer` (thanks Linux-Fan).

Interesting. I am almost 100% certain that it was enabled for me by 
default on bullseye. Maybe that behaviour changed during the release 
process.
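
An easy way to check on any given system (standard systemd and
util-linux commands):

  $ systemctl is-enabled fstrim.timer
  $ systemctl list-timers fstrim.timer   # last and next scheduled run
  # fstrim -av                           # one manual trim of all supported mounts, verbose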


Best Regards,
Christian



Re: Typical timescales for publishing binary packages in -backports?

2021-08-20 Thread Andy Smith
Hello,

On Thu, Aug 19, 2021 at 08:26:07PM +0100, Brian wrote:
> On Thu 19 Aug 2021 at 16:16:01 +, Andy Smith wrote:
> > So I was wondering what is the typical timescale for binary packages
> > from the kernel source upload to appear in buster-backports?
> 
> I do not think there is a typical timescale. -backports is by special
> effort for any of its packages. Perhaps "soon" is the best that can be
> predicted.

Hah, today is the day, apparently!

Thanks,
Andy



Re: Reading of release notes (was Re: Still on stretch, getting ready for bullseye)

2021-08-20 Thread piorunz

On 20/08/2021 12:40, Greg Wooledge wrote:


This line of thought probably comes from Windows, when any hardware
change causes problems, drivers installation, it can refuse to boot,
etc. Not a problem on Debian :)


It can be.  Any change to the hardware, or even the mobo firmware, can
cause PCI data to change, which means "predictable" network interface
names can change.


New install would change network interface name anyway.


Installation of a new video or (wireless) network card may require the
installation of a new (non-free) driver in some cases.


Of course, just like normal install.


But overall, yes.  Debian handles hardware changes more gracefully than
some other operating systems have historically done.


Yes, I agree. Overall, Debian will almost always survive a hardware change.
If required, tweak things to get it back to normal. Still, even with
that it's much less work than a new install.

--

With kindest regards, piorunz.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: Buster to Bullseye upgrade problem

2021-08-20 Thread Gareth Evans
On Fri 20 Aug 2021, at 04:45, David Wright wrote:
> On Thu 19 Aug 2021 at 07:42:56 (+0100), Gareth Evans wrote:
> > On Thu 19 Aug 2021, at 05:50, David Wright wrote:
> > > On Thu 19 Aug 2021 at 04:00:04 (+0100), Gareth Evans wrote:
> > > > On Wed 18 Aug 2021, at 23:33, piorunz wrote:
> > > > > On 18/08/2021 16:14, Gareth Evans wrote:
> > > > > > Unpacking gir1.2-gst-plugins-bad-1.0:amd64 (1.18.4-3) ...
> > > > > > [1mdpkg:[0m error processing archive 
> > > > > > /tmp/apt-dpkg-install-Un4rDW/28-gir1.2-gst-plugins-bad-1.0_1.18.4-3_amd64.deb
> > > > > >  (--unpack):
> > > > > >   trying to overwrite 
> > > > > > '/usr/lib/x86_64-linux-gnu/girepository-1.0/GstTranscoder-1.0.typelib',
> > > > > >  which is also in package pitivi 0.999-1+b1
> > > > > > [1mdpkg-deb:[0m [1;31merror:[0m paste subprocess was killed by 
> > > > > > signal (Broken pipe)
> > > > >
> > > > > IMO that's the source of your problem.
> > > > > You have two packages fighting to overwrite the same file. You should
> > > > > inspect them.
> > > > > Are you sure they come from buster, not from foreign repository?
> > > > 
> > > > Apparently they are both from Buster.
> > > 
> > > I'm afraid not. You need to check the version numbers more carefully.
> > 

Hello again,

Thanks David, for your clear explanation of the problem.

> > I meant the packages (gir1.2-gst-plugins-bad-1.0, pitivi) being upgraded, 
> > rather than the versions replacing them, were both from Buster at the point 
> > of the upgrade (weren't they?)
> 
> One might assume so, but only you can check that ...

I was mistaken in thinking that gir1.2-gst-plugins-bad-1.0 was installed in 
Buster in the first place.

[Buster]
$ apt policy gir*bad*
gir1.2-gst-plugins-bad-1.0:
  Installed: (none)
  Candidate: 1.14.4-1+deb10u2
  Version table:
 1.14.4-1+deb10u2 500
500 http://deb.debian.org/debian buster/main amd64 Packages
500 http://security.debian.org/debian-security buster/updates/main 
amd64 Packages

$ aptitude why gir1.2-gst-plugins-bad-1.0
Not currently installed
The candidate version 1.14.4-1+deb10u2 has priority optional
No dependencies require to install gir1.2-gst-plugins-bad-1.0

$ apt policy pitivi
pitivi:
  Installed: 0.999-1+b1
  Candidate: 0.999-1+b1
  Version table:
 *** 0.999-1+b1 500
500 http://deb.debian.org/debian buster/main amd64 Packages
100 /var/lib/dpkg/status

So pitivi 0.999 as installed is a Buster package, and gir* is installed during 
the upgrade as a dependency of Bullseye's newer pitivi version.

[Bullseye] 
$ aptitude why gir1.2-gst-plugins-bad-1.0
i   pitivi Depends gir1.2-gst-plugins-bad-1.0 (>= 1.18.0)


The first upgrade interruption issue (repeated here for clarity):

--
Unpacking gir1.2-gst-plugins-bad-1.0:amd64 (1.18.4-3) ...
dpkg: error processing archive 
/tmp/apt-dpkg-install-YeCJ7K/28-gir1.2-gst-plugins-bad-1.0_1.18.4-3_amd64.deb 
(--unpack):
 trying to overwrite 
'/usr/lib/x86_64-linux-gnu/girepository-1.0/GstTranscoder-1.0.typelib', which 
is also in package pitivi 0.999-1+b1
dpkg-deb: error: paste subprocess was killed by signal (Broken pipe)
--

appears to be a file conflict, per 

https://www.debian.org/releases/stable/amd64/release-notes/ch-upgrading.en.html#file-conflicts

which includes that "File conflicts should not occur if you upgrade from a 
“pure” buster system..."

So I would like to know if apt is not handling this properly, or if the 
scenario of a file changing packages (see David's previous email) is an 
expected exception to the (sort of) rule.

Shouldn't pitivi 0.999 be disregarded anyway as it's being upgraded?  It's not 
a conflict involving two Bullseye packages, nor with one from Bullseye and one 
held/pinned etc, so I don't see why it should happen.

FWIW I followed s4.2 of the release notes ("start from pure Debian") 
meticulously
https://www.debian.org/releases/stable/amd64/release-notes/ch-upgrading.en.html#system-status

as well as running the commands kindly suggested by piorunz:

> On Wed 18 Aug 2021, at 23:33, piorunz wrote:
> [...]
> check if any packages are newer than repository (possibly foreign)
> apt-show-versions | grep newer
> 
> check if any packages are foreign (not from Debian)
> aptitude search '~i(!~ODebian)'

There is also no explanation in term.log, syslog or dpkg.log for the second 
interruption:

--
Processing triggers for libapache2-mod-php7.4 (7.4.21-1+deb11u1) ...
[upgrade interrupted...]
W: APT had planned for dpkg to do more than it reported back (5014 vs 5047).
   Affected packages: texlive-fonts-recommended:amd64 texlive-lang-greek:amd64 
texlive-latex-base:amd64 texlive-latex-extra:amd64 
texlive-latex-recommended:amd64 texlive-pictures:amd64 
texlive-plain-generic:amd64 texlive-science:amd64
---

which occurs even if pitivi is removed before upgrading, and the warning 
doesn't appear in term.log either.

If anyone can shed further light, I would be interested, but it's not 
ultimately a roadblock to upgrading, so possibly not worth worrying about.

Re: Buster to Bullseye upgrade problem

2021-08-20 Thread Gareth Evans
On Fri 20 Aug 2021, at 12:32, Greg Wooledge wrote:
> On Thu, Aug 19, 2021 at 10:45:57PM -0500, David Wright wrote:
> > One might assume so, but only you can check that. There are two logs
> > of the upgrade. /var/log/apt/history.log (and its predecessors) shows
> > the command issued, followed by the packages affected, with the old
> > and new version numbers in parentheses. /var/log/apt/term.log (and its
> > predecessors) shows the various packages being unpacked and then set
> > up.
> 
> Huh.  I didn't know about the terminal session logs.  Nifty.
> 

> 'Twould be nice if it showed the command which was typed before each
> logged session, but still... nifty.

You may be aware of "script", which I have found useful for that. You have to
overlook gibberish characters though, as backspace and other special chars are
included in the output file.

https://linux.die.net/man/1/script

You can set it up to run with new terminals, but I can't find the instructions 
to do so.
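
One common trick (an assumption on my part, not something I have set up
myself) is a guard in ~/.bashrc so every new interactive terminal
re-executes itself under script exactly once:

  # in ~/.bashrc -- log each interactive terminal session
  if [ -z "$UNDER_SCRIPT" ] && [ -t 0 ]; then
      export UNDER_SCRIPT=1
      exec script -aq "$HOME/term-$(date +%F).log"
  fi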

Kind regards
Gareth



Re: Reading of release notes (was Re: Still on stretch, getting ready for bullseye)

2021-08-20 Thread Greg Wooledge
On Fri, Aug 20, 2021 at 11:55:32AM +0100, piorunz wrote:
> On 19/08/2021 13:21, songbird wrote:
> > when i changed motherboards i figured it was worth a fresh
> > install
> 
> This line of thought probably comes from Windows, when any hardware
> change causes problems, drivers installation, it can refuse to boot,
> etc. Not a problem on Debian :)

It can be.  Any change to the hardware, or even the mobo firmware, can
cause PCI data to change, which means "predictable" network interface
names can change.

Installation of a new video or (wireless) network card may require the
installation of a new (non-free) driver in some cases.

But overall, yes.  Debian handles hardware changes more gracefully than
some other operating systems have historically done.



Re: Buster to Bullseye upgrade problem

2021-08-20 Thread Greg Wooledge
On Thu, Aug 19, 2021 at 10:45:57PM -0500, David Wright wrote:
> One might assume so, but only you can check that. There are two logs
> of the upgrade. /var/log/apt/history.log (and its predecessors) shows
> the command issued, followed by the packages affected, with the old
> and new version numbers in parentheses. /var/log/apt/term.log (and its
> predecessors) shows the various packages being unpacked and then set
> up.

Huh.  I didn't know about the terminal session logs.  Nifty.

'Twould be nice if it showed the command which was typed before each
logged session, but still... nifty.



Re: Reading of release notes (was Re: Still on stretch, getting ready for bullseye)

2021-08-20 Thread piorunz

On 19/08/2021 13:21, songbird wrote:

when i changed motherboards i figured it was worth a fresh
install


This line of thought probably comes from Windows, where any hardware
change causes problems: driver reinstalls, refusal to boot,
etc. Not a problem on Debian :)

Motherboard, or even CPU model or manufacturer, has no effect on a Debian
installation. Look at LiveCD images: they work on everything. Your
system also works on everything, unless you specifically disabled some
modules, recompiled the kernel just for your hardware, and so on.

I sometimes use a persistent Linux on a stick to boot any computer with
my Linux distro, instead of a typical LiveCD, for diagnostics or
maintenance. I have yet to find a computer where my USB stick wouldn't
boot. And that's a normally installed OS, not a LiveCD. It's the full
meat: the latest Linux Mint, compressed Btrfs, with all updates and
plenty of software installed.

--

With kindest regards, piorunz.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org
⠈⠳⣄



Re: Pipewire for multiple users

2021-08-20 Thread Lucio Crusca

On 19/08/21 23:48, martin f krafft wrote:
> Pulse and systemd need a dbus session, and "su" will not get you that.
> If you want to use either, you need to ensure that you're properly
> logged in as the user, or configure your system to set up the dbus
> session accordingly.
>
> An easy way to do this is to use "ssh" with X-forwarding:
>
>     ssh -X u@localhost
>
> and then try it again.

I don't get how it managed to work until a few days ago, because I never 
used ssh, and the "u" user played sounds perfectly up to PipeWire 0.3.32 
even when only switching users with "su".


However, using ssh it now works with PipeWire 0.3.33 too, and that's 
enough for me. Thanks a lot.
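
For the record, another approach I have seen suggested for getting a real 
login session (with its own dbus) as a second user, assuming the 
systemd-container package is installed, is machinectl. A sketch I have not 
tested with PipeWire specifically:

    sudo apt install systemd-container
    sudo machinectl shell u@.host   # full login session for user "u" on the local host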




Re: Relatively boring bullseye upgrade reports

2021-08-20 Thread Reco
hc2: Samsung Exynos 5422-based board, Odroid HC2
Currently stores backups.

Nothing to report, the upgrade went smoothly.

Reco



Re: Reading of release notes (was Re: Still on stretch, getting ready for bullseye)

2021-08-20 Thread Anssi Saari
songbird  writes:

> Anssi Saari wrote:
> ...
>> And yes, the working upgrades are the reason I've stuck with Debian
>> since Hamm. My ever evolving desktop computer is on its second
>> installation now since I reinstalled when I switched to 64-bits
>> somewhere in the decade before last.

>   when i changed motherboards i figured it was worth a fresh
> install...

Interesting, but I don't see that as a reason to reinstall, especially with
Linux, since it usually just keeps going. Well, barring new hardware
sometimes.

Personally, I'm thinking of going to unstable, as I can't seem to get
anywhere with my new video card; but since I have Arch already installed,
perhaps not. Then again, I spent about half an hour collecting the scripts
and configs I'd need to copy over to Arch if I'm going to use it as a
daily driver. That work, in the end, is why I don't reinstall.



Re: nvme SSD and poor performance

2021-08-20 Thread Pierre Willaime

Thanks all.

I activated the timer with `# systemctl enable fstrim.timer` (thanks, Linux-Fan).
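
(For what it's worth, one can confirm the timer is actually scheduled with:)

    systemctl list-timers fstrim.timer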

But I do not think my issue is trim-related after all. I always have a lot 
of I/O activity from jbd2, even just after booting and even when the 
computer has been doing nothing for hours.


Here is an extended log of iotop where you can see jbd2's abnormal 
activity: https://pastebin.com/eyGcGdUz


I cannot (yet) find which process is generating this activity. I tried 
killing a lot of the jobs seen in the atop output, with no results.
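
(One way to trace it further, as a sketch, assuming the fatrace package is 
available: fatrace prints the process behind each filesystem event, which 
iotop cannot do for jbd2, since the journal thread writes on behalf of 
other processes.)

    sudo apt install fatrace
    sudo fatrace -f W   # report write events along with the responsible process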



I don't think you have a significant performance problem, but
you are definitely feeling some pain -- so can you tell us more
about what feels slow? Does it happen during the ordinary course
of the day?


Programs are slow to start. Sometimes there is a delay when I type 
(letters are displayed a few seconds after typing). Apt unpacking takes 
forever (5 hours to unpack packages when upgrading to bullseye).


The computer is a recent Dell Precision desktop with an i9-9900 CPU and an 
NVIDIA GP107GL [Quadro P400] (plus the GPU integrated into the CPU). The 
NVMe SSD is supposed to be a decent one. Yet this desktop is a lot 
slower than my (more basic) laptop.


Complete system info: https://pastebin.com/zaGVEpae




On 18/08/2021 at 00:24, David Christensen wrote:

On 8/17/21 2:54 AM, Pierre Willaime wrote:
 > Hi,
 >
 > I have an NVMe SSD (CAZ-82512-Q11 NVMe LITEON 512GB) on Debian stable
 > (bullseye now).
 >
 > For a long time, I have suffered poor I/O performance, which slows down
 > a lot of tasks (apt upgrade when unpacking, for example).
 >
 > I am now trying to fix this issue.
 >
 > Using fstrim seems to restore speed. There are always many GiB that get
 > trimmed ("réduits" in the output below; the system runs a French locale):
 >
 >  #  fstrim -v /
 >  / : 236,7 GiB (254122389504 octets) réduits
 >
 > then, directly after :
 >
 >  #  fstrim -v /
 >  / : 0 B (0 octets) réduits
 >
 > but a few minutes later, there are already 1.2 GiB to trim again:
 >
 >  #  fstrim -v /
 >  / : 1,2 GiB (1235369984 octets) réduits
 >
 >
 > Is it a good idea to trim, and if so, how (and how often)?
 >
 > Some people run fstrim as a cron job; others add the "discard" option
 > to the / line in /etc/fstab. I do not know which is best, if any. I have
 > also read that trimming frequently could reduce the SSD's life.
 >
 >
 >
 > I also noticed many I/O accesses from jbd2 and kworker, such as:
 >
 >  # iotop -bktoqqq -d .5
 >  11:11:16  364 be/3 root    0.00 K/s    7.69 K/s  0.00 %  23.64 % [jbd2/nvme0n1p2-]
 >  11:11:16    8 be/4 root    0.00 K/s    0.00 K/s  0.00 %  25.52 % [kworker/u32:0-flush-259:0]
 >
 > The percentage given by iotop (time the thread/process spent while
 > swapping in and while waiting on I/O) is often high.
 >
 > I do not know what to do about kworker, or whether this is normal
 > behavior. For jbd2, I have read it is the filesystem (ext4 here) journal.
 >
 > I added the "noatime" option to the / line in /etc/fstab, but it does
 > not seem to reduce the number of accesses.
 >
 > Regards,
 > Pierre
 >
 >
 > P.S.: If trimming is needed for SSDs, why does Debian not trim by default?



On 8/17/21 6:14 AM, Pierre Willaime wrote:

On 17/08/2021 at 14:02, Dan Ritter wrote:

The first question is, how slow is this storage?


Here is a good article on using fio:
https://arstechnica.com/gadgets/2020/02/how-fast-are-your-disks-find-out-the-open-source-way-with-fio/ 



Thanks for the help.

Here are the outputs of the fio tests.


Single 4KiB random write process test:

WRITE: bw=197MiB/s (207MB/s), 197MiB/s-197MiB/s (207MB/s-207MB/s), 
io=12.0GiB (12.9GB), run=62271-62271msec


https://pastebin.com/5Cyg9Xvt


16 parallel 64KiB random write processes (two quite different results; 
further tests were closer to the second than the first):


WRITE: bw=523MiB/s (548MB/s), 31.8MiB/s-33.0MiB/s (33.4MB/s-35.6MB/s), 
io=35.5GiB (38.1GB), run=63568-69533msec


WRITE: bw=201MiB/s (211MB/s), 11.9MiB/s-14.8MiB/s (12.5MB/s-15.5MB/s), 
io=14.3GiB (15.3GB), run=60871-72618msec



https://pastebin.com/XVpPpqsC
https://pastebin.com/HEx8VvhS


Single 1MiB random write process:

   WRITE: bw=270MiB/s (283MB/s), 270MiB/s-270MiB/s (283MB/s-283MB/s), 
io=16.0GiB (17.2GB), run=60722-60722msec


https://pastebin.com/skk6mi7M




Thank you for posting your fio(1) runs on pastebin -- it is far easier 
to comment on real data.  :-)



It would help if you told us:

1.  Make and model of computer (or motherboard and chassis, if DIY).

2.  Make and model of CPU.

3.  Quantity, make, and model of memory modules, and how your memory 
slots are populated.


4.  NVMe drive partitioning, formatting, space usage, etc.


STFW "CAZ-82512-Q11 NVMe LITEON 512GB", that looks like a decent desktop 
NVMe drive.



Looking at your fio(1) runs:

1. It appears that you held 6 parameters constant:

 --name=random-write
 --ioengine=posixaio
 --rw=randwrite
 --runtime=60
 --time_based
 --end_fsync=1

2.  It appears that you varied 4 parameters:

 run    bs    size    numjobs    iodepth
 1    4k    4g    1 
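
(Reconstructing run 1 from those constants, the command would look 
something like the following: a sketch based on the fio article linked 
earlier in the thread, with iodepth assumed to be 1 since the table is 
cut off here.)

    fio --name=random-write --ioengine=posixaio --rw=randwrite \
        --bs=4k --size=4g --numjobs=1 --iodepth=1 \
        --runtime=60 --time_based --end_fsync=1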

Re: Relatively boring bullseye upgrade reports

2021-08-20 Thread Reco
Hi.

Let me join the party, I hope I'm not late.

caiman: Marvell Armada 385-based router, Linksys WRT1200AC.
Currently used as unmanaged switch.

My only gripe with the upgrade was snmpd. Bullseye's version reordered
just about everything in snmpd.conf.

Reco



Re: bbc script

2021-08-20 Thread tomas
On Thu, Aug 19, 2021 at 08:56:22PM -0400, Greg Wooledge wrote:
> > On Thu, 19 Aug 2021, Greg Wooledge wrote:
> > > Also relevant: https://mywiki.wooledge.org/BashFAQ/115

Great explanation, Greg. I wish I had half of your talent :)

One small addendum: the "predefined strings" in the case statement
are actually patterns matched against the contents of the variable in
question:

> The case statement simply compares what the user typed to a bunch of
> predefined strings.  You choose what those strings are.
> 
>   case $ch in
> a) add; break;;
> s) subtract; break;;
> m) multiply; break;;
> q) exit 0;;
> *) echo "Unrecognized command.  Please try again.";;
>   esac

So the first one ("a") just matches when $ch contains exactly an "a".
But the last one ("*") matches everything (a way of saying "else",
or "if nothing matched, then...").

The same goes for the later example:

>   case $ch in
> a | ad | add) add; break;;

the "a | ad | add" is a pattern matching "a" or "ad", or "add". There
are more useful patterns.
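
For instance (a quick sketch; case patterns are shell globs, not regexes):

  case $ch in
    [0-9]*) echo "starts with a digit";;
    *.txt)  echo "looks like a text file name";;
    ??)     echo "exactly two characters";;
    *)      echo "anything else";;
  esac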

Cheers
 - t

