Re: backup of backup or alternating backups?

2024-09-30 Thread Tim Woodall

On Mon, 30 Sep 2024, Default User wrote:


Hi!

On a thread at another mailing list, someone mentioned that they, each
day, alternate doing backups between two external usb drives. That got
me to thinking (which is always dangerous) . . .

I have a full backup on usb external drive A, "refreshed" daily using
rsnapshot. Then, every day, I use rsync to make usb external drive B an
"exact" copy of usb external drive A. It seemed to be a good idea,
since if drive A fails, I can immediately plug in drive B to replace
it, with no down time, and nothing lost.

But of course, any errors on drive A propagate daily to drive B.

So, is there a consensus on which would be better: 
1) continue to "mirror" drive A to drive B?
or,
2) alternate backups daily between drives A and B?



IMO it can take days, weeks, even months to discover that something has
got corrupted and/or been deleted in error.

I don't think either strategy is "better"; they have different pros and
cons. But in particular, alternating doesn't require both drives to be
online at once, and at least gives you a one day window to discover
corruption before it reaches the second drive.

I think my strategy would be something more akin to the following (I
think rsnapshot can do this but I've not actually used it)

1. Alternate disks (as you are doing)
2. Create a new directory YYYYMMDD and backup into that directory,
creating hard links to the files from the previous backup (two days
before)
3. Delete the oldest directories as/when you start running out of space.
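Something like this implements steps 2 and 3 with plain rsync - a rough
sketch only, the paths are made up (rsnapshot can do the hard-linking
for you, this is just what it amounts to):

today=$(date +%Y%m%d)
prev=$(ls -1d /mnt/backup/2* 2>/dev/null | tail -n 1)
rsync -a ${prev:+--link-dest="$prev"} /home/ "/mnt/backup/$today/"
# when space runs low, delete the oldest:
# rm -rf "$(ls -1d /mnt/backup/2* | head -n 1)"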


On a slight tangent, how does rsnapshot deal with ext4 uninited extents?
These are subtly different to sparse files, they're still not written to
disk but the disk blocks are explicitly reserved for the file:

truncate (sparse file) vs fallocate (blocks reserved)

I've noticed that, at least on bookworm, lseek for SEEK_HOLE/SEEK_DATA
treats fallocate as a hole similar to a sparse file. I haven't tested
tar with the --sparse option but I suspect it will treat the two types
of file the same too.
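A quick way to see the difference between the two (assuming coreutils
truncate, util-linux fallocate and e2fsprogs filefrag are available):

truncate -s 1M sparse.img      # hole: no blocks allocated
fallocate -l 1M prealloc.img   # blocks reserved but uninited
ls -ls sparse.img prealloc.img # first column: 0 blocks vs ~1024
filefrag -v prealloc.img       # the extent is flagged "unwritten"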



Re: Cleanup in a bash script

2024-09-30 Thread Tim Woodall

On Sat, 28 Sep 2024, Michael Kjörling wrote:


On 28 Sep 2024 16:28 +0100, from debianu...@woodall.me.uk (Tim Woodall):

Hmmm, I've managed to fix it. The problem seems to be related to using
echo in the exit trap itself while both stderr and stdout are redirected
- e.g. via |& tee log

If I change the echo in the trap function to /bin/echo then it works! (I
don't see the output anywhere but that isn't really a problem in this
case - I know why it's aborted!)

I still can't create a small testcase but maybe this gives someone a
clue what issue I'm hitting?

In this particular case it's almost certain that the tee program will
have gone away before the bash script calls the exit handler.


That last sentence seems _very_ relevant here.

If the other end of the pipe is gone, then the shell builtin `echo`
probably fails with SIGPIPE/EPIPE. So will /bin/echo too, of course,
but that will fail just that process, not the script or the /bin/bash
that is executing the script as is probably the case in your
situation. I suspect that if you dig into this, you will find that
everything works as expected up to that `echo` statement.

Check $? after /bin/echo in the handler (probably non-zero), and do a
`type echo` from within a script executed in the same way (probably
"shell builtin").

If so, there's your difference.



Yes, this does seem to be my problem. However I didn't know about this
feature of builtin echo and, had I known, I wouldn't have needed to ask
the question.

Here's something else where they explicitly use /bin/echo in preference
to bash builtin echo

https://unix.stackexchange.com/questions/522929/check-if-named-pipe-is-open-for-reading

The first answer:
You cannot use the built-in echo (even in a subshell) because the
SIGPIPE will be either caught or kill the whole shell.
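So the workaround in handler form is something like this (a sketch; the
cleanup body is illustrative, not the original script's):

cleanup() {
    # external echo: EPIPE only kills the child, not the script
    /bin/echo "cleaning up" || true
    # ... umounts, losetup -d, rm -f ...
}
trap cleanup EXIT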


Re: ifupdown and inet6 gateways for inet interfaces

2024-09-28 Thread Tim Woodall

On Fri, 27 Sep 2024, Andy Smith wrote:


Hi,

Here is a manual network setup I have created by use of the "ip"
command:

$ ip address show dev enX0
2: enX0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group 
default qlen 1000
   link/ether 00:16:5e:00:02:39 brd ff:ff:ff:ff:ff:ff
   inet 85.119.82.225/32 scope global enX0
  valid_lft forever preferred_lft forever
   inet6 2001:ba8:1f1:f1d7::2/64 scope global
  valid_lft forever preferred_lft forever
   inet6 fe80::216:5eff:fe00:239/64 scope link
  valid_lft forever preferred_lft forever
$ ip route show
default via inet6 fe80::1 dev enX0 src 85.119.82.225
$ ip -6 route show
2001:ba8:1f1:f1d7::/64 dev enX0 proto kernel metric 256 pref medium
fe80::/64 dev enX0 proto kernel metric 256 pref medium
default via fe80::1 dev enX0 metric 1024 pref medium

Note that it has a single global scope IPv4 address which is a /32, and
its IPv4 default route is via an IPv6 link-local address.

This works fine, however I had to configure it using the "ip" command:


Fascinating! I had absolutely no idea you could do that!

I suspect you can do it with pre-up commands and inet6 manual. I'd not
be surprised if everything else expects a gateway to be on the same
AF_FAMILY.

I've used this where I want an interface up but not otherwise
configured, but you can add whatever ip commands you need.

auto xenbr0_19
iface xenbr0_19 inet6 manual
pre-up echo 0 >/proc/sys/net/ipv6/conf/default/accept_dad
pre-up echo 0 >/proc/sys/net/ipv6/conf/default/accept_ra
bridge_ports intlan0.19
bridge_stp off   # disable Spanning Tree Protocol
bridge_waitport 0# no delay before a port becomes available
bridge_fd 0  # no forwarding delay
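Untested, but replicating Andy's setup that way might look something
like this (addresses copied from his post):

auto enX0
iface enX0 inet6 manual
pre-up ip link set dev enX0 up
up ip address add 85.119.82.225/32 dev enX0
up ip -6 address add 2001:ba8:1f1:f1d7::2/64 dev enX0
up ip route add default via inet6 fe80::1 dev enX0 src 85.119.82.225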



Re: Cleanup in a bash script

2024-09-28 Thread Tim Woodall

On Sat, 28 Sep 2024, Greg Wooledge wrote:


On Sat, Sep 28, 2024 at 14:53:10 +0100, Tim Woodall wrote:

Is there a way in bash to guarantee that a trap gets called for cleanup
in a script?


#!/bin/bash
trap cleanup EXIT
cleanup() {
   ...
}

This works in bash -- i.e., it calls the cleanup function regardless
of whether the shell exits by calling "exit", or by falling off the
end of the script, or by receiving a fatal signal.  It does NOT work in
/bin/sh (dash, or any other implementation).  You have been warned.



That's exactly what I'm doing but somehow it's not working. (See below
for what the problem seems to be)

But I also cannot create a small testcase to reproduce.

It's a pain: I'm trying to debug something that takes a very long time
to run, and the cleanup requires a complex ordering of unmounting,
deleting loop devices and deleting files, which all happens
automatically normally.

But as soon as I send the (copious) output to a pipe, then it doesn't
cleanup and I'm left having to do it by hand.

I guess I'll have to try and debug what isn't working.

Hmmm, I've managed to fix it. The problem seems to be related to using
echo in the exit trap itself while both stderr and stdout are redirected
- e.g. via |& tee log

If I change the echo in the trap function to /bin/echo then it works! (I
don't see the output anywhere but that isn't really a problem in this
case - I know why it's aborted!)

I still can't create a small testcase but maybe this gives someone a
clue what issue I'm hitting?

In this particular case it's almost certain that the tee program will
have gone away before the bash script calls the exit handler.

apt-cache policy bash
bash:
  Installed: 5.2.15-2+b7



Cleanup in a bash script

2024-09-28 Thread Tim Woodall

Is there a way in bash to guarantee that a trap gets called for cleanup
in a script?

I have a script that works perfectly normally and cleans up after
itself, even if it goes wrong.

However on trying to debug something else, I wanted to run it like this:

./script |& tee log

and now it doesn't clean up if I <ctrl-c> it.

Is there a way in bash to guarantee (modulo uncatchable signals) that a
cleanup routine gets called?




Re: Unused blocks and fstrim

2024-09-23 Thread Tim Woodall

On Mon, 23 Sep 2024, Steve Keller wrote:


Tim Woodall  writes:




The raid rebuild is a particular pain point IMO. It's important to do a
discard after a failed disk rebuild otherwise every block is 'in use' on
the underlying storage.


Hmm, does a RAID rebuild really always copy the whole new disk, even
the unused space?  But what kind of info is then kept in the first
128 MiB of /dev/md0, if not a flag for every block telling whether it's
used or not?


After a rebuild I always create a LV with all the free space and then
discard it.


:(

I currently have RAID only on a server with HDDs which don't support
TRIM anyway.  I have only needed twice to rebuild the RAID-1 with 2
disks and I seem to remember that not the whole disk was copied, but I
might be wrong on that.



I think the bitmaps are for dirty blocks - so a resynch after a power
failure is quick, not for a failed disk replacement rebuild.

But perhaps there's a config option somewhere so that the md device can
track discards in a bitmap.

My guess is most people run at 90% capacity so it's not that useful...



Re: Best practice for fresh install on UEFI with multiple disks?

2024-09-20 Thread Tim Woodall

On Fri, 20 Sep 2024, Florent Rougon wrote:


Le 20/09/2024, Tim Woodall  a écrit:


Because the script will abort after the mount fails.

root@dirac:~# cat test.sh
#!/bin/bash

set -e

mount /boot/efi2

echo "do important stuff"

root@dirac:~# ./test.sh
mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
   dmesg(1) may have more information after failed mount system call.


Note that do important stuff is never reached.


That's interesting because my system doesn't behave the same. I had of
course checked, before writing my first message, that 'mount /boot/efi2'
returns exit status 0 even when /boot/efi2 is already mounted. With your
script (called foo.sh here), here is what I get:

# mount | grep efi2
/dev/sda1 on /boot/efi2 type vfat 
(rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
# /tmp/foo.sh
do important stuff
# mount | grep efi2
/dev/sda1 on /boot/efi2 type vfat 
(rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
/dev/sda1 on /boot/efi2 type vfat 
(rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
#

Every invocation adds a new, duplicate entry in the output of 'mount'.

This is Debian sid amd64; /usr/bin/mount is from 'mount' package version
2.40.2-8.



That's very interesting and looks like it's probably a kernel change.

Tim.


Re: Best practice for fresh install on UEFI with multiple disks?

2024-09-20 Thread Tim Woodall

On Fri, 20 Sep 2024, Florent Rougon wrote:


Le 20/09/2024, Tim Woodall  a écrit:


Haven't looked at the script but assuming it's run set -e, then your
suggestion will fail if it's already mounted.


Why?



Because the script will abort after the mount fails.

root@dirac:~# cat test.sh
#!/bin/bash

set -e

mount /boot/efi2

echo "do important stuff"

root@dirac:~# ./test.sh
mount: /boot/efi2: /dev/sda2 already mounted on /boot/efi2.
   dmesg(1) may have more information after failed mount system call.


Note that do important stuff is never reached.



Re: Unused blocks and fstrim

2024-09-20 Thread Tim Woodall

On Fri, 20 Sep 2024, Steve Keller wrote:


I'd like to understand some technical details about how fstrim, file
systems, and block devices work.

Do ext4 and btrfs keep a list of blocks that have already been reported as
unused or do they have to report all unused blocks to the block device
layer everytime the fstrim command is issued?

Does LVM keep information on every block about its usage or does it always
have to pass trim operations to the lower layer?

And does software RAID, i.e. /dev/md* keep this information on every block?
Can RAID skip unused blocks from syncing in a RAID-1 array when I replace a
disk?

Steve



By default, iscsi, md, lvm, ext2 do not keep this information. Don't
know if it's configurable somewhere but I suspect not. Don't know about
btrfs.

Some of this data is cached, but not between reboots.

The raid rebuild is a particular pain point IMO. It's important to do a
discard after a failed disk rebuild otherwise every block is 'in use' on
the underlying storage.

After a rebuild I always create a LV with all the free space and then
discard it.
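i.e. something like this (assuming a VG called vg0 - adjust to taste):

lvcreate -l 100%FREE -n trimme vg0
blkdiscard /dev/vg0/trimme
lvremove -y vg0/trimme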

I think a VG free space skipping md rebuild would suit me better than
discard tracking at all the different levels. I guess ZFS users might
have a different view of how useful lvm aware mdraid is :-)




Re: Best practice for fresh install on UEFI with multiple disks?

2024-09-19 Thread Tim Woodall

On Thu, 19 Sep 2024, Florent Rougon wrote:


Hi,

Le 19/09/2024, Andy Smith  a écrit:


I don't think the answer, on Debian, has changed since I asked the
same question in 2020:

https://lists.debian.org/debian-user/2020/11/msg00455.html


There is a script at [1] to install as, e.g.,
/etc/grub.d/90_copy_to_boot_efi2, so that it is automatically run every
time grub updates its configuration file. I believe the script is fine,
except I would do

 mount /boot/efi2

rather than

 mount /boot/efi2 || :

Maybe the intent is for the script not to return a non-zero exit status
when /boot/efi2 can't be mounted, however in this case I certainly don't
want the rsync command to be run.



Haven't looked at the script but assuming it's run with set -e, then
your suggestion will fail if it's already mounted.

Best would be to check whether it's already mounted, and unmount
afterwards only if the script did the mounting.
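Something like this (a sketch - the rsync is standing in for whatever
the script actually does):

mounted_here=0
if ! mountpoint -q /boot/efi2; then
    mount /boot/efi2
    mounted_here=1
fi
rsync -a /boot/efi/ /boot/efi2/
if [ "$mounted_here" -eq 1 ]; then
    umount /boot/efi2
fi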

Tim.



Re: logging with iptables

2024-09-19 Thread Tim Woodall

On Thu, 19 Sep 2024, fxkl4...@protonmail.com wrote:


in my iptables i have:  tcp LOG flags 0 level 4 prefix "REJECT: "
this does what i want but how to direct the logging
it gets written to multiple file in /var/log
syslog, messages, kern, debug
can i restrict this to a single file



*.*;auth,authpriv.none;kern.none        -/var/log/syslog

Add kern.none to the ones you don't want kernel messages in.

That will, of course, stop all kernel logging to those files, not just
iptables.
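Alternatively, match on the prefix and steer just those messages to
their own file, e.g. in a (hypothetical) /etc/rsyslog.d/10-iptables.conf
- untested sketch:

:msg, contains, "REJECT: "  -/var/log/iptables.log
& stop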




Re: LVM in LVM

2024-09-16 Thread Tim Woodall

On Mon, 16 Sep 2024, Steve Keller wrote:


In older Debian releases, I think at least until Debian 9, it was
possible to access PVs and LVs which are stored in a LV.  The PV
inside the containing LV could be displayed and activated with
vgdisplay(8) and vgchange(8).

This scenario makes sense if you have a LV for a VM guest, that uses
LVM inside, and when you need to access file systems in the guest from
the host (while the guest is shutdown).

I don't see how this can be done in the current Debian 12.

Steve



Not sure because I've previously battled the opposite problem but I'd
start here in lvm.conf

# Configuration option devices/sysfs_scan.
# Restrict device scanning to block devices appearing in sysfs.
# This is a quick way of filtering out block devices that are not
# present on the system. sysfs must be part of the kernel and
# mounted.
sysfs_scan = 1

# Configuration option devices/scan_lvs.
# Scan LVM LVs for layered PVs, allowing LVs to be used as PVs.
# When 1, LVM will detect PVs layered on LVs, and caution must be
# taken to avoid a host accessing a layered VG that may not belong
# to it, e.g. from a guest image. This generally requires excluding
# the LVs with device filters. Also, when this setting is enabled,
# every LVM command will scan every active LV on the system (unless
# filtered), which can cause performance problems on systems with
# many active LVs. When this setting is 0, LVM will not detect or
# use PVs that exist on LVs, and will not allow a PV to be created on
# an LV. The LVs are ignored using a built in device filter that
# identifies and excludes LVs.
scan_lvs = 0
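If scan_lvs is the problem, you can probably enable it for a one-off
without editing lvm.conf (a sketch - the VG name is made up):

vgscan --config 'devices { scan_lvs = 1 }'
vgchange -ay --config 'devices { scan_lvs = 1 }' guestvg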



Re: Really ancient debian images? (potato or older)

2024-09-14 Thread Tim Woodall

On Sat, 14 Sep 2024, Michael Kjörling wrote:


On 14 Sep 2024 16:15 +0100, from debianu...@woodall.me.uk (Tim Woodall):

Is there anywhere I can download really, really ancient debian images. I
need potato or older (i386). I'd like a mountable disk image.


And, of course, as soon as I sent that I found a potato CD on
archive.org. Downloading now - it's going to take a while - about 20x
faster than it would have been back in the day over dialup but
definitely not a high speed archive :-)


There's also archive.debian.org. See for example
https://archive.debian.org/debian/dists/potato/main/disks-i386/2.2.26-2001-06-14/images-1.44/




This is fantastic, thanks all!

And for anyone who might want to do something like this in the future,
the only file I actually needed was:

https://archive.debian.org/debian/dists/potato/main/disks-i386/2.2.26-2001-06-14/base2_2.tgz

I downloaded the CD, mounted it and got it that way but it can be
downloaded directly. I assume the other old things are similar.

I've managed to build versions going all the way back to 1999.

Re: Really ancient debian images? (potato or older)

2024-09-14 Thread Tim Woodall

On Sat, 14 Sep 2024, Tom Furie wrote:


On Sat, Sep 14, 2024 at 04:15:46PM +0100, Tim Woodall wrote:


The oldest backups I still have go back to 2006 which, sadly, is way too
modern. Checking the potato release information says security updates
were discontinued in 2003. I switched from Redhat to Debian around that
time so I'm likely not to have had any backups for potato anyway
although I did at least boot it at that time.


And, of course, as soon as I sent that I found a potato CD on
archive.org. Downloading now - it's going to take a while - about 20x
faster than it would have been back in the day over dialup but
definitely not a high speed archive :-)


https://cdimage.debian.org/mirror/cdimage/ has images going all the way back
to 1.3. Prior to 3.0 is in the "older-contrib" directory.



Thanks! I didn't see that "older-contrib" directory and I assumed they'd
been deleted.

That's downloading much, much faster than archive.org.

Tim.



Re: Really ancient debian images? (potato or older)

2024-09-14 Thread Tim Woodall

On Sat, 14 Sep 2024, Tim Woodall wrote:


Is there anywhere I can download really, really ancient debian images. I
need potato or older (i386). I'd like a mountable disk image.

I have no idea if debootstrap supports this, I haven't tried (yet), I
was hoping there was somewhere I could download an image.

This is to build some ancient software. I've tried Jessie, which is the
oldest release I have images for but unfortunately that has e2fslibs-dev
1.42 while the software will not build with e2fslibs >1.19.

The oldest backups I still have go back to 2006 which, sadly, is way too
modern. Checking the potato release information says security updates
were discontinued in 2003. I switched from Redhat to Debian around that
time so I'm likely not to have had any backups for potato anyway
although I did at least boot it at that time.



And, of course, as soon as I sent that I found a potato CD on
archive.org. Downloading now - it's going to take a while - about 20x
faster than it would have been back in the day over dialup but
definitely not a high speed archive :-)



Really ancient debian images? (potato or older)

2024-09-14 Thread Tim Woodall

Is there anywhere I can download really, really ancient debian images. I
need potato or older (i386). I'd like a mountable disk image.

I have no idea if debootstrap supports this, I haven't tried (yet), I
was hoping there was somewhere I could download an image.

This is to build some ancient software. I've tried Jessie, which is the
oldest release I have images for but unfortunately that has e2fslibs-dev
1.42 while the software will not build with e2fslibs >1.19.

The oldest backups I still have go back to 2006 which, sadly, is way too
modern. Checking the potato release information says security updates
were discontinued in 2003. I switched from Redhat to Debian around that
time so I'm likely not to have had any backups for potato anyway
although I did at least boot it at that time.

Tim.



Re: MAC filter

2024-09-04 Thread Tim Woodall

On Wed, 4 Sep 2024, Andy Smith wrote:


Hi,

On Wed, Sep 04, 2024 at 04:29:00AM +, Tim Woodall wrote:

Every reply I've seen talks about local macs.


To be honest I had trouble parsing the original post as a cohesive
English text. I mean it had words I understood, just not in that
particular combination.


Yes. I assumed the author wasn't a native English speaker.


Don't know a good way on ipv6, best I can think of is
ping ff02::1


ip neighbor


Nice. Thanks. And ip neighbour works too :-)


(is also the "new" way for IPv4)

Thanks,
Andy






Re: MAC filter

2024-09-03 Thread Tim Woodall

On Sun, 1 Sep 2024, John Conover wrote:



The MAC filter needs a local filter for the two 16 X dual hex, (23
total,) digits.

The MAC is router usually aligned internally by the router, and
contains unique hex digits.

Does any anyone recall how to query the digits to the display?


Every reply I've seen talks about local macs.

arp -a

ipv4.firewall17.home.woodall.me.uk (192.168.100.1) at 00:16:3e:e0:70:01 [ether] 
on eth0

if you're using ipv4

Don't know a good way on ipv6, best I can think of is
ping ff02::1

64 bytes from fe80::216:3eff:fee0:7001%eth0: icmp_seq=3 ttl=64 time=2.15ms



Re: Bridging Network Connections with libvirt are unreliable

2024-08-29 Thread Tim Woodall

On Wed, 28 Aug 2024, Rainer Dorsch wrote:


In the systemd log, the first entry indicating network problems is that the DNS
server switches to another interface. But it could easily be a consequence and
not the cause of the issue:

Aug 28 06:57:54 h370 dhclient[1195]: DHCPREQUEST for 192.168.4.203 on eno1.4
to 192.168.4.1 port 67
Aug 28 06:57:54 h370 dhclient[1195]: DHCPACK of 192.168.4.203 from 192.168.4.1
Aug 28 06:57:54 h370 dnsmasq[2386]: reading /etc/resolv.conf
Aug 28 06:57:54 h370 dnsmasq[2386]: using nameserver 192.168.4.1#53
Aug 28 06:57:54 h370 dhclient[1195]: bound to 192.168.4.203 -- renewal in
18265 seconds.



To me that looks like it's the DHCP request (renewal?) that is more
likely breaking things. The DHCP client is presumably rewriting
resolv.conf.

I have the following setting to stop dhcp changing resolv.conf:

$ cat /etc/dhcp/dhclient-enter-hooks.d/nodnsupdate
make_resolv_conf() {
:
}

Don't know if that will fix your problem but it should hopefully stop
those dnsmasq lines appearing in the log.

Does the problem definitely happen when the dhcp update happens or are
these just the nearest logs?

Tim.



Re: Subscribing to bug updates.

2024-08-26 Thread Tim Woodall

On Mon, 26 Aug 2024, Jonathan Dowland wrote:


On Mon Aug 26, 2024 at 9:08 AM BST, Tim Woodall wrote:

Is there some magic needed to subscribe to bug updates?

…

I've managed to do this in the past. Not sure what I've done wrong or
has changed.


Can you outline what you tried this time?




Sent three emails all like this (only real difference being the bug
number in the subject)

Date: Sun, 25 Aug 2024 21:28:37 +0100 (BST)
From: Tim Woodall 
To: 1075339-subscr...@bugs.debian.org
Message-ID: <6d96abc2-7cda-2d29-61b4-92af8806f...@woodall.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; format=flowed; charset=US-ASCII

Which were delivered here after some greylisting

(UTC+1)
Aug 25 21:39:59 mailrelay sm-mta[13405]: 47PKTjN6013316: 
to=<1073606-subscr...@bugs.debian.org>, delay=00:10:14, xdelay=00:00:03, 
mailer=esmtp, pri=301580, relay=buxtehude.debian.org. [209.87.16.39], dsn=2.0.0, 
stat=Sent (OK id=1siK1n-00G1BG-4V)
Aug 25 21:40:00 mailrelay sm-mta[13405]: 47PKScsH013310: 
to=<1075339-subscr...@bugs.debian.org>, delay=00:11:22, xdelay=00:00:01, 
mailer=esmtp, pri=301580, relay=buxtehude.debian.org. [209.87.16.39], dsn=2.0.0, 
stat=Sent (OK id=1siK1o-00G1BG-2g)
Aug 25 21:40:01 mailrelay sm-mta[13405]: 47PKT991013313: 
to=<1075196-subscr...@bugs.debian.org>, delay=00:10:52, xdelay=00:00:01, 
mailer=esmtp, pri=301580, relay=buxtehude.debian.org. [209.87.16.39], dsn=2.0.0, 
stat=Sent (OK id=1siK1o-00G1BG-W3)

But other than one spam email the last attempt to deliver email to
debianbugs@ was:

(UTC)
Aug 25 20:33:11 imap202 sm-mta[23678]: 47PKX8mZ023677: 
to=, delay=00:00:01, xdelay=00:00:01, mailer=local, 
pri=33352, relay=debianbugs+woodall.me.uk, dsn=2.0.0, stat=Sent

which was a response to a different email that did arrive OK. The emails
to control and to the bugs themselves all worked, it's only the emails
to the subscribe address that seem to have vanished.


Tim.


Subscribing to bug updates.

2024-08-26 Thread Tim Woodall

Is there some magic needed to subscribe to bug updates?

I added patches for three trixie RC bugs at the weekend and tried to
subscribe to updates but I didn't get the subscribe confirmation email.

Do I need the word subscribe in subject or body perhaps?

debian.org/Bugs/Developer says subject and body are ignored.

I've managed to do this in the past. Not sure what I've done wrong or
has changed.

Tim.



Re: QEMU: Run a container of a different architecture

2024-08-22 Thread Tim Woodall

On Wed, 21 Aug 2024, Steve Keller wrote:


Can I run a container for a different CPU architecture using
systemd-nspawn?  I can easily install on my amd64 host a Debian
container of the same architecture and run that:



Don't know about systemd-nspawn but I do something like this using
unshare, binfmt-support and qemu-user-static.

I don't have to do anything at all other than create the file system
with the emulated architecture and then chroot into it with those
packages installed.

$ apt-mark showmanual | grep binf
binfmt-support
$ apt-mark showmanual | grep qemu
qemu-user-static


I don't think I installed anything else but I might have missed
something.

Perhaps systemd-nspawn would similarly work with those packages
installed.
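For reference, the chroot route is roughly this (a sketch, assuming an
arm64 target and a made-up ./arm64root path; run as root here for
brevity, though with user namespaces you can avoid that):

debootstrap --arch=arm64 --foreign bookworm ./arm64root
chroot ./arm64root /debootstrap/debootstrap --second-stage
chroot ./arm64root uname -m   # reports aarch64, via qemu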

Tim.




Re: Authenticator apps

2024-08-05 Thread Tim Woodall

On Sun, 4 Aug 2024, to...@tuxteam.de wrote:


On Sun, Aug 04, 2024 at 05:44:07PM +0100, Mick Ab wrote:

I have a Debian Bullseye desktop PC.

I am looking for a 2fa authenticator that works on my desktop, without
using a smartphone or tablet.


I don't know what an "authenticator app" is. If what you need is TOTP,
oathtool (in the same-named Debian package) might be your friend.



I use this too, and it gives the same numbers as FreeOTP which I have
installed on my phone.
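For TOTP the usage is along these lines (the base32 secret here is a
made-up example):

oathtool --totp -b JBSWY3DPEHPK3PXP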




Re: how to use debootstrap tar as repo inside the chroot

2024-08-02 Thread Tim Woodall

On Fri, 2 Aug 2024, Tim Woodall wrote:



# Reading through this now I'm not absolutely sure that those hoops I
# jump through to sign the repo are needed...


Just confirmed the gpg stuff is not needed


# Not sure why I have that proxy bit in here either. I think I'm
# installing from a file repo...



Again, confirmed that they weren't needed and this works fine with no
network access at all.



Re: how to use debootstrap tar as repo inside the chroot

2024-08-02 Thread Tim Woodall

On Fri, 2 Aug 2024, daggs wrote:


Greetings,

I'm working on an automated Debian installation without network access.
I've discovered the --make-tarball and --unpack-tarball switches which I use to 
create the tarball and use it as repo.
the initial deployment is this: debootstrap --arch amd64 --unpack-tarball 
/tmp/debs.tar.gz stable /mnt
which installs the base pkgs however debs.tar.gz holds other deb files which I 
want to install when within the chroot.
I looked at the /mnt after the initial deployment and I see that there are 
files that might help me in /var/cache/apt/archives/ and  /var/lib/apt/lists/.
so I was wondering, is there a way to use these files to create a valid local 
repo and use it to install the pkgs I've prepared using the --make-tarball 
switch.
is there a standard way to do it which I've might have missed while looking 
online?



Here's an outline of how I do it - which you might be able to modify for
your use case:

# Download everything I need.
  rm -f "${APT_WORK}"/archives/*.deb
  APT_CONFIG=${APT_CONFIG} \
  DEBIAN_FRONTEND=noninteractive \
apt-get -q -y --allow-unauthenticated install -d \
-o APT::Install-Recommends=false \
  "$@"

#$@ is a list of debs I download - I guess this matches your tarball.


#Set up a local repo for use inside the chroot of everything we downloaded in 
phase 1
mkdir -p "${BUILDCHROOT}/.repo/dists/${DIST}/main/binary-${ARCH}/"
mkdir -p "${BUILDCHROOT}/.repo/pool"
mkdir -p "${BUILDCHROOT}/.repo/sim"

aptftp-conf()
{
  cat <<CONFEOF
APT::FTPArchive::Release::Suite "${DIST}";
APT::FTPArchive::Release::Components "main";
APT::FTPArchive::Release::Architectures "${ARCH}";
CONFEOF
}

( cd "${BUILDCHROOT}/.repo" && \
  apt-ftparchive packages pool >"dists/${DIST}/main/binary-${ARCH}/Packages" )
xz -c "${BUILDCHROOT}/.repo/dists/${DIST}/main/binary-${ARCH}/Packages" \
  >"${BUILDCHROOT}/.repo/dists/${DIST}/main/binary-${ARCH}/Packages.xz"

apt-ftparchive release -c=<( aptftp-conf ) "${BUILDCHROOT}/.repo/dists/${DIST}/" \
  >"${BUILDCHROOT}/.repo/dists/${DIST}/Release"

# Reading through this now I'm not absolutely sure that those hoops I
# jump through to sign the repo are needed...

# I now chroot into ${BUILDCHROOT} where the essential packages are
# already unpacked and installed in my workflow. I guess this point is
# simliar to the end of your --unpack-tarball step.

cat <<CATEOF >"/etc/apt/sources.list.d/${DIST}.sources"
Types: deb
URIs: file:/.repo
Suites: ${DIST}
Components: main

CATEOF


# Not sure why I have that proxy bit in here either. I think I'm
# installing from a file repo...

#Now we install the remaining required packages and cleanup
  apt-get -o Acquire::http::Proxy="http://localhost:3128/" \
  -o APT::Get::AllowUnauthenticated=yes \
  -o Acquire::AllowInsecureRepositories=yes \
  update
  apt-get -o Acquire::http::Proxy="http://localhost:3128/" \
  -o APT::Install-Recommends=false \
  -o Dpkg::Options::="--force-confdef" \
  -o Dpkg::Options::="--force-confold" \
  --allow-unauthenticated -y \
  install bootstrap-required 

Re: Tool to store on IMAP server

2024-07-30 Thread Tim Woodall

On Tue, 30 Jul 2024, Nicolas George wrote:


Tim Woodall (12024-07-30):

Yes, I use unison to keep some imap servers in sync.


Be precise: you use unison to keep the directories that serve as mail
storage for some IMAP servers in sync. Your unison does not know that
there is IMAP involved.


Correct. Unison has no idea that it's synching maildirs.

cat inbox-imap17.prf

# Unison preferences file
#

# Roots of the synchronization
root = /home/tim/Maildir/INBOX
root = ssh://imap17//home/tim/Maildir/INBOX

#Name matches anything, including a directory name.
#Path matches anything but is anchored at the root
ignore = Name .lock
ignore = Name {dovecot.*}
ignore = Name {dovecot-*}
ignore = Name {tmp/*}

# Log actions to the terminal
log = true



I've never 'tried' to break it by, for example, performing imap things
on the same email on two servers at once but my guess is I'd end up with
a duplicated email. It's possible I'd get a synch conflict but I've not
had one yet. And I do occasionally use imap on more than one server at
once.

But I'd assume dropping files into the maildir with non-conflicting
names - there might be restrictions on how they should be named - would
'just work' at least with dovecot.



Re: Tool to store on IMAP server

2024-07-30 Thread Tim Woodall

On Mon, 29 Jul 2024, mick.crane wrote:


On 2024-07-29 14:36, Nicolas George wrote:

Hi.

I am looking for a tool that reads a mail from its input and stores it
into an IMAP mailbox:
With a new Dovecot install I believe I copied all the old mails into eg. 
~/Maildir/cur

and they showed up.
I was concerned the '1722260402.M755015P70320.xx,S=17279,W=17606:2,S'
numbers might get mixed up with new ones but didn't seem to matter.
Probably not how you are supposed to do it.
mick



Yes, I use unison to keep some imap servers in sync. Three servers
receive mail which is then replicated and I've had no problems.

So just adding files should be fine.



Re: sendmail without DNS

2024-07-22 Thread Tim Woodall

On Sun, 21 Jul 2024, Adam Weremczuk wrote:


This is in a way a continuation of my recently "purely local DNS" thread.

To recap: my objective is to send emails to a single domain with both DNS and 
any other email traffic being disabled.


A simple working solution that I've found for Postfix is:

/etc/hosts
1.2.3.4    example.com

/etc/postfix/main.cf
smtp_dns_support_level = disabled
smtp_host_lookup = native

Now I'm trying to achieve the same thing for Sendmail to no avail.

So far I've tried:

- the above /etc/hosts entry

- DAEMON_OPTIONS(`Port-smtp,Addr=127.0.0.1, Name=MTA')dnl in sendmail.mc 
followed by m4 sendmail.mc > sendmail.cf


You can just type make in /etc/mail and dbs will be rebuilt and it will
tell you if you need to reload.



- /etc/mail/mailertable
example.com esmtp:[1.2.3.4]



I use this. Are you missing FEATURE(mailertable)

sendmail.mc:FEATURE(`mailertable',`hash -o /etc/mail/mailertable.db')dnl
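And remember the .db needs regenerating after editing mailertable,
either via make in /etc/mail as above or directly:

makemap hash /etc/mail/mailertable </etc/mail/mailertable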



Re: umask - default user settings?

2024-07-17 Thread Tim Woodall

On Wed, 17 Jul 2024, Max Nikulin wrote:


On 17/07/2024 15:37, Tim Woodall wrote:

umask 077 can come with its own problems when using shared directories.


<https://wiki.debian.org/UserPrivateGroups>

Taking into account old 022 vs. 002 discussions it might be 007.


I'm not a sudo user but IIUC, root inherits the umask, which can then
cause problems when things can't read config files that should be world
readable.


Do you mean the following bug or something else?


No, I'm talking about sudo, not su. I'm not a sudo user so I can't test
but my understanding is that root inherits the umask of the invoking
user (or it used to)



Re: umask - default user settings?

2024-07-17 Thread Tim Woodall

On Mon, 15 Jul 2024, Jeffrey Walton wrote:



Debian is a multi-user operating system. Decisions should be made accordingly.

I suppose umask is a moot point on phones and tablets, where
single-user is often the use case.



umask 077 can come with its own problems when using shared directories.

years ago I used to use cvs pserver specifically to finesse this
problem. Now that (almost) everybody uses a remote git server it's less
relevant there.

I'm not a sudo user but IIUC, root inherits the umask, which can then
cause problems when things can't read config files that should be world
readable.

Rather than change umask, I'd suggest that the better change is to make
home directories 0700 by default. If that is the wrong choice then it
only has to be fixed once per user. Creating 'world/group' readable
files with too restrictive permissions never goes away.
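On Debian that's a one-line change for newly created users (DIR_MODE
lives in /etc/adduser.conf):

sed -i 's/^DIR_MODE=.*/DIR_MODE=0700/' /etc/adduser.conf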



Re: Debian for Limited memory

2024-07-17 Thread Tim Woodall

On Tue, 16 Jul 2024, Michael Kjörling wrote:


Debian 12 will boot in 256 MB RAM (I think that's the minimum
supported configuration on amd64, which your VPS very likely is) and a


One annoying "feature" I've found if you create the disk image on
another machine is that 'modules=dep' often won't boot and
'modules=most' won't boot on low ram machines.

There are obvious ways around this but it can be confusing, especially
where you're booting blind or near blind and need it to at least get to
a point where you can write a log for investigating on another machine.

debian 12 will boot in 256MB happily although I think a 'modules=most'
initrd needs 512MB on amd64 at least.
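The switch lives in /etc/initramfs-tools/initramfs.conf; a sketch of
moving to the smaller initrd, run on the target machine itself:

sed -i 's/^MODULES=.*/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
update-initramfs -u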



Re: timeout for iptables

2024-07-02 Thread Tim Woodall

On Tue, 2 Jul 2024, Jeff Peng wrote:


Hello gurus,

Is there a tool for maintaining the timeout for iptables rules?

for example, one IP would be blocked by my iptables for 24 hours, and another 
IP should be blocked for one week.




Off the top of my head I can't think exactly how to do it but I think
you can use -m hashlimit and use the --hashlimit-htable-expire to time
things out.

But this will depend on exactly what you're doing. If you're adding
something to the hashtable that keeps happening then it might not
expire the way you want.
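If hashlimit doesn't fit, ipset supports per-entry timeouts, which is a
more direct match for what you describe (a sketch, assuming the ipset
package; the addresses are examples):

ipset create blocklist hash:ip timeout 86400      # default 24 hours
ipset add blocklist 192.0.2.10                    # expires after 24h
ipset add blocklist 192.0.2.20 timeout 604800     # this one: a week
iptables -I INPUT -m set --match-set blocklist src -j DROP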



Re: sendmail and starttls failing

2024-07-01 Thread Tim Woodall

On Mon, 1 Jul 2024, Tim Woodall wrote:


On Sun, 30 Jun 2024, Tim Woodall wrote:


On Sun, 30 Jun 2024, Michael Grant wrote:


Yeah I'm seeing this too!  Identical in fact.  This is what I did to
fix this:  I added this to my /etc/mail/access file for my local
server that sends this messages to me:

   SRV_Features:127.0.0.1  L U G

Specifically, I added the U and G features, (I already had the L
feature disabled for localhost).  Uppercase letter disables the
feature, lowercase enables it.

I found the U and G mentioned here:

https://forums.oracle.com/ords/apexds/post/solaris-11-4-sendmail-issue-after-sendmail-8-18-1-update-7312

I did not try this suggestion to use U2 and G2 that he mentioned.  If
you do let me know.



Thanks!

I've just added u2 g2 and it seems to work. My quick test had bare LF
removed and bare CR replaced by space which isn't what I expected but is
good enough...




Actually, in bookworm this only seems to work with cr. mail wasn't
sending a lf and my email client was not displaying ^f

This works for testing cr and lf.
echo -ne 'Subject: test\n\ncr\rcr/lf\nlf' | /usr/sbin/sendmail -i -- root



This is what I see in sendmail logs:
Jul  1 17:06:55 dirac sm-mta[21391]: 461G6tQr021391: collect: relay=localhost, 
from=, info=Bare carriage return (CR) not 
allowed, where=body, status=replaced

I don't think bare LF are a problem for sendmail which is why I suspect
they're not being replaced.



Re: sendmail and starttls failing

2024-07-01 Thread Tim Woodall

On Sun, 30 Jun 2024, Tim Woodall wrote:


On Sun, 30 Jun 2024, Michael Grant wrote:


Yeah I'm seeing this too!  Identical in fact.  This is what I did to
fix this:  I added this to my /etc/mail/access file for my local
server that sends this messages to me:

   SRV_Features:127.0.0.1  L U G

Specifically, I added the U and G features, (I already had the L
feature disabled for localhost).  Uppercase letter disables the
feature, lowercase enables it.

I found the U and G mentioned here:

https://forums.oracle.com/ords/apexds/post/solaris-11-4-sendmail-issue-after-sendmail-8-18-1-update-7312

I did not try this suggestion to use U2 and G2 that he mentioned.  If
you do let me know.



Thanks!

I've just added u2 g2 and it seems to work. My quick test had bare LF
removed and bare CR replaced by space which isn't what I expected but is
good enough...




Actually, in bookworm this only seems to work with cr. mail wasn't
sending a lf and my email client was not displaying ^f

This works for testing cr and lf.
echo -ne 'Subject: test\n\ncr\rcr/lf\nlf' | /usr/sbin/sendmail -i -- root



Re: sendmail and starttls failing

2024-07-01 Thread Tim Woodall

On Mon, 1 Jul 2024, Mark Fletcher wrote:


On Sun, 30 Jun 2024 at 23:21, Tim Woodall  wrote:




The thing I'm seeing is <CR> in the body of the email - I had no idea
this was illegal - and I'm surprised that tools like cron don't do
something to avoid sending "illegal" emails. Indeed, even mail will do
so happily.

cron isn't a mail sending tool - not the right place to police something
like this. Seems to me that sendmail is.

Mark



Sendmail now polices it - so cron emails get stuck if they contain a
bare cr. Presumably every mta is now doing something similar.

There may be a cron setting that I'm missing to avoid this, I haven't
looked yet.

It is, of course, possible to run the output of every cron job through a
filter too.

my sendmail is now replacing bare cr with space so cron emails are
delivered.



Re: sendmail and starttls failing

2024-06-30 Thread Tim Woodall

On Sun, 30 Jun 2024, Michael Grant wrote:


Yeah I'm seeing this too!  Identical in fact.  This is what I did to
fix this:  I added this to my /etc/mail/access file for my local
server that sends this messages to me:

   SRV_Features:127.0.0.1  L U G

Specifically, I added the U and G features, (I already had the L
feature disabled for localhost).  Uppercase letter disables the
feature, lowercase enables it.

I found the U and G mentioned here:

https://forums.oracle.com/ords/apexds/post/solaris-11-4-sendmail-issue-after-sendmail-8-18-1-update-7312

I did not try this suggestion to use U2 and G2 that he mentioned.  If
you do let me know.



Thanks!

I've just added u2 g2 and it seems to work. My quick test had bare LF
removed and bare CR replaced by space which isn't what I expected but is
good enough...




Re: sendmail and starttls failing

2024-06-30 Thread Tim Woodall

On Sun, 30 Jun 2024, Greg Wooledge wrote:


On Sun, Jun 30, 2024 at 23:08:01 +0100, Tim Woodall wrote:

According to this
https://support.trustwave.com/kb/KnowledgebaseArticle10016.aspx

bare CRs aren't allowed in emails but this has always worked.

I'm only likely to have cron generating emails like this.

Strange that this would have been changed in a stable release. It
doesn't seem to have been a security update.


It looks like it's coming from this change:

https://metadata.ftp-master.debian.org/changelogs//main/s/sendmail/sendmail_8.17.1.9-2+deb12u2_changelog

 * Fix CVE-2023-51765 (Closes: #1059386):
   sendmail allowed SMTP smuggling in certain configurations.
   Remote attackers can use a published exploitation
   technique to inject e-mail messages with a spoofed
   MAIL FROM address, allowing bypass of an SPF protection
   mechanism. This occurs because sendmail supports
   <LF>.<CR><LF> but some other popular e-mail servers
   do not. This is resolved with 'o' in srv_features.

I don't know the details of how this leads to a security hole.




It might be - but the wording suggested that this is blocking bare <LF>
which isn't my problem - and also I'd assume this is header related.

The thing I'm seeing is <CR> in the body of the email - I had no idea
this was illegal - and I'm surprised that tools like cron don't do
something to avoid sending "illegal" emails. Indeed, even mail will do
so happily.



Re: sendmail and starttls failing

2024-06-30 Thread Tim Woodall

On Sun, 30 Jun 2024, Tim Woodall wrote:


On Sun, 30 Jun 2024, Michael Grant wrote:


After an update today, sendmail is refusing to accept mail.  I'm
seeing this in the logs:



Hmmm, this update seems to have done a lot of odd things.



root@dirac:~# mail root
Cc: 
Subject: test cr

this
is^Ma test
.
root@dirac:~# mailq
MSP Queue status...
/var/spool/mqueue-client (1 request)
-Q-ID- --Size-- -Q-Time- Sender/Recipient---
45ULV1xk014043   15 Sun Jun 30 22:31 r...@dirac.home.woodall.me.uk
 (Deferred: 421 4.5.0 Bare carriage return (CR) not allowed)
 root
Total requests: 1
MTA Queue status...
/var/spool/mqueue is empty
Total requests: 0



According to this
https://support.trustwave.com/kb/KnowledgebaseArticle10016.aspx

bare CRs aren't allowed in emails but this has always worked.

I'm only likely to have cron generating emails like this.

Strange that this would have been changed in a stable release. It
doesn't seem to have been a security update.




Re: sendmail and starttls failing

2024-06-30 Thread Tim Woodall

On Sun, 30 Jun 2024, Michael Grant wrote:


After an update today, sendmail is refusing to accept mail.  I'm
seeing this in the logs:



Hmmm, this update seems to have done a lot of odd things.

MSP Queue status...
/var/spool/mqueue-client (2 requests)
-Q-ID- --Size-- -Q-Time- Sender/Recipient---
45U9e1iI018145    30770 Sun Jun 30 10:40 MAILER-DAEMON
 (Deferred: 421 4.5.0 Bare carriage return (CR) not allowed)
 root
45U5Qnln008885    28799 Sun Jun 30 06:26 root
  7BIT   (Deferred: 421 4.5.0 Bare carriage return (CR) not allowed)
 root
Total requests: 2
MTA Queue status...
/var/spool/mqueue is empty
Total requests: 0



That's the cron email telling me about the update.

It's not at all clear to me what it's complaining about.
root@dirac:/var/spool/mqueue-client# od -t x1 qf45U* | grep 0d
root@dirac:/var/spool/mqueue-client#

Unless it's the bare CR in the body of the email - which should be fine!

Moving the queue files from mqueue-client to mqueue and fixing up the
owner and perms and they delivered fine.




Re: sendmail and starttls failing

2024-06-30 Thread Tim Woodall

On Sun, 30 Jun 2024, Michael Grant wrote:


Jun 30 11:43:00 bottom sm-mta[18852]: AUTH: available mech=DIGEST-MD5 CRAM-MD5 
LOGIN PLAIN, allowed mech=EXTERNAL


Update here, it's not apparently an STARTTLS error, it's an AUTH
error.  Something in the update last night altered my list of
available AUTH mechanisms.

I manually updated sendmail.cf and updated this line:

O AuthMechanisms=EXTERNAL DIGEST-MD5 CRAM-MD5 NTLM LOGIN PLAIN

by adding "DIGEST-MD5 CRAM-MD5 NTLM LOGIN PLAIN" and now it accepts
mail from my desktop.

I don't see where this is configured.  /etc/sasl2/Sendmail.conf which
is a link to /etc/mail/sasl/Sendmail.conf.2, but this file looks good,
I don't know where it's getting the AuthMechanisms from (yet).



I think this is configured in sasl.m4

and I suspect it's something to do with the "sm_version_math" stuff but
exactly what has changed to break this for you I don't know

ifelse(eval(sm_version_math >= 526848), `1', `dnl
ifelse(sm_enable_auth, `yes', `dnl
dnl #
dnl # Set a more reasonable timeout on negotiation
dnl #
define(`confTO_AUTH',  `2m')dnl  #   , def=10m
dnl #
dnl # Do not touch anything above this line...
dnl #
dnl # Available Authentication methods
dnl #
define(`confAUTH_MECHANISMS',dnl
`DIGEST-MD5 CRAM-MD5 PLAIN LOGIN')dnl
dnl #
dnl # These, we will trust for relaying
dnl #
TRUST_AUTH_MECH(`DIGEST-MD5 CRAM-MD5 PLAIN LOGIN')
dnl #
dnl # for 8.12.0+, add EXTERNAL as an available & trusted mech (w/STARTTLS)
dnl # and allow sharing of /etc/sasldb(2) file, allow group read/write
dnl #
ifelse(eval(sm_version_math >= 527360), `1', `dnl
define(`confAUTH_MECHANISMS',dnl
`EXTERNAL 'defn(`confAUTH_MECHANISMS'))dnl
TRUST_AUTH_MECH(`EXTERNAL')
define(`confDONT_BLAME_SENDMAIL',dnl
defn(`confDONT_BLAME_SENDMAIL')`,GroupReadableSASLDBFile,GroupWritableSASLDBFile')dnl
')dnl





Re: can't connect to server from outside LAN

2024-06-12 Thread Tim Woodall

On Wed, 12 Jun 2024, Greg Marks wrote:


I'm running a Debian server from my home with a static IP address,
with ssh configured to use key-based authentication rather than
password-based.  As of a couple weeks ago, I have been unable to ssh to
my server from external locations.  When I ssh from a laptop connected
to the wireless network on the same router as my home server, I do
successfully connect to the server.  But when I ssh from an external
location, I get this error:




The problem began a couple weeks ago; previously (and for many years)
I had been able to ssh to my server without issue.  The first time it
failed, I was using free wireless at an airport; I was able to ssh to my
server from the hotel that morning, and maybe, the first time I tried,
from the airport, but then subsequent ssh attempts from the airport
failed to connect.  I mention this only because nothing had changed in
my server's configuration when this problem began.

This is a real problem for me, as a lot of my work involves sending
files via scp between work and home.  Any suggestions about how to
troubleshoot and hopefully fix the problem will be greatly appreciated.



Run tcptraceroute to ports 22 and 80 to see where it's being blocked.

(or 443)
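e.g. (203.0.113.5 standing in for your server's address):

tcptraceroute 203.0.113.5 22
tcptraceroute 203.0.113.5 443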

Depending on where it's blocked and why, possibly run sshd on a
different port. (or fix the firewall if it's controlled by you)

You can also run openvpn on 443 without breaking the webserver, which is
another workaround.




Re: Strange difference between bullseye and bookworm.

2024-05-29 Thread Tim Woodall

On Tue, 28 May 2024, Tim Woodall wrote:


On Tue, 28 May 2024, Tim Woodall wrote:


I start a new user namespace as follows:
(The special bashrc is just because there are some things in my default
one that (expectedly) don't work in the lxc user namespace)




I then mount an overlayfs on top of that:
fuse-overlayfs -o lowerdir=lower,upperdir=overlay,workdir=work mount




And it appears that fuse-overlayfs is the problem. Downgrading to the
version from bullseye fixes:

root@bookworm19:~# apt-get install fuse-overlayfs=1.4.0-1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be DOWNGRADED:
 fuse-overlayfs


The problem only seems to trigger when lower is a squashfs mount...



And upgrading to the version in trixie (doesn't even need to be
backported, the package installs as is) also fixes.




Re: Strange difference between bullseye and bookworm.

2024-05-28 Thread Tim Woodall

On Tue, 28 May 2024, Tim Woodall wrote:


I start a new user namespace as follows:
(The special bashrc is just because there are some things in my default
one that (expectedly) don't work in the lxc user namespace)




I then mount an overlayfs on top of that:
fuse-overlayfs -o lowerdir=lower,upperdir=overlay,workdir=work mount




And it appears that fuse-overlayfs is the problem. Downgrading to the
version from bullseye fixes:

root@bookworm19:~# apt-get install fuse-overlayfs=1.4.0-1
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be DOWNGRADED:
  fuse-overlayfs


The problem only seems to trigger when lower is a squashfs mount...



Strange difference between bullseye and bookworm.

2024-05-28 Thread Tim Woodall

I start a new user namespace as follows:
(The special bashrc is just because there are some things in my default
one that (expectedly) don't work in the lxc user namespace)

lxc-usernsexec -m b:0:689824:65536 -- /bin/bash --rcfile ~/.bashrc.lxc

Inside there I mount a squash fs image that includes the normal tools
for building packages

squashfuse bookworm.amd64.build-deb.sqfs lower

I then mount an overlayfs on top of that:
fuse-overlayfs -o lowerdir=lower,upperdir=overlay,workdir=work mount

I bind mount /dev/null into there
cd mount
touch dev/null
mount -o bind /dev/null dev/null

and then I chroot:
/sbin/chroot .

This all appears to be working perfectly on both bookworm and bullseye
hosts.

But in bookworm, apt-get update fails in a weird way:

root@dirac:/# apt-get update
Get:1 http://aptmirror.home.woodall.me.uk/local bookworm InRelease [18.9 kB]
Get:2 http://deb.debian.org/debian bookworm InRelease [151 kB]
Err:1 http://aptmirror.home.woodall.me.uk/local bookworm InRelease
  Couldn't execute /usr/bin/apt-key to check 
/var/lib/apt/lists/partial/aptmirror.home.woodall.me.uk_local_dists_bookworm_InRelease
Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 
kB]
Err:2 http://deb.debian.org/debian bookworm InRelease
  Couldn't execute /usr/bin/apt-key to check 
/var/lib/apt/lists/partial/deb.debian.org_debian_dists_bookworm_InRelease
Err:3 http://deb.debian.org/debian-security bookworm-security InRelease
  Couldn't execute /usr/bin/apt-key to check 
/var/lib/apt/lists/partial/deb.debian.org_debian-security_dists_bookworm-security_InRelease

Notice that "Couldn't execute /usr/bin/apt-key"

Running exactly the same on a bullseye host and this "just works"

Running:
strace -f apt-get update |& less

[pid  6619] execve("/usr/bin/apt-key", ["/usr/bin/apt-key", "--quiet", "--readonly", "verify", "--status-fd", 
"3", "/tmp/apt.sig.xWh7oI", "/tmp/apt.data.JpfP2n"], 0x5566c9baafc0 /* 48 vars */) = -1 EOPNOTSUPP (Operation not supported)

This is my problem!

If I unpack the squashfs image but otherwise follow the same steps (i.e.
lower is a normal directory) then this works.

When I compare other things I see this in the working one:
[pid  6701] openat(AT_FDCWD, "/proc/self/fd", 
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = -1 ENOENT (No such file or directory)

while I see this in the non-working one:
[pid  6701] openat(AT_FDCWD, "/proc/self/fd", 
O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = -1 EOPNOTSUPP (Operation not supported)

ENOENT is expected as I don't have /proc mounted in the namespace.

execve works for other tasks:
[pid  6693] execve("/usr/bin/dpkg", ["/usr/bin/dpkg", 
"--print-foreign-architectures"], 0x7ffe96e5ffa0 /* 42 vars */) = 0
works on both the bullseye and bookworm hosts, there's something special
about apt-key. Weirdly, copying dpkg over apt-key and I still get this
EOPNOTSUPP error. But deleting it completely and I get ENOENT from the
execve call.

Does anyone have any ideas what might be be wrong here, what I could try
to get this working again?

Tim.



Re: Alpine 6.26 - can't stop it wanting to save the password.

2024-05-28 Thread Tim Woodall

On Mon, 27 May 2024, Curt wrote:


On 2024-05-26, Tim Woodall  wrote:


Anyone got any ideas how to disable this?




If you have ~/.alpine.passfile apparently it will keep asking, but maybe
you don't, in which case I'm stumped.


Thanks, no that file doesn't exist. I'm a bit stumped too - and another
problem has come up that has handicapped me investigating in the source
although I've now found a workaround.

Tim.



Alpine 6.26 - can't stop it wanting to save the password.

2024-05-26 Thread Tim Woodall

I start alpine with the following alias

alias pine='alpine -p 
\{imap202.home.woodall.me.uk/norsh/tls/user=tim\}remote_pinerc'

and after entering my password I get:

Preserve password on DISK for next login? [y]:

I don't want to do this. My googling suggested that I could set

  [X]  Disable Password Caching
  [X]  Disable Password File Saving

But this doesn't work. The setting has been added to my remote_pinerc
successfully:

diff -u 1716741312.M572575P3067.imap202.home.woodall.me.uk\,S\=4980\,W\=5056\:2\,S 1714905325.M710303P24542.imap202.home.woodall.me.uk\,S\=4921\,W\=4995\:2\,S 
--- 1716741312.M572575P3067.imap202.home.woodall.me.uk,S=4980,W=5056:2,S  2024-05-26 16:35:12.600198938 +0000

+++ 1714905325.M710303P24542.imap202.home.woodall.me.uk,S=4921,W=4995:2,S  2024-05-05 10:35:25.737002123 +0000
@@ -1,5 +1,5 @@
 x-pine-pinerc: 1368669514
-Date: Sun, 26 May 2024 17:35:12 +0100 (BST)
+Date: Sun, 5 May 2024 11:35:25 +0100 (BST)
 From: Pine Remote Data 
 Subject: Pine Remote Data Container
 MIME-Version: 1.0
@@ -28,16 +28,14 @@
enable-unix-pipe-cmd,
enable-bounce-cmd,
no-compose-send-offers-first-filter,
-   disable-index-locale-dates,
-   disable-password-file-saving,
-   disable-password-caching
+   disable-index-locale-dates
 sort-key=tHread
 folder-sort-rule=alpha-with-dirs-first
 goto-default-rule=most-recent-folder
 character-set=iso-8859-1
 editor=/usr/bin/vim
 last-time-prune-questioned=124.5
-last-version-used=6.26
+last-version-used=6.24
 remote-abook-metafile=.ab009012

Anyone got any ideas how to disable this?



Re: Current best practices for system configuration management?

2024-04-21 Thread Tim Woodall

On Thu, 18 Apr 2024, Mike Castle wrote:


Now, I would like to expand that into also setting up various config
files that I currently do manually, for example, the `/etc/apt/*`
configs I need to make the above work.  For a single set of files,
manual isn't bad, but as I want to get into setting up LDAP, autofs,
and so on, it is time to explore solutions.  I only have four systems
at the moment (two physical and two virtual), so I don't think I need
something too fancy.


I do this - but it needs a bit of thought as to how you want to do it.

I'm assuming you don't care about debian policy ...

I do one of four things in packages depending on what I want to achieve
and what the package I'm configuring supports.

1. (easiest) - just drop a config file into a conf.d directory. I
use this for maintaining sources.list. Back in the buster days I was
using openssh from backports that supported this as maintaining the
config is so much easier than any of the other ways and so much easier
to fix bugs in your packages - a failing postinst script can be a pain
to resolve.

apt-mirror@aptmirror19:~ (none)$ apt-cache search bookworm-sources
bookworm-sources - Meta-package to pull in list of apt-sources for debian

(I generate these packages automatically from what is available on
deb.debian.org.)

FWIW, I also have a dev repo and a fast repo - the dev repo is a clone
where I can test out substantial changes and the fast repo only has a
couple of packages but can rebuild "the world" very quickly when I'm
testing. So I have a number of extra levels of indirection to support
this:

$ apt-cache depends bookworm-sources
bookworm-sources
  Depends: bookworm-local-sources

That is the normal package that I would install.

apt-mirror@aptmirror19:/mnt/mirror/local/main/o (master)$ apt-cache depends 
bookworm-dev-sources
bookworm-dev-sources
  Depends: bookworm-dev-main-sources

That lets me use the dev repo - and thanks to pinning, the packages in
it will replace (even downgrade) packages from local.

Ditto for my fast repo.

2. (next easiest) - add a file in /usr/share/ and then overwrite the
file in /etc by copying in postinst.

3. edit the config file without diverting it using a sed script.

4. divert the config and edit/replace it in the package.

3. and 4. in particular can make upgrades more difficult if/when you
find bugs in the way you're handling diversions and uninstalls.
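For 4., the postinst side is roughly this (a sketch - package and path
names are made up):

dpkg-divert --package mypkg --divert /etc/foo.conf.distrib \
    --rename --add /etc/foo.conf
cp /usr/share/mypkg/foo.conf /etc/foo.conf
# and the matching removal on uninstall:
# dpkg-divert --package mypkg --rename --remove /etc/foo.conf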



Re: Am I missing a trick somewhere? lxc-usernsexec and squashfs

2024-04-07 Thread Tim Woodall

On Sat, 6 Apr 2024, Tim Woodall wrote:


Hi,

I use lxc-usernsexec to simulate root (and other users) for a non-root
user.

lxc-usernsexec -m b:0:100000:65536

That then chroots into an overlayfs mounted using fuse.

The lowerdir is a mounted squashfs, the upperdir is a regular directory.

squashfuse rootimg.sqfs lower
fuse-overlayfs -o lowerdir=lower,upperdir=upper,workdir=work mount

This is all working nicely, and much faster than extracting a tarfile to
generate the lowerdir which is what I used to do.

But I have to jump through hoops to generate the lower sqfs.


Turns out there was something wrong with my testing. Not exactly sure
what I did wrong but provided you do the mounting inside the container
then it "just works". The hoops are only needed if you want to mount
outside the container.

Tim.



Am I missing a trick somewhere? lxc-usernsexec and squashfs

2024-04-06 Thread Tim Woodall

Hi,

I use lxc-usernsexec to simulate root (and other users) for a non-root
user.

lxc-usernsexec -m b:0:100000:65536

That then chroots into an overlayfs mounted using fuse.

The lowerdir is a mounted squashfs, the upperdir is a regular directory.

squashfuse rootimg.sqfs lower
fuse-overlayfs -o lowerdir=lower,upperdir=upper,workdir=work mount

This is all working nicely, and much faster than extracting a tarfile to
generate the lowerdir which is what I used to do.

But I have to jump through hoops to generate the lower sqfs.

If I generate it while running under lxc, then from outside it looks
like root owned files have owner 100000, while to the generating process
running under lxc they have owner root(0). When using tar it
automatically did the necessary translations when extracting the archive
back to the correct lxc uid.

If I build the sqfs while inside lxc the sqfs gets the uid 0 for root
owned files. The files in mount/root, for example are then not readable when
it's mounted later.

If I try to build it from outside lxc then I don't have permission to
read mount/root at all so the files don't make it into the sqfs.

I'm doing the following to ensure that the sqfs files have the correct
ownership when I come to mount it:

for (( i = 0 ; i < 65536 ; i++ )); do echo "+$i +$(( i + 100000 ))"; done >mapfile
tar -C root --numeric-owner --owner-map=mapfile --group-map=mapfile --one-file-system -cf - . | tar2sqfs -q root.sqfs

This maps the 0-65535 uids that are seen inside lxc back to the
100000-165535 uids that are used outside of lxc so that the sqfs is an
accurate image of the original filesystem metadata.

overlayfs has an option to map uids which the man page suggests exists
for exactly this reason, but I cannot work out how to use it. I feel as
though I need the option on squashfuse.

It doesn't really matter to me, but one issue with my solution is that
the sqfs images aren't usable by a different user who has a different
uidmap because the uids in the sqfs are tied to the uidmap.

Is there a way to do what I'm trying to do without using tar (or
equivalent) to change the uids in the sqfs? I don't want to squash
everything down to a single user.

-o uidmapping/-o gidmapping for fuse-overlayfs aren't mentioned in the
-h help, only in the manpage, so perhaps there's an option to squashfuse
that isn't mentioned in either that I'm missing? Or perhaps there's a
way to get overlayfs to work if the root.sqfs has uid 0-65535.
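
If those options do work, I'd expect the invocation to look something
like this (untested - the mapping syntax is from the fuse-overlayfs man
page and the direction of the mapping may need swapping):

squashfuse rootimg.sqfs lower
fuse-overlayfs -o lowerdir=lower,upperdir=upper,workdir=work \
    -o uidmapping=0:100000:65536 -o gidmapping=0:100000:65536 mount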

Note that the whole point of all of this is so that nothing has to be
done as root. I use this for testing installs, building packages etc and
a rogue rm -fr /, for example, can't take out the whole system. So
solutions that involve doing steps as the real root user aren't
suitable.

Tim.



Re: Dependencies between components.

2024-04-06 Thread Tim Woodall

On Sat, 6 Apr 2024, Simon Hollenbach wrote:


Hi,

I have not found a mistake in your considerations about "sane"
component inter-dependency.

However, package dependencies are declared upon a package with a
suitable version; whether this package can be set up on a bespoke
target system remains to be determined by APT when the package is
installed or upgraded. Just consider, for example, some manually held
packages - these might break your package install even if all the
needed packages are downloadable (i.e. all the components needed are
correctly configured in sources.list).

I hope this helps. I'd like to understand why you are asking this
question, this might enable us to give you better-suited information.



I have local packages I can install that setup sources.list.d so if, for
example, I want to use backports, I can do:
apt-get install bookworm-backports-main-sources
and I will get an appropriate file in sources.list.d. These get
autogenerated.

To cut a long story short, I was missing having backports installed
despite me having a patched version of a backports package installed. I
then missed a security fix because although my tooling saw it and
auto-rebuilt my patched local package, it couldn't install because of
missing (new) dependencies on other packages in backports. The previous
version had installed because it didn't have dependencies on other
packages in backports.

So my local packages that add files to sources.list.d now express
required dependencies - so if I have
bookworm-backports-local-main-sources installed
(which has any packages from backports that I have local changes to) I
will automatically get bookworm-backports-main-sources too.

I've never actually patched any packages from
backports-sloppy/updates/proposed-updates but while I was at it
I thought it made sense to add dependencies for those too so if I ever
do use backports-sloppy, I will get backports too rather than have to
remember to manually install it.
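
The control stanza for that is just an ordinary Depends - a sketch
(only the package names come from my setup, the rest is boilerplate):

Package: bookworm-backports-local-main-sources
Architecture: all
Depends: bookworm-backports-main-sources
Description: local backports patches plus the backports apt source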

Tim.



Dependencies between components.

2024-03-30 Thread Tim Woodall

Is there a wiki or something else that lays out exactly what other
distributions and components each debian (distribution,component) tuple
is allowed to depend on?

This is what I've concluded so far.

I'm assuming transitive dependencies are allowed, e.g.
bookworm-updates-contrib can depend on bookworm-non-free so I've
considered the dependencies between distributions with the same
component and the dependencies between components of the same
distribution separately.


First considering the distribution dependencies. All of these are
always allowed between the same component.

bookworm-proposed-updates : bookworm
bookworm-updates  : bookworm
bookworm-backports-sloppy : bookworm-backports bookworm
bookworm-backports: bookworm

I believe that updates is a subset of proposed-updates so dependency
on updates by proposed-updates is moot

I'm unclear whether backports is allowed to depend on -updates but I
assume not as I've not seen anything saying that you need to enable
-updates if you enable -backports. I guess the backporter would have to
wait for the point release if they ever needed something only in
bookworm-updates (it's hard to imagine many cases where a -updates
package would be required for backporting so this is somewhat
theoretical - I think it's only if there's a security update involved)


Now considering the dependencies between components in the same
distribution:

contrib  : non-free non-free-firmware main
non-free : non-free-firmware main
non-free-firmware: main

Some sources seem to say that non-free depends on contrib while others
say contrib depends on non-free. My understanding on contrib is that it
is for packages that cannot be in main because they depend on non-free
even though they're otherwise free. But I'm not sure if there's a two
way dependency here.

I'm assuming that non-free-firmware cannot depend on non-free or contrib
- that would seem to defeat the goal of non-free-firmware - although I
could see a case where a firmware loader is in contrib while the
firmware itself is in non-free so I'm not sure exactly what is allowed
or expected here.



Re: shellcheck, bashism's and one liners.

2024-03-17 Thread Tim Woodall

On Sun, 17 Mar 2024, Greg Wooledge wrote:


Tim's assumption here is that he can write a function which emits a
stream of whitespace-separated words, and use this safely in an unquoted
command substitution.

   count $(args)

I'm guessing "count" is a stand-in for something more complex, but $(args)
is pretty much exactly what he wants to do in his real script.



Yes, thanks, this is exactly it.

It's taken me most of the weekend but I've managed to get rid of
everything except a handful of places where shellcheck can't
understand/follow.

Some "computed" sourced files, a few places where I'm using sed and it
suggests trying the ${variable//search/replace} expansion instead - which
I might revisit one day - and a couple of other places that I don't want
to change but are correct.

This exercise found two bugs - one had a workaround that I now need to
revisit. The other was because I could no longer use an associative
array to hold a space split string but had to use a computed variable to
hold a list.

There was nothing wrong with the data in the array, but the array index
was wrong. Shellcheck couldn't spot that either way, but v_$a was not a
valid variable name, which it would have been had the array index been
correct.


And then I've spent a lot of time tracking down the new bugs, the
hardest of which was that changing $a into "$a" resulted in a leading
space in a function parameter.



The problem is that this works *brilliantly* with inputs that are
heavily restricted to a specific set of characters, and fails *utterly*
if the inputs do not conform to that limitation.  There is no middle
ground, and there is no way to fix it up.  Once your inputs take off
their training wheels, you have to throw this script away and rewrite
it from the ground up.


This is true, but the point at which debian has an architecture with a
space in the name is probably the point at which I rewrite it in python :-)


That's why shellcheck gives warnings about this kind of thing.  (Well,
one of many reasons.)






Re: shellcheck, bashism's and one liners.

2024-03-17 Thread Tim Woodall

On Sun, 17 Mar 2024, Greg Wooledge wrote:


On Sun, Mar 17, 2024 at 09:25:10AM +, Tim Woodall wrote:

In almost all other cases, the space separated items cannot, even in
theory, contain a rogue space, so suppressing the warning is fine


Famous Last Words™.


As one example, it calls out to an external program that builds a cache
in a temporary dir and it can be told to keep that temporary dir for
future runs.

$ cat test
#!/bin/bash

do_work()
{
  local f=0
  while [[ $1 =~ ^-- ]]; do
[[ $1 = "--fast" ]] && f=1
shift
  done
  echo -n "first file arg = '$1'"
  [[ ${f} -eq 1 ]] && echo -n " running with --fast"
  echo
}

loopitems=(
  aitem1
  aitem2
  bitem1
  bitem2
)

declare -A fast

echo "No quoting"
for i in "${loopitems[@]}"; do
  do_work ${fast[${i:0:1}]} --option1 --option2 "${i}"
  fast[${i:0:1}]=--fast
done

echo
echo "Quoting"
unset fast
declare -A fast
for i in "${loopitems[@]}"; do
  do_work "${fast[${i:0:1}]}" --option1 --option2 "${i}"
  fast[${i:0:1}]=--fast
done



$ shellcheck test

In test line 26:
  do_work ${fast[${i:0:1}]} --option1 --option2 "${i}"
  ^---^ SC2086: Double quote to prevent globbing and word splitting.

Did you mean:
  do_work "${fast[${i:0:1}]}" --option1 --option2 "${i}"


$ ./test
No quoting
first file arg = 'aitem1'
first file arg = 'aitem2' running with --fast
first file arg = 'bitem1'
first file arg = 'bitem2' running with --fast

Quoting
first file arg = ''
first file arg = 'aitem2' running with --fast
first file arg = ''
first file arg = 'bitem2' running with --fast


Because fast is an associative array I can't use the trick that "${x[@]}"
expands to nothing at all when x is an unset list, because you cannot
have an array whose values are themselves lists.

I could, instead, use something like this (I haven't tested these exact
lines so tweaks are possibly needed)

declare -a "fast_${i:0:1}=( --fast )"

and
declare -n "v=fast_${i:0:1}"
do_work "${v[@]}" ...

Which is what I've now done in the one place where filenames were
involved.
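
A self-contained sketch of that nameref idea (do_work simplified to an
echo; untested against the real script):

#!/bin/bash
do_work() { echo "args($#): $*"; }

loopitems=( aitem1 aitem2 bitem1 bitem2 )

for i in "${loopitems[@]}"; do
  declare -n v="fast_${i:0:1}"      # nameref to a per-prefix array
  do_work "${v[@]}" --option1 "$i"  # an unset array expands to zero words
  v=( --fast )                      # later items with this prefix get --fast
  unset -n v                        # drop the nameref before rebinding
done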


Re: shellcheck, bashism's and one liners.

2024-03-17 Thread Tim Woodall

On Sun, 17 Mar 2024, Geert Stappers wrote:


On Sun, Mar 17, 2024 at 09:25:10AM +, Tim Woodall wrote:

Hi,

I've been cleaning up some bash scripts


Good



and, where possible, addressing things reported by shellcheck.


Oh,  shellcheck, https://www.shellcheck.net/



I have this one-liner (which works but shellcheck doesn't like the
quoting)

idxsrc="$( newest_file $( APT_CONFIG=${APT_CONFIG} apt-get indextargets --format '$(FILENAME)' 'Identifier: Packages' ))"

SC2016: Expressions don't expand in single quotes, use double quotes for that.
SC2046: Quote this to prevent word splitting.

The first is easy enough to avoid by using backslash instead. But the
second I can't see how to fix as a one-liner.

I can make shellcheck happy by doing it like this:

mapfile -t idxpackages < <( APT_CONFIG=${APT_CONFIG} apt-get indextargets --format \$\(FILENAME\) 'Identifier: Packages' )
idxsrc="$( newest_file "${idxpackages[@]}")"


For what it is worth:

- a shell is a nice and good tool
- shellcheck is an afterthought
- my shellcheck experience taught me that it can't see the difference
  between "dangerous" and "potentially dangerous"


Thing I'm trying to tell:  Avoid that shellcheck blocks you


Oh, it's not blocking me. Where I know it's OK I've added a
# shellcheck disable=
to silence the warning.

But I've got one more case where it's *possible* that a quoting error
that shellcheck doesn't like could cause things to break.

I have an array of filenames (inc paths). These are all local files
generated by the script so they will never contain a space in the script
controlled bit. But if, for some reason, I decided to do:
mv /mnt/mirror "/mnt/m i r r o r"
then it would get very upset.

But these lists are in an associative array, so it's hard to make
shellcheck safe. I was hoping that an easy answer to my one liner would
give me a clue to this harder case.

I could write some very convoluted code, generating lists on the fly
with eval and then using namerefs in the associative array, but that
would be letting the shellcheck tail wag the scripting dog!



shellcheck, bashism's and one liners.

2024-03-17 Thread Tim Woodall

Hi,

I've been cleaning up some bash scripts and, where possible, addressing
things reported by shellcheck.

I have this one-liner (which works but shellcheck doesn't like the
quoting)

idxsrc="$( newest_file $( APT_CONFIG=${APT_CONFIG} apt-get indextargets --format '$(FILENAME)' 'Identifier: Packages' ))"

SC2016: Expressions don't expand in single quotes, use double quotes for that.
SC2046: Quote this to prevent word splitting.

The first is easy enough to avoid by using backslash instead. But the
second I can't see how to fix as a one-liner.

I can make shellcheck happy by doing it like this:

mapfile -t idxpackages < <( APT_CONFIG=${APT_CONFIG} apt-get indextargets --format \$\(FILENAME\) 'Identifier: Packages' )
idxsrc="$( newest_file "${idxpackages[@]}")"

I have a number of other places where I'm relying on a variable
containing a number of space separated items that I DO want word
splitting and so the shellcheck warning is incorrect and I either
suppress it or find a fix similar to the above.

In almost all other cases, the space separated items cannot, even in
theory, contain a rogue space, so suppressing the warning is fine but
the above one could, therefore I wanted to find a proper fix.

Is there a one-liner way to make shellcheck happy on the count line
below (other than # shellcheck disable=SC2046)?

args() { echo a b c d; }
count() { echo $#; }
count $(args)

Obviously, any correct solution should output 4
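
The nearest I've come up with keeps shellcheck happy at the cost of a
second line - split the words into an array first:

read -ra words <<< "$(args)"
count "${words[@]}"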

Thanks.


For background, this script builds a debian image and this part of the
script checks to see if any of the packages in the minimal image have
been updated since it was built and so it needs rebuilding.

As an optimization, this script first checks the timestamp on the
Packages files that are used to build the image, if they're older than
the image then it's not possible for there to be a newer package and so
I don't need to do the checks on individual packages.
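
(newest_file isn't shown here; a plausible definition - a sketch, not
necessarily the one in my script - is:

newest_file() {
    local newest=$1 f
    for f in "$@"; do
        [ "$f" -nt "$newest" ] && newest=$f
    done
    printf '%s\n' "$newest"
}

i.e. it prints whichever of its arguments has the most recent mtime.)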




Re: Committing git working tree with other git repos

2024-03-14 Thread Tim Woodall

 On Wed, 13 Mar 2024, Paul M Foster wrote:


Folks:

I have a /home/paulf/stow directory which contains subdirectories for each
of the packages whose dotfiles I want to manage, like:

/home/paulf/stow/alacritty

In each subdirectory, I have all the config files for that package, under
git management. This means that the directory will look like this:

/home/paulf/stow/alacritty/.git
/home/paulf/stow/alacritty/.config/alacritty/alacritty.yml

This works well with stow (configs are now symlinks in $HOME).

I'd like to copy all of this to a git repo on gitlab. You would think you
could go to the ~/stow directory, "git init", then "git add" each
directory, and all is good. However, git looks inside the directories and
sees there are already .git directories there, and refuses to add the
directories and their contents to its repo. Instead, it wants you to use
"submodules", to wit:

git submodule add ./alacritty

This adds an *empty* alacritty subdirectory to the git repo, which isn't
useful.

I need a way to bring all these subdirectories and their contents under a
git repo so I can send it to gitlab. Any suggestions?

Paul




So I thought this was a rather interesting exercise and tried it on one
of my repos that contains etckeeper files, one branch per machine.

I came up with this script (beware if your branches have weird
characters in the names or something, there's limited quoting/escaping
here.)

# clone the repo (I'm assuming you've managed to merge all your repos
# into one with a separate branch for each. I started from this so I've
# not got commands to do it but it shouldn't be hard, just add a
# commonremote and push to a named branch for each existing repo)
git clone -n git@einstein:/configs.git
cd configs

# Create a new branch with a completely empty commit at the root
# This must not match any existing branch.
rbp=rebasepoint
tree=$( git hash-object -wt tree --stdin < /dev/null )
commit=$( git commit-tree -m 'root commit' $tree )
git branch $rbp $commit
git checkout $rbp

# Don't know how to stop this one getting created but we need to delete
# it to simplify the rest. I expected git clone -n to not create this!
git branch -d master

# First map all the commits in each branch on the remote into a
# subdirectory of the branch name on my (relatively low power) machine
# this maps about 30 objects per second.
# This has a very long line with subtle quoting - take care when
# cutting/pasting.
for i in $( git branch -r | grep -v HEAD ); do
  echo $i
  git filter-branch -f --index-filter 'git ls-files -s | sed "s:\t\"*:&'"$i"'/:" | GIT_INDEX_FILE=$GIT_INDEX_FILE.new git update-index --index-info && if [ -e "$GIT_INDEX_FILE.new" ]; then mv "$GIT_INDEX_FILE.new" "$GIT_INDEX_FILE"; fi' -- $i
done

# Now rebase each branch onto the previous one (Note that we're starting
# with rebasepoint that we created above)
# This gets progressively slower on my machine, not exactly sure why.
for i in $( git branch -r | grep -v HEAD ); do
  git branch --track ${i#origin/} $i
  git rebase $rbp ${i#origin/}
  rbp=${i#origin/}
done

# The branch you are on at this point should be a branch that combines
# all of the upstream branches


# If anything goes wrong, just delete the configs directory and start
# again. You are changing nothing on the upstream unless/until you
# decide to push.


tim@dirac:~/git/flatten/configs (xen3)$ ls origin/
aptmirror17  citrix17   dirac   ipmi17  ntp17wiki17
aptmirror19  cups17 einsteinipmi19  ntp19wiki19
asterisk17   debootstrap17  firewall17  ipmi2   proxy17  xen17
asterisk19   debootstrap19  firewall19  mail17  proxy19  xen19
backup17 debootstrap2   firewall2   mail19  rpi  xen2
bind17   dhcp17 imap17  master  rpi-flat17a  xen3
bind19   dhcp19 imap19  mtd19   victoria17

HTH.

Tim.



Re: Committing git working tree with other git repos

2024-03-13 Thread Tim Woodall

On Wed, 13 Mar 2024, Tim Woodall wrote:


On Wed, 13 Mar 2024, Paul M Foster wrote:


Folks:

I have a /home/paulf/stow directory which contains subdirectories for each
of the packages whose dotfiles I want to manage, like:

/home/paulf/stow/alacritty

In each subdirectory, I have all the config files for that package, under
git management. This means that the directory will look like this:

/home/paulf/stow/alacritty/.git
/home/paulf/stow/alacritty/.config/alacritty/alacritty.yml

This works well with stow (configs are now symlinks in $HOME).

I'd like to copy all of this to a git repo on gitlab. You would think you
could go to the ~/stow directory, "git init", then "git add" each
directory, and all is good. However, git looks inside the directories and
sees there are already .git directories there, and refuses to add the
directories and their contents to its repo. Instead, it wants you to use
"submodules", to wit:

git submodule add ./alacritty

This adds an *empty* alacritty subdirectory to the git repo, which isn't
useful.

I need a way to bring all these subdirectories and their contents under a
git repo so I can send it to gitlab. Any suggestions?

Paul




I don't know exactly how to do it but git filter-branch might get you
started.

You can use that to rewrite the history of each branch so they're now
deeper subdirectories.

Now add a remote with an empty commit on master. git rebase each repo
onto that and push as a branch.

Finally git rebase each branch onto master in turn.

If I've understood your need correctly this should not give any
conflicts.


If you don't want to mess with rewriting history then just regular git
mv, rebase onto common master and rebase each tree in turn. But this
will (IMO) make history harder to understand and any future rebase
cleanups will likely be a disaster.

Tim.




Even easier if you don't need git checkout to give everything in a
single tree:

Just push each project as a separate branch to a common remote then git
worktree add to flatten it back to your current structure

git fetch --all will be your friend here to see which branches on a
remote have changed.
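
A sketch of what that looks like (repo url and branch names
illustrative):

git clone git@gitlab:paulf/stow.git ~/stow-repo
cd ~/stow-repo
# check branch 'alacritty' out as a linked working tree at ~/stow/alacritty
git worktree add ~/stow/alacritty alacritty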



Re: Committing git working tree with other git repos

2024-03-13 Thread Tim Woodall

On Wed, 13 Mar 2024, Paul M Foster wrote:


Folks:

I have a /home/paulf/stow directory which contains subdirectories for each
of the packages whose dotfiles I want to manage, like:

/home/paulf/stow/alacritty

In each subdirectory, I have all the config files for that package, under
git management. This means that the directory will look like this:

/home/paulf/stow/alacritty/.git
/home/paulf/stow/alacritty/.config/alacritty/alacritty.yml

This works well with stow (configs are now symlinks in $HOME).

I'd like to copy all of this to a git repo on gitlab. You would think you
could go to the ~/stow directory, "git init", then "git add" each
directory, and all is good. However, git looks inside the directories and
sees there are already .git directories there, and refuses to add the
directories and their contents to its repo. Instead, it wants you to use
"submodules", to wit:

git submodule add ./alacritty

This adds an *empty* alacritty subdirectory to the git repo, which isn't
useful.

I need a way to bring all these subdirectories and their contents under a
git repo so I can send it to gitlab. Any suggestions?

Paul




I don't know exactly how to do it but git filter-branch might get you
started.

You can use that to rewrite the history of each branch so they're now
deeper subdirectories.

Now add a remote with an empty commit on master. git rebase each repo
onto that and push as a branch.

Finally git rebase each branch onto master in turn.

If I've understood your need correctly this should not give any
conflicts.


If you don't want to mess with rewriting history then just regular git
mv, rebase onto common master and rebase each tree in turn. But this
will (IMO) make history harder to understand and any future rebase
cleanups will likely be a disaster.

Tim.



Re: Spam from the list?

2024-03-07 Thread Tim Woodall

On Thu, 7 Mar 2024, Andy Smith wrote:


Hi,

On Thu, Mar 07, 2024 at 09:44:51AM +0100, Hans wrote:
> --- sninp ---
> 
> Authentication-Results: mail35c50.megamailservers.eu; spf=none 
> smtp.mailfrom=lists.debian.org

> Authentication-Results: mail35c50.megamailservers.eu;
> 	dkim=fail reason="signature verification failed" (2048-bit key) 
> header.d=debian.org header.i=@debian.org header.b="pDp/TPD5"

> Return-Path: 
> Received: from bendel.debian.org (bendel.debian.org [82.195.75.100])
> 	by mail35c50.megamailservers.eu (8.14.9/8.13.1) with ESMTP id 
> 425I9ZEK112497

>for ; Tue, 5 Mar 2024 18:09:37 +
> 
> --- snap ---
> 
> White mails get the dkim=pass and spam mails got dkim=fail (as you see above).


A great many legitimate emails will fail DKIM so it is not a great
idea to reject every email that does so. I don't think that you are
going to have a good time using Internet mailing lists while your
mail provider rejects mails with invalid DKIM, so if I were you I'd
work on fixing that rather than trying to get everyone involved to
correctly use DKIM.


And some dkim setups seem designed with the intention that they should
not be used for mailing lists:

DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
d=dow.land;
s=20210720;
h=From:In-Reply-To:References:Subject:To:Message-Id:Date:
Content-Type:Content-Transfer-Encoding:Mime-Version:Sender:Reply-To:Cc:
Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
List-Subscribe:List-Post:List-Owner:List-Archive;

This one passed on bendel but not when it got to me. Most on debian-user
seem ok, debian-devel does seem to get more submissions with broken dkim
(based on looking at a random handful on each list)

AFAICT, it's a problem at the originator causing failures, either
something wrong with dkim setup or too strict set of headers.

I shall be checking what this does when it gets back to me. One of the
problems with dkim is that you assume it still works, it's hard to know
what others actually see...
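
One way to check a received copy is dkimpy's dkimverify (in the
python3-dkim package), e.g.:

# reads a raw message on stdin and reports whether the signature verifies
dkimverify < message.eml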



Re: SOLVED Re: Disk corruption and performance issue.

2024-02-26 Thread Tim Woodall

On Mon, 26 Feb 2024, Andy Smith wrote:


Hi,

On Mon, Feb 26, 2024 at 06:25:53PM +, Tim Woodall wrote:

Feb 17 17:01:49 xen17 vmunix: [3.802581] ata1.00: disabling queued TRIM support
Feb 17 17:01:49 xen17 vmunix: [3.805074] ata1.00: disabling queued TRIM support


from libata-core.c

 { "Samsung SSD 870*",  NULL, ATA_HORKAGE_NO_NCQ_TRIM |
  ATA_HORKAGE_ZERO_AFTER_TRIM |
  ATA_HORKAGE_NO_NCQ_ON_ATI },

This fixed the disk corruption errors at the cost of dramatically
reducing performance. (I'm not sure why because manual fstrim didn't
improve things)


That's interesting. I have quite a few of these drives and haven't
noticed any problems. What kernel version introduced the above
workarounds?

$ sudo lsblk -do NAME,MODEL
NAME MODEL
sda  SAMSUNG_MZ7KM1T9HAJM-5
sdb  SAMSUNG_MZ7KM1T9HAJM-5
sdc  Samsung_SSD_870_EVO_4TB
sdd  Samsung_SSD_870_EVO_4TB
sde  ST4000LM016-1N2170
sdf  ST4000LM016-1N2170
sdg  SuperMicro_SSD
sdh  SuperMicro_SSD

Thanks,
Andy



Looks like the fix was brand new around sept 2021
https://www.neowin.net/news/linux-patch-disables-trim-and-ncq-on-samsung-860870-ssds-in-intel-and-amd-systems/

I was still seeing corruption in August 2022 but it's possible the fix
wasn't backported to whatever release I was running.

Tim.



Re: SOLVED Re: Disk corruption and performance issue.

2024-02-26 Thread Tim Woodall

On Mon, 26 Feb 2024, Gremlin wrote:

re running fstrim in a vm.


The Host system takes care of it


I guess you've no idea what iscsi is. Because this makes no sense at
all. systemd or no systemd. The physical disk doesn't have to be
something the host system knows anything about.

Here's a thread of someone wanting to do fstrim from a vm with iscsi
mounted disks.

https://serverfault.com/questions/1031580/trim-unmap-zvol-over-iscsi


And another page suggesting you should.

https://gist.github.com/hostberg/86bfaa81e50cc0666f1745e1897c0a56

8.10.2. Trim/Discard

It is good practice to run fstrim (discard) regularly on VMs and
containers. This releases data blocks that the filesystem isn't using
anymore. It reduces data usage and resource load. Most modern operating
systems issue such discard commands to their disks regularly. You only
need to ensure that the Virtual Machines enable the disk discard option.


I would guess that if you use sparse file backed storage to a vm you'd
want the vm to run fstrim too but this isn't a setup I've ever used so
perhaps it's nonsense.



Re: SOLVED Re: Disk corruption and performance issue.

2024-02-26 Thread Tim Woodall

On Mon, 26 Feb 2024, Gremlin wrote:


Are you using systemd ?

No, I'm not


You should not be running trim in a container/virtual machine


Why not? That's, in my case, basically saying "you should not be running
trim on a drive exported via iscsi" Perhaps I shouldn't be but I'd like
to understand why. Enabling thin_provisioning and fstrim works and gets
mapped to the underlying layers all the way down to the SSD.

My underlying VG is less than 50% occupied, so I can trim the free space
by creating a LV and then removing it again (I have issue_discards set)

FWIW, I did issue fstrim in the VMs with no visible issues at all.
Perhaps I got lucky?


Here is some info: https://wiki.archlinux.org/title/Solid_state_drive


I don't see VM or virtual machine anywhere on that page.



SOLVED Re: Disk corruption and performance issue.

2024-02-26 Thread Tim Woodall

TLDR; there was a firmware bug in a disk in the raid array resulting in
data corruption. A subsequent kernel workaround fixed the corruption at
the cost of dramatically reduced disk performance (probably just writes,
but I didn't confirm).


Initially, under heavy disk load I got errors like:


Preparing to unpack .../03-libperl5.34_5.34.0-5_arm64.deb ...
Unpacking libperl5.34:arm64 (5.34.0-5) ...
dpkg-deb (subprocess): decompressing archive '/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb' (size=4015516) member 'data.tar': lzma error: compressed data is corrupt

dpkg-deb: error: <decompress> subprocess returned error exit status 2
dpkg: error processing archive /tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb (--unpack):
 cannot copy extracted data for './usr/lib/aarch64-linux-gnu/libperl.so.5.34.0' to '/usr/lib/aarch64-linux-gnu/libperl.so.5.34.0.dpkg-new': unexpected end of file or stream


The checksum will have been verified by apt during the download, but when
it comes to reading the downloaded deb back to unpack and install it, it
doesn't get the same data. The corruption can happen both at writing (the
file on disk is corrupted) and at reading (the file on disk has the
correct checksum).



A second problem I got was 503 errors from apt-cacher-ng (which ran on
the same machine as the above error)



I initially assumed this was due to faulty memory, or possibly a faulty
CPU. But I assumed memory because the disk errors were happening in a VM
and no other VMs were affected. Because I always start the same VMs in
the same order I assumed they'd be using the same physical memory each
time.

However, nothing I could do would help track down where the memory
problem was. Everything worked perfectly except when using the disk
under load.

At this time I spent a significant amount of time migrating everything
important, including the big job that triggered this problem, off this
machine onto the pair. After that the corruption problems went away but
I continued to get periodic 503 errors from apt-cacher-ng.


I continued to worry at this on and off but failed to make any progress
in finding what was wrong. The version of the motherboard is no longer
available, otherwise I'd probably have bought another one. During this
time I also spent quite a lot of effort ensuring that it was much easier
to move VMs between my two machines. I'd underestimated how tricky this
would have been if the dodgy machine had failed totally, which I became
aware of when I did migrate the VM having problems.


Late last year or early this year someone (possibly Andy Smith?) posted
a question about logical/physical sector sizes on SSDs. That set me off
investigating again as that's not something I'd thought of. That didn't
prove fruitful either but I did notice this in the kernel logs:

Feb 17 17:01:49 xen17 vmunix: [3.802581] ata1.00: disabling queued TRIM support
Feb 17 17:01:49 xen17 vmunix: [3.805074] ata1.00: disabling queued TRIM support


from libata-core.c

 { "Samsung SSD 870*",  NULL, ATA_HORKAGE_NO_NCQ_TRIM |
  ATA_HORKAGE_ZERO_AFTER_TRIM |
  ATA_HORKAGE_NO_NCQ_ON_ATI },

This fixed the disk corruption errors at the cost of dramatically
reducing performance. (I'm not sure why because manual fstrim didn't
improve things)


At this point I'd discovered that the big job that had been regularly
hitting corruption issues now completed. However, it was taking 19 hours
instead of 11 hours.

I ordered some new disks - I'd assumed both disks were affected, but
while writing this I notice that "disabling queued TRIM support" prints
twice for the same disk, not once per disk.

I thought one of the quirk entries below covered my disk, but looking
again now I see I had a 1000MX500, which doesn't actually match:

 { "Crucial_CT*M500*",  NULL, ATA_HORKAGE_NO_NCQ_TRIM |
  ATA_HORKAGE_ZERO_AFTER_TRIM },
 { "Crucial_CT*MX100*",  "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
  ATA_HORKAGE_ZERO_AFTER_TRIM },

While waiting for my disks I started looking at the apt-cacher-ng
503 problem - which has continued to bug me. I got lucky and discovered
a way I could almost always trigger it.

I managed to track that down to a race condition when updating the
Release files if multiple machines request the same file at the same
moment.

After finding a fix I found this bug reporting the same problem:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1022043

There is now a patch attached to that bug that I've been running for a
few weeks without a single 503 error.

And Sunday I replaced the two disks with new ones. Today that big job
completed in 10h15m.

Another thing I notice, although I'm not sure I understand what is going
on, is that my iscsi disks all have
   Thin-provisioning: No

This means that fstrim on the vm doesn't work; switching them to Yes
makes it work. So I'm not exactly sure where the queued trim was coming
from in the first place.
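
With tgt the per-LUN parameter can at least be flipped at runtime -
something like this (tid/lun values illustrative):

tgtadm --lld iscsi --mode logicalunit --op update \
    --tid 1 --lun 2 --params thin_provisioning=1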

I also need to check the version of tgt in sid because there doesn't
seem to be an option to switch this in the config alt

NUC freezing due to kernel bug

2024-02-07 Thread Tim Janssen
Dear Sir/Madam,

I use debian server on my NUC to run a low powered home server. It freezes
every 2-3 days due to what looks to be a kernel bug. From a lot of testing,
it only occurs when the ethernet cable is inserted, and it seems to have
something to do with low power mode (c-states). These issues have been
reported ever since kernel 5.10. I wonder if the debian devs are aware of
this issue and if a fix is underway.

Best regards,

Tim Janssen


Re: install Kernel and GRUB in chroot.

2024-02-03 Thread Tim Woodall

On Sat, 3 Feb 2024, Max Nikulin wrote:

It seems secure boot is disabled in your case, so I am wondering why you do 
not boot xen.efi directly.



Because the NVRAM is extremely temperamental. Most updates fail, or
worse, corrupt it to the point it's hard to get anything to boot.

Additionally, there was a bug in an older version of xen that caused a
kernel oops if wifi networking was started. So I wanted to start vanilla
debian and I don't dare touch the NVRAM again (or the bios) until I
absolutely have to.

I don't remember for certain now but I might be booting using
bootx86.efi (which is a copy of grubx64.efi)

It's an old laptop but it still works well for me.




Re: install Kernel and GRUB in chroot.

2024-02-02 Thread Tim Woodall

On Thu, 1 Feb 2024, Marco Moock wrote:


Am 01.02.2024 um 19:20:01 Uhr schrieb Tim Woodall:


$ cat /boot/efi/EFI/XEN/xen.cfg
[global]
default=debian

[debian]
options=console=vga smt=true
kernel=vmlinuz root=/dev/mapper/vg--dirac-root ro quiet
ramdisk=initrd.img


menuentry "Xen EFI NVME" {
 insmod part_gpt
 insmod search_fs_uuid
 insmod chain
#set root=(hd1,gpt1)
 search --no-floppy --fs-uuid --set=root C057-BC13
 chainloader (hd1,gpt1)/EFI/XEN/xen.efi
}


Then this file tells the boot loader about the /boot or / partition.
Is that the Xen virtualization software?


The NVRAM is configured to boot:
/boot/efi/EFI/debian/grubx64.efi

That then hunts for grub.cfg. I believe it finds the first grub.cfg -
which has caused me issues in the past where I've had a legacy partition
on the disk that I'd forgotten about but the efi application sees. I'd
be interested if there's a way to tell grubx64.efi to look for a
particular partition UUID.

That menuentry above then tells efi to chainload the xen.efi
application. This is all in efi land.

That then reads xen.cfg and boots the kernel and initrd defined in that
file.




Re: IPv6, ip token, NetworkManager and accept_ra

2024-02-02 Thread Tim Woodall

On Fri, 2 Feb 2024, Ralph Aichinger wrote:


Hi fellow Debian users!

In my quest to advance the IPv6 preparedness of my home LAN I want to
find a solution to use IP tokens on all my clients. IP tokens (keeping
the host part of the IPv6 address static while getting the subnet part
by SLAAC) seem very elegant to me, because it avoids DHCPv6 completely,
and still makes mostly working DNS records possible.

Opinions on SLAAC+IP tokens are welcome ;)



I don't use NetworkManager but I have this sort of thing:

iface eth0 inet6 auto
pre-up echo 64 >/proc/sys/net/ipv6/conf/$IFACE/accept_ra_rt_info_max_plen
pre-up ip token set ::17:102/64 dev $IFACE
up ip -6 addr add fd01:08b0:bfcd:100::17:112/64 dev $IFACE


One of my clients is a surface laptop running Debian sid, Gnome,
NetworkManager and getting connection via WiFi. The first hickup with
this is, that seemingly ra is disabled on my NetworkManager configured
device wl0:

root@surface:~# ip token set ::5fac dev wl0
Error: ipv6: Router advertisement is disabled on device.

This can easily be corrected with

echo 1 >  /proc/sys/net/ipv6/conf/wl0/accept_ra



you can set that in /etc/sysctl.conf. Not exactly sure why it's not set
by default though... Check whether it's turned off there!


But what is the correct way to do this "ip token set" with
NetworkManager (or in spite of NetworkManager ;)?



Don't know, sorry.
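
Though, for what it's worth, nmcli does expose an ipv6.token property
(it requires addr-gen-mode eui64); an untested sketch, with the
connection name made up:

nmcli connection modify wifi-home ipv6.addr-gen-mode eui64 ipv6.token ::5fac
nmcli connection up wifi-home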


Should I use nmcli or something else? Is there maybe even a hidden
Gnome GUI option?

Any other comments on this maybe quixotic endavour are welcome ;)



I like using tokens rather than the address generated from the MAC.



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Thu, 1 Feb 2024, Marco Moock wrote:


Am 02.02.2024 um 01:46:06 Uhr schrieb Dmitry:


2. ==>BAM<== somehow that binary knows the system partition.


That information is on the EFI partition, where the GRUB bootloader
binary also resides.

root@ryz:/boot/efi/EFI# cat /boot/efi/EFI/debian/grub.cfg
search.fs_uuid 5b8b669d-xyz root hd0,gpt2 #boot partition
set prefix=($root)'/grub'
configfile $prefix/grub.cfg
root@ryz:/boot/efi/EFI#

If that information is loaded, the kernel can be loaded from the boot
partition.





Are you sure that file does anything? I don't have one

drwxr-xr-x 2 root root   4096 Dec 31  2017 .
drwxr-xr-x 6 root root   4096 Dec 25  2019 ..
-rwxr-xr-x 1 root root 163840 Sep 11  2022 grubx64.efi


This finds my boot partition and then chainloads the XEN efi binary
which does have some config.

/boot/efi/EFI/XEN:
total 38204
drwxr-xr-x 2 root root 4096 May  5  2023 .
drwxr-xr-x 6 root root 4096 Dec 25  2019 ..
-rwxr-xr-x 1 root root 31132473 Aug 12 08:34 initrd.img
-rwxr-xr-x 1 root root  5283136 Aug 12 08:34 vmlinuz
-rwxr-xr-x 1 root root  138 May  5  2023 xen.cfg
-rwxr-xr-x 1 root root  2687456 Jun 20  2021 xen.efi

$ cat /boot/efi/EFI/XEN/xen.cfg
[global]
default=debian

[debian]
options=console=vga smt=true
kernel=vmlinuz root=/dev/mapper/vg--dirac-root ro quiet
ramdisk=initrd.img


menuentry "Xen EFI NVME" {
insmod part_gpt
insmod search_fs_uuid
insmod chain
#set root=(hd1,gpt1)
search --no-floppy --fs-uuid --set=root C057-BC13
chainloader (hd1,gpt1)/EFI/XEN/xen.efi
}



Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Fri, 2 Feb 2024, Dmitry wrote:


Hi Tim. The community is so kind.

So.


I'm not exactly sure what you're doing.


Understand how GRUB works, to boot myself.

1. Trying to install Debian on the Flash.
2. Use it by the Debootstrap.
3. Now I want to boot using that Flash.

Looks like I caught the thread.

1. ESP is a partition that stores the GRUB binary: /boot/EFI/Name/grubx64.efi
2. ==>BAM<== somehow that binary knows the system partition.



Because grubx64.efi understands the disk layout and looks for it. You can
build your own:

I'm not giving any guarantees - look at the date on this file:

$ ls -al test-uefi
-rw-r--r-- 1 tim tim 341 Dec 31  2018 test-uefi

$ cat test-uefi
grub-mkimage -o bootx64.efi -p /EFI/BOOT -O x86_64-efi \
 fat iso9660 part_gpt part_msdos \
 normal boot echo linux configfile loopback chain \
 efifwsetup efi_gop efi_uga \
 ls search search_label search_fs_uuid search_fs_file \
 gfxterm gfxterm_background gfxterm_menu test all_video loadenv \
 exfat ext2 lvm mdraid09 mdraid1x diskfilter

but that probably builds (or once worked) a .efi application that will
successfully boot a system by searching for grub.cfg. I don't remember
the details...

I also have this - take with a pinch of salt - I wrote this learning
about this system as you are trying to now...


$ ls -al uefi-notes
-rw-r--r-- 1 tim tim 2375 Dec  1  2018 uefi-notes

1 FDISK

g - create a new empty GPT partition table

p - create a primary partition
+128M (size)

t - change type
1 - EFI system

p - create primary partition
fill rest of disk

vgcreate vg-uefi-boot /dev/sdb2

lvcreate -L 128M -n boot vg-uefi-boot

lvcreate -l 100%FREE -n root vg-uefi-boot

mke2fs -j /dev/mapper/vg--uefi--boot-boot

mke2fs -j /dev/mapper/vg--uefi--boot-root

mkdosfs /dev/sdb1

mount /dev/vg-uefi-boot/root /mnt/image/

debootstrap --variant=minbase stretch /mnt/image ftp://einstein/debian/

mount -o bind /proc /mnt/image/proc
mount -o bind /dev /mnt/image/dev
mount -o bind /sys /mnt/image/sys

chroot /mnt/image <<EOF

echo ... >/etc/hostname

cat <<EOFX >/etc/fstab
# /etc/fstab: static file system information.
#
# <file system>          <mount point> <type> <options>               <dump> <pass>
/dev/vg-uefi-boot/root   /             ext3   errors=remount-ro       1      1
/dev/vg-uefi-boot/boot   /boot         ext3   defaults                1      2
UUID=D1EA-35CC           /boot/efi     auto   defaults,noatime,nofail 0      0
EOFX

mount /boot
mkdir /boot/efi
mount /boot/efi

#Don't install recommends
cat <<EOFX >/etc/apt/apt.conf.d/99-no-recommends
APT
{
  Install-Recommends "false";
}
EOFX

#Setup apt sources
cat <<EOFX >/etc/apt/sources.list
deb ftp://einstein/debian stretch main non-free
deb ftp://einstein/debian-security stretch/updates main non-free
deb ftp://einstein/local stretch main
EOFX

echo link_in_boot = Yes >/etc/kernel-img.conf

apt-get update
apt-get -y upgrade

apt-get -y install sysvinit-core
apt-get -y install openssh-server
apt-get -y install ifupdown
apt-get -y install grub-efi-amd64
apt-get -y install mdadm
apt-get -y install lvm2
apt-get -y install linux-image-amd64

grub-install

mkdir /boot/efi/EFI/BOOT
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi

update-grub

(update root password)

umount /boot/efi
umount /boot

EOF

umount /mnt/image/proc
umount /mnt/image/dev
umount /mnt/image/sys
umount /mnt/image/

vgchange -aln vg-uefi-boot

(Installed firmware-realtek)

mount /dev/vg-uefi-boot/root /mnt/image/
mount -o bind /proc /mnt/image/proc
mount -o bind /dev /mnt/image/dev
mount -o bind /sys /mnt/image/sys
chroot /mnt/image
mount -a

umount -a
exit

umount /mnt/image/proc
umount /mnt/image/dev
umount /mnt/image/sys
umount /mnt/image/
vgchange -aln vg-uefi-boot




Re: install Kernel and GRUB in chroot.

2024-02-01 Thread Tim Woodall

On Thu, 1 Feb 2024, Dmitry wrote:


Greetings!

After:
1. Creating GPT table and GPT partition with fdisk.
2. Copy data with a debootstrap.
3. Chroot into newly creating system.

I need to prepare that system for booting.
1. Install Kernel.
2. Install GRUB and Configure.
3. Add changes to UEFI to start booting.

And at the point two (Install GRUB) I a little bit confused.

1. Need to create ESP, and put GRUB there.
2. Need to configure GRUB to select appropriate kernel and ramdisk.


I'm not exactly sure what you're doing. But the "trick" to doing most of
this in a chroot is to bind mount /dev, /proc, /sys and /run into the
chroot.

Then things like installing the kernel, building the initrd etc
(usually) just work.

"Add changes to UEFI to start booting" depends on the actual hardware
that will boot. If you're preparing images on one system to boot on
another then that bit you'll have to solve by booting the hardware.

I'd probably pick a live distro but it's theoretically[1] possible to
generate your own bootx64.efi that will then boot your system. Once it's
booted you can then use the normal tools to replace it with a more
easily maintained debian solution.

[1] Not just theoretical, I've actually done it once long ago.



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Tim Woodall

On Fri, 26 Jan 2024, Tim Woodall wrote:


On Fri, 26 Jan 2024, Nicolas George wrote:



Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.


UEFI understands the EFI system filesystem so it can "safely" write new
files there.

The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.

Hardware raid that the bios cannot subvert is obviously one solution.



https://stackoverflow.com/questions/32324109/can-i-write-on-my-local-filesystem-using-efi



Re: Automatically installing GRUB on multiple drives

2024-01-26 Thread Tim Woodall

On Fri, 26 Jan 2024, Nicolas George wrote:



Now that I think a little more, this concern is not only unconfirmed,
it is rather absurd. The firmware would never write in parts of the
drive that might contain data.


UEFI understands the EFI system filesystem so it can "safely" write new
files there.

The danger then is that a write via mdadm corrupts the filesystem. I'm
not sure if mdadm will detect the inconsistent data or assume both
sources are the same.

Hardware raid that the bios cannot subvert is obviously one solution.



Re: Automatically installing GRUB on multiple drives

2024-01-25 Thread Tim Woodall

On Wed, 24 Jan 2024, Nicolas George wrote:


It is rather ugly to have the same device be both a RAID with its
superblock in the hole between GPT and first partition and the GPT in
the hole before the RAID superblock, but it serves its purpose: the EFI
partition is kept in sync over all devices.


Until your UEFI bios writes to the disk before the system has booted.

I'll be interested to hear how this goes and whether it's reliable.

I tried it years ago, using a no-superblock raid and custom initrd
(initramfs as I think it was then) to start it, but upgrades, and even
kernel updates, became 'terrifying'. Now I use dd to copy the start of
the disk...
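
By "dd to copy the start of the disk" I mean something of this shape
(device names and the sector count are illustrative - the count must
cover the GPT and the whole EFI partition, and note that this clones
the partition GUIDs too):

dd if=/dev/sda of=/dev/sdb bs=512 count=264192 conv=fsync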



Re: Disk corruption and performance issue.

2024-01-21 Thread Tim Woodall

On Sat, 20 Jan 2024, David Christensen wrote:


On 1/20/24 08:25, Tim Woodall wrote:

Some time ago I wrote about a data corruption issue. I've still not
managed to track it down ...


Please post a console session that demonstrates, or at least documents, the 
data corruption.




Console session is difficult - this is a script that takes around 6
hours to run - but a typical example of corruption is something like
this:

Preparing to unpack .../03-libperl5.34_5.34.0-5_arm64.deb ...
Unpacking libperl5.34:arm64 (5.34.0-5) ...
dpkg-deb (subprocess): decompressing archive '/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb' (size=4015516) member 'data.tar': lzma error: compressed data is corrupt
dpkg-deb: error: <decompress> subprocess returned error exit status 2
dpkg: error processing archive /tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb (--unpack):
 cannot copy extracted data for './usr/lib/aarch64-linux-gnu/libperl.so.5.34.0' to '/usr/lib/aarch64-linux-gnu/libperl.so.5.34.0.dpkg-new': unexpected end of file or stream

The checksum will have been verified by apt during the download, but when
it comes to reading the downloaded deb back to unpack and install it, it
doesn't get the same data. The corruption can happen both at writing (the
file on disk is corrupted) and at reading (the file on disk has the
correct checksum).



Please post console sessions that document the make and model of your disks, 
their partition tables, your md RAID configurations, and your LVM 
configurations.




Can you please give a clue as to what you're looking for? This is a
machine exposing dozens of LVM volumes via iscsi targets that are then
exported into VMs that then may be used as PVs in the VM.

The disk that I'm using when I saw the above error is a straight LVM ->
iscsi -> ext3 mounted like this:

/dev/xvdb on /mnt/mirror/ftp/mirror type ext3 (rw,noatime)

That is this iscsi target:
[fd01:8b0:bfcd:100:230:18ff:fe08:5ad6]:3260,1 iqn.xen17:aptmirror-archive

configured like this:
root@xen17:~# cat /etc/tgt/conf.d/aptmirror17.conf
<target iqn.xen17:aptmirror17>
  backing-store /dev/vg-xen17/aptmirror17
</target>
<target iqn.xen17:aptmirror-archive>
  backing-store /dev/vg-xen17/aptmirror-archive
</target>

and configured in the vm config like this:
disk=[ 'script=block-iscsi,vdev=xvda,target=portal=xen17:3260,iqn=iqn.xen17:aptmirror17,w',
       'script=block-iscsi,vdev=xvdb,target=portal=xen17:3260,iqn=iqn.xen17:aptmirror-archive,w',
]




Putting a sector size 512/512 disk and a sector size 512/4096 disk into the 
same mirror is unconventional.  I suppose there are kernel developers who 
could definitively explain the consequences, but I am not one of them.  The 
KISS solution is to use matching disks in RAID.




The problem with matching disks in the raid, which has bitten me before,
is that they can both be subject to a recall. I make a deliberate effort
to avoid matching disks for exactly that reason.

I'm happy to accept that this is "unconventional" - however, I didn't
even know it had happened. It was Andy's thread that gave me the clue to
look. I'm surprised that mdadm didn't say something - and I thought
LVM/mdadm all did everything at the 4k level anyway so I don't really
see why it should matter.


All the
partitions start on a 4k boundary but the big partition is not an exact
multiple of 4k.


I align my partitions to 1 MiB boundaries and suggest that you do the same.


They are aligned at 1M boundaries but while I could see that sub-4k
alignment could be triggering some (expected) problem, I can't really
see why 4k or 1M alignment would be different:

Device  StartEndSectors   Size Type
/dev/sda12048   4095   2048 1M BIOS boot
/dev/sda24096 264191 260096   127M EFI System
/dev/sda3  264192 1953525134 1953260943 931.4G Linux filesystem





... the "heavy load" filesystem that triggered the issue ...


Please post a console session that demonstrates how data corruption is 
related to I/O throughput.




I don't know how to do that, except that I run a script every Sunday that
rebuilds my entire set of local packages in a sandbox. For each package it
builds a clean sandbox, installs all of the build-depends and then builds
it. It also generates some multi-hundred MB compressed tar archives of
"clean" systems that I use to bootstrap installing new VMs. I have had the
following commands report:

build-tarfiles.sh:  tar -C ${BUILDCHROOT} --one-file-system -Jcf ${PDIR}/${tgt} .
build-tarfiles.sh:  tar tvf ${PDIR}/${tgt} >/dev/null
build-tarfiles.sh:  tar tvf ${PDIR}/${tgt} >/dev/null

Where the first tar tvf reports that the archive is corrupted while the
second works (and the archive is uncorrupted)


There are a LOT of
partitions and filesystems in a complicated layered LVM setup ...


Complexity is the enemy of data integrity and system reliability.  I sugg

Disk corruption and performance issue.

2024-01-20 Thread Tim Woodall
171 Program_Fail_Count      0x0032   100   100   000    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   000    Old_age   Always       -       0
173 Ave_Block-Erase_Count   0x0032   067   067   000    Old_age   Always       -       433
174 Unexpect_Power_Loss_Ct  0x0032   100   100   000    Old_age   Always       -       12
180 Unused_Reserve_NAND_Blk 0x0033   000   000   000    Pre-fail  Always       -       45
183 SATA_Interfac_Downshift 0x0032   100   100   000    Old_age   Always       -       0
184 Error_Correction_Count  0x0032   100   100   000    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   074   052   000    Old_age   Always       -       26 (Min/Max 0/48)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_ECC_Cnt 0x0032   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   100   100   000    Old_age   Always       -       1
202 Percent_Lifetime_Remain 0x0030   067   067   001    Old_age   Offline      -       33
206 Write_Error_Rate        0x000e   100   100   000    Old_age   Always       -       0
210 Success_RAIN_Recov_Cnt  0x0032   100   100   000    Old_age   Always       -       0
246 Total_LBAs_Written      0x0032   100   100   000    Old_age   Always       -       63148678276
247 Host_Program_Page_Count 0x0032   100   100   000    Old_age   Always       -       1879223820
248 FTL_Program_Page_Count  0x0032   100   100   000    Old_age   Always       -       1922002147

Does anything leap out at anyone? Anything I should try next? Normally I
try and avoid having disks bought at the same time from the same brand
paired together but I'll give that a try if it will fix this.

Tim.



Re: standardize uid:gid?

2024-01-17 Thread Tim Woodall

On Thu, 18 Jan 2024, David Chmelik wrote:


Couldn't Debian standardize uid:gid numbers for daemons?  The oldest--and
only strictly UNIX-like--GNU/Linux (Slackware) does this so if you install
multiple instances and want them the same, you can backup /etc/passwd & /
etc/group from one and use them on another (as long as there aren't
different users which is sometimes the case).  This standardization was
probably from UNIX/*BSD.
   What happens every time I can't use *BSD or Slackware on a server and
resort to Devuan (or Debian-based for some users' PCs) is that all the
uid:gid seem random so there's no way to administer them all with the same
files in cases that it'd work.  Having random uid:gid is a last-rate
style.



I haven't tried it but I would assume that if the user exists then the
package uses that.

So creating a template /etc/passwd before installing packages would fix
this.
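
An untested sketch (uid, user and package names made up):

# pin the uid before the package creates the user itself
adduser --system --group --uid 117 --home /var/lib/mydaemon mydaemon
apt-get install mydaemon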

I agree this is annoying, and hardish to fix once servers are deployed.



Re: disable auto-linking of /bin -> /usr/bin/

2024-01-09 Thread Tim Woodall

On Wed, 10 Jan 2024, Jeffrey Walton wrote:



I think some programs can break, like those that assume / and /usr are
both mounted early in the boot process. I think the only guarantee is
/ will be mounted early, and all programs needed to boot are available
from /. I thought there was a discussion about some problems with
systemd when / and /usr are different mount points (and only / is
mounted early), but I can't find it at the moment.



Yes, I thought systemd requires /usr mounted by the initrd.



Re: disable auto-linking of /bin -> /usr/bin/

2024-01-09 Thread Tim Woodall

On Wed, 10 Jan 2024, miphix wrote:


If you were to issue 'ls -l /' you'll find that /bin, /sbin and
lib{32,64,x32} are linked to their counterparts in /usr/. I understand
the logic in doing so. However, for specific reasons that would require
exhaustive explanations that I would prefer to spare us all, I would
like to break this behaviour by having /usr genuinely be installed
wholeheartedly on its own partition. I'm cool with doing things the hard
and painful way. Any details you can share that would allow me to figure
out how to break, or divert, this behaviour would be appreciated. I'm
not elite enough with linux to figure this out, but I am comfortable
with digging deep with the right background knowledge to navigate what's
needed.




This is a very complex migration in progress.

IIRC, before bookworm, not having merged /usr will work. bookworm will
probably work, but it will fight you and it's not supported, and trixie
just plain won't work.

By the time trixie is released, everything will be in /usr according to
dpkg. Scripts may no longer have consistent paths to the same
application.



Re: Help: network abuse

2023-12-23 Thread Tim Woodall

On Sat, 23 Dec 2023, David Christensen wrote:
Sending a RST to a falsified IP address would make the sending host into an 
attacker by proxy.  Why do you suggest it?



Because the OP wants it to stop. And the OP is running a server on this
port that is clearly not responding properly or we'd at least see the
SYN+ACK. Perhaps it cannot keep up with the connections.

So the OP needs to tell the problem clients to stop retrying.

If it's malicious traffic then there's nothing the OP can do to stop it
except get a new IP or get their ISP to drop it before it gets to them.

The OP can try ICMP port unreachable too. But that tells the client
there's no server, rather than that there's a TCP problem.

If it's not a bandwidth problem then the OP should just ignore it.

Nobody, but nobody, is going to send traffic to some random host with a
fake source IP in the hopes someone will notice and start sending RST
some time later to that address instead of continuing to drop it.
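
If the OP wants to try the RST route, a minimal iptables sketch (scoped
to one of the source addresses from the tcpdump; REJECT with tcp-reset
sends the RST, and icmp-port-unreachable is the other option above):

  # Answer unwanted SYNs with a TCP RST instead of silently dropping:
  iptables -A INPUT -p tcp -s 34.217.144.104 --dport 80 \
           -j REJECT --reject-with tcp-reset

  # Or signal "no server here" with ICMP port unreachable:
  iptables -A INPUT -p tcp -s 34.217.144.104 --dport 80 \
           -j REJECT --reject-with icmp-port-unreachable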



Re: Help: network abuse

2023-12-23 Thread Tim Woodall

On Thu, 21 Dec 2023, David Christensen wrote:



Perhaps you could set up a DMZ, move services into the DMZ,  and provide a 
VPN connection to the DMZ for your Internet users.  Then you could close all 
of the incoming WAN ports except VPN.



It might be possible to put the VPN endpoint into a VPS, create an SSH tunnel 
out from the httpd server to the VPS, and close all of the WAN incoming 
ports.




If the OP is worried about the bandwidth usage then none of that will
help. The fact that the OP is not sending a SYN+ACK (according to the
tcpdumps that I saw) means that this is already blackholed.[2]

There are three options at this point:
1. Ignore it - my "EVILSYN[1]" blacklist is right at the top of my iptables
rules and drops without logging before anything else.

2. Talk to their ISP and get it blocked there - that's the only surefire
way to stop it eating their quota if that's the problem.

3. Try and make them give up - that's why I suggested sending a RST.


[1] I have a set of rules that blacklist IPs that send too many SYN
packets that are not responded to with SYN+ACK.
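
Not my exact ruleset - tracking SYNs that never receive a SYN+ACK needs
more machinery - but a rough approximation using the iptables "recent"
match might look like this (list name and thresholds invented):

  # Drop sources that have sent 10+ SYNs in the last 60 seconds,
  # then record this SYN against the source:
  iptables -A INPUT -p tcp --syn -m recent --name EVILSYN \
           --rcheck --seconds 60 --hitcount 10 -j DROP
  iptables -A INPUT -p tcp --syn -m recent --name EVILSYN --set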

[2] This did look weird. I'm not sure how only some connections get a
SYN+ACK back - I wonder if their webserver is rate-limited and these are
"genuine" connection attempts that are failing - although the SPT=80
DPT=80 looks suspiciously like something crafted to get through naive
stateless firewall rules that rely on outgoing (allowed) connections to
have DPT=80 to the internet and SPT=80 from the internet.



Re: Help: network abuse

2023-12-21 Thread Tim Woodall

On Thu, 21 Dec 2023, Alain D D Williams wrote:


My home PC is receiving, for hours at a time, 12-30 kB/s input traffic. This is
unsolicited. I do not know what it is trying to achieve but suspect no good. It
is also eating my broadband allowance.

This does not show up in the Apache log files - the TCP connection does not 
succeed.

Sometimes my machine does send a packet in reply; there are 2 examples at the
foot of this email.

Questions:

? What is going on ?

? What can I do about it ?
 I do manually add some of the IPs to the f2b chain which will stop replies
 but that is about it.

My ISP refuses to do anything about it - I admit that I cannot see what they
could do, maybe filter packets with a source port of 80 or 443.

I also get attempts to break into ssh (port 22) - I am not worried about that.

I append a few lines of output of "tcpdump -n -i enp3s0" done today.
192.168.108.2 is the address of my desktop PC.

The connecting IPs below all belong to Amazon but this changes with time, China
is another common source of similar packets.

11:08:56.354303 IP 34.217.144.104.80 > 192.168.108.2.80: Flags [S], seq 
19070976, win 51894, options [mss 1401,sackOK,TS val 1182532729 ecr 0,nop,wscale 
7], length 0
11:08:56.354700 IP 34.217.144.104.80 > 192.168.108.2.80: Flags [S], seq 
3665362944, win 51894, options [mss 1402,sackOK,TS val 4179952761 ecr 0,nop,wscale 
7], length 0
11:08:56.360527 IP 52.195.179.12.80 > 192.168.108.2.80: Flags [S], seq 
479395840, win 51894, options [mss 1412,sackOK,TS val 3391683448 ecr 0,nop,wscale 
7], length 0
11:08:56.360696 IP 52.195.179.12.80 > 192.168.108.2.80: Flags [S], seq 
1622147072, win 51894, options [mss 1410,sackOK,TS val 2887711608 ecr 0,nop,wscale 
7], length 0
11:08:56.360950 IP 54.184.78.87.80 > 192.168.108.2.80: Flags [S], seq 
3168796672, win 51894, options [mss 1404,sackOK,TS val 535364985 ecr 0,nop,wscale 
7], length 0
11:08:56.364565 IP 52.195.179.12.80 > 192.168.108.2.80: Flags [S], seq 
132317184, win 51894, options [mss 1407,sackOK,TS val 2350122105 ecr 0,nop,wscale 
7], length 0
11:08:56.364708 IP 34.217.144.104.80 > 192.168.108.2.80: Flags [S], seq 
1098776576, win 51894, options [mss 1405,sackOK,TS val 3426157689 ecr 0,nop,wscale 
7], length 0
11:08:56.367975 IP 13.231.232.88.80 > 192.168.108.2.80: Flags [S], seq 
3272540160, win 51894, options [mss 1413,sackOK,TS val 979961209 ecr 0,nop,wscale 
7], length 0

2 days ago a similar capture. Note that the source port is 443 not 80:

09:47:31.416452 IP 5.45.73.147.443 > 192.168.108.2.80: Flags [S], seq 
2724200448, win 51894, options [mss 1401,sackOK,TS val 862439534 ecr 0,nop,wscale 
7], length 0
09:47:31.417861 IP 27.124.10.200.443 > 192.168.108.2.80: Flags [S], seq 
925237248, win 51894, options [mss 1407,sackOK,TS val 756418658 ecr 0,nop,wscale 
7], length 0
09:47:31.440892 IP 27.124.10.197.443 > 192.168.108.2.80: Flags [S], seq 
3474063360, win 51894, options [mss 1404,sackOK,TS val 3970828642 ecr 0,nop,wscale 
7], length 0
09:47:31.449393 IP 27.124.10.200.443 > 192.168.108.2.80: Flags [S], seq 
2844721152, win 51894, options [mss 1407,sackOK,TS val 1831471202 ecr 0,nop,wscale 
7], length 0
09:47:31.451430 IP 154.39.104.67.443 > 192.168.108.2.80: Flags [S], seq 
2336358400, win 51894, options [mss 1415,sackOK,TS val 395513698 ecr 0,nop,wscale 
7], length 0
09:47:31.451610 IP 27.124.10.225.443 > 192.168.108.2.80: Flags [S], seq 
808976384, win 51894, options [mss 1414,sackOK,TS val 1960250978 ecr 0,nop,wscale 
7], length 0
09:47:31.453372 IP 143.92.60.30.443 > 192.168.108.2.80: Flags [S], seq 
3177512960, win 51894, options [mss 1408,sackOK,TS val 4033677410 ecr 0,nop,wscale 
7], length 0
09:47:31.456937 IP 27.124.10.225.443 > 192.168.108.2.80: Flags [S], seq 
1042087936, win 51894, options [mss 1415,sackOK,TS val 2011106914 ecr 0,nop,wscale 
7], length 0
09:47:31.461961 IP 27.124.10.226.443 > 192.168.108.2.80: Flags [S], seq 
3200516096, win 51894, options [mss 1403,sackOK,TS val 2314013026 ecr 0,nop,wscale 
7], length 0

Examples where my machine sends a reply:

09:47:31.658790 IP 27.124.10.225.443 > 192.168.108.2.80: Flags [S], seq 
612564992, win 51894, options [mss 1415,sackOK,TS val 2011106914 ecr 0,nop,wscale 
7], length 0
09:47:31.659442 IP 192.168.108.2.80 > 154.39.104.67.443: Flags [S.], seq 
3770299450, ack 1858732033, win 65160, options [mss 1460,sackOK,TS val 164888251 
ecr 395513698,nop,wscale 7], length 0

09:47:31.756220 IP 5.45.73.147.443 > 192.168.108.2.80: Flags [S], seq 
2992898048, win 51894, options [mss 1401,sackOK,TS val 862439534 ecr 0,nop,wscale 
7], length 0
09:47:31.756272 IP 192.168.108.2.80 > 5.45.73.147.443: Flags [.], ack 
1226309633, win 509, options [nop,nop,TS val 2085784149 ecr 994101358], length 0


You can try sending RST. That might make them give up.

There is not much else you can do.

I sometimes do a whois on a persistent offender and blacklist the entire
network. But I don't know if they stop as this happens before any
logging.

I'd suggest sending RST first.

Re: Firefox, was: Telnet

2023-12-04 Thread Tim Woodall

On Mon, 4 Dec 2023, Michael Kjörling wrote:


On 4 Dec 2023 10:29 +, from debianu...@woodall.me.uk (Tim Woodall):

Some years ago I abandoned firefox because there was no way to override
one of its 'I'm sorry Dave, I'm afraid I can't do that' spasms.

It's crazy that they make things like certificate pinning *impossible*
to override.


Firefox deprecated support for HPKP in version 72 in January 2020, in
response to their issue 1412438. 
https://bugzilla.mozilla.org/show_bug.cgi?id=1412438



Yes, I know. I don't recall now if this was the last straw that broke
the camel's back, but firefox adopted a habit of deciding that things
'shouldn't be done' and so leaving no way to say 'yes, I know what I'm
doing'.

IIRC they did something daft with self-signed certificates as well.

chrome based browsers aren't much better but I've yet to be hit by going
from working to not working without an intervening 'are you sure'.

Anyway, I've been a netscape/mozilla devotee since the 90s, but other than
those 'never update' VMs I don't use it now. I still use lynx - although
there is less and less that works, the browser itself has never failed
to 'do its best'.




Re: Telnet

2023-12-04 Thread Tim Woodall

On Sun, 3 Dec 2023, Greg Wooledge wrote:


On Sun, Dec 03, 2023 at 11:52:51AM -0700, Charles Curley wrote:


True. None the less, there is at least one perfectly good use for
telnet: testing connections to servers.

charles@hawk:~$ telnet hawk
Trying 127.0.1.1...
telnet: Unable to connect to remote host: Connection refused
charles@hawk:~$ telnet hawk 80
Trying 127.0.1.1...
Connected to hawk.localdomain.
Escape character is '^]'.
^]
telnet> quit
Connection closed.
charles@hawk:~$


Yes, there is plenty of use for the telnet *client*.  Nobody disputes this.

The question is whether anyone should be running a telnetd *server*.
On an isolated network, it might be acceptable.  But it's really a bad
habit that should be stomped out aggressively, as machines which are
currently on an isolated network might not remain there forever.




Agree with all of the above. However, the OP was connecting to what
looks like a router address. It's possibly hardware that cannot be
updated, only replaced. (And I'm not sure, therefore, if this is a
debian question at all.)

I have some (post-2020) motherboards whose IPMI does not work with any
JVM post stretch, nor firefox post buster. So I have to keep an old setup
around.

You should never put these sorts of devices on the internet anyway. It
might be *nice* if we didn't have to use old 'insecure' protocols but
it's not *insecure* to do so. The IPMI in question are only accessible
via physical access (so network encryption is hardly helpful) or VPN
(which is kept up to date)

It has frustrated me that the browser writers have refused to
distinguish between rfc1918 (and equivalent ipv6) addresses and
publicly routable addresses when it comes to warnings and refusals to
connect.

Some years ago I abandoned firefox because there was no way to override
one of its 'I'm sorry Dave, I'm afraid I can't do that' spasms.

It's crazy that they make things like certificate pinning *impossible*
to override. Another one that bit me - again, hardware where the only way
to use https was to have it generate its own self-signed certificate
that expired after a year. You can 'work around' it, but it's *expensive*
the first time you hit it as you end up losing other config. Sure, the
hardware was buggy...



Re: connect two hosts over wifi without router?

2023-11-27 Thread Tim Woodall

On Mon, 27 Nov 2023, Hans wrote:


Hi folks,

just before I am trying forever:

Is it possible, to connect two hosts directly over wlan without using a
router?



Yes. It's called ad-hoc networking. No AP, no router, just two wifi cards
acting as a ptp link.

But I doubt you can do it other than by low-level commands.

I have done it in the past when trying to connect two points via a very
marginal link but I don't recall details. At some point I'll boot up my
eeepc and see if I still have scripts around - if nobody else sorts your
problem.
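
For the record, the low-level commands would be something like this on
both hosts (a sketch from memory, not the old scripts mentioned above;
interface name, cell name, frequency and addresses are all made up):

  # Put the card into IBSS (ad-hoc) mode and join the same cell:
  ip link set wlan0 down
  iw dev wlan0 set type ibss
  ip link set wlan0 up
  iw dev wlan0 ibss join myadhoc 2412
  ip addr add 10.0.0.1/24 dev wlan0   # 10.0.0.2/24 on the other host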



Re: quick alpine config question?

2023-11-25 Thread Tim Woodall

On Sat, 25 Nov 2023, Tixy wrote:


On Sat, 2023-11-25 at 13:49 +, Tixy wrote:

On Sat, 2023-11-25 at 11:46 +, Tim Woodall wrote:

OK. This is weird! Something is joining those two lines.


Not at this end it isn't. For me, all 3 of your diffs look the same on
screen and are binary the same apart from the space you deleted in the
third one.

Perhaps your MUA is treating it as being in quoted-printable encoding?


I've just looked at the message on the web archive [1] and see that
shows the lines joined before you removed the space after the '=',
perhaps that's what you were looking at?

[1] https://lists.debian.org/debian-user/2023/11/msg00898.html



I wasn't looking at that but yes, that's exactly what I was seeing.

When I view them in alpine, both the versions in sent-mail and the
versions in debian-user look like that, with the lines joined.

If I "postpone" the message and then continue editing the lines do not
get merged (I tested this while composing the previous messages)

After your email (thanks) I went and checked the Maildir. All the
messages are correct with the linebreak - so they've gone out from
alpine, through various mailservers to debian-user and then back
through more to my Maildir and nothing has corrupted them.

Interestingly, and I didn't think to try this before because my emails
are plain text, hitting `H' (full header mode) does display the message
correctly with the linebreak.

My guess is that it's an unfortunate interaction of using vim as my
editor, alpine setting format=flowed and that diff having a trailing
space.

I think, therefore, alpine (and the webserver you linked to) are
correctly reflowing these two lines. At some point I need to see if
there's a way to get alpine (or vim) to warn about this situation, or
maybe a different header would be more appropriate. I do not have
:set fo+=w, so I do not normally have trailing spaces (and vim does
highlight them for me, so I knew it was there, but I left it in
deliberately).

Now I know about this I can avoid it - but a better header would seem to
be appropriate given that I do not use this feature.

https://stackoverflow.com/questions/16282728/how-do-i-compose-format-flowed-emails-that-include-hanging-indents-with-vim
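
A hypothetical ~/.vimrc fragment that makes a stray trailing space hard
to miss when composing format=flowed mail (the highlight group name is
invented):

  " Paint trailing whitespace red so a flowed-space stands out:
  highlight TrailingWS ctermbg=red guibg=red
  match TrailingWS /\s\+$/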

Tim.


Re: quick alpine config question?

2023-11-25 Thread Tim Woodall

OK. This is weird! Something is joining those two lines.

diff --git a/dovecot/conf.d/10-mail.conf b/dovecot/conf.d/10-mail.conf
index b47235f..5b20997 100644
--- a/dovecot/conf.d/10-mail.conf
+++ b/dovecot/conf.d/10-mail.conf
@@ -27,7 +27,8 @@
 #
 # 
 #
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+#mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Maildir:INBOX=~/Maildir/INBOX:LAYOUT=fs

 # If you need to set multiple mailbox locations or want to change default
 # namespace settings, you can do it by defining namespace sections.
@@ -47,6 +48,7 @@ namespace inbox {
   # namespaces or some clients get confused. '/' is usually a good one.
   # The default however depends on the underlying mail storage format.
   #separator =
+  separator = /

   # Prefix required to access this namespace. This needs to be different for
   # all namespaces. For example "Public/".

I've tried removing the trailing space on the original commented
separator =



Re: quick alpine config question?

2023-11-25 Thread Tim Woodall

On Sat, 25 Nov 2023, Tim Woodall wrote:


On Fri, 24 Nov 2023, Tim Woodall wrote:


On Thu, 23 Nov 2023, Karen Lewellen wrote:


Hi folks,
We have a member of the greater Toronto Linux Users group, who rather
enjoys setting up email accounts, hosting a lot of them personally.
He is new to alpine though; our test of roundcube was almost, but not
quite, successful due to sloppy JavaScript coding on the send button.
Where he is stuck at the moment is how to get alpine to display imap
things like the sent mail folder in the folders list.
Dreamhost is having the same problem with some of our new office email
accounts.
Is there a specific imap server config file, or a choice from the main
settings s, config c options?
Also, what about the imap aspect of certificates? I recall there is a new
way to create the aph thing google requires.

Thanks for ideas.
Kare




Are you using mbox or Maildir? And is inbox in /home or /var/spool/mail?

I have this working with dovecot. Maildir with INBOX in /home.

I'll look up the config but it will probably be Saturday. I vaguely
recall having to tweak something in the dovecot configs.

Tim.




These are the two changes I made to /etc/dovecot/conf.d/10-mail.conf


Sorry, don't know what happened there: I must have accidentally deleted the 
extra line!

diff --git a/dovecot/conf.d/10-mail.conf b/dovecot/conf.d/10-mail.conf
index b47235f..5b20997 100644
--- a/dovecot/conf.d/10-mail.conf
+++ b/dovecot/conf.d/10-mail.conf
@@ -27,7 +27,8 @@
 #
 # 
 #
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+#mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Maildir:INBOX=~/Maildir/INBOX:LAYOUT=fs

 # If you need to set multiple mailbox locations or want to change default
 # namespace settings, you can do it by defining namespace sections.
@@ -47,6 +48,7 @@ namespace inbox {
   # namespaces or some clients get confused. '/' is usually a good one.
   # The default however depends on the underlying mail storage format.
   #separator = 
+  separator = /


   # Prefix required to access this namespace. This needs to be different for
   # all namespaces. For example "Public/".



Re: quick alpine config question?

2023-11-25 Thread Tim Woodall

On Fri, 24 Nov 2023, Tim Woodall wrote:


On Thu, 23 Nov 2023, Karen Lewellen wrote:


Hi folks,
We have a member of the greater Toronto Linux Users group, who rather
enjoys setting up email accounts, hosting a lot of them personally.
He is new to alpine though; our test of roundcube was almost, but not
quite, successful due to sloppy JavaScript coding on the send button.
Where he is stuck at the moment is how to get alpine to display imap
things like the sent mail folder in the folders list.
Dreamhost is having the same problem with some of our new office email
accounts.
Is there a specific imap server config file, or a choice from the main
settings s, config c options?
Also, what about the imap aspect of certificates? I recall there is a new
way to create the aph thing google requires.

Thanks for ideas.
Kare




Are you using mbox or Maildir? And is inbox in /home or /var/spool/mail?

I have this working with dovecot. Maildir with INBOX in /home.

I'll look up the config but it will probably be Saturday. I vaguely
recall having to tweak something in the dovecot configs.

Tim.




These are the two changes I made to /etc/dovecot/conf.d/10-mail.conf

diff --git a/dovecot/conf.d/10-mail.conf b/dovecot/conf.d/10-mail.conf
index b47235f..5b20997 100644
--- a/dovecot/conf.d/10-mail.conf
+++ b/dovecot/conf.d/10-mail.conf
@@ -27,7 +27,8 @@
 #
 # 
 #
-mail_location = mbox:~/mail:INBOX=/var/mail/%u
+#mail_location = mbox:~/mail:INBOX=/var/mail/%u
+mail_location = maildir:~/Maildir:INBOX=~/Maildir/INBOX:LAYOUT=fs

 # If you need to set multiple mailbox locations or want to change default
 # namespace settings, you can do it by defining namespace sections.
@@ -47,6 +48,7 @@ namespace inbox {
   # namespaces or some clients get confused. '/' is usually a good one.
   # The default however depends on the underlying mail storage format.
   #separator = 
+  separator = /


   # Prefix required to access this namespace. This needs to be different for
   # all namespaces. For example "Public/".



Re: quick alpine config question?

2023-11-23 Thread Tim Woodall

On Thu, 23 Nov 2023, Karen Lewellen wrote:


Hi folks,
We have a member of the greater Toronto Linux Users group, who rather
enjoys setting up email accounts, hosting a lot of them personally.
He is new to alpine though; our test of roundcube was almost, but not
quite, successful due to sloppy JavaScript coding on the send button.
Where he is stuck at the moment is how to get alpine to display imap
things like the sent mail folder in the folders list.
Dreamhost is having the same problem with some of our new office email
accounts.
Is there a specific imap server config file, or a choice from the main
settings s, config c options?
Also, what about the imap aspect of certificates? I recall there is a new
way to create the aph thing google requires.

Thanks for ideas.
Kare




Are you using mbox or Maildir? And is inbox in /home or /var/spool/mail?

I have this working with dovecot. Maildir with INBOX in /home.

I'll look up the config but it will probably be Saturday. I vaguely
recall having to tweak something in the dovecot configs.

Tim.



Re: Boot fails to load network or USB, piix4_smbus - SMBus Host Controller, after update to dbus (1.14.10-3) unstable

2023-11-23 Thread Tim Woodall

On Thu, 23 Nov 2023, Andy Dorman wrote:

I have not yet figured out how to fix our two broken servers since we can't 
boot them to update them.  Since we have several identical running servers 
and can mount and manipulate the file system of the dead servers, is it 
possible to just copy a good initrd.img-6.5.0-4-amd64 to the dead server 
/boot directory and boot to that image? That sounds way too simple to me.




I'm pretty sure you can bind mount /proc, /sys, /dev and /run, chroot in,
and then run update-initramfs to regenerate it.
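
Something like this, assuming the dead server's root filesystem is
mounted at /mnt (an untested sketch):

  # Bind the virtual filesystems the initramfs tools expect, then rebuild:
  for d in proc sys dev run; do mount --bind /$d /mnt/$d; done
  chroot /mnt update-initramfs -u -k 6.5.0-4-amd64
  for d in run dev sys proc; do umount /mnt/$d; done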




Re: Why is bullseye-backports recommended on bookworm?

2023-11-18 Thread Tim Woodall

On Tue, 14 Nov 2023, Vincent Lefevre wrote:


To my surprise, reportbug asks me to use bullseye-backports
(= oldstable-backports) on my bookworm (= stable) machine:

Your version (6.1.55-1) of linux-image-6.1.0-13-amd64 appears to be out of date.
The following newer release(s) are available in the Debian archive:
 bullseye-backports (backports-policy): 6.1.55+1~bpo11+1
Please try to verify if the bug you are about to report is already addressed by 
these releases.  Do you still want to file a report [y|N|q|?]?

Why?



I'm not exactly sure how numbering works with unstable, but
6.1.55+1~bpo11+1 comes after 6.1.55-1 and before 6.1.55+1.

I'm not sure how you've ended up with a version with a '-' or bpo has
ended up with a '+'.

I thought the whole point of the bpo numbering was so it sorted before
the next release.
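
You can check the ordering dpkg itself uses (both commands should print
"yes" for the versions above):

  dpkg --compare-versions 6.1.55-1 lt 6.1.55+1~bpo11+1 && echo yes
  dpkg --compare-versions 6.1.55+1~bpo11+1 lt 6.1.55+1 && echo yes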



Re: IMAP vs POP was Thunderbird vs Claws Mail

2023-11-18 Thread Tim Woodall

On Sat, 18 Nov 2023, Joe wrote:


If this area is likely to be the issue, try telnet to the IMAP server
using port 143, you should get back a list of capabilities which may
help. Oddly, though I'm using port 993 to my local server, it does not
return any information from that port, only on 143. Presumably this is
to assist security.



I'd assume you need to use something like openssl s_client rather than
telnet to port 993.
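
For example (hostname is a placeholder):

  # Implicit TLS on 993:
  openssl s_client -connect imap.example.com:993 -crlf
  # Or STARTTLS on the plain port:
  openssl s_client -connect imap.example.com:143 -starttls imap -crlf

Once connected, 'a1 CAPABILITY' should return the same capability list
that telnet shows on plain 143.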




Re: Disk error?

2023-08-21 Thread Tim Woodall

On Mon, 21 Aug 2023, Tim Woodall wrote:


On Wed, 24 Aug 2022, Tim Woodall wrote:


I got this error while installing build-essential

Preparing to unpack .../03-libperl5.34_5.34.0-5_arm64.deb ...
Unpacking libperl5.34:arm64 (5.34.0-5) ...
dpkg-deb (subprocess): decompressing archive 
'/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb' 
(size=4015516) member 'data.tar': lzma error: compressed data is corrupt

dpkg-deb: error:  subprocess returned error exit status 2
dpkg: error processing archive 
/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb (--unpack):
cannot copy extracted data for 
'./usr/lib/aarch64-linux-gnu/libperl.so.5.34.0' to 
'/usr/lib/aarch64-linux-gnu/libperl.so.5.34.0.dpkg-new': unexpected end of 
file or stream


Am I right that this must be a local error - apt will have verified the
checksum while downloading the deb? (and it worked on rerunning - the
deb was in acng)

I can find nothing anywhere that suggests anything has gone wrong (other
than this error) but (and I'm sure it's a coincidence) since installing
ACNG (on the same machine) I've been getting a number of errors similar
to things like this where files appear to be corrupted but work on the
next attempt.

There's no SMART errors or anything like that either.

Anyone got any ideas - any logging I should add to try and track down
where the issue might be?

Tim.




Just a FYI, I've been battling this, and errors like this for almost a
year. The last but one kernel upgrade seems to have fixed it. :-)

I've been reverting all the changes I made trying to track it down and,
touch wood, it's not come back.

One of these two upgrades fixed it. (the first doesn't seem plausible)

Start-Date: 2023-08-11  06:25:52
Commandline: apt-get -y upgrade
Upgrade: usbip:amd64 (2.0+5.10.179-3, 2.0+5.10.179-5)
End-Date: 2023-08-11  06:25:53

Start-Date: 2023-08-12  08:16:59
Commandline: apt-get -y dist-upgrade
Install: linux-image-5.10.0-24-amd64:amd64 (5.10.179-5, automatic)
Upgrade: linux-image-amd64:amd64 (5.10.179-3, 5.10.179-5)
End-Date: 2023-08-12  08:23:07

I cannot tell which machine mattered, could be the xen host, the guest
running apt, the guest running apt-cacher-ng or the one running the
squid proxy. (the last two feel impossible given the symptoms above)

The disk was mounted via iscsi on the xen host, so it's not quite as
simple as saying apt got the right file over the network therefore it
must be the guest.

I'm not going to try to reproduce and track down exactly what fixed it.

Tim.



So I spoke too soon. Reverting the last change I made halved the memory
and left only one vcpu in the guest, meaning that the guest is severely
loaded, and I got a one-bit error in a downloaded (Packages.xz) file.



Re: Disk error?

2023-08-21 Thread Tim Woodall

On Wed, 24 Aug 2022, Tim Woodall wrote:


I got this error while installing build-essential

Preparing to unpack .../03-libperl5.34_5.34.0-5_arm64.deb ...
Unpacking libperl5.34:arm64 (5.34.0-5) ...
dpkg-deb (subprocess): decompressing archive 
'/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb' 
(size=4015516) member 'data.tar': lzma error: compressed data is corrupt

dpkg-deb: error:  subprocess returned error exit status 2
dpkg: error processing archive 
/tmp/apt-dpkg-install-zqY3js/03-libperl5.34_5.34.0-5_arm64.deb (--unpack):
cannot copy extracted data for 
'./usr/lib/aarch64-linux-gnu/libperl.so.5.34.0' to 
'/usr/lib/aarch64-linux-gnu/libperl.so.5.34.0.dpkg-new': unexpected end of 
file or stream


Am I right that this must be a local error - apt will have verified the
checksum while downloading the deb? (and it worked on rerunning - the
deb was in acng)

I can find nothing anywhere that suggests anything has gone wrong (other
than this error) but (and I'm sure it's a coincidence) since installing
ACNG (on the same machine) I've been getting a number of errors similar
to things like this where files appear to be corrupted but work on the
next attempt.

There's no SMART errors or anything like that either.

Anyone got any ideas - any logging I should add to try and track down
where the issue might be?

Tim.




Just a FYI, I've been battling this, and errors like this for almost a
year. The last but one kernel upgrade seems to have fixed it. :-)

I've been reverting all the changes I made trying to track it down and,
touch wood, it's not come back.

One of these two upgrades fixed it. (the first doesn't seem plausible)

Start-Date: 2023-08-11  06:25:52
Commandline: apt-get -y upgrade
Upgrade: usbip:amd64 (2.0+5.10.179-3, 2.0+5.10.179-5)
End-Date: 2023-08-11  06:25:53

Start-Date: 2023-08-12  08:16:59
Commandline: apt-get -y dist-upgrade
Install: linux-image-5.10.0-24-amd64:amd64 (5.10.179-5, automatic)
Upgrade: linux-image-amd64:amd64 (5.10.179-3, 5.10.179-5)
End-Date: 2023-08-12  08:23:07

I cannot tell which machine mattered, could be the xen host, the guest
running apt, the guest running apt-cacher-ng or the one running the
squid proxy. (the last two feel impossible given the symptoms above)

The disk was mounted via iscsi on the xen host, so it's not quite as
simple as saying apt got the right file over the network therefore it
must be the guest.

I'm not going to try to reproduce and track down exactly what fixed it.

Tim.



issue after Kernel upgrade

2023-08-14 Thread Tim McConnell
Hi List,
I upgraded my kernel and now, after restarting my computer, I have to go
into /proc/sys/user/max_*_namespaces and modify the values to something
other than 0 (zero). Where do I file a bug?
I'm on Debian Stable with kernel version 6.1.0-11-amd64 if it helps
any.
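(A workaround, rather than a fix: persist non-zero limits with sysctl so
you don't have to edit /proc after every boot. The value below is an
arbitrary illustration; each user.max_*_namespaces key can be set the
same way.

  # /etc/sysctl.d/99-userns.conf
  user.max_user_namespaces = 16384

Then 'sysctl --system' applies it without a reboot.)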
Thanks! 
-- 
Tim McConnell 



Re: not a debian problem

2023-07-29 Thread Tim Woodall

On Sat, 29 Jul 2023, Tim Woodall wrote:


On Wed, 26 Jul 2023, Dan Ritter wrote:


Tim Woodall wrote:

This is not a debian problem but I'm hoping the collective wisdom might
have some ideas.

One can be soft rebooted with no issues, the other hangs in the bios.

Anyone seen anything like this and what was the issue?

Both machines are running bullseye if there are things I can check while
booted.


It is most likely either a motherboard BIOS change or a BIOS
setting that differs; it is less likely to be a BMC firmware
difference, but that's possible.

Can you diff the output of dmidecode from each system?



System Initializing...

For approx 9 seconds.

Followed by a single frame (in the capture at least) of:
AST1300/2300 VGA True Color Graphics and Video Accelerator
VBIOS Version 0.96.00
DRAM Size: 16MB

At which point there's a mode change and then the rest of the boot
proceeds.


When it fails to boot I get the "System Initializing..." screen with the
flashing cursor but this hangs indefinitely. I recorded for 2 minutes
and literally the only frames present are related to the flashing cursor

Which makes me suspect that the issue is somehow related to initializing
the video card.


I'll report back if I get to the bottom of this. Going to check the
bios settings related to video - perhaps there's a difference there. I
run both of these systems headless.




Problem fixed. I changed the onboard graphics from AST1300/2300 to Intel
in the BIOS.

That was a bit of a mistake as it then broke the IPMI. So I attached a
screen and keyboard, rebooted, changed it back, and now the problem is
gone.

Tim.



Re: not a debian problem

2023-07-29 Thread Tim Woodall

On Wed, 26 Jul 2023, Dan Ritter wrote:


Tim Woodall wrote:

This is not a debian problem but I'm hoping the collective wisdom might
have some ideas.

One can be soft rebooted with no issues, the other hangs in the bios.

Anyone seen anything like this and what was the issue?

Both machines are running bullseye if there are things I can check while
booted.


It is most likely either a motherboard BIOS change or a BIOS
setting that differs; it is less likely to be a BMC firmware
difference, but that's possible.

Can you diff the output of dmidecode from each system?

-dsr-



Sorry for the delay, was travelling and only just had time to try this.

That was a good idea; unfortunately, it's not given me any hints:

--- dmi.good2023-07-29 18:15:45.685631581 +0100
+++ dmi.bad 2023-07-29 18:15:38.793189732 +0100
@@ -127,12 +127,12 @@
Type Detail: Unknown
Speed: 1333 MT/s
Manufacturer: Kingston 
-   Serial Number: 141A8354 
+   Serial Number: 112068F9

Asset Tag: A1_AssetTagNum0
Part Number: 99U5428-018.A00LF
Rank: 2
Configured Memory Speed: 1333 MT/s
-   Minimum Voltage: 31.0 V
+   Minimum Voltage: 1.048 V
Maximum Voltage: 31.908 V
Configured Voltage: Unknown

@@ -160,7 +160,7 @@
Type Detail: Unknown
Speed: 1333 MT/s
Manufacturer: Kingston 
-   Serial Number: 151A4D52 
+   Serial Number: 142054F9

Asset Tag: A1_AssetTagNum1
Part Number: 99U5428-018.A00LF
Rank: 2
@@ -191,7 +191,7 @@
Access Method: Memory-mapped physical 32-bit address
Access Address: 0xFFB4
Status: Valid, Not Full
-   Change Token: 0x022B
+   Change Token: 0x02BB
Header Format: Type 1
Supported Log Type Descriptors: 25
Descriptor 1: Single-bit ECC memory error


The Serial Number changes are for the DIMMs in the system. That voltage
difference is "odd"

All four dimms have the same part number. The DIMMs in BANK1 have
Minimum Voltage: 32.332 V, Maximum Voltage: Unknown.

I don't know how to interpret that "Change Token" - which is part of the
System Event Log but I assume it's not relevant.


I've just discovered that I can record the boot.

Ignoring the flashing cursor in the top RH corner the boot sequence goes:


System Initializing...

For approx 9 seconds.

Followed by a single frame (in the capture at least) of:
AST1300/2300 VGA True Color Graphics and Video Accelerator
VBIOS Version 0.96.00
DRAM Size: 16MB

At which point there's a mode change and then the rest of the boot
proceeds.


When it fails to boot I get the "System Initializing..." screen with the
flashing cursor but this hangs indefinitely. I recorded for 2 minutes
and literally the only frames present are related to the flashing cursor

Which makes me suspect that the issue is somehow related to initializing
the video card.


I'll report back if I get to the bottom of this. Going to check the
bios settings related to video - perhaps there's a difference there. I
run both of these systems headless.


Tim.



Re: General Questions

2023-07-26 Thread Tim Woodall

On Tue, 25 Jul 2023, Greg Wooledge wrote:


On Tue, Jul 25, 2023 at 07:53:52AM -0400, Dan Ritter wrote:

Source Code wrote:

3. Is it possible to reduce RAM consumption? And minimize it? Let's say up
to 100-200 mb?


That depends on what you choose to run, and how. I would not
recommend trying to do anything interesting on a machine with
less than 256MB of RAM. That will not be enough for many common
uses.


According to 
256 MB is the absolute minimum amount of RAM for installing bookworm
on an AMD64 machine, using the text installer and no GUI packages.

However, if one is trying to set up a low-memory server of some kind,
especially in a virtual machine or similar environment, that's an
entirely different line of questioning.

I'm guessing that's NOT the goal here, because the OP mentioned WiFi.
This leaves me somewhat perplexed.




FWIW, I don't think a minimal bullseye install will boot under xen with
128MB of RAM configured. IIRC buster will.



not a debian problem

2023-07-26 Thread Tim Woodall

This is not a debian problem but I'm hoping the collective wisdom might
have some ideas.

I have two, nominally identical, systems. Only difference should be the
make and model of the ssd disks.

One can be soft rebooted with no issues, the other hangs in the bios.

They have ipmi, and a power cycle via that will allow a successful
reboot. It doesn't require a power drain.

I'm 95% sure I've tried swapping the disks between the machines - but
I'll reconfirm when I've got a few hours spare. The problem is the
motherboard (or case?)

The motherboard is J1900D2Y

It's not a huge issue, I rarely reboot and can do it remotely if
required, but it's an annoyance I'd like to fix.

Anyone seen anything like this and what was the issue?

For various reasons I cannot reboot the good one at will, so I'm hoping
to gather a set of things to try and check all in one go.

Both machines are running bullseye if there are things I can check while
booted.

Tim



Re: override logrotate.timer from another package?

2023-07-04 Thread Tim Woodall

On Tue, 4 Jul 2023, Harald Dunkel wrote:


Hi folks,

what is the recommended way to override logrotate.timer from a
metapackage to get hourly logfile rotation (depending on size
and age of the logfile, as usual)?

I had added

etc/systemd/system/logrotate.timer.d/hourly.conf


I'm not exactly clear what you're doing but I guess you're creating a
package that provides the config file?

Sticking config files in packages can be problematic when it comes to
upgrades. I do it a lot, in particular it makes it much easier to change
the config across lots of machines, but doing it all "properly" in the
face of local edits isn't easy.

Others would probably say use ansible (or something like that) but
it works for me.



to the package install file, but at upgrade time it complained
that nobody ran systemctl daemon-reload. Ain't this the job of
the DEBHELPER macro in the postinst script?


Once your package is installed, what does the postinst script look like?



Should this file be installed in /lib/systemd/system/logrotate.timer.d
instead, using dh_installsystemd ?



Assuming you are creating a package and you want to share it with
possible local edits in /etc then this would be the way to go. If, like
me, you're using a package in place of manually editing then /etc is
more likely the right place.
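
For reference, the drop-in itself only needs to clear and then reset the
schedule. A sketch of what etc/systemd/system/logrotate.timer.d/hourly.conf
might contain (the empty OnCalendar= line clears the stock daily setting
before the hourly one is added - standard systemd drop-in behaviour):

  [Timer]
  OnCalendar=
  OnCalendar=hourly

followed by a systemctl daemon-reload so systemd picks it up.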




Every insight is highly appreciated

Harri





