Re: Testing amd64 netinst LUKS+LVM install broken

2024-04-09 Thread Andrew M.A. Cater
On Tue, Apr 09, 2024 at 05:51:26PM -0700, Craig Hesling wrote:
> Hi all,
> 
> I'm having an issue with the guided partitioner in the Debian testing amd64
> installer.
> Specifically, the "Guided - use entire disk and set up encrypted LVM"
> errors out and emits the following error message:
> 
> partman-lvm: pvcreate: error while loading shared libraries: libaio.so.1:
> cannot open shared object file: No such file or directory
> 

Hi,

I suspect that Debian Testing _might_ be uninstallable just at the moment.
There's a large-scale migration and change of packages underway (to do with
securing a workable time implementation post-2038 by moving away from 32-bit
time values).

That's taken a lot longer than expected: Debian Unstable and therefore
Debian Testing have been hit. It's just possible that this library
is caught up in the dependencies.

This will resolve itself in due course - but it might be better to
install a minimal stable and then upgrade to testing later.
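
For illustration, on a fresh bookworm (stable) install the upgrade step
would look roughly like this - the suite names are just an example, check
your own sources.list first:

# point APT at testing instead of bookworm, then upgrade
sudo sed -i 's/bookworm/testing/g' /etc/apt/sources.list
sudo apt update
sudo apt full-upgrade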

Be aware that the problematic version of the xz libraries is also present
in Debian testing - someone else pointed this out the other day.
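
If you want to see what a given system has, something like this should show
the installed liblzma version (5.6.0 and 5.6.1 were the affected upstream
releases):

dpkg -s liblzma5 | grep '^Version'
xz --version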

Hope this helps,

Andy
(amaca...@debian.org)


> Is this a known issue?
> 
> *Reproduction:*
> 
> md5sum ~/Downloads/debian-testing-amd64-netinst.iso
> > d80f2f073cdb2db52d9d1dd8e625b04b  /home/craig/Downloads/debian-testing-amd64-netinst.iso
> dd if=/dev/zero of=~/Downloads/test-hda.img bs=1G count=8
> qemu-system-x86_64 -cdrom ~/Downloads/debian-testing-amd64-netinst.iso -hda ~/Downloads/test-hda.img -m 8G
> 
> https://youtu.be/jJ-oOA2s8Wc
> 
> All the best,
> 
> Craig



Re: Testing amd64 netinst LUKS+LVM install broken

2024-04-09 Thread Craig Hesling
I also just tried the latest download from
https://cdimage.debian.org/cdimage/weekly-builds/amd64/iso-cd/:

md5sum debian-testing-amd64-netinst.iso
> e618afbebbbdf9495c74140bc87f2a4b  debian-testing-amd64-netinst.iso
sha256sum debian-testing-amd64-netinst.iso
> a72e2cd87f8bc1af3a6df65a12194c8e043c617fd15f23d545ddab8c55c82e51  debian-testing-amd64-netinst.iso
sha512sum debian-testing-amd64-netinst.iso
> db2ea5d9aecf92768d3904b2101a32b73a140e450335fcbfd4c640247b779c0b30938d50ad13938fb158f1063fdfd6514d1bbf38dd9b059fc5c6ac7b1ff3a50a  debian-testing-amd64-netinst.iso

This image also shows the issue.

All the best,

Craig

On Tue, Apr 9, 2024 at 5:51 PM Craig Hesling wrote:

> Hi all,
>
> I'm having an issue with the guided partitioner in the Debian testing
> amd64 installer.
> Specifically, the "Guided - use entire disk and set up encrypted LVM"
> errors out and emits the following error message:
>
> partman-lvm: pvcreate: error while loading shared libraries: libaio.so.1:
> cannot open shared object file: No such file or directory
>
> Is this a known issue?
>
> *Reproduction:*
>
> md5sum ~/Downloads/debian-testing-amd64-netinst.iso
> > d80f2f073cdb2db52d9d1dd8e625b04b  /home/craig/Downloads/debian-testing-amd64-netinst.iso
> dd if=/dev/zero of=~/Downloads/test-hda.img bs=1G count=8
> qemu-system-x86_64 -cdrom ~/Downloads/debian-testing-amd64-netinst.iso -hda ~/Downloads/test-hda.img -m 8G
>
> https://youtu.be/jJ-oOA2s8Wc
>
> All the best,
>
> Craig
>


Testing amd64 netinst LUKS+LVM install broken

2024-04-09 Thread Craig Hesling
Hi all,

I'm having an issue with the guided partitioner in the Debian testing amd64
installer.
Specifically, the "Guided - use entire disk and set up encrypted LVM"
errors out and emits the following error message:

partman-lvm: pvcreate: error while loading shared libraries: libaio.so.1:
cannot open shared object file: No such file or directory

Is this a known issue?

*Reproduction:*

md5sum ~/Downloads/debian-testing-amd64-netinst.iso
> d80f2f073cdb2db52d9d1dd8e625b04b  /home/craig/Downloads/debian-testing-amd64-netinst.iso
dd if=/dev/zero of=~/Downloads/test-hda.img bs=1G count=8
qemu-system-x86_64 -cdrom ~/Downloads/debian-testing-amd64-netinst.iso -hda ~/Downloads/test-hda.img -m 8G

https://youtu.be/jJ-oOA2s8Wc

All the best,

Craig


Re: HDD long-term data storage with ensured integrity

2024-04-09 Thread piorunz

On 02/04/2024 13:53, David Christensen wrote:


Does anyone have any comments or suggestions regarding how to use
magnetic hard disk drives, commodity x86 computers, and Debian for
long-term data storage with ensured integrity?


I use Btrfs on all my systems, including some servers, with soft RAID1
and RAID10 modes (because these modes are considered stable and
production ready). I decided on Btrfs rather than ZFS because Btrfs
allows migrating drives on the fly while the partition is live and
heavily used: I can replace drives with different sizes and types, mix
capacities, change RAID levels, and change the number of drives. I could
go from a single drive to RAID10 on 4 drives and back while my data
stays 100% available at all times.
It has saved my bacon many times, including a hard checksum corruption
on an NVMe drive which I would otherwise never have known about. Thanks
to Btrfs I located the corrupted files, fixed them, and got the hardware
replaced under warranty.
It also helped with corrupted RAM: Btrfs simply refused to save a file
because the saved copy couldn't match the checksum read from the source,
due to RAM bit flips. I diagnosed it, replaced the memory, and all was
good.
I also like that when one of the drives gets an ATA reset for whatever
reason, all the other drives continue to read and write, and I can keep
using the system for hours, if I even notice. That's not possible in
normal circumstances without RAID. Once the problematic drive is back,
or after a reboot if it's more serious, I run the "scrub" command and
everything is resynced again. Even if I don't do that, Btrfs corrects
checksum errors dynamically on the fly anyway.
And the list goes on - I've been using Btrfs for the last 5 years
without a single problem to date; it has survived hard resets, power
losses, drive failures, and countless migrations.
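
For anyone curious, the kind of on-line reshaping I mean looks roughly like
this (run as root; device names and the mount point are only examples):

# create a two-device RAID1 filesystem (data and metadata mirrored)
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/data
# later, add two more drives and convert to RAID10 while the fs stays mounted
btrfs device add /dev/sdd /dev/sde /mnt/data
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/data
# after a drive hiccup, verify checksums and repair from the good copies
btrfs scrub start /mnt/data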


[1] https://github.com/openzfs/zfs/issues/15526

[2] https://github.com/openzfs/zfs/issues/15933


The problems reported there are from Linux kernels 6.5 and 6.7 on a Gentoo
system. Does this even affect Debian Stable with its 6.1 LTS kernel?

--
With kindest regards, Piotr.

⢀⣴⠾⠻⢶⣦⠀
⣾⠁⢠⠒⠀⣿⡁ Debian - The universal operating system
⢿⡄⠘⠷⠚⠋⠀ https://www.debian.org/
⠈⠳⣄



Re: using mbuffer: what am i doing wrong?

2024-04-09 Thread DdB
On 09.04.2024 at 15:30, Arno Lehmann wrote:

> I'd propose to use
> 
> ss -f inet -lpn
> 
> ss instead of netstat... I try to catch up with changing times :-)
(...)
> 
> Arno
> 
> 

Thank you so much! Your suggestion did help big time, and the transfer
is working now as desired.

A great relief and a good lesson: don't install net-tools, but iproute2
instead. :-)

Have a nice day!
DdB



Re: using mbuffer: what am i doing wrong?

2024-04-09 Thread Michael Kjörling
On 9 Apr 2024 15:13 +0200, from debianl...@potentially-spam.de-bruyn.de (DdB):
>> port=8000 # just an example
>> filename=test.bin # created before
>> 
>> # Start the receiver first, like:
>> mbuffer -I $port -o $filename
>> 
>> # Then start the sender like:
>> mbuffer -i $filename -O ${receiverIP}:$port
> 
> On my LAN (all virtual, at the moment), this fails, because the sender
> cannot connect to the receiver, so times out.

Looking at the mbuffer man page, it certainly looks like it should
work. Another tool that will let you do the same thing is nc (netcat);
nc -l on the listener side, plain nc on the connecting side.
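
For instance, something along these lines ought to work with the nc from
Debian's netcat-openbsd package (port and file name are just your example
values):

# on the receiver
nc -l 8000 > test.bin
# on the sender
nc <receiverIP> 8000 < test.bin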


> What left me puzzled is this:
> Just starting the receiver, and then checking, if it is listening at the
> $port, i found:
>> sudo netstat | grep $port
> to return nothing

netstat defaults to resolving IPs and ports to names. At a minimum,
add -n if you are grepping its output for a specific port number. (You
may also want to use grep -w.)
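
Something like this, for example - note the -l as well, since plain netstat
only lists active connections, not listening sockets:

sudo netstat -lnt | grep -w "$port"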

I also suggest double-checking that you don't have a firewall blocking
the traffic.

-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



Re: using mbuffer: what am i doing wrong?

2024-04-09 Thread Arno Lehmann

Hello,


I have not used mbuffer for a long time, so won't comment on that.

But your netstat call looks unsuitable to diagnose.

I'd propose to use

ss -f inet -lpn

ss instead of netstat... I try to catch up with changing times :-)

-f inet because in this case you're (probably) just interested in IPv4
network sockets. It could be IPv6, of course; in that case use inet6.


-l list listening sockets, not active connections
-p show the process using the socket. Will usually require root
-n show numbers, not translated names

l and n are probably the most important for you (and are also available
for netstat): without -l you would miss the listening socket entirely, and
without -n grep would miss the output whenever the port number has an
assigned service name (see /etc/services).
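
Put together with the port variable from your earlier mail, that would be
roughly:

sudo ss -f inet -lpn | grep -w "$port"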


Good luck!

Arno


--
Arno Lehmann

IT-Service Lehmann
Sandstr. 6, 49080 Osnabrück



using mbuffer: what am i doing wrong?

2024-04-09 Thread DdB
Hello list,

from my research, the abbreviated takeaway is:

> port=8000 # just an example
> filename=test.bin # created before
> 
> # Start the receiver first, like:
> mbuffer -I $port -o $filename
> 
> # Then start the sender like:
> mbuffer -i $filename -O ${receiverIP}:$port

On my LAN (all virtual, at the moment), this fails, because the sender
cannot connect to the receiver, so times out.

What left me puzzled is this:
Just starting the receiver, and then checking whether it is listening on
$port, I found:
> sudo netstat | grep $port
to return nothing.

At that point, I decided I would appreciate some hints from people more
experienced than I am.

What am I doing wrong?
DdB



Re: Why LVM

2024-04-09 Thread David Christensen

On 4/8/24 16:54, Stefan Monnier wrote:

If I have a hot-pluggable device (SD card, USB drive, hot-plug SATA/SAS
drive and rack, etc.), can I put LVM on it such that when the device is
connected to a Debian system with a graphical desktop (I use Xfce) an icon
is displayed on the desktop that I can interact with to display the file
systems in my file manager (Thunar)?


In the past: definitely not.  Currently: no idea.
I suspect not, because I think the behavior on disconnection is still
poor (you want to be extra careful to deactivate all the volumes on the
drive *before* removing it, otherwise they tend to linger "for ever").
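
(For a removable drive whose volume group were called, say, "usbvg" - the
name is purely illustrative - that deactivation step would look roughly
like:

umount /dev/usbvg/*     # unmount any mounted logical volumes first
vgchange -an usbvg      # deactivate every LV in the volume group

before pulling the device.)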

I guess that's one area where partitions are still significantly better
than LVM.


 Stefan "who doesn't use much hot-plugging of mass storage"



Thank you for the clarification.  :-)


David