Re: Switch boot entry by power-on reason

2024-07-29 Thread hede
Hello David and Thomas, 

On Sun, 21 Jul 2024 10:45:59 +0000 David wrote:

> On Sun, 21 Jul 2024 at 09:46, Thomas Schmitt  wrote:
> > ...

> So your manually written grub.cfg would contain something like the below
> lines in addition to whatever other content you need to boot the machine.
> 
> smbios --type 1 --get-byte 24 --set result
> if [ "${result}" == "REPLACEME" ] ; then
> default=1
> else
> default=2
> fi
> 

Many thanks, it works :-) 

For me the result is either 5 or 6, and I can switch between the boot 
entries by setting the default entry via this smbios command. The best 
thing about this solution is that it only sets the default value, so the 
user can still override it manually by choosing some other entry. The 
timeout remains. 

Btw: This computer is one of those where Windows constantly kicks Grub out 
of the EFI Boot Manager. After booting Windows there is no longer any Grub 
entry in the EFI Boot Manager and the PC boots Windows only. 

But that's not Microsoft's fault here, it's the BIOS vendor's fault (HP/AMI), 
as this UEFI removes all EFI Boot Manager entries except the default one. I 
tried to add several additional entries and they all got removed on the next 
boot. So when Grub adds itself, the Windows boot entry gets removed; Windows 
then adds itself again on the next boot (which I think is quite reasonable to 
do), and Grub gets removed again. 

The solution is to force Windows to register Grub as its own boot loader 
with (in an administrator shell on Windows):

bcdedit /set {bootmgr} path \EFI\debian\shimx64.efi

There are plenty of articles on the Internet about how to do this correctly. 
Afterwards, Windows itself wants to install Grub whenever Grub gets removed 
from the UEFI. ;-) 
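
To double-check that the change took effect, listing the Windows Boot 
Manager entry should then show the new path:

bcdedit /enum {bootmgr}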

> Hopefully all that remains is to use the above information to figure out
> the actual value needed to replace my REPLACEME placeholder.

Boot the PC via all the different methods and for each one run this in the 
Grub shell: 
smbios --type 1 --get-byte 24
This returns the value for the current startup type / boot reason. 

Many thanks
hede



Re: CVE-2023-5217 unimportant for firefox?

2023-09-30 Thread hede
On Sat, 30 Sep 2023 17:28:29 +0200 Klaus Singvogel  
wrote:

> hede wrote:
> > Hi, 
> > 
> > does anyone know why CVE-2023-5217 (critical vp8 encoder bug) is rated as 
> > an "open unimportant issue" for firefox-esr? Currently it is not fixed in 
> > bookworm and newer [1]. Mozilla itself rates it as "critical" [2].  
> 
> That's fixed in Debian Bullseye.
> If I look into /usr/share/doc/firefox-esr/changelog.Debian.gz, I find this 
> entry on top:
> 
> -
> firefox-esr (115.3.1esr-1~deb11u1) bullseye-security; urgency=medium
> 
>   * New upstream release.
>   * Fix for mfsa2023-44, also known as CVE-2023-5217.
> -

Yeah, fixed in Bullseye and not in Bookworm and newer, that's what I 
criticised. 

But the Wanderer and Lee already had an explanation: Firefox in Bookworm and 
newer uses the system library (libvpx) which has fixes applied. 

hede



Re: CVE-2023-5217 unimportant for firefox?

2023-09-30 Thread hede
On Sat, 30 Sep 2023 07:37:04 -0400 The Wanderer  wrote:

> When I follow the link to [3], and look at the bottom of the page, I see
> what looks to me like an explanation

Ah, I get it. That's indeed a good explanation. Then the state of "vulnerable" 
is simply wrong, because it's actually "not applicable". 

Someone should fix the Debian security tracker. ;-) 

Thank you Wanderer,

hede



CVE-2023-5217 unimportant for firefox?

2023-09-30 Thread hede
Hi, 

does anyone know why CVE-2023-5217 (critical vp8 encoder bug) is rated as an 
"open unimportant issue" for firefox-esr? Currently it is not fixed in bookworm 
and newer [1]. Mozilla itself rates it as "critical" [2].

[1] https://security-tracker.debian.org/tracker/source-package/firefox-esr
[2] https://www.mozilla.org/en-US/security/advisories/mfsa2023-44/

hede



Re: fstrim: /: the discard operation is not supported

2023-06-11 Thread hede
On Sun, 11 Jun 2023 19:54:32 +0200 hede  wrote:

> on bullseye fstrim was working fine, but after the upgrade to bookworm fstrim 
> stops working. 

Sorry, my fault. 

I'm using a script to decrypt this drive via SecureBoot+TPM, so /etc/crypttab 
isn't used. This script doesn't use the allow-discards option by default, and 
it was reset to default values with the upgrade to bookworm. 

After patching the script, everything is fine for me. :-)

(btw, I have to repair my github repo ... somehow it's broken:
https://github.com/hede5562/mortar , fork of 
https://github.com/noahbliss/mortar 
I mean, github is broken, but... okay... github is Microsoft now... eh!?)

regards
hede



Re: Cloning a disk: partclone?

2023-01-20 Thread hede

On 19.01.2023 20:14, Charles Curley wrote:

On Thu, 19 Jan 2023 12:49:57 -0600
Tom Browder  wrote:


+ Can it do a complete clone on an active disk? Or do I need a live
CD or USB stick?


I wouldn't try backing up a live partition due to issues with
referential integrity. Suppose two interdependent files, A and B, change
during the backing up, like so:

A is backed up.
A and B are both changed.
B is backed up.

Your originals are fine, they both have the changes. But the backup is
broken: only one file has the changes.


That's true. Especially for a block-based copy of mounted filesystems 
there's a high risk of inconsistency, which results not only in broken 
files but even in filesystems that won't mount. (i.e. something like a 
"dd" of /dev/sdX)


Copying files from a mounted filesystem, even an active root partition, 
works in most cases. It's not exactly 100%, but chances are high that 
everything will be fine. I did that several times right in the live system 
and never had real problems, even including databases (SQL+LDAP). Yes, 
booting the machine from external boot media is absolutely preferred for 
system cloning purposes, but if someone wants to take the risk, it usually 
works.


It's reliable enough that many backup solutions don't depend on snapshots, 
and it works. Still, backing up data _using_ snapshots is preferable.


The only exception to that is an LVM logical disk (or similar) where you
have taken a snapshot, and you back up the snapshot.


or any other snapshot mechanism like btrfs snapshots.
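
As a minimal sketch of that snapshot-then-copy approach with LVM (volume 
group and LV names are made-up examples):

# freeze a point-in-time view of the root LV
lvcreate --snapshot --size 5G --name rootsnap /dev/vg0/root
mount -o ro /dev/vg0/rootsnap /mnt/snap
# copy from the frozen view instead of the live filesystem
rsync -aHAX /mnt/snap/ /backup/root/
umount /mnt/snap
lvremove -y /dev/vg0/rootsnap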

hede



Re: why some memory are missing

2023-01-08 Thread hede

On 08.01.2023 00:30, Jeremy Hendricks wrote:

I imagine it’s a bios limitation and it cannot address the full 4GB
even though it’s a 64bit CPU. This was common with the Intel 925/945
chipsets even through they supported Intel 64bit CPUs of the time.


I'm pretty sure it can address 2^32 bytes, aka 4 GB. But within this range 
there is the PCI address space. PCI cards are memory mapped: to access 
memory regions on PCI devices, the CPU simply accesses those addresses like 
main memory. But you cannot access PCI devices AND use the full 4 GB of 
physical main memory within the same 2^32-byte range.


Typically chipsets/BIOSes can map the missing memory somewhere above 4 GB, 
but not every chipset/BIOS does this. Then even in 64-bit mode you'll 
lose the memory in the range of the PCI address space. If, on the other 
hand, the chipset/BIOS maps those memory regions above 4 GB, then even in 
32-bit mode Linux can use this memory (via PAE).
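
On a running Linux system you can see this address layout yourself; the PCI 
windows that punch holes below 4 GB show up in /proc/iomem (as root, to see 
the full addresses):

# System RAM vs. PCI windows in the physical address space
grep -iE "system ram|pci" /proc/iomem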


hede



Re: Wear levelling on micro-sd cards

2022-12-29 Thread hede

On 27.12.2022 10:12 Tixy wrote:

You probably can't. I've not heard of removable cards supporting the
'trim' command.


I'm running Debian on a Lenovo Flex 3 Chromebook here, booting from an SD 
card in the internal SD slot. And fstrim is working fine on a SanDisk 
High Endurance (one of the cheapest 128 GB SD cards I could find at my 
local electronics store).


So now you've heard first hand: (some) removable cards do indeed support 
the trim command ;-)


I think it depends on the SD card controller AND the card itself. fstrim 
doesn't work with the same SD card in some cheap USB microSD adapter 
here, while other SD cards don't support trim even in the Chromebook's 
SD card slot.
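
If someone wants to check their own setup: non-zero DISC-GRAN/DISC-MAX 
values in lsblk's discard view mean the device advertises discard/TRIM 
(the device name is just an example):

lsblk --discard /dev/mmcblk0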


hede



Re: Wifi won't work under debian 11.6, please heeeeelp

2022-12-21 Thread hede

FYI

 Original Message 
Subject: Re: Wifi won't work under debian 11.6,  please help
Date: 22.12.2022 08:38

Hi, the problem of wifi card has been resolved special thanks to " here
" and all the support team for their attention, my objective is to make
Linux especially debian my main OS and promote it to my colleagues and
many other users, please if you have any resource from where we can
learn more about debian please share.

Have a nice day  and thank you again for all

On Wednesday, 21 December 2022, 17:35:14 UTC+1, hede wrote:


On Wed, 21 Dec 2022 15:23:47 + (UTC) Mansour zermello wrote:

> Hi, thanks for you response, that's reassuring, but in my sourcelist
> file i have no any non-free, please tell me all what you did step by
> step .
> Thank you so much one more time

You have answered me directly here, not to the list. No one else will
see this answer.

First you can check if you really have this specific wifi chipset via
lspci, like Charles suggested.

If this is the case, then for a first test it would be sufficient to
install the package manually, download:
http://ftp.fr.debian.org/debian/pool/non-free/f/firmware-nonfree/firmware-iwlwifi_20210818-1~bpo11+1_all.deb

as root:
dpkg -i firmware-iwlwifi_20210818-1~bpo11+1_all.deb

If this works, you can enable the non-free sources by adding "non-free"
via text editor to the corresponding lines in /etc/apt/sources.list,
like Christoph has shown, to get updates!

For further help please answer to the list and not only to me ;-)

regards
hede




Re: Wifi won't work under debian 11.6, please heeeeelp

2022-12-21 Thread hede

On 21.12.2022 15:19, Mansour zermello wrote:

I have the model: Intel AX201NGW and downloaded the files from the
official website of Intel, i extracted the folder in /Downloads, then
i copied all the files into /lib/firmware but still not work.


I do also have a device with Intel AX201 and for me it was sufficient to 
install the firmware-iwlwifi package.


What's the output of the following command?

dmesg | grep iwlwifi

regards
hede



Re: Debian failed

2022-12-16 Thread hede

On 2022-12-15 08:36 hw wrote:


It turned out that the AMD card is not even an option anymore because
it's built so badly that not only the height but also the width exceeds
the slot size.  It's a few millimeters too wide and as a result, the
fans won't spin and the card overheats.


That doesn't mean it's built badly; it simply doesn't fit your needs. I'm 
pretty sure higher-end NVidia cards typically are even "worse built" in 
this regard. I doubt you would chime in here to call them "built badly".


That's like with:


Debian needs to fix that.

... or ...

obviously not

... or 

not with Debian


Your expectation is not a general rule. If your hardware needs a more 
recent kernel and Mesa, Debian Stable is simply not the best distribution 
to choose for your* desktop. That doesn't mean Debian stable is built 
badly; it simply doesn't fit your needs.


But that's ok. There are many alternatives.

Use Debian testing or some other more bleeding-edge distribution like 
Fedora. Btw: why not choose RHEL? Typically it is - like Debian stable - 
not expected to run on very recent hardware, but RHEL 9 is currently quite 
new and should support the RX 6000 series (besides the fact that you came 
to the conclusion not to use the RX 6600 at all).


On my gaming PC I'm running Arch Linux, even more bleeding edge than 
Fedora (I think). I'm using a modern AMD card in there and I'd say 
that's much more straightforward than using NVidia hardware (no extra 
driver needed) - YMMV.


*) To be clear, your individual case doesn't make a general rule. As a 
counterexample, I'm writing this message on a modern 2-in-1 flip/tablet 
laptop running Debian Stable and Gnome via Wayland, with modern features 
like auto screen rotation and an automatic on-screen keyboard. 
Works.


hede



Re: Debian failed

2022-12-11 Thread hede
On Sun, 11 Dec 2022 04:51:10 +0100 hw  wrote:

> And it works like 97% perfectly fine ...

That's an oxymoron. 

> 
> > Radeon RX 6000 series was released last year. I doubt it was possible to 
> > use one of these with Red Hat Enterprise Linux ootb in the beginning of 
> > this year before RHEL 9 was released. ;-)  
> 
> I don't know, I had NVIDIA cards before and there was never a problem
> with Debian or Fedora or Gentoo being too old for that.

Actually with NVidia it's more typical for the kernel to be too new or the 
card to be too old. Or other sources of incompatibility, which emerge from 
time to time.

If you have less problems with NVidia, maybe you should simply stick to NVidia 
then. 

> Last year was at
> least a year ago and Debian still can't use the card?  Seriously? 

With Debian bookworm or sid your card should work. Both are no less Debian than 
bullseye. 

Beyond that, as others already pointed out: hardware which needs a kernel 
newer than 5.10 to be fully supported won't run perfectly with Debian 11 
"bullseye" by default. But that's also true for all NVidia cards. In both 
cases you need additional sources: for newer AMD cards it's "backports", 
and for NVidia it's "non-free". 

If you have less problems with Fedora, maybe you should simply use Fedora 
instead. 

> It's
> not even some kind of special card (except being way too large) but the
> minimum card you can get away with when you have a 4k display (and has
> only about half the performance or even less of the 1080ti FE I
> surprisingly resurrected.)

I'd expect the RX 6600 XT to be a little below a GTX 1080 Ti, with much 
lower power usage - but not half the performance. There's probably 
something wrong with your configuration, or you're using a workload which 
favours NVidia. Both are possible.

> On top of that, the AMD drivers are open source and in the standard
> kernel and are supposed to work.  It doesn't make any sense. 

Indeed it makes sense if the kernel you use is older than the minimum required. 
Update your kernel and it should work. 

> Who is
> cooperative with their drivers now, NVIDIA or AMD?

AMD (regarding Linux kernel or distribution inclusion)

> 
> Wayland still doesn't work with NVIDIA, but I can live without it for
> now ...

It works fine with AMD and Intel.

hede



Re: Monitor traffic on a port.

2022-12-10 Thread hede
On Fri, 09 Dec 2022 23:25:36 -0600 pe...@easthope.ca wrote:

> Appears nettools is deprecated.
> How is traffic on a specific port monitored now?

Do you mean netstat? There's "ss" in the iproute2 package. 

But I wouldn't call either of them a "traffic monitoring" tool. Nor any of 
the net-tools. 

With "traffic monitoring" I would think of iptraf-ng or iftop, but both of them 
are neither part of net-tools nor iproute2. 
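
In case it helps, rough examples for both readings of the question (port 
443 is just a placeholder):

# sockets touching a given port ("ss" from iproute2, the netstat successor)
ss -tnp '( sport = :443 or dport = :443 )'

# live per-connection bandwidth, filtered with pcap syntax
iftop -f "port 443"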

Maybe I simply do not understand the question. ;-) 

hede



Re: Debian failed

2022-12-05 Thread hede

On 04.12.2022 23:09 hw wrote:

On Sun, 2022-12-04 at 15:00 +, Andrew M.A. Cater wrote:

On Sun, Dec 04, 2022 at 03:52:31PM +0100, hw wrote:
[...]
How did you install - what image, what steps?


debian-11.5.0-amd64-netinst.iso


see below...

On 04.12.2022 21:49 hw wrote:

> [...]
> So I'm stuck with Fedora.  What's wrong with Debian that we can't even
> get AMD cards to work.

I think you need around the 5.15 kernel.


It was fully updated and the amdgpu module would load after forcing it,
yet it didn't work right.  This is something that should --- and does
with Fedora --- work right out of the box.


Fedora is a bleeding edge distribution. As such it's more comparable to 
Debian Unstable or maybe Testing than to Debian Stable (like currently 
Debian 11 "Bullseye").


The Radeon RX 6000 series was released last year. I doubt it was possible 
to use one of these with Red Hat Enterprise Linux out of the box at the 
beginning of this year, before RHEL 9 was released. ;-)


hede



Re: Dell Precision 3570 - Debian instead of Ubuntu

2022-11-28 Thread hede

On 28.11.2022 10:04, B.M. wrote:

Hi,

I'm going to buy a Dell Precision 3570 laptop [...]
How would you proceed? [...]

e) other...

Thank you for your ideas.


Maybe as an option: if special drivers, kernel patches, etc. are needed, 
you can run Debian on the Ubuntu kernel if everything else fails.


I'm running a Chromebook* this way, using a ChromiumOS kernel** plus a 
Debian userland in the Chromebook's dev mode.


regards
hede

*) Coreboot+Depthcharge, no u-boot or UEFI, Google's embedded controller, 
etc.
**) self-compiled kernel with a mixed ChromiumOS+Debian config plus some 
fixes, as the ChromiumOS patchset for Linux 5.10 is a little buggy and 
doesn't even compile with some config options Debian sets but ChromiumOS 
does not use.




Re: FIREFOX is killing the whole PC Part III

2022-11-26 Thread hede
On 23.11.2022 17:09, Schwibinger Michael wrote:
> Good afternoon
> 
>  I saw cgroups.
> 
>  Thank You.
> 
>  But I cant understand.
> 
>  Is it a concept to limit all tasks?

You can limit all processes, a single one, or (as the name suggests) a 
group of processes. There are plenty of system resources which can be 
configured. 

>  Where can I find a introduction?

I don't know either. But maybe this one is fine to start with:

https://www.redhat.com/sysadmin/cgroups-part-one

Some examples limiting Firefox to 500 MB of RAM and low CPU priority; run 
the following as root:



# cd into the cgroup filesystem; defaults to cgroup2 on Debian Bullseye,
# others may vary
cd /sys/fs/cgroup/

# create a new cgroup
mkdir ffox

# cd into the cgroup dir
cd ffox/

# add all firefox-esr processes to the cgroup; newly created subprocesses
# will automatically be added to the ffox cgroup
for i in $(pidof firefox-esr); do echo $i > cgroup.procs ; done

# limit memory usage to 500 MB
echo 500M > memory.high 

# lower the cpu weight (defaults to 100)
echo 10 > cpu.weight

# also limit swap, because otherwise Firefox will fill up swap once the
# memory limit kicks in
echo 100M > memory.swap.max



... now watch Firefox stall if other processes use CPU resources or if 
Firefox tries to consume more than 600 MB of memory (which is small 
nowadays for multiple tabs; but mind: if Firefox already has more memory 
allocated, it will keep it). 

Typically you can use some more friendly userspace interface. But don't 
think of using cgroup-tools on Debian Bullseye: version 0.41 in Bullseye's 
repository supports cgroups v1 only, while Bullseye's kernel defaults to v2.
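
One such friendlier interface on a systemd system is systemd-run, which 
creates a scope unit (and thus a cgroup) for you. A sketch with the same 
limits as above (property names per systemd.resource-control(5)):

systemd-run --user --scope -p MemoryHigh=500M -p MemorySwapMax=100M \
  -p CPUWeight=10 firefox-esr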

There's plenty of help you can find with your favourite Internet search engine. 

regards
hede


PS: to remove Firefox from the ffox cgroup either restart Firefox or:

cd /sys/fs/cgroup/ffox/ && for i in $(cat cgroup.procs); do echo $i > ../cgroup.procs ; done 



Re: MacOS VM on Debian: is it reasonably possible?

2022-11-22 Thread hede



Whilst I had mistakenly believed that CentOS was a freeware, open source
kind of MacOS clone,


CentOS was derived from Red Hat Enterprise Linux and was mostly 
compatible with RHEL.

"May God rest its soul."


and found that it is not, when I searched for it, I had understood that a
freeware, open source kind of MacOS kind of clone, is available, and, when
I searched on the three word combination - open source macos - I found, in
the results, the above URL.

So, as an observer, I wonder whether licencing restrictions apply, to
running MacOS on Linux, as a virtual machine.


If you click through the links on that page, it looks like Apple is
just linking to the source code for open source components used in
their operating systems (things like awk, bash, bind, bzip, etc.), but
the operating systems themselves are certainly not open source, and
cannot be legally used except in accordance with Apple's license terms
and / or applicable law.


Darwin is the core of modern Apple OSes. It is Open Source and POSIX 
compatible.

https://en.wikipedia.org/wiki/Darwin_(operating_system)

But there are plenty of closed-source parts missing to form either macOS 
or iOS from it. Both - macOS and iOS - are proprietary OSes with strict 
license terms you have to fulfil in order to use them. One of them is - 
AFAIK - buying Apple hardware and running the OS only on Apple's hardware.


hede



Re: AW: FIREFOX is killing the whole PC Part III

2022-11-21 Thread hede

On 20.11.2022 12:06, Schwibinger Michael wrote:

To avoid problems by surfing
I tried
nice.
No good enough.

I did try

nice -n 19 chromium-browser
 cpulimit -e chrome -l 30

But this also did not work,
cause URLS do open other URLs.

I found this.

2 Questions:

What does it do?

Why does it not work.

# Find and limit all child processes of all browsers.
for name in firefox firefox-esr chromium chrome
do
for ppid in $(pgrep "$name")
do
cpulimit --pid="$ppid" --limit="$LIMIT" &
for pid in "$ppid" $(pgrep --parent "$ppid")
do
cpulimit --pid="$pid" --limit="$LIMIT" &
done
done
done


(sorry, I have not read the whole thread, but as a hint for this 
specific task:)


Probably the best solution would build on cgroups, limiting not only 
single processes but groups of processes, including subprocesses created 
later on. This way it's possible not only to limit several processes to 
some value each on their own, but to keep a set of processes, combined, 
from exceeding a limit.


For example, if all Firefox processes share a cgroup which is limited to 
512 cpu.shares while all other processes in the system keep their default 
1024 cpu.shares, Firefox (as a whole, with all subprocesses) will be 
limited to about a third (~33%) of the CPU time whenever any other task 
wants to access the CPU.


But ad hoc I do not have a working command example (cgcreate, cgset, 
cgexec, ...).
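
Just as an untested sketch of what that might look like (assuming the v1 
cpu controller is mounted):

# create the group, lower its cpu shares, start the browser inside it
cgcreate -g cpu:/browsers
cgset -r cpu.shares=512 browsers
cgexec -g cpu:browsers firefox-esr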


hede



Re: Intel X540-AT2 and Debian: intermittent connection

2022-11-20 Thread hede
On Sun, 20 Nov 2022 00:51:20 +0100 hw  wrote:

> Unfortunately it doesn't work anymore with Fedora either ... I tried it
> with a live system to see if it would work, and it didn't.

The source of connection resets can be diverse. Sometimes dmesg will show 
useful info, sometimes not. It can be anything, from the link layer 
(ethernet re-negotiation) up to the upper layers (arp, ip, etc.). What kind 
of logs and status tools have you examined already? (dmesg, 
ethtool/mii-tool, syslog, systemd journal - and the journal of which 
services, etc.)

Does the live system use the same kernel as the installed one? That's 
typically not the case, as those get updated very frequently. As such the 
driver can still be different. (can, maybe - not a must)

Does the X540-AT2 use external or built-in firmware? With external firmware 
even that can differ between systems, and firmware is also a potential 
source of connection problems. 

For the cable: my own experience is that the shorter the connection, the 
less the cable matters. On shorter connections even Cat 5 works at 10 GBit. 
I had to use such cables for a room-to-room connection (wall-moulded cables, 
for fire protection, between two adjacent rooms - not simply exchangeable), 
and they work perfectly at full speed. So if you have tried several Cat 6 
cables of 10 m or less, which work between other systems, I don't think(!) 
the cable is of interest here... 

hede



Re: MTBF interpretations (Re: ZFS performance)

2022-11-13 Thread hede
On Sat, 12 Nov 2022 07:52:32 -0500 Dan Ritter  wrote:

> No, my interpretation is that the average (mean) lifetime
> between failures should be the listed value. At 114 years, half
> of the population of drives should still be working.
> 
> This is obviously not congruent with reality.

I'd say it's like with many things in life: if expectation doesn't match 
the definition, the outcome is "not congruent with reality".

MTBF is, like Stefan already said, not a lifetime expectation.

> [...]
> Some drive models are lucky. Some are unlucky. Overall you
> should expect about 1.54% of disks to fail each year in a large
> mixed-age population, not quite double what you estimated. But
> the range of annualized failure rates is from 0 to 9% -- you
> could be lucky, or very unlucky.

That's what is to be expected here. They are using desktop drives in an 
enterprise environment. This naturally gives results not foreseen by the 
disk vendor. Even for some of the "sub-par" disks the result is probably 
fine. And Backblaze themselves know this. They save money even with some 
more failed cheap disks like the ST4000DM000 - a disk with a power-on ratio 
of 9/5 instead of 24/7 (2400 hours per year) and no MTBF rating (AFAICS).

I'd simply not mix this into some 1,000,000-hour MTBF rating expectation... 

hede



MTBF interpretations (Re: ZFS performance)

2022-11-12 Thread hede
On Fri, 11 Nov 2022 14:05:33 -0500 Dan Ritter  wrote:

> Claimed MTBF: 1 million hours. Believe it or not, this is par
> for the course for high-end disks.
> 
> 24 hours a day, 365 days a year: 8760 hours per year.
> 100/8760 = 114 years.
> 
> So, no: MTBF numbers must be presumed to be malicious lies.

With your interpretation, no single drive would be allowed to fail before 
its MTBF value. That's wrong. MTBF is a mean value over all drives of this 
type, not a guaranteed minimum value for a single drive. 

If there are one million drives, statistically one drive will fail every 
hour. That means, even with a correctly claimed MTBF of 1 million hours, 
one of those million drives will statistically(!) run only one hour before 
it fails. 

Some more practical explanation: 

If you have one thousand drives, then you have to expect one drive failure 
every one thousand hours (MTBF = 1,000,000). Statistically you have to 
expect nearly nine of those drives to fail per year (if I did my math 
correctly). With higher MTBF values this number gets smaller; with lower 
MTBF values you'd have to expect more drives to fail.
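
For the record, the arithmetic behind that estimate:

1,000 drives x 8,760 hours/year = 8,760,000 drive-hours per year
8,760,000 / 1,000,000 hours MTBF = ~8.76 expected failures per year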

But this is only a pure statistical value. If you are the one in a million, 
your single drive will fail within the first hour...

(and on top there are other details, like: the MTBF is calculated for 
specific environmental conditions; if your environment is harsher, then 
you'd have to expect a lower MTBF than the one declared by the vendor... 
and vice versa)

regards
hede



Re: removing file??

2022-11-11 Thread hede

On 11.11.2022 04:32, Amn wrote:

I am trying to remove this file :
'/var/cache/apt/archives/codeblocks-dev_20.03-3_amd64.deb', but after
trying, even as a 'su', but I am unable to. Any suggestion how to do
this?


The others are trying to solve the problem on the package layer. But if 
removing such an archive file is not possible even with root permissions, 
this could indicate a more fundamental problem, like errors in the data 
layer of the partition (a broken hard disk) or a read-only mount.


My question would be: _How_ did you try to delete the file with root 
permissions? And what was the result (error message)?


And as others suggested: if you are not familiar with deeper Debian system 
knowledge, the (unstable) sid distribution may not be your best option. 
I'd suggest using a stable distribution (currently Debian Bullseye).


regards
hede



network raid (Re: deduplicating file systems: VDO with Debian?)

2022-11-11 Thread hede

On 10.11.2022 14:40, Curt wrote:

(or maybe a RAID array is
conceivable over a network and a distance?).


Not only conceivable, but indeed practicable: Linbit DRBD



Re: ZFS performance

2022-11-11 Thread hede

On 10.11.2022 16:44, hw wrote:


I accidentally trash files on occasion.  Being able to restore them
quickly and easily with a cp(1), scp(1), etc., is a killer feature.


indeed


I'd say the same and I do use a file based backup solution and love 
having cp, scp, etc.


Still, having a tool which could do that out of a block based backup 
solution makes no difference here. And backup solutions typically do 
offer those.


I guess btrfs could, in theory, make something like boot environments 
possible, but you can't even really boot from btrfs because it'll fail to 
boot as soon as the boot volume is degraded, like when a disc has failed, 
and then you're screwed because you can't log in through ssh to fix 
anything but have to actually go to the machine to get it back up. That's 
a non-option and you have to use something else than btrfs to boot from.


First, is this still the case? I use btrfs for the root disk, had exactly 
such a failure, and don't remember having any issues like that. 
(disclaimer: booting via EFI and the systemd boot stub, with a 
secureboot-signed kernel and initrd on the EFI partition - no grub etc.)


Second, there's still dropbear inside the initrd. On Debian it's simple 
to use: install dropbear-initramfs.
I use it because I use full-drive encryption on a headless system. If it 
fails to decrypt and boot via TPM (which is a nice feature for fully 
encrypted headless systems!), then I have to boot the system manually via 
SSH. (happened in the "learning phase")
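
Roughly, on Bullseye (the key file path may differ on other releases):

apt install dropbear-initramfs
# keys allowed to log into the initramfs environment
cat ~/.ssh/id_ed25519.pub >> /etc/dropbear-initramfs/authorized_keys
update-initramfs -u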


regards
hede



Re: deduplicating file systems: VDO with Debian?

2022-11-10 Thread hede

On Wed, 09 Nov 2022 13:52:26 +0100 hw  wrote:

Does that work? Does bees run as long as there's something to deduplicate 
and only stops when there isn't?


Bees is a service (daemon) which runs 24/7, watching the btrfs transaction 
state (the checkpoints). If there are new transactions, it kicks in. But 
it's a niced service (man nice, man ionice). If your backup process has a 
higher priority than "idle" (which is typically the case) and produces 
high load, it will potentially block out bees until the backup is finished 
(maybe!).



I thought you start it when the data is in place and not before that.


That's the case with fdupes, duperemove, etc.

You can easily make changes to two full copies --- "make changes" meaning 
that you only change what has been changed since last time you made the 
backup.


Do you mean modifying (making changes to) one of the backups? I never 
considered making changes to my backups. I make changes to the live data, 
and next time (when the incremental backup process runs) these changes get 
into the backup storage. Making changes to some backups... I wouldn't call 
those backups anymore.


Or do you mean you have two copies and alternately "update" them to 
reflect the live state? I do not see a benefit in this, at least if both 
reside on the same storage system. There's a waste of storage space 
(doubled files). One copy with many incremental backups would be better. 
And if you plan to deduplicate both copies, simply use a backup solution 
with incremental backups.


Syncing two adjacent copies means submitting all changes a second time, 
after they were already transferred for the first copy. The second copy is 
still in some older state the moment you update it.


Yet again I do prefer a single process for having one[sic] consistent 
backup storage with a working history.


Two copies in two different locations is another story; that can indeed 
have benefits.



> For me only the first backup is a full backup, every other backup is
> incremental.

When you make a second full backup, that second copy is not incremental. 
It's a full backup.


Correct. That's the reason I make incremental backups. And by incremental 
backups I mean that I can restore "full" backups for several days: every 
day of the last week, one day for every month of the year, even several 
days of past years, and so on. But the whole storage for all those "full" 
backups is not even two full backups in size. It's smaller, but offers 
more.


For me a single full backup takes several days (terabytes via DSL upload 
to the backup location), while incremental backups are MUCH faster 
(typically a few minutes if not much has changed). So I use the latter.


What difference does it make whether the deduplication is block based or 
somehow file based (whatever that means).


File-based deduplication means files get compared as a whole. Result: two 
big and nearly identical files need to get stored in full - they do differ.
Take for example a backup of a virtual machine image where the VM was 
started between two backup runs. More than 99% of the image is the same as 
before, but because some log was written inside the VM image, the two 
files differ. They are nearly identical, even in the position of the 
identical data.


Block-based deduplication can mark parts of a file as exclusive (changed 
blocks) and other parts as shared (blocks with the same content):


# btrfs fi du file1 file2
     Total   Exclusive  Set shared  Filename
   2.30GiB    23.00MiB     2.28GiB  file1
   2.30GiB   149.62MiB     2.16GiB  file2

Here both files share data but also have their own exclusive data.


I'm flexible, but I distrust "backup solutions".


I would say: it depends. I also distrust everything, but a sane existing 
solution I maybe distrust a little less than my "self-built" one. ;-)


Don't trust your own solution more than others "on principle", without 
some real reason for distrust.


Sounds good. Before I try it, I need to make a backup in case something 
goes wrong.


;-)

regards
hede



Re: deduplicating file systems: VDO with Debian?

2022-11-08 Thread hede

On 08.11.2022 05:31, hw wrote:
That still requires you to have enough disk space for at least two full 
backups.


Correct. If you always do full backups, then the second run will consume a 
full backup's worth of space in the first place. (not fully correct with 
bees running -> *)


That would be the first thing I'd address. Even the simplest backup 
solutions (i.e. based on rsync) make use of destination rotation and 
submit only the changes to the backup (-> incremental or differential 
backups). I never considered successive full backups a backup "solution".


For me only the first backup is a full backup, every other backup is 
incremental.


Regarding deduplication, I see benefits either when the user moves files 
from one directory to some other directory, with partly changed files (my 
backup solution dedupes on a file basis via hardlinks only), and with 
system backups of several different machines.


I prefer file-based backups, so my backup solution's deduplication skills 
are really limited. But a good block-based backup solution can handle all 
these cases by itself. Then no filesystem-based deduplication is needed.


If your problem is only backup related and you are flexible regarding 
your backup solution, then choosing a backup solution with a good 
deduplication feature is probably your best choice. The solution doesn't 
have to be complex. Even simple backup solutions like borg are fine here 
(borg: chunk-based deduplication, even of parts of files, across several 
backups of several different machines). Even your criterion of not writing 
duplicate data in the first place is fulfilled here.


(see borgbackup in the Debian repository; disclaimer: I have no personal 
experience with borg, as I'm using other solutions)
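
Judging from its documentation only, a first borg session looks roughly 
like this (untested by me, as said; the repository path is an example):

borg init --encryption=repokey /backup/repo
borg create --stats /backup/repo::{now} /home /etc
borg list /backup/repo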


 I wouldn't mind running it from time to time, though I don't know that I 
would have a lot of duplicate data other than backups. How much space 
might I expect to gain from using bees, and how much memory does it 
require to run?


Bees should run as a service 24/7; it catches all written data right after 
it gets written. That's comparable to in-band deduplication, even if it's 
out-of-band by definition. (*) This way, writing many duplicate files will 
potentially result in removing the duplicates even before all data has 
been written to disk.


Therefore memory consumption is also like with in-band deduplication 
(ZFS...), which means you should reserve more than 1 GB RAM per 1 TB of 
data. But it's flexible: even less memory is usable, but then it cannot 
find all duplicates, as the hash table of all the data doesn't fit into 
memory. (Nevertheless, even then deduplication is more efficient than 
expected: if it finds some duplicate block, it looks at the blocks around 
it. So for big files a single match in the hash table is sufficient to 
deduplicate the whole file.)


regards
hede



Re: deduplicating file systems: VDO with Debian?

2022-11-07 Thread hede

On 07.11.2022 16:29, hede wrote:

On 07.11.2022 02:57, hw wrote:

Hi,

Is there no VDO in Debian, and what would be good to use for deduplication 
with Debian? Why isn't VDO in the standard kernel? Or is it?


I used VDO in Debian some time ago and don't remember big
problems.


Btw, please keep in mind: VDO is transparent to the filesystem on top, and 
deduplication (likewise compression) is a non-deterministic task.


While btrfs' calculation of the real free space is already tricky when 
compression and/or dedup is in use, it's quite impossible for a filesystem 
on top of VDO. It's much worse with VDO: the filesystem on top sees a 
"virtual" size of the device, which is a vague guess at best and is 
predefined at creation time. You need to carefully monitor the actual disk 
usage of the VDO device and stop writing data to the filesystem if it 
fills up. It stalls if the filesystem wants to write more data than is 
available.

(At least if I remember correctly. Please correct me if I'm wrong here.)

So if you are expecting issues with space, there's some risk of damaging 
your (file-)system.


With something like btrfs or ZFS there's less risk in that. Both know the 
free space, and even if this was indeed a problem in the early days*, 
rebalancing and filled-up filesystems are (AFAIK) no longer a problem 
with btrfs.


*) running out of space on btrfs could render filesystems read-only; 
deleting files was then no longer possible. COW means even deleting a file 
needs some space, so the filesystem got stuck. This is AFAIK resolved: for 
deleting files there's always some reserved space now.


regards
hede



Re: deduplicating file systems: VDO with Debian?

2022-11-07 Thread hede

On 07.11.2022 02:57, hw wrote:

Hi,

Is there no VDO in Debian, and what would be good to use for deduplication 
with Debian? Why isn't VDO in the standard kernel? Or is it?


I used VDO in Debian some time ago and don't remember big problems. AFAIR 
I compiled it myself - no prebuilt packages.


I switched to btrfs for other reasons, not even for performance. The VDO 
layer eats performance, yes, but compared to naked ext4 even btrfs is 
slow.


I'm not looking for deduplication that happens some time after files have 
already been written, like btrfs would allow: There is no point in 
deduplicating backups after they're done, because I don't need to save 
disk space for them when I can fit them in the first place.


That's only one point, and not really a valid one, I think, as you 
typically do not run into space problems with one single action (YMMV). 
Running multiple sessions with out-of-band deduplication between them 
works for me.


In-band deduplication (that's the one you want) has some drawbacks, too: 
high resource usage. You need plenty of RAM (up to several gigabytes per 
terabyte of storage) and write success is delayed (-> slow direct I/O).


For out-of-band deduplication there are multiple different 
implementations. File-based dedup on a directory basis can be very fast 
and economical with resources, for example via rdfind or jdupes (a sketch 
follows below). Block-based dedup, like bees for btrfs (that's the one I 
use), is closer to in-band deduplication (including the high RAM usage). 
Bees can be switched off and on at any time (for example on a small home 
system which runs more demanding tasks from time to time), and switching 
it on again resumes at the last state (it starts at the last transaction 
id which was processed -> btrfs knows its transactions).
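
The file-based one-shot runs are as simple as this (the directory is an 
example; both commands rewrite duplicates as hardlinks, so test on a copy 
first):

rdfind -makehardlinks true /srv/backups
# or
jdupes -r -L /srv/backups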


regards
hede



Re: Apt upgrade problem

2022-10-17 Thread hede
On Sun, 16 Oct 2022 23:44:00 +0100 Mark Fletcher  wrote:

> After "sudo apt update", the system informs me there is 1 package that can
> be upgraded.
> 
> "sudo apt upgrade" reports nothing to do, 0 packages upgraded, 0 newly
> installed, 0 to remove and 0 not upgraded...

One possible reason could be that the new package conflicts with some 
other installed package. "apt upgrade" doesn't remove packages. Maybe "apt 
full-upgrade" is at hand?

Maybe also "apt policy workspacesclient" gives some help. 

Or maybe "apt install workspacesclient=4.3.0.1766" installs the current version?

(sorry, just some ideas and no safe solution from me, as I'm not using amazons 
workspaceclient)

regards 
hede



Re: How to use hw vendor EFI diagnostics ?

2022-10-11 Thread hede

On 11.10.2022 11:38, Alain D D Williams wrote:

How do I integrate the HP diagnostics into the current EFI ?


It should be fine to copy the files to the EFI partition (if not already 
there) and add some boot entry via efibootmgr (if not already present).


Running efibootmgr (as root) without options will tell you if there are 
already old entries. If there's an entry for HP diagnostics, "efibootmgr 
-v" will show you details.


You can add a new one via:

efibootmgr --create --disk [EFItargetDisk] --part [partNumber] --label 
"HP diagnostics" --loader \\EFI\\[targetEXE].efi


For further help see man page of efibootmgr. There are also many blog 
posts regarding efibootmgr in the internet.


For the correct targetEXE in your case, I do not know which one of the 
named efi files is the right one. Either wait for some other answer 
here, search the internet for this specific question, or (maybe) just 
trying one will do it.


regards
hede



Re: bindfs for web docroot - is this sane?

2022-10-11 Thread hede

On 11.10.2022 10:03 Richard Hector wrote:

[...]
Then for site developers (who might be contractors to my client) to be
able to update teh site, they need read/write access to the docroot,
but I don't want them all logging in using the same
account/credentials.
[...]
Does that sound like a sane plan? Are there gotchas I haven't spotted?


I think I'm not able to assess the bind-mount question, but...
isn't that a use case for ACLs? (incl. default ACLs for the webserver's 
user here?)


Files will then still be owned by the user who created them, but your 
default user has all (predefined) rights on them.
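
A minimal sketch with setfacl (the group name and path are made up; the 
second command sets default ACLs so newly created files inherit the 
rights):

setfacl -R -m g:sitedevs:rwX,u:www-data:rX /var/www/site
setfacl -R -d -m g:sitedevs:rwX,u:www-data:rX /var/www/site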


I'd probably prefer that because - by instinct - I have a bad feeling 
regarding security if one user can slip/foist(?) a file in as "created by" 
some other user. But that's only a feeling, without knowing all the 
circumstances.


And this way it's always clear which users have access, by looking at the 
ACLs, while bind mount commands defined elsewhere are (maybe) less 
transparent. And you always know who created a file if something goes 
wrong, for example.


regards
hede

?) I'm not a native English speaker, and "slip" or "foist" are maybe the 
wrong terms / wrongly translated. The context is that one user creates 
files and the system marks them as "created by" some other user.




Re: nginx.conf woes

2022-10-06 Thread hede

On 02.10.2022 15:07, Patrick Kirk wrote:

I have 2 sites to run from one server.  ...
If I try lynx http://cleardragon.com a similar redirect
takes place and I get a "Alert!: Unable to connect to
remote host" error and lynx closes down. ...


You can check whether the correct SNI information results in the correct 
certificate via openssl:


openssl s_client -connect [servernameorIP]:443 -servername "kirks.net"
openssl s_client -connect [servernameorIP]:443 -servername "cleardragon.com"


hede



Re: Debian 8

2022-09-29 Thread hede

On 29.09.2022 21:40, to...@tuxteam.de wrote:

On Thu, Sep 29, 2022 at 08:13:56PM +0100, David wrote:

[...]


The software I want to run is provided by Ubiquity as an NVR for their
cameras, I have versions for Debian 7, 8 & 9.


I think the warnings from both Andys and hede are a bit too one-sided,
if well-meant.


Where's my one-sided warning? It's quite the opposite, as I clearly 
stated there are good reasons to do so.



If you need to install an older Debian, that's what the archives are
for, after all (I had to, for a customer's embedded system, and the
archives were invaluable for the system itself as well as for the
chroot cross-compile environment to create specific binaries:


Yes, archive.debian.org is your friend here.


said
system had 4 megabytes of RAM and 4 megabytes of flash (yes *mega*,
you read correctly) so no chance a current debian could play there).


I doubt you were doing anything useful with something like Debian 8 and 4 
MB of RAM - I bet it was way older. The smallest system I'd ever used with 
a post-2000 Linux had 32 MB of RAM and ran Linux 2.4 even after way newer 
kernels were released, for reasons.



You should somewhat know what you are doing: exposing such a system
to the wide internet would be asking for trouble, of course.


of course; and hopefully David will take this into account.


I wouldn't
use that as my "dayly driver" without a very good reason, either.


There is a good reason if the software runs way easier on Debian 8. An 
internal camera network should be possible to operate separated from the 
internet. Data transfers and user/admin access can be filtered and/or 
VPNed. If the alternative is to buy new cameras and money does matter, 
I'd do so.


So good luck with your working root access now, David :-)

hede



Re: Debian 8

2022-09-29 Thread hede

On 29.09.2022 18:02, Andy Smith wrote:

Hello,

On Thu, Sep 29, 2022 at 03:43:32PM +0100, David wrote:

The reason for Debian 8 is the software I want to run on it.


This is a really bad idea. The whole thing is lacking security
fixes, so it will only continue to get worse.

If there was no way to make this software work on a non-obsolete
operating system and there was no way to avoid running the software,
I'd be looking into more extreme measures like running just that app
in a chroot or container on an otherwise up to date system.


That was also my first thought, and I'd prefer it in most cases, too. But 
a native system can also have its reasons, for example if an old 
(proprietary) machine control system needs old drivers or an old kernel 
and full control of the hardware (for example for some bit-banging 
interface).


And a solution with old and unsupported software doesn't have to mean you 
cannot operate it in a secure manner. If there's a dedicated or secured 
network (DMZ, firewalled, no direct internet access, etc.), or even for 
offline systems, I see no reason not to do it.


hede



Re: OT: apache in shared environment

2022-09-29 Thread hede
On 29 September 2022 07:31:56 CEST, Sven Hartge wrote:
>basti  wrote:
>
>> Is there a way to get http2 work with mod_itk?
>
>If mod_itk *needs* preforking, then nothing can be done.
>

At least not with a single instance of Apache. But why not run multiple 
Apaches? One on ports 80+443 with http2 enabled (the frontend) and the 
other one with itk on some other port (the backend)?

Maybe it even works with two different virtual hosts?

The frontend then serves the backend's data. 

Alternatively use some other http2-capable server as the frontend (you 
don't need two Apaches; some lighttpd, nginx, whatever... would do).

This way you can even hide some old non-SSL/TLS server behind a modern 
HTTP/3+TLS1.3 frontend. 
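
As a sketch of that frontend idea with nginx (server name, backend port 
and certificate paths are invented examples):

server {
    listen 443 ssl http2;
    server_name example.org;
    ssl_certificate     /etc/ssl/certs/example.org.pem;
    ssl_certificate_key /etc/ssl/private/example.org.key;
    # hand everything to the old backend on its local port
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}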

hede



Re: idempotent (was Re: exif --remove not idempotent, and a Debian man page bug)

2022-09-25 Thread hede

On 25.09.2022 14:42, The Wanderer wrote:

On 2022-09-25 at 08:22, rhkra...@gmail.com wrote:


On second thought, what hede wrote is correct, it is just stated in a
way that I wasn't famiiar with (and I haven't had my morning coffee
yet)


Are you sure?


Meanwhile, I do think my description was not correct. Sorry for that. The 
usual definition is that applying the operation twice gives the same 
result as applying it once - f(f(x)) = f(x). But the tool probably matches 
that definition, if it writes the same data in the second run. ;-)

((( off topic addition:
On 2022-09-25 at 08:22, rhkra...@gmail.com wrote


[...]

[...]

[...]
If you reply: [...] avoid HTML; [...]


I love this kind of humor ;-)

)))



Re: exif --remove not idempotent, and a Debian man page bug

2022-09-24 Thread hede

On 21.09.2022 14:46, Emanuel Berg wrote:

I don't know what the intended behaviof of "exif --remove -o
file file" is. I'm imagining [...]


exif(1) which says on line 57 that --remove

  Remove the tag or (if no tag is specified) the entire IFD.


"Idempotent" means, that a task with the same input data and the same 
config (for example to remove a tag via exif-tool) results in the same 
output data. Is this the case here?




Only if it does, why is it there the next time to be removed
as well?

Maybe related to the '-o f f' part as your imagination
tells you ...


The "-o" means: "Write output image to FILE". And it does so, as far as 
I can see.


hede



[solved] kmod not on-demand-loading modules with custom kernel

2022-09-21 Thread hede

Hi all.

Problem solved. Kernel misconfiguration.

It's simple and clear - at least for anyone who knows the details of the 
different security options in ChromeOS and Debian.


SystemTap for the win! Probing the parameters of call_modprobe(), argv[0] 
of call_usermodehelper_setup() and the return value of 
call_usermodehelper_exec() via "stap" [1] shows the problem: ENOENT on 
/sbin/usermode-helper. This file is non-existent in Debian, but ChromeOS 
uses it to filter all application calls from kernel space to userspace.


Solution is a change in the kernel config:
- CONFIG_STATIC_USERMODEHELPER=y
- CONFIG_STATIC_USERMODEHELPER_PATH="/sbin/usermode-helper"
+ # CONFIG_STATIC_USERMODEHELPER is not set

Because I didn't find the solution via either of my favourite search 
engines, I'm writing this mail for archive purposes. Maybe someone else 
with a similar problem will find the solution via internet search in the 
future...

(therefore the full quote!)

[1] not fancy but works (I'm no SystemTap expert):
stap -ve 'probe kernel.function("call_modprobe") { printf("%s\n", 
$$parms$) }'
stap -ve 'probe kernel.function("call_usermodehelper_setup") { 
printf("%s\n", $argv[0]$) }'
stap -ve 'probe kernel.function("call_usermodehelper_exec").return { 
retval=$return; printf("%s\nretval:%d\n", $$return$, retval) }'


regards
hede


On 20.09.2022 15:39, hede wrote:

hi all.

On 19.09.2022 16:27, hede wrote:
I need help getting module on-demand-loading working with a custom 
kernel.


Additional information:

My problem seems less related to udev and more probably related to the
kernel kmod subsystem!?

The kernel usually calls /sbin/modprobe if functionality is missing.
Check kmod.c in kernel sources. If I create a file modprobe.fake [1]
and modify /proc/sys/kernel/modprobe to call this file, a standard
Debian shows the same behaviour as my Chromebook: if fat/vfat
modules are not loaded and I try to mount some fat filesystem
afterwards this fails with the same error message.

But while a standard Debian system obviously calls the fake modprobe
command (as it creates the txt file) the Chromebook does NOT do so.

[1]
### /usr/local/bin/modprobe.fake ###
#!/bin/sh
date >> /tmp/modprobe.txt
echo "$@" >> /tmp/modprobe.txt
exit 1
### (chmod +x)

default config on Chromebook:
###
root@cbtest:~# cat /proc/sys/kernel/modules_disabled
0
root@cbtest:~# cat /proc/sys/kernel/modprobe
/sbin/modprobe
###

still searching a solution...

hede

On 19.09.2022 16:27, hede wrote:
> Hi all.
>
> I need help getting module on-demand-loading working with a custom kernel.
>
> Currently I'm running Debian 11 for x86_64 on a Chromebook in
> developer mode directly via Coreboot/Depthcharge. Not having UEFI or
> classical BIOS boot code means that the default Debian kernel doesn't
> work, right? So I'm using a kernel from the chromiumOS project
> (ChromeOS 5.10) with a custom config.
>
> I do need a patched kernel anyways as there's no UEFI/ACPI but a
> special Chromebook embedded controller for all those fancy sensors and
> a like.
>
> The system is working fine, including wifi, rotation sensors, graphics
> and so on except the on demand kernel module loading doesn't work.
> Running "edevadm monitor" I do get many UEVENTs when plugging in an
> usb stick, for example. The event device system itself does work. But
> trying to mount the filesystem doesn't work as no vfat module gets
> loaded (as an example).
>
> Likewise adding rules via iptables doesn't work, as the netfilter
> modules are missing. I have to manually load the nf* modules and
> _then_ I'm able to use iptables.
>
> I can load all those modules by hand via modprobe, but autoloading via
> kernel/udev doesn't work.
>
> Running "depmod -a" was fine. The files
> /lib/modules/[kernelversions]/modules.* seem(!) also to be ok. "find
> /sys/ -name "uevent" | wc -l" seems also fine with more than a
> thousand results.
>
> When I try for example mounting the fat system without having the vfat
> module ready, on my standard desktop system "udevadm monitor" shows
> events and mount succeeds. But on the Chromebook with custom kernel
> there's no such event shown and mount fails with:
> "mount: /mnt: unknown filesystem type 'vfat'."
> After "modprobe vfat" everything is fine and mount succeeds. Indeed
> the udev events do show when manually running modprobe.
>
> systemd-udevd.service is running. The files in /run/udev/* seem to be
> the same on the desktop (where everything is fine) and Chromebook (not
> working).
>
> Does anyone has an idea how to solve this? Feel free to ask me further
> details of the system. I don't know how the module autoloading works
> so I have no idea which additional information is useful.
>
> regards
> hede
>




Re: [kmod] not on-demand-loading modules with custom kernel

2022-09-20 Thread hede

hi all.

On 19.09.2022 16:27, hede wrote:
I need help getting module on-demand-loading working with a custom 
kernel.


Additional information:

My problem seems less related to udev and more probably related to the 
kernel kmod subsystem!?


The kernel usually calls /sbin/modprobe if functionality is missing. 
Check kmod.c in kernel sources. If I create a file modprobe.fake [1] and 
modify /proc/sys/kernel/modprobe to call this file, a standard Debian 
shows the same behaviour than my Chromebook: If fat/vfat modules are not 
loaded and I try to mount some fat filesystem afterwards this fails with 
the same error message.


But while a standard Debian system obviously calls the fake modprobe 
command (as it creates the txt file) the Chromebook does NOT do so.


[1]
### /usr/local/bin/modprobe.fake ###
#!/bin/sh
date >> /tmp/modprobe.txt
echo "$@" >> /tmp/modprobe.txt
exit 1
### (chmod +x)

default config on Chromebook:
###
root@cbtest:~# cat /proc/sys/kernel/modules_disabled
0
root@cbtest:~# cat /proc/sys/kernel/modprobe
/sbin/modprobe
###

still searching a solution...

hede



udev not on-demand-loading modules with custom kernel

2022-09-19 Thread hede

Hi all.

I need help getting module on-demand-loading working with a custom 
kernel.


Currently I'm running Debian 11 for x86_64 on a Chromebook in developer 
mode directly via Coreboot/Depthcharge. Not having UEFI or classical 
BIOS boot code means that the default Debian kernel doesn't work, right? 
So I'm using a kernel from the chromiumOS project (ChromeOS 5.10) with a 
custom config.


I do need a patched kernel anyway, as there's no UEFI/ACPI but a special 
Chromebook embedded controller for all those fancy sensors and the like.


The system is working fine, including wifi, rotation sensors, graphics 
and so on - except that on-demand kernel module loading doesn't work. 
Running "udevadm monitor" I do get many UEVENTs when plugging in a USB 
stick, for example. The event system itself does work. But trying to 
mount the filesystem doesn't work, as no vfat module gets loaded (as 
an example).


Likewise adding rules via iptables doesn't work, as the netfilter 
modules are missing. I have to manually load the nf* modules and _then_ 
I'm able to use iptables.


I can load all those modules by hand via modprobe, but autoloading via 
kernel/udev doesn't work.


Running "depmod -a" was fine. The files 
/lib/modules/[kernelversions]/modules.* seem(!) also to be ok. "find 
/sys/ -name "uevent" | wc -l" seems also fine with more than a thousand 
results.


When I try, for example, mounting the fat filesystem without having the 
vfat module ready, on my standard desktop system "udevadm monitor" shows 
events and the mount succeeds. But on the Chromebook with the custom 
kernel there's no such event shown and mount fails with:

"mount: /mnt: unknown filesystem type 'vfat'."
After "modprobe vfat" everything is fine and mount succeeds. Indeed the 
udev events do show up when manually running modprobe.


systemd-udevd.service is running. The files in /run/udev/* seem to be 
the same on the desktop (where everything is fine) and Chromebook (not 
working).


Does anyone have an idea how to solve this? Feel free to ask me for 
further details of the system. I don't know how the module autoloading 
works, so I have no idea which additional information is useful.


regards
hede



Re: Soundblaster 16

2001-07-25 Thread hede

Hi,

There are a couple of things you could look out for to check
whether sound is working. As root, run:

  # cat /dev/sndstat

It should say something like:

Audio devices:
0: Sound Blaster 16 (4.13) (DUPLEX)

If that works, look for a .au file on your machine (eg by using
locate .au), and run:

  # cat foo.au > /dev/audio

You should hear a sound here :)

Getting sound to work in Debian is definitely more tricky
than doing so in Suse or Redhat. If you've still got your
Suse partition, try looking for the sb kernel modules options
in /etc/conf.modules or /etc/modules.conf, and use those
values in Debian. That's what I did. The settings I used
are:

  options sb io=0x220 irq=5 dma=1 dma16=5 mpu_io=0x330

If you have the sb kernel module (eg from kernel-image-2.2.19),
you shouldn't have to recompile your kernel just as yet.

As for MP3/CD players, I use xmms. To get xmms to work with
the sb kernel module, you'd need to change your preferences
to use the libOSS output plugin instead of the Esound one.

Hope that helps.

-hoeteck