Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Max Nikulin

On 16/09/2023 10:09, Greg Wooledge wrote:

Altering the contents of an existing file in ~/.config/ upon login
sounds incredibly wrong to me, to the point where I have a hard time
believing it's a default behavior.


user-dirs.dirs(5)

The $HOME/.config/user-dirs.dirs file is a text file that contains the
user-specific values for the XDG user dirs. It is created and updated by
the xdg-user-dirs-update command.


xdg-user-dirs-update(1)

On the first run a user-dirs.locale file is created containing the
locale that was used for the translation. This is used later by GUI
tools like xdg-user-dirs-gtk-update to detect if the locale was changed,
letting you migrate from the old names.


So the declared goal is keeping folder names consistent with the current 
locale.
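Given those man pages, the per-user opt-out is a config file, not the environment. A minimal sketch (the scratch $HOME is only to keep the sketch side-effect free; note the key is "enabled", not "enable"):

```shell
# Sketch: per-user opt-out of xdg-user-dirs-update, per user-dirs.conf(5).
# A scratch HOME is used so nothing real is touched.
export HOME="$(mktemp -d)"
mkdir -p "$HOME/.config"
# The key is "enabled", not "enable" -- a likely reason edits to
# /etc/xdg/user-dirs.conf with "enable=False" appear to have no effect.
printf 'enabled=False\n' > "$HOME/.config/user-dirs.conf"
cat "$HOME/.config/user-dirs.conf"
```

With that file in place, xdg-user-dirs-update should leave ~/.config/user-dirs.dirs alone on login.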




Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Felix Miata
David Wright composed on 2023-09-15 22:22 (UTC-0500):

> On Fri 15 Sep 2023 at 21:24:53 (-0400), Curt Howland wrote:

>> On Friday 15 September 2023, Curt Howland was heard to say:

>>> I'm not interested in having directories like "Public" and
>>> "Videos", but every time I delete them something recreates those
>>> directories.

>> Found /etc/xdg/user-dirs.conf, changed to "enable=False", no change.

>> Found $/.config/user-dirs.dirs, commented out the ones I didn't want, 
>> no change. File $/.config/user-dirs.dirs reverted to original on 
>> logout/login.

>> Changed all the pointers in $/.config/user-dirs.dirs to "Desktop" so 
>> that no other directory would be created, success. Deleted 
>> directories stay deleted. File $/.config/user-dirs.dirs not reverting 
>> to original form on logout/login.

>> What an immense waste of time. I can understand having the directories 
>> created once, when the user is created. This automatic regeneration 
>> is utterly pointless and annoying.

> AIUI (which is not very well), this comes with the territory when you
> install a Desktop Environment.

> If you don't really want "Desktop" either, then according to:
> 
>   https://www.freedesktop.org/wiki/Software/xdg-user-dirs/
> 
> you can set all those environment variables to point to your $HOME
> directory. (The Note that follows explains that just deleting
> (≡commenting out) a variable doesn't work.)

Example ~/.config/user-dirs.dirs file written 4 years ago that WFM:

# This file is written by xdg-user-dirs-update
# If you want to change or add directories, just edit the line you're
# interested in. All local changes will be retained on the next run
# Format is XDG_xxx_DIR="$HOME/yyy", where yyy is a shell-escaped
# homedir-relative path, or XDG_xxx_DIR="/yyy", where /yyy is an
# absolute path. No other format is supported.
#
XDG_DESKTOP_DIR="$HOME/Desktop/"
XDG_DOCUMENTS_DIR="$HOME/Documents/"
XDG_DOWNLOAD_DIR="/home/downloads/"
XDG_MUSIC_DIR="/home/AV/music/"
XDG_PICTURES_DIR="/home/AV/pix/"
XDG_PUBLICSHARE_DIR="/home/pub/"
XDG_TEMPLATES_DIR="/home/"
XDG_VIDEOS_DIR="/home/AV/videos/"
-- 
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata



Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread David Wright
On Fri 15 Sep 2023 at 21:24:53 (-0400), Curt Howland wrote:
> On Friday 15 September 2023, Curt Howland was heard to say:
> > I'm not interested in having directories like "Public" and
> > "Videos", but every time I delete them something recreates those
> > directories.
> 
> Found /etc/xdg/user-dirs.conf, changed to "enable=False", no change.
> 
> Found $/.config/user-dirs.dirs, commented out the ones I didn't want, 
> no change. File $/.config/user-dirs.dirs reverted to original on 
> logout/login.
> 
> Changed all the pointers in $/.config/user-dirs.dirs to "Desktop" so 
> that no other directory would be created, success. Deleted 
> directories stay deleted. File $/.config/user-dirs.dirs not reverting 
> to original form on logout/login.
> 
> What an immense waste of time. I can understand having the directories 
> created once, when the user is created. This automatic regeneration 
> is utterly pointless and annoying.

AIUI (which is not very well), this comes with the territory when you
install a Desktop Environment.

If you don't really want "Desktop" either, then according to:

  https://www.freedesktop.org/wiki/Software/xdg-user-dirs/

you can set all those environment variables to point to your $HOME
directory. (The Note that follows explains that just deleting
(≡commenting out) a variable doesn't work.)
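For instance, a user-dirs.dirs along the lines the wiki suggests might look like this (a sketch: every entry aimed at $HOME itself, so nothing extra gets created):

```shell
# Hypothetical ~/.config/user-dirs.dirs with every XDG dir pointed at $HOME
# itself, so xdg-user-dirs-update has no separate directories to (re)create.
XDG_DESKTOP_DIR="$HOME/"
XDG_DOCUMENTS_DIR="$HOME/"
XDG_DOWNLOAD_DIR="$HOME/"
XDG_MUSIC_DIR="$HOME/"
XDG_PICTURES_DIR="$HOME/"
XDG_PUBLICSHARE_DIR="$HOME/"
XDG_TEMPLATES_DIR="$HOME/"
XDG_VIDEOS_DIR="$HOME/"
```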

Cheers,
David.



Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Greg Wooledge
On Fri, Sep 15, 2023 at 09:24:53PM -0400, Curt Howland wrote:
> Found $/.config/user-dirs.dirs, commented out the ones I didn't want, 
> no change. File $/.config/user-dirs.dirs reverted to original on 
> logout/login.

Is that *normal* behavior of XFCE?  I would look into that, probably
first by Googling, then perhaps by asking in an XFCE help forum, or
whatever it is they have.

Also, check the obvious thing: is ~/.config/user-dirs.dirs a regular
file, or a symlink?

Altering the contents of an existing file in ~/.config/ upon login
sounds incredibly wrong to me, to the point where I have a hard time
believing it's a default behavior.  Maybe your system admin has done
something locally on your system to overwrite this file on every login.



Re: sata driver compatibility Q

2023-09-15 Thread gene heskett

On 9/15/23 20:12, David Christensen wrote:

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 
2T drives to make a raid big enough to run amanda. And maybe put a 
new card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an 
external drive chassis?


Got one of those coming too. Small possibility it will fit in the 
bottom of this huge old Tiger Direct tower. Because the radiator 
for the 6-core i5 is too tall, it hasn't had a side cover on it in 
years. If not, there's a 3-bay 3.5" cage at the bottom of the stack I can 
stuff with 2 SSDs per bay slot. A hidey place; the front cover is solid, 
but cooling might be a problem. If worst comes to worst I could 
shoe-goo a 120x15 fan to the side of the cage.



This is what I meant:

     https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


David
Call me cheap; my choice is DIY assembly: SS stampings you put together, 
all drive brackets, a dozen cables, and a 5-drive power splitter, $30.

Less than 5% of the price of that nice-looking box.




Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Curt Howland

On Friday 15 September 2023, Curt Howland was heard to say:
> I'm not interested in having directories like "Public" and
> "Videos", but every time I delete them something recreates those
> directories.

Found /etc/xdg/user-dirs.conf, changed to "enable=False", no change.

Found $/.config/user-dirs.dirs, commented out the ones I didn't want, 
no change. File $/.config/user-dirs.dirs reverted to original on 
logout/login.

Changed all the pointers in $/.config/user-dirs.dirs to "Desktop" so 
that no other directory would be created, success. Deleted 
directories stay deleted. File $/.config/user-dirs.dirs not reverting 
to original form on logout/login.

What an immense waste of time. I can understand having the directories 
created once, when the user is created. This automatic regeneration 
is utterly pointless and annoying.

Sorry to bother y'all with this.

Curt-




-- 
You may my glories and my state dispose,
But not my griefs; still am I king of those.
 --- William Shakespeare, "Richard II"




Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Richard Hector

On 16/09/23 12:19, Curt Howland wrote:


Good evening. Did a fresh install of Bookworm, installing desktop with
XFCE.

I'm not interested in having directories like "Public" and "Videos",
but every time I delete them something recreates those directories.

I can't find where these are set to be created, and re-re-re created.

Is there a way to turn this off?


Have a look at the output of "apt show xdg-user-dirs" - looks like you 
need to edit ~/.config/user-dirs.dirs


Cheers,
Richard



Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Curt Howland


Good evening. Did a fresh install of Bookworm, installing desktop with 
XFCE.

I'm not interested in having directories like "Public" and "Videos", 
but every time I delete them something recreates those directories.

I can't find where these are set to be created, and re-re-re created.

Is there a way to turn this off? 

Curt-





Re: sata driver compatibility Q

2023-09-15 Thread David Christensen

On 9/15/23 15:04, gene heskett wrote:

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new 
card in front of my 2T /home raid10.


Is everything going into one chassis?  Have you considered an external 
drive chassis?


Got one of those coming too. Small possibility it will fit in the bottom 
of this huge old Tiger Direct tower. Because the radiator for the 
6-core i5 is too tall, it hasn't had a side cover on it in years. If not, 
there's a 3-bay 3.5" cage at the bottom of the stack I can stuff with 2 
SSDs per bay slot. A hidey place; the front cover is solid, but cooling 
might be a problem. If worst comes to worst I could shoe-goo a 120x15 fan 
to the side of the cage.



This is what I meant:

https://www.pc-pitstop.com/16-bay-25inch-sas-sata-jbod-tower


David



Re: sata driver compatibility Q

2023-09-15 Thread gene heskett

On 9/15/23 17:56, Andy Smith wrote:

Hello,

On Fri, Sep 15, 2023 at 05:35:40PM -0400, gene heskett wrote:

This setup worked instantly under buster and bullseye, but takes from 30
secs to 5 minutes to open a write requestor window asking where to put the
download I clicked on under bookworm.


I think you should work out why that happens before spending a lot
of money on new hardware. It doesn't seem at all likely to me that
your existing hardware is at fault. Buying new hardware risks
experiencing the same thing with still no idea why.

Thanks,
Andy

I won't argue that point, Andy, but it has now been several months of 
asking the same question from different angles without getting a single 
helpful reply. I know my reputation is bad because I often use the box 
I'm supposed to stay inside of as kindling to start the next campfire. I 
don't "stay inside that famous box" unless there is someone outside 
shooting at me, in which case I might shoot back. Computers can do 
anything you can write an interface to tickle the hardware for.


Now if someone can have me check something that might be agley, my 
fingers are at your disposal. It eventually works, 100% of the time. But 
why the 30-second to 5-minute delay before it lets me do what I asked?


Frustrated is the PC word, but that is certainly not how I describe it 
in my loneliness. There is no one here to hear me rant except me, 
which is probably a "good thing".


Thanks, take care and stay well Andy.

Cheers, Gene Heskett.



Re: sata driver compatibility Q

2023-09-15 Thread gene heskett

On 9/15/23 17:35, David Christensen wrote:

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new 
card in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.



Searching Amazon for "gigastone 2T", I see:


https://www.amazon.com/Gigastone-Internal-Compatible-Desktop-Laptop/dp/B0BN5978X1

     540 MB per second


PCIe 3.0 x1 is rated for 985 MB/s.

     https://en.wikipedia.org/wiki/Pcie


So, the PCIe 3.0 x1 connector is going to be a bottleneck when accessing 
more than one SSD.



I suggest that you pick an HBA with a wider PCIe connector -- PCIe 3.0 
x8 (7.88 GB/s) is a reasonable match for sixteen SSD's (8.64 GB/s). PCIe 
3.0 x16 would eliminate the PCIe bottleneck.



Is everything going into one chassis?  Have you considered an external 
drive chassis?
Got one of those coming too. Small possibility it will fit in the bottom 
of this huge old Tiger Direct tower. Because the radiator for the 
6-core i5 is too tall, it hasn't had a side cover on it in years. If not, 
there's a 3-bay 3.5" cage at the bottom of the stack I can stuff with 2 
SSDs per bay slot. A hidey place; the front cover is solid, but cooling 
might be a problem. If worst comes to worst I could shoe-goo a 120x15 fan 
to the side of the cage.



Make sure your power supply(s) are adequate to the task.


400 watter, presently runs dead cold. 5V line rated at 16A, probably not 
using half that now.


David



Cheers, Gene Heskett.



Re: sata driver compatibility Q

2023-09-15 Thread Andy Smith
Hello,

On Fri, Sep 15, 2023 at 05:35:40PM -0400, gene heskett wrote:
> This setup worked instantly under buster and bullseye, but takes from 30
> secs to 5 minutes to open a write requestor window asking where to put the
> download I clicked on under bookworm.

I think you should work out why that happens before spending a lot
of money on new hardware. It doesn't seem at all likely to me that
your existing hardware is at fault. Buying new hardware risks
experiencing the same thing with still no idea why.

Thanks,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: sata driver compatibility Q

2023-09-15 Thread gene heskett

On 9/15/23 15:56, Dan Ritter wrote:

gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T drives
to make a raid big enough to run amanda. And maybe put a new card in front
of my 2T /home raid10.

The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?


What you actually have there is 1 SATA-3 controller which should
be able to support 4 disks, and 4 port multipliers to support
16.

And I forgot to ask: which would be faster, 4 drives on ports 1-4, or 4 
drives on ports 1, 5, 9 and 13?



Effectively, every group of four disks is competing against
themselves. So performance is going to be mediocre.

It should work, though, as a giant dumping ground.

But why are you buying 16 x 2TB disks, if not for performance?

I've never heard of Gigastone and can offer no assessment of
them.

-dsr-
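The sharing Dan describes can be put in rough numbers (a sketch; 600 MB/s is the nominal SATA-3 link rate, and the four disks behind each port multiplier split it):

```shell
# Per-disk bandwidth when 4 disks share one SATA-3 link via a port multiplier.
link_mbps=600       # nominal SATA-3 link rate, MB/s
disks_per_pm=4      # disks behind each port multiplier on this card
per_disk=$(( link_mbps / disks_per_pm ))
echo "about ${per_disk} MB/s per disk under concurrent load"
```

That also answers the ports question above: spreading 4 drives across ports 1, 5, 9 and 13 puts each drive on its own multiplier, so they don't compete.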


Cheers, Gene Heskett.



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Fri, 15 Sep 2023 17:55:06 +
Andy Smith  wrote:

> I haven't followed this thread closely, but is my understanding
> correct:
> 
> - You have a FreeBSD NFS server with an export that is a root
>   filesystem of a Debian 11 install shared by multiple clients

Almost. It's not *one* Debian installation, it's many (diskless
workstations that PXE-boot). Each host has its own root on the NFS.
Some stuff is shared, but that's not relevant here.

> - You're trying to do an upgrade to Debian 12 running on one of the
>   clients.

Not on one, on *all* clients.

> - It tries to do a usrmerge but aborts because NFS is not supported
>   by that script?

Correct. Strangely, the usrmerge script succeeded on one host, but on
all the others it throws errors, either about NFS not being supported
or about duplicate files.

> If so, have you tried reporting a bug on this yet?

No I haven't. As far as I understand it's a known issue and the
developer has decided to just have the script fail on NFS.

> If you don't get anywhere with that, I don't think you have much
> choice except to take away the root directory tree to a Linux host,
> chroot into it and complete the merge there, then pack it up again
> and bring it back to your NFS server. Which is very far from ideal.

I'll try to solve the conflicts manually. If that fails, that's what
I have to do, I guess. I didn't expect that level of fiddling with
system files for a simple upgrade. But hey, here we are now.

> The suggestions about running a VM on the NFS server probably aren't
> going to work as you won't be able to take the directory tree out of
> use and export it as a block device to the VM.

Indeed.

> The option of making the usrmerge script work from FreeBSD might not
> be too technically challenging but I wouldn't want to do it without
> assistance from the Debian developers responsible for the script.

I won't do that. I don't speak Perl and will not rewrite the
usrmerge script.

Marco



Re: sata driver compatibility Q

2023-09-15 Thread gene heskett

On 9/15/23 15:56, Dan Ritter wrote:

gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T drives
to make a raid big enough to run amanda. And maybe put a new card in front
of my 2T /home raid10.

The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?


What you actually have there is 1 SATA-3 controller which should
be able to support 4 disks, and 4 port multipliers to support
16.

Effectively, every group of four disks is competing against
themselves. So performance is going to be mediocre.

Since the PCIe plug is the narrow one, I suspected as much: the data 
path is too narrow. This Asus mobo has 6 ports, but I'd assume the mobo 
ports would be wider, perhaps even x4, though that is also not stated. 
The existing raid10 of four 1T drives is on its own x1-based 6-port 
controller. The other 2 ports are not used at present.


This setup worked instantly under buster and bullseye, but takes from 30 
secs to 5 minutes to open a write requestor window asking where to put 
the download I clicked on under bookworm. And just as often as I've 
mentioned it, the topic of any reply drifts, without the subject line 
changing, so the issue gets ignored.




It should work, though, as a giant dumping ground.

Such as an amanda backup raid; running it in the wee hours, I couldn't 
care less how long it takes as long as it is done by 05:30 or 06:00. If 
I can rearrange the USB breakouts, I can uncover another PCIe x1 socket 
for this card.



But why are you buying 16 x 2TB disks, if not for performance?


I'm not; I'll only have 6 of them when the rest get here. One of them 
may go into one of my milling machines to replace the last spinning rust 
on the property.



I've never heard of Gigastone and can offer no assessment of
them.
They're relatively new on Amazon. A month ago they were nearly the only 
2T available; since then the 2T list has had its decimal point shifted 
right at least one place, and the prices have risen 10 bucks: $89 a copy 
today.


-dsr-

Take care and stay well, Dan.

Cheers, Gene Heskett.



Re: sata driver compatibility Q

2023-09-15 Thread David Christensen

On 9/15/23 12:28, gene heskett wrote:

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new card 
in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.



Searching Amazon for "gigastone 2T", I see:


https://www.amazon.com/Gigastone-Internal-Compatible-Desktop-Laptop/dp/B0BN5978X1

540 MB per second


PCIe 3.0 x1 is rated for 985 MB/s.

https://en.wikipedia.org/wiki/Pcie


So, the PCIe 3.0 x1 connector is going to be a bottleneck when accessing 
more than one SSD.



I suggest that you pick an HBA with a wider PCIe connector -- PCIe 3.0 
x8 (7.88 GB/s) is a reasonable match for sixteen SSD's (8.64 GB/s). 
PCIe 3.0 x16 would eliminate the PCIe bottleneck.
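The arithmetic behind that recommendation can be checked in a couple of lines of shell (numbers taken from the figures above; lane counts are approximate):

```shell
# Back-of-envelope check: aggregate SSD bandwidth vs. PCIe 3.0 lane capacity.
ssd_mbps=540        # per-SSD figure from the Amazon listing, MB/s
n_ssd=16
lane_mbps=985       # approximate PCIe 3.0 per-lane throughput, MB/s
agg=$(( n_ssd * ssd_mbps ))
lanes=$(( (agg + lane_mbps - 1) / lane_mbps ))   # ceiling division
echo "aggregate ${agg} MB/s needs about ${lanes} lanes"
```

Nine lanes rounds up to the next available width, x16; x8 (about 7880 MB/s) is the pragmatic near-match.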



Is everything going into one chassis?  Have you considered an external 
drive chassis?



Make sure your power supply(s) are adequate to the task.


David



Re: Chromium under Xfce/bookworm anyone? Semi-solved!

2023-09-15 Thread David
`The situation here is with Chromium on a SID laptop, while Chromium on
a Stable desktop is just fine.
Starting from a terminal gives:

Gtk-Message: 06:46:36.954: Failed to load module "appmenu-gtk-module"
libva error: /usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so init failed
[12968:12968:0913/064637.144984:ERROR:chrome_browser_cloud_management_controller.cc(163)] Cloud management controller initialization aborted as CBCM is not enabled.
[12968:12999:0913/064637.896873:ERROR:nss_util.cc(357)] After loading
Root Certs, loaded==false: NSS error code: -8018
[0913/064638.283625:ERROR:elf_dynamic_array_reader.h(64)] tag not found
[13049:1:0100/00.316639:ERROR:broker_posix.cc(41)] Recvmsg error:
Connection reset by peer (104)
Segmentation fault

It just looked like something that needed the developers to wake up in
the morning, so I've left it for now.
I do get a flash of Chromium on the screen before it dies, so it's
classic seg fault. It just looks like the latest development phase
hasn't been married up too well with the context'.

This issue, on SID at any rate, is now solved with the latest update.
Cheers!

-- 
`One day, the great European war will come out of some damned foolish
thing in the Balkans'.

-- Otto von Bismarck (1888)



Re: using ddrescue on the root partition - boot with / as read-only

2023-09-15 Thread David Christensen

On 9/15/23 05:46, Vincent Lefevre wrote:

On 2023-09-14 22:24:59 -0700, David Christensen wrote:

On 9/14/23 03:17, Vincent Lefevre wrote:

I get UNC errors like

2023-09-10T11:50:59.858670+0200 zira kernel: ata1.00: exception Emask 0x0 SAct 0xc00 SErr 0x4 action 0x0
2023-09-10T11:51:00.117366+0200 zira kernel: ata1.00: irq_stat 0x4008
2023-09-10T11:51:00.117431+0200 zira kernel: ata1: SError: { CommWake }
2023-09-10T11:51:00.117474+0200 zira kernel: ata1.00: failed command: READ FPDMA QUEUED
2023-09-10T11:51:00.117511+0200 zira kernel: ata1.00: cmd 60/00:50:b8:12:c5/02:00:1f:00:00/40 tag 10 ncq dma 262144 in
        res 41/40:00:90:13:c5/00:02:1f:00:00/00 Emask 0x409 (media error)
2023-09-10T11:51:00.117537+0200 zira kernel: ata1.00: status: { DRDY ERR }
2023-09-10T11:51:00.117560+0200 zira kernel: ata1.00: error: { UNC }
2023-09-10T11:51:00.117583+0200 zira kernel: ata1.00: supports DRM functions and may not be fully accessible
2023-09-10T11:51:00.117614+0200 zira kernel: ata1.00: supports DRM functions and may not be fully accessible
2023-09-10T11:51:00.117651+0200 zira kernel: ata1.00: configured for UDMA/133
2023-09-10T11:51:00.117681+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
2023-09-10T11:51:00.117953+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 Sense Key : Medium Error [current]
2023-09-10T11:51:00.118165+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 Add. Sense: Unrecovered read error - auto reallocate failed
2023-09-10T11:51:00.118366+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 CDB: Read(10) 28 00 1f c5 12 b8 00 02 00 00
2023-09-10T11:51:00.118557+0200 zira kernel: I/O error, dev sda, sector 533009296 op 0x0:(READ) flags 0x80700 phys_seg 37 prio class 2
2023-09-10T11:51:00.118582+0200 zira kernel: ata1: EH complete
2023-09-10T11:51:00.118608+0200 zira kernel: ata1.00: Enabling discard_zeroes_data


What is the make and model of the laptop?


HP ZBook 15 G2 (2015)



That is a good laptop.





What is the make and model of the disk drive?


Samsung 870 EVO 1TB SATA (since January 2022)



That is a good SSD.





When and where do you see the above error messages?


It seems that this occurs when bad sectors are read, either when some
files (using these bad sectors) are read or when I use the badblocks
utility (until now, I've used it only with the read test, i.e. with
no options). The messages appear in the journalctl output.



Okay.





and after these errors, the kernel remounts the root partition as
read-only.


That sounds like a reasonable boot loader response to an OS drive error
during boot.


There are no errors during boot. Only when I read the affected files
or use badblocks, but only after some given number of errors.



Oops -- I misread "remount" as "mount".



Due to these errors, some files are unreadable.

badblocks says that there are 25252 bad blocks.



That number is large enough to make me worry.




I'm using ddrescue before doing anything else (mainly in case things
would go worse), but I would essentially be interested in knowing
which files are affected.
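One way to find the affected files on ext4 (a sketch; the 4096-byte block size and the partition start at sector 2048 are assumptions to verify with fdisk and tune2fs) is to turn a failing LBA from the kernel log into a filesystem block and ask debugfs which inode owns it:

```shell
# Convert a failing LBA from dmesg (512-byte sectors) to an ext4 block number.
# Assumed values; check with: fdisk -l /dev/sda ; tune2fs -l /dev/sda1
sector=533009296    # from the I/O error line in the log above
part_start=2048     # assumed partition start, in sectors
block_size=4096     # assumed ext4 block size, in bytes
fs_block=$(( (sector - part_start) * 512 / block_size ))
echo "filesystem block: $fs_block"
# Then (not run here): debugfs -R "icheck $fs_block" /dev/sda1
#                      debugfs -R "ncheck <inode>" /dev/sda1
```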


Was the computer working correctly in the past?


Yes, except a few days before the first disk errors on 6 December 2022:
I got crashes from time to time (which never happened before). About
2 hours before the first errors, I upgraded the kernel and the NVIDIA
drivers from 390.154 to 390.157. In the changelog of 390.157-1:

nvidia-graphics-drivers-legacy-390xx (390.157-1) unstable; urgency=medium

   * New upstream legacy branch release 390.157 (2022-11-22).
 * Fixed CVE-2022-34670, CVE-2022-34674, CVE-2022-34675, CVE-2022-34677,
   CVE-2022-34680, CVE-2022-42257, CVE-2022-42258, CVE-2022-42259.
   https://nvidia.custhelp.com/app/answers/detail/a_id/5415
   (Closes: #1025281)
 * Improved compatibility with recent Linux kernels.

   [ Andreas Beckmann ]
   * Refresh patches.
   * Rename the internally used ARCH variable which might clash on externally
 set values.
   * Use substitutions for ${nvidia-kernel} and friends (510.108.03-1).
   * Try to compile a kernel module at package build time (510.108.03-1).

  -- Andreas Beckmann   Sat, 03 Dec 2022 22:17:01 +0100

I'm wondering whether the crashes were due to the compatibility
with the kernel (which was the latest Debian/unstable one).



The sum total of the clues make me think the SSD is failing.





When did you first notice the error messages?  What was the computer doing
at the time?


I first got errors on 6 December 2022 when I was reading these files.
At that time, I identified 5 files, which I put in a
private/unreadable-files directory. Then everything was OK
until a few days ago, when I wanted to duplicate a big directory
(to try to reproduce a bug).


Did you make any changes to the computer (hardware, software, configuration,
apps, other) immediately prior to the start of the error messages?


See above (and

sata driver compatibility Q

2023-09-15 Thread gene heskett

Greetings all;

I've just ordered some stuff to rebuild or expand my Raid setup.
This 16 port sata-III pci-e card:

along with a bigger drive cage, cables and such and some gigastone 2T 
drives to make a raid big enough to run amanda. And maybe put a new card 
in front of my 2T /home raid10.


The card claims linux compatibility.

Can anyone advise me on the gotcha's of such a 16 port beast?

Thanks all.

Cheers, Gene Heskett.



Re: memtest86

2023-09-15 Thread gene heskett

On 9/15/23 13:46, Felix Miata wrote:

gene heskett composed on 2023-09-15 12:24 (UTC-0400):


Mandrake was good, went bust


Its core has survived in the form of Mageia, which just released v9 a few weeks 
ago.


Hopefully with support for recent hdwe. The last time I tried it, it 
didn't like my then-new AMD Phenom hdwe. That was 2 or 3 motherboards in 
this tower ago, so I won't bad-mouth today's. That Phenom must have been 
happy running win-98 but was definitely unhappy running linux when 
kernels were in the mid-2's. IRQ latencies ran in the range of 3 digits 
of milliseconds.


Take care & stay well, Felix.

Cheers, Gene Heskett.



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Andy Smith
Hello,

On Fri, Sep 15, 2023 at 01:52:27PM +0200, Marco wrote:
> On Thu, 14 Sep 2023 16:43:09 -0400
> Dan Ritter  wrote:
> > Each of these things could be rewritten to be compatible with
> > FreeBSD; I suspect it would take about twenty minutes to an hour,
> > most of it testing, for someone who was familiar with FreeBSD's
> > userland
> 
> I'm not going down that route.

I haven't followed this thread closely, but is my understanding
correct:

- You have a FreeBSD NFS server with an export that is a root
  filesystem of a Debian 11 install shared by multiple clients

- You're trying to do an upgrade to Debian 12 running on one of the
  clients.

- It tries to do a usrmerge but aborts because NFS is not supported
  by that script?

If so, have you tried reporting a bug on this yet? It seems like an
interesting problem which although being quite a corner case, might
spark the interest of the relevant Debian developers.

If you don't get anywhere with that, I don't think you have much
choice except to take away the root directory tree to a Linux host,
chroot into it and complete the merge there, then pack it up again
and bring it back to your NFS server. Which is very far from ideal.
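
If it came to that, the procedure would be roughly the following. This is
only a sketch: the mount points and hostnames are made up, the location of
the convert-usrmerge script should be verified on the target release, and
all NFS clients must stay shut down while the tree is away:

```shell
# On a Linux helper host, with the FreeBSD export mounted at /mnt/nfsroot:
# 1. Copy the Debian root tree locally, preserving hard links, ACLs
#    and extended attributes.
rsync -aHAX /mnt/nfsroot/ /srv/debianroot/

# 2. Chroot in and run the merge script shipped by the usrmerge package
#    (on Debian 12 it lives under /usr/lib/usrmerge/ -- check first).
mount --bind /proc /srv/debianroot/proc
chroot /srv/debianroot /usr/lib/usrmerge/convert-usrmerge
umount /srv/debianroot/proc

# 3. Copy the merged tree back, removing the now-obsolete unmerged paths.
rsync -aHAX --delete /srv/debianroot/ /mnt/nfsroot/
```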

The suggestions about running a VM on the NFS server probably aren't
going to work as you won't be able to take the directory tree out of
use and export it as a block device to the VM. Or rather, you could
do that, but it's probably not quicker/easier than the method of
taking a copy of it elsewhere then bringing it back.

The option of making the usrmerge script work from FreeBSD might not
be too technically challenging but I wouldn't want to do it without
assistance from the Debian developers responsible for the script.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: memtest86

2023-09-15 Thread Felix Miata
gene heskett composed on 2023-09-15 12:24 (UTC-0400):

> Mandrake was good, went bust 

Its core has survived in the form of Mageia, which just released v9 a few weeks 
ago.
-- 
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata



Re: memtest86

2023-09-15 Thread gene heskett

On 9/15/23 09:34, Stefan Monnier wrote:
>>>> latest version is 10.6, get it at:
>>>>
>> This, Stefan, looks exactly like the memtest86 we've all been using for
>> nearly 30 years.
>
> "we"?  I sure haven't run that one, ever.  I've run memtest86+ many
> times, yes, that's the one that "looks like the old familiar one" for me
> (and presumably for others who stay away from proprietary code :)
>
>> Possibly under new management, and you may not agree with
>> the pro version costing money,
>
> The problem is not the money but the impossibility to know what it is
> you're running because you can't see the source code.
>
>> If you think the peanuts are free, check the price of the beer.
>
> Have you ever heard of Free Software (or maybe "open source")?  :-)

Sure, great believer in it, ever since my first and last encounter with
Bill Gates about 39 years ago, but eventually the coder might want to
buy some groceries or pay the rent.  That's something I would support
monetarily.  But the difficulty of moving the money internationally
makes it way too difficult to be assured the money gets to the intended
recipient. Way the hell and gone too many local tax laws, so the big guy
gets his 50% or even all of it. Corruption in the monetary exchange
field seems rampant and uncontrollable. Ours here in the USA is no
exception.

I can afford to buy that which I can inspect, maybe you can't. IDK.  I
bailed out of the RedHat world when they went commercial but used their
fedora to make always-sick lab rats out of us. Mandrake was good, went
bust; tried ubuntu, but that was then seemingly wanting to be a free
windows lookalike. Now it's much better, so armbian for wannabe pi's
is the thing.

I use several copies of linuxcnc here, so when they switched to debian
to base their program on, I followed, but it appears debian is also
short-handed when a copy/paste bug somewhere along the path from coder
to release broke one of udev's functions during bullseye that will not
be fixed before trixie.

>> And with all the caveats discussed in the faq list, I'll be a bit
>> spooky. Like running it w/o a net cable so it can't call home.
>
> Of course the `.com` version won't push all its own caveats in your
> face, that would go against its own commercial interests.

No argument there.

> That's another advantage of Free Software: it doesn't have a commercial
> incentive to hide its limitations in order to trick you into choosing it
> over some competitor.  IOW Free Software encourages honesty.

That it does, but it should not impede the coder's ability to feed and
house his family.

> BTW, you can also run a memory tester from the comfort of a running
> Debian machine, using the `memtester` package.  Clearly it won't be able
> to check *all* the memory of your machine (e.g. it can't access the
> memory used by the kernel), but I've anecdotally found it "good enough"
> (the two times memtest86+ found a problem in one of my machines,
> `memtester` also found a problem on that machine).

Installing now, along with a new kernel. 3rd reboot this week; where is
the famous debian dependability...

Take care and stay well, Stefan.


Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: memtest86

2023-09-15 Thread Stefan Monnier
>>> latest version is 10.6, get it at:
>>> 
> This, Stefan, looks exactly like the memtest86 we've all been using for
> nearly 30 years.

"we"?  I sure haven't run that one, ever.  I've run memtest86+ many
times, yes, that's the one that "looks like the old familiar one" for me
(and presumably for others who stay away from proprietary code :)

> Possibly under new management, and you may not agree with
> the pro version costing money,

The problem is not the money but the impossibility to know what it is
you're running because you can't see the source code.

> If you think the peanuts are free, check the price of the beer.

Have you ever heard of Free Software (or maybe "open source")?  :-)

> And with all the caveats discussed in the faq list, I'll be a bit
> spooky. Like running it w/o a net cable so it can't call home.

Of course the `.com` version won't push all its own caveats in your
face, that would go against its own commercial interests.

That's another advantage of Free Software: it doesn't have a commercial
incentive to hide its limitations in order to trick you into choosing it
over some competitor.  IOW Free Software encourages honesty.

BTW, you can also run a memory tester from the comfort of a running
Debian machine, using the `memtester` package.  Clearly it won't be able
to check *all* the memory of your machine (e.g. it can't access the
memory used by the kernel), but I've anecdotally found it "good enough"
(the two times memtest86+ found a problem in one of my machines,
`memtester` also found a problem on that machine).
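
For anyone who hasn't used it, the invocation is just a size and a pass
count (the values below are illustrative; leave a good margin below the
free RAM so the rest of the system stays responsive):

```shell
sudo apt install memtester
# Test 1 GiB of RAM for 3 passes; running as root lets it lock the
# region with mlock() so it isn't paged out mid-test.
sudo memtester 1024M 3
```

It exits non-zero when a test fails, so it can also be scripted into a
periodic check.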


Stefan



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Stefan Monnier
> So the file in /lib appears to be newer. So what to do? Can I delete
> the one in /usr/lib ?

Yes.
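
Before deleting anything, a mechanical check can confirm which hosts
really have identical copies (a sketch; the paths are the ones discussed
in this thread):

```shell
# Compare the two pre-merge copies of the sane udev rules file.
A=/lib/udev/rules.d/60-libsane1.rules
B=/usr/lib/udev/rules.d/60-libsane1.rules
if cmp -s "$A" "$B"; then
    echo "identical: removing either copy loses nothing"
else
    # Show the differences before deciding which copy to keep
    # (the thread suggests the /lib one is the newer generation).
    diff -u "$B" "$A"
fi
```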


Stefan



apache2ctl vs systemctl?

2023-09-15 Thread Wim Bertels
Hello,

I notice a difference in behaviour when using these two commands:

apache2ctl:
* (re)start works fine, sites are available

systemctl:
* (re)start seems to work, but sites are not available

Has anyone experienced this?

(in this case a multi vhost and cert setup)

(apachectl is a symlink to apache2ctl)

--

#apache2ctl status
  Apache Server Status for localhost (via ::1)

   Server Version: Apache/2.4.57 (Debian) mpm-itk/2.4.7-04
OpenSSL/3.0.9
   Server MPM: prefork
   Server Built: 2023-04-13T03:26:51
 __

   Current Time: Friday, 15-Sep-2023 12:15:46 CEST
   Restart Time: Friday, 15-Sep-2023 12:06:20 CEST
   Parent Server Config. Generation: 2
   Parent Server MPM Generation: 1
   Server uptime: 9 minutes 25 seconds
   Server load: 0.01 0.04 0.09
   Total accesses: 110 - Total Traffic: 35.9 MB - Total Duration: 13341
   CPU Usage: u1.6 s1.08 cu.6 cs.67 - .699% CPU load
   .195 requests/sec - 65.0 kB/second - 333.8 kB/request - 121.282
  ms/request

   1 requests currently being processed, 9 idle workers

_.__W__.

..

   Scoreboard Key:
   "_" Waiting for Connection, "S" Starting up, "R" Reading Request,
   "W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
   "C" Closing connection, "L" Logging, "G" Gracefully finishing,
   "I" Idle cleanup of worker, "." Open slot with no current process

--

# systemctl status apache2.service
○ apache2.service - The Apache HTTP Server
 Loaded: loaded (/lib/systemd/system/apache2.service; enabled;
preset: enabled)
 Active: inactive (dead) since Fri 2023-09-15 12:01:55 CEST; 14min
ago
   Docs: https://httpd.apache.org/docs/2.4/
Process: 5766 ExecStart=/usr/sbin/apachectl start (code=exited,
status=0/SUCCESS)
Process: 5770 ExecStop=/usr/sbin/apachectl graceful-stop
(code=exited, status=0/SUCCESS)
CPU: 424ms

sep 15 12:01:54 server systemd[1]: Starting apache2.service - The
Apache HTTP Server...
sep 15 12:01:54 server apachectl[5769]: [Fri Sep 15 12:01:54.969836
2023] [core:trace3] [pid 5769] core.>
sep 15 12:01:54 server apachectl[5769]: [Fri Sep 15 12:01:54.970266
2023] [core:trace6] [pid 5769] core.>
sep 15 12:01:54 server apachectl[5769]: [Fri Sep 15 12:01:54.970331
2023] [core:trace3] [pid 5769] core.>
sep 15 12:01:55 server apachectl[5769]: httpd (pid 5425) already
running
sep 15 12:01:55 server apachectl[5772]: [Fri Sep 15 12:01:55.220741
2023] [core:trace3] [pid 5772] core.>
sep 15 12:01:55 server apachectl[5772]: [Fri Sep 15 12:01:55.220809
2023] [core:trace6] [pid 5772] core.>
sep 15 12:01:55 server apachectl[5772]: [Fri Sep 15 12:01:55.220813
2023] [core:trace3] [pid 5772] core.>
sep 15 12:01:55 server systemd[1]: apache2.service: Deactivated
successfully.
sep 15 12:01:55 server systemd[1]: Started apache2.service - The Apache
HTTP Server.
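
One detail in the journal above may explain the difference: `httpd (pid
5425) already running` followed by `apache2.service: Deactivated
successfully` suggests an instance was started outside systemd, so the
unit considers itself inactive even though the sites were served. A
sketch of getting back to a single, systemd-managed instance (unit name
as in the output above):

```shell
# Stop any instance that was started outside systemd...
sudo apachectl stop
# ...then start and inspect it through systemd only.
sudo systemctl start apache2.service
systemctl status apache2.service
# Confirm which process actually owns ports 80/443.
sudo ss -ltnp | grep -E ':80|:443'
```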

Kind regards,
Wim


Re: using ddrescue on the root partition - boot with / as read-only

2023-09-15 Thread Vincent Lefevre
On 2023-09-14 22:24:59 -0700, David Christensen wrote:
> On 9/14/23 03:17, Vincent Lefevre wrote:
> > I get UNC errors like
> > 
> > 2023-09-10T11:50:59.858670+0200 zira kernel: ata1.00: exception Emask 0x0 
> > SAct 0xc00 SErr 0x4 action 0x0
> > 2023-09-10T11:51:00.117366+0200 zira kernel: ata1.00: irq_stat 0x4008
> > 2023-09-10T11:51:00.117431+0200 zira kernel: ata1: SError: { CommWake }
> > 2023-09-10T11:51:00.117474+0200 zira kernel: ata1.00: failed command: READ 
> > FPDMA QUEUED
> > 2023-09-10T11:51:00.117511+0200 zira kernel: ata1.00: cmd 
> > 60/00:50:b8:12:c5/02:00:1f:00:00/40 tag 10 ncq dma 262144 in
> >res 
> > 41/40:00:90:13:c5/00:02:1f:00:00/00 Emask 0x409 (media error) 
> > 2023-09-10T11:51:00.117537+0200 zira kernel: ata1.00: status: { DRDY ERR }
> > 2023-09-10T11:51:00.117560+0200 zira kernel: ata1.00: error: { UNC }
> > 2023-09-10T11:51:00.117583+0200 zira kernel: ata1.00: supports DRM 
> > functions and may not be fully accessible
> > 2023-09-10T11:51:00.117614+0200 zira kernel: ata1.00: supports DRM 
> > functions and may not be fully accessible
> > 2023-09-10T11:51:00.117651+0200 zira kernel: ata1.00: configured for 
> > UDMA/133
> > 2023-09-10T11:51:00.117681+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 
> > FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
> > 2023-09-10T11:51:00.117953+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 Sense 
> > Key : Medium Error [current]
> > 2023-09-10T11:51:00.118165+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 Add. 
> > Sense: Unrecovered read error - auto reallocate failed
> > 2023-09-10T11:51:00.118366+0200 zira kernel: sd 0:0:0:0: [sda] tag#10 CDB: 
> > Read(10) 28 00 1f c5 12 b8 00 02 00 00
> > 2023-09-10T11:51:00.118557+0200 zira kernel: I/O error, dev sda, sector 
> > 533009296 op 0x0:(READ) flags 0x80700 phys_seg 37 prio class 2
> > 2023-09-10T11:51:00.118582+0200 zira kernel: ata1: EH complete
> > 2023-09-10T11:51:00.118608+0200 zira kernel: ata1.00: Enabling 
> > discard_zeroes_data
> 
> What is the make and model of the laptop?

HP ZBook 15 G2 (2015)

> What is the make and model of the disk drive?

Samsung 870 EVO 1TB SATA (since January 2022)

> When and where do you see the above error messages?

It seems that this occurs when bad sectors are read, either when some
files (using these bad sectors) are read or when I use the badblocks
utility (until now, I've used it only with the read test, i.e. with
no options). The messages appear in the journalctl output.

> > and after these errors, the kernel remount the root partition as
> > read-only.
> 
> That sounds like a reasonable boot loader response to an OS drive error
> during boot.

There are no errors during boot. Only when I read the affected files
or use badblocks, but only after some given number of errors.

> > Due to these errors, some files are unreadable.
> > 
> > badblocks says that there are 25252 bad blocks.
> > 
> > I'm using ddrescue before doing anything else (mainly in case things
> > would go worse), but I would essentially be interested in knowing
> > which files are affected.
> 
> Was the computer working correctly in the past?

Yes, except a few days before the first disk errors on 6 December 2022:
I got crashes from time to time (which never happened before). About
2 hours before the first errors, I upgraded the kernel and the NVIDIA
drivers from 390.154 to 390.157. In the changelog of 390.157-1:

nvidia-graphics-drivers-legacy-390xx (390.157-1) unstable; urgency=medium

  * New upstream legacy branch release 390.157 (2022-11-22).
* Fixed CVE-2022-34670, CVE-2022-34674, CVE-2022-34675, CVE-2022-34677,
  CVE-2022-34680, CVE-2022-42257, CVE-2022-42258, CVE-2022-42259.
  https://nvidia.custhelp.com/app/answers/detail/a_id/5415
  (Closes: #1025281)
* Improved compatibility with recent Linux kernels.

  [ Andreas Beckmann ]
  * Refresh patches.
  * Rename the internally used ARCH variable which might clash on externally
set values.
  * Use substitutions for ${nvidia-kernel} and friends (510.108.03-1).
  * Try to compile a kernel module at package build time (510.108.03-1).

 -- Andreas Beckmann   Sat, 03 Dec 2022 22:17:01 +0100

I'm wondering whether the crashes were due to the compatibility
with the kernel (which was the latest Debian/unstable one).

> When did you first notice the error messages?  What was the computer doing
> at the time?

I first got errors on 6 December 2022 when I was reading these files.
At that time, I identified 5 files, which I put in a
private/unreadable-files directory. Then everything was OK
until a few days ago, when I wanted to duplicate a big directory
(to try to reproduce a bug).

> Did you make any changes to the computer (hardware, software, configuration,
> apps, other) immediately prior to the start of the error messages?

See above (and no hardware change).
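
On the original question of which files sit on the bad sectors, one
approach for ext4 is to translate each failing LBA sector from the kernel
log into a filesystem block and ask `debugfs` who owns it. A sketch; the
partition start below is a hypothetical value (take the real one from
`/sys/class/block/<partition>/start`):

```shell
# Map a failing LBA sector from the kernel log to an ext4 fs block.
BAD_SECTOR=533009296   # 512-byte sector from the "I/O error, dev sda" line
START=2048             # partition start sector (hypothetical value here)
FS_BLOCK=$(( (BAD_SECTOR - START) / 8 ))   # 8 sectors per 4 KiB fs block
echo "filesystem block: $FS_BLOCK"
# Then, as root and ideally with the fs unmounted or read-only, ask ext4
# which inode owns that block, and which path owns the inode:
#   debugfs -R "icheck $FS_BLOCK" /dev/sdaN
#   debugfs -R "ncheck <inode-from-icheck>" /dev/sdaN
```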

> Does the computer now generate error messages?  Consistently?  What is it
> doing w

Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:54:27 -0400
Stefan Monnier  wrote:

> Still going on with this?

I am.

> Have you actually looked at those two files:
> 
> /lib/udev/rules.d/60-libsane1.rules and
> /usr/lib/udev/rules.d/60-libsane1.rules
> 
> to see if they're identical or not and to see if you might have an
> idea how to merge them?

Yes, I did. On some hosts they are identical, on others they're
different. That's why I asked how to handle that.

> `usrmerge` did give you a pretty clear explanation of the problem it's
> facing (AFAIC)

It does indeed.

> and I believe it should be very easy to address it

Everything is easy if you only know how to do it.

As I said, on some hosts they are identical. So what to do? Can I
delete one of them? If yes, which one?

On other hosts they differ, here the first lines:

/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.1.1-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION=="remove", GOTO="libsane_rules_end"

…

/usr/lib/

# This file was generated from description files (*.desc)
# by sane-desc 3.6 from sane-backends 1.0.31-debian
#
# udev rules file for supported USB and SCSI devices
#
# For the list of supported USB devices see /lib/udev/hwdb.d/20-sane.hwdb
#
# The SCSI device support is very basic and includes only
# scanners that mark themselves as type "scanner" or
# SCSI-scanners from HP and other vendors that are entitled "processor"
# but are treated accordingly.
#
# If your SCSI scanner isn't listed below, you can add it to a new rules
# file under /etc/udev/rules.d/.
#
# If your scanner is supported by some external backend (brother, epkowa,
# hpaio, etc) please ask the author of the backend to provide proper
# device detection support for your OS
#
# If the scanner is supported by sane-backends, please mail the entry to
# the sane-devel mailing list (sane-de...@alioth-lists.debian.net).
#
ACTION!="add", GOTO="libsane_rules_end"

…

So the file in /lib appears to be newer. So what to do? Can I delete
the one in /usr/lib ?

> (no need to play with anything funny like setting up a VM or
> mounting the disk from some other system).

Which is good because that's not that easy, apparently.

Thank you for your replies and support regarding this matter.

Marco



Re: usrmerge on root NFS will not be run automatically

2023-09-15 Thread Marco
On Thu, 14 Sep 2023 16:43:09 -0400
Dan Ritter  wrote:

> The heart of the convert-usrmerge perl script is pretty
> reasonable. However:
> 
> […]
> 
> Similarly, there are calls to stat and du which probably have
> some incompatibilities.
> 
> The effect of running this would be fairly safe, but also not do
> anything: you would get some errors and then it would die.

Ok, then I'll not try that. Would be a waste of time.

> Each of these things could be rewritten to be compatible with
> FreeBSD; I suspect it would take about twenty minutes to an hour,
> most of it testing, for someone who was familiar with FreeBSD's
> userland

I'm not going down that route.

Marco



Re: memtest86

2023-09-15 Thread gene heskett

On 9/14/23 22:17, Stefan Monnier wrote:
>> latest version is 10.6, get it at:
>>

This, Stefan, looks exactly like the memtest86 we've all been using for
nearly 30 years. Possibly under new management, and you may not agree
with the pro version costing money, but TANSTAAFL is a universal law.
Somebody has to pay the bills. If you think the peanuts are free, check
the price of the beer.

> Freeware!  Eww!
>
> I recommend https://www.memtest.org/ instead (admittedly, until a year
> or two ago, this Free Software was severely outdated and didn't really
> work with UEFI, but things have picked up again since).

I've pulled this one down too, but the previews don't quite look like
the old familiar one I found above. And with all the caveats discussed
in the faq list, I'll be a bit spooky. Like running it w/o a net cable
so it can't call home.

But I'll burn the 64-bit grub iso to a dvd and try it. I have had
several "use the back panel power switch" lockups since being forced to
upgrade to bookworm by something wiping my passwd during a bullseye
update. bookworm does NOT like my raid10 /home, taking from 30 secs to
5 minutes to open the file requestor just to save this download. Perhaps
this one can tell me something the v9.4 I got from my link didn't.
Lockups that are strange: the mouse pointer still moves normally, but
clicks are ignored and so is the keyboard.  The cute analog clock is
frozen, as is the rest of the gkrellm display.

Take care and stay well..

Cheers, Gene Heskett.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis
Genes Web page 



Re: memtest86

2023-09-15 Thread Michel Verdier
On 2023-09-14, Stefan Monnier wrote:

> I recommend https://www.memtest.org/ instead (admittedly, until a year
> or two ago, this Free Software was severely outdated and didn't really
> work with UEFI, but things have picked up again since).

This is the memtest86+ package.