Re: which package to file a bug report ?

2024-02-23 Thread Frank Weißer
First of all: I use German during installation, but I doubt that is 
relevant.


Marco Moock:

On 22.02.2024, Frank Weißer wrote:


I only choose ext2 for formatting the encrypted partition, because
nothing else is offered.


That is really strange. When I installed Debian 12, it offered me a
list of different file systems, including ext2/3/4.

It does on non-encrypted partitions, but not if I choose 'physical volume 
for encryption' there; afterwards my only choices for the encrypted 
partition are ext2, swap, or LVM, or leaving it unused.



Despite that, the partition in fact gets formatted as ext4, so the
ext2 entry in /etc/fstab leads into emergency mode.


Does the installer format it as ext4 but show ext2 and place that in
fstab?
Or do you format it manually?


The installer does format it as ext4, but shows ext2 and places that in
fstab, which ends up in emergency mode. That's why I'm here.



Re: which package to file a bug report ?

2024-02-23 Thread Marco Moock
On 22.02.2024, Frank Weißer wrote:

> I only choose ext2 for formatting the encrypted partition, because 
> nothing else is offered.

That is really strange. When I installed Debian 12, it offered me a
list of different file systems, including ext2/3/4.

> Despite that, the partition in fact gets formatted as ext4, so the
> ext2 entry in /etc/fstab leads into emergency mode.

Does the installer format it as ext4 but show ext2 and place that in
fstab?
Or do you format it manually?

> I think the partitioning tool in the installer should offer to format
> the encrypted partition as ext4, as LUKS (?) does, instead of ext2, and
> must write ext4 to /etc/fstab, since that is how it actually ends up.

LUKS is only a container and doesn't care about the file system inside.
After opening it, it appears as a device file under /dev/mapper that can
be formatted like /dev/sdXY.



Re: which package to file a bug report ?

2024-02-22 Thread Frank Weißer




Marco Moock:

On 22.02.2024 at 13:18:48, Frank Weißer wrote:


I usually encrypt my swap and /var/tmp partitions during
installation.


That is LUKS.


the partitioning tool in the Debian installer offers me randomized keys
for that and has 'delete partition' set to 'yes', which costs a lot
of time and is not necessary on a new HDD/SSD or, in my opinion, with
randomized keys. I propose switching to 'no' when randomized keys
are selected.


Why? A user can rather easily select what he wants.


As I said: my opinion; if you miss setting it to 'no', you have to wait
a long time...


Further I can select ext2 or swap for partition format.


That is really strange. swap is only for the special-purpose swap 
partition.



Yes, I chose it for the swap partition.


I use ext2 for /var/tmp, but - in /etc/crypttab the marker 'tmp' is
missing for the /var/tmp partition


Which marker?


This one:
frank@pc:~$ cat /etc/crypttab
sda4_crypt /dev/sda4 /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard
sda5_crypt /dev/sda5 /dev/urandom cipher=aes-xts-plain64,size=256,tmp,discard
                                                                  ^^^

crypttab is only for decrypting the partition and creating a device 
file for the encrypted one.
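
For reference, the 'tmp' marker Frank means is the crypttab option that 
tells the boot machinery to create a fresh filesystem on the mapped 
device every boot (just as 'swap' makes it a fresh swap area). A small 
sketch to see which entries carry it; the sample data is inlined here, 
mirroring the entries quoted above, but on a real system you would point 
awk at /etc/crypttab:

```shell
# The fourth crypttab field is a comma-separated option list.
crypttab='sda4_crypt /dev/sda4 /dev/urandom cipher=aes-xts-plain64,size=256,swap,discard
sda5_crypt /dev/sda5 /dev/urandom cipher=aes-xts-plain64,size=256,tmp,discard'

printf '%s\n' "$crypttab" | awk '{
    n = split($4, opts, ",")          # break the option list apart
    has_tmp = "no"
    for (i = 1; i <= n; i++) if (opts[i] == "tmp") has_tmp = "yes"
    print $1 ": tmp=" has_tmp
}'
# prints:
#   sda4_crypt: tmp=no
#   sda5_crypt: tmp=yes
```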



- in /etc/fstab ext2 is set instead of ext4, which cryptsetup
defaults to. So on reboot I end up in emergency mode.


If you format it in ext2, choose that. Or was that an automatic
decision by the installer?

I only choose ext2 for formatting the encrypted partition, because 
nothing else is offered. Despite that, the partition in fact gets 
formatted as ext4, so the ext2 entry in /etc/fstab leads into emergency mode.


I think the partitioning tool in the installer should offer to format the 
encrypted partition as ext4, as LUKS (?) does, instead of ext2, and must 
write ext4 to /etc/fstab, since that is how it actually ends up.




Re: which package to file a bug report ?

2024-02-22 Thread Marco Moock
On 22.02.2024 at 13:18:48, Frank Weißer wrote:

> I usually encrypt my swap and /var/tmp partitions during installation.

That is LUKS.

> the partitioning tool in the Debian installer offers me randomized keys
> for that and has 'delete partition' set to 'yes', which costs a lot of
> time and is not necessary on a new HDD/SSD or, in my opinion, with
> randomized keys. I propose switching to 'no' when randomized keys are
> selected.

Why?
A user can rather easily select what he wants.

> Further I can select ext2 or swap for partition format.

That is really strange. swap is only for the special-purpose swap
partition.

> I use ext2 for /var/tmp, but
> - in /etc/crypttab the marker 'tmp' is missing for the /var/tmp
> partition

Which marker?
crypttab is only for decrypting the partition and creating a device
file for the encrypted one.

> - in /etc/fstab ext2 is set instead of ext4, which cryptsetup defaults
> to. So on reboot I end up in emergency mode.

If you format it in ext2, choose that.
Or was that an automatic decision by the installer?

-- 
Regards
Marco

Spam and advertising please to ichschickerekl...@cartoonies.org



which package to file a bug report ?

2024-02-22 Thread Frank Weißer

Hello!

I usually encrypt my swap and /var/tmp partitions during installation.

the partitioning tool in the Debian installer offers me randomized keys 
for that and has 'delete partition' set to 'yes', which costs a lot of 
time and is not necessary on a new HDD/SSD or, in my opinion, with 
randomized keys. I propose switching to 'no' when randomized keys are 
selected.


Further I can select ext2 or swap for partition format. I use ext2 for 
/var/tmp, but

- in /etc/crypttab the marker 'tmp' is missing for the /var/tmp partition
- in /etc/fstab ext2 is set instead of ext4, which cryptsetup defaults 
to. So on reboot I end up in emergency mode.
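
The failure mode described here is a plain string mismatch between what 
the installer wrote and what the volume actually is. A minimal sketch of 
the check (the two values are hardcoded stand-ins; on a real system the 
first would come from /etc/fstab and the second from something like 
`lsblk -no FSTYPE /dev/mapper/sda5_crypt`):

```shell
# Stand-in values mirroring the bug report; not read from a live system.
fstab_type=ext2      # what the installer wrote into /etc/fstab
actual_type=ext4     # what the volume was actually formatted as
if [ "$fstab_type" != "$actual_type" ]; then
    echo "mismatch: fstab says $fstab_type, volume is $actual_type"
fi
```

At boot, exactly this mismatch is what makes the mount fail and drops the 
system into emergency mode.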


Which package do I have to file the bug report against?

Please excuse my poor English.

Kind regards

readU
Frank



Re: shred bug? [was: Unidentified subject!]

2024-02-16 Thread Michael Stone

On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:

What Thomas was trying to do is to get a cheap, fast random number
generator. Shred seems to have such.


You're better off with /dev/urandom; it's much easier to understand what 
it's trying to do, versus the rather baroque logic in shred. In fact, 
there's nothing in shred's documentation, AFAICT, that suggests it should 
be used as a random number generator. For pure speed, playing games with 
openssl enc and /dev/zero will generally win. If speed doesn't matter, 
we're back to /dev/urandom as the simplest and most direct solution.
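
The simple /dev/urandom route is one short pipeline (the output file name 
here is arbitrary):

```shell
# Draw 1 MiB from the kernel CSPRNG; /dev/urandom never blocks on Linux.
head -c 1048576 /dev/urandom > random.bin
wc -c < random.bin   # prints 1048576
```

The openssl-enc-over-/dev/zero trick mentioned above is faster on large 
volumes, but it trades that extra speed for a longer, easier-to-get-wrong 
command line.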


FWIW, the main use for shred in 2024 is: to be there so someone's old 
script doesn't break. There's basically no good use case for it, and it 
probably shouldn't have gotten into coreutils in the first place. The 
multipass pattern stuff is cargo-cult voodoo--a single overwrite with 
zeros will be as effective as anything else--and on modern 
storage/filesystems there's a good chance your overwrite won't overwrite 
anything anyway. Probably the right answer is a kernel facility 
(userspace can't guarantee anything). If you're really sure that 
overwrites work on your system, `shred -n0 -z` will be the fastest way 
to do that. The docs say don't do that because SSDs might optimize that 
away, but SSDs probably aren't overwriting anything anyway (also 
mentioned in the docs). ¯\_(ツ)_/¯




Re: shred bug? [was: Unidentified subject!]

2024-02-14 Thread David Wright
On Tue 13 Feb 2024 at 11:21:08 (-0500), Greg Wooledge wrote:
> On Tue, Feb 13, 2024 at 09:35:11AM -0600, David Wright wrote:
> > On Tue 13 Feb 2024 at 07:15:48 (-0500), Greg Wooledge wrote:
> > > On Mon, Feb 12, 2024 at 11:01:47PM -0600, David Wright wrote:
> > > > … but not much. For me, "standard output" is /dev/fd/1, yet it seems
> > > > unlikely that anyone is going to use >&1 in the manner of the example.
> > > 
> > > Standard output means "whatever file descriptor 1 points to".  That
> > > could be a file, a pipe, a terminal (character device), etc.
> > 
> > Why pick on 1?
> 
> It's the definition.  Standard input is FD 0, standard output is FD 1,
> and standard error is FD 2.
> 
> > . It demonstrates the shell syntax element required (&) in order to
> >   avoid truncating the file, rather than shred overwriting it.
> 
> You are confused.  You're making assumptions about shell syntax that
> are simply not true.

You're right. I was looking too hard at the right side of the > and
neglecting the implied left side. It's always worth running these
things past your eyes. Thanks for the clear exposition that followed.

Cheers,
David.



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread tomas
On Tue, Feb 13, 2024 at 01:03:44PM -0800, David Christensen wrote:
> On 2/13/24 09:40, debian-u...@howorth.org.uk wrote:
> > Greg Wooledge  wrote:
> > 
> > > Shred will determine the size of the file, then write data to the
> > > file, rewind, write data again, etc.  On a traditional hard drive,
> > > that will overwrite the original private information.  On modern
> > > devices, it may not.
> > 
> > Thanks for the excellent explanation :)
> > 
> > One nitpick. You say "On a traditional hard drive, that will overwrite
> > the original private information" but that's not quite true. It also
> > needs to be a "traditional" file system! That is, not journalled or COW.
> > 
> > So nowadays I would expect shred not to work unless you got very
> > lucky, or planned carefully.
> 
> 
> Perhaps zerofree(8)?

On a SATA drive, it won't get at (some of) the spare blocks, since it
doesn't know that they are there.

If your data is so sensitive that you don't want it to escape,
your best bet seems to be to plan ahead and not let it hit your media
unencrypted.

Use LUKS. And oh, use argon2id as the key derivation function [1]
these days.

Cheers
[1] https://mjg59.dreamwidth.org/66429.html
-- 
t




Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread gene heskett

On 2/13/24 16:00, David Christensen wrote:

On 2/13/24 11:31, gene heskett wrote:
Next experiment is a pair of 4T Silicon Power SSDs. When they & the 
startech usb3 adapters arrive, I'll get that NAS built for amanda yet.



2.5" SATA SSD's and SATA to USB adapter cables for $187.97 + $10.99 = 
$198.96 each set?


https://www.amazon.com/dp/B0BVLRFFWQ

https://www.amazon.com/dp/B00HJZJI84


Why not external USB drives for $192.99?

https://www.amazon.com/dp/B0C6XVZS4K


For $7 more, you can get the "Pro edition" in black with two cables:

https://www.amazon.com/dp/B0C69QD5NK


You could plug those into the two USB-C 3.1 Gen 2 ports on your Asus 
PRIME Z370-A II motherboard.


Maybe, but these sata types have the mounting bolts the usb versions 
don't, and fit the drive adapters I already have that put both in one 
drive tray. Not to mention if Taiwan has a similar product, I tend to 
buy it.  So do the 4 2T gigastones I'll fill the next 2 drawers with, so 
I should wind up with a 16T backup server's LVM, with a 1/2T Samsung 870 
as a holding disk.  Running a bpi-m5 headless, maybe < 20 watts.  What's 
not to like?


David




Cheers, Gene Heskett, CET.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread David Christensen

On 2/13/24 09:40, debian-u...@howorth.org.uk wrote:

Greg Wooledge  wrote:


Shred will determine the size of the file, then write data to the
file, rewind, write data again, etc.  On a traditional hard drive,
that will overwrite the original private information.  On modern
devices, it may not.


Thanks for the excellent explanation :)

One nitpick. You say "On a traditional hard drive, that will overwrite
the original private information" but that's not quite true. It also
needs to be a "traditional" file system! That is, not journalled or COW.

So nowadays I would expect shred not to work unless you got very
lucky, or planned carefully.



Perhaps zerofree(8)?


David



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread David Christensen

On 2/13/24 11:31, gene heskett wrote:
Next experiment is a pair of 4T 
Silicon Power SSDs. When they & the startech usb3 adapters arrive, I'll 
get that NAS built for amanda yet.



2.5" SATA SSD's and SATA to USB adapter cables for $187.97 + $10.99 = 
$198.96 each set?


https://www.amazon.com/dp/B0BVLRFFWQ

https://www.amazon.com/dp/B00HJZJI84


Why not external USB drives for $192.99?

https://www.amazon.com/dp/B0C6XVZS4K


For $7 more, you can get the "Pro edition" in black with two cables:

https://www.amazon.com/dp/B0C69QD5NK


You could plug those into the two USB-C 3.1 Gen 2 ports on your Asus 
PRIME Z370-A II motherboard.



David




Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread gene heskett

On 2/13/24 14:44, Thomas Schmitt wrote:

Hi,

Gene Heskett wrote:

Next experiment is a pair of 4T Silicon Power SSD's


When f3 has (hopefully) given its OK, the topic of a full write-and-read
test will come up again. I'm looking forward to all the spin-off topics.

I'll have to admit it has been quite educational. Now, can I remember it 
next week? YTBD.

Have a nice day :)


You too Thomas.
Take care and stay well.


Thomas


Cheers, Gene Heskett, CET.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Thomas Schmitt
Hi,

Gene Heskett wrote:
> Next experiment is a pair of 4T Silicon Power SSD's

When f3 has (hopefully) given its OK, the topic of a full write-and-read
test will come up again. I'm looking forward to all the spin-off topics.


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Thomas Schmitt
Hi,

Greg Wooledge wrote:
> Heh.  Don't forget your own attempts to use a shredder as a PRNG stream.

My original idea was to watch a minimal shred run by teeing its work into
a checksummer.

But then topic drift came in. So we got a farm show of random generators
and a discussion about what exactly is a bug in shred's documentation.
Plus the shell programming webinar. And a diagnosis about a rightfully
failed attempt to change the partition table type from MBR/DOS to GPT.

And this all because Gene Heskett was adventurous enough to buy a cheap
fake USB disk.


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread gene heskett

On 2/13/24 12:56, Thomas Schmitt wrote:

Hi,

Greg Wooledge wrote:

Let me write out the example again, but with the bug fixed, and then
explain what each line does, [... lecture about advanced shell
programming ...]


And this all because Gene Heskett was adventurous enough to buy a cheap
fake USB disk. :))


Guilty as charged, Thomas. My advantage is that it won't affect the 
length of the ladder up my side of the hog.  If I save someone else from 
getting bit by that fraud, I'm pleased.  Next experiment is a pair of 4T 
Silicon Power SSDs. When they & the startech usb3 adapters arrive, I'll 
get that NAS built for amanda yet.



Have a nice day :)


You too.


Thomas


Cheers, Gene Heskett, CET.
--
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author, 1940)
If we desire respect for the law, we must first make the law respectable.
 - Louis D. Brandeis



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Greg Wooledge
On Tue, Feb 13, 2024 at 06:54:58PM +0100, Thomas Schmitt wrote:
> Greg Wooledge wrote:
> > Let me write out the example again, but with the bug fixed, and then
> > explain what each line does, [... lecture about advanced shell
> > programming ...]
> 
> And this all because Gene Heskett was adventurous enough to buy a cheap
> fake USB disk. :))

Heh.  Don't forget your own attempts to use a shredder as a PRNG stream.

Shell redirections can be complicated, so this topic is going to
come up once in a while.  The example in the shred info page is fairly
unintuitive, so it deserves a bit of explanation.  I imagine most readers
who saw it simply accepted it as written, which is why the bug went
undiscovered for two decades.



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Thomas Schmitt
Hi,

Greg Wooledge wrote:
> Let me write out the example again, but with the bug fixed, and then
> explain what each line does, [... lecture about advanced shell
> programming ...]

And this all because Gene Heskett was adventurous enough to buy a cheap
fake USB disk. :))


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread debian-user
Greg Wooledge  wrote:

> Shred will determine the size of the file, then write data to the
> file, rewind, write data again, etc.  On a traditional hard drive,
> that will overwrite the original private information.  On modern
> devices, it may not.

Thanks for the excellent explanation :)

One nitpick. You say "On a traditional hard drive, that will overwrite
the original private information" but that's not quite true. It also
needs to be a "traditional" file system! That is, not journalled or COW.

So nowadays I would expect shred not to work unless you got very
lucky, or planned carefully.



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Greg Wooledge
On Tue, Feb 13, 2024 at 09:35:11AM -0600, David Wright wrote:
> On Tue 13 Feb 2024 at 07:15:48 (-0500), Greg Wooledge wrote:
> > On Mon, Feb 12, 2024 at 11:01:47PM -0600, David Wright wrote:
> > > … but not much. For me, "standard output" is /dev/fd/1, yet it seems
> > > unlikely that anyone is going to use >&1 in the manner of the example.
> > 
> > Standard output means "whatever file descriptor 1 points to".  That
> > could be a file, a pipe, a terminal (character device), etc.
> 
> Why pick on 1?

It's the definition.  Standard input is FD 0, standard output is FD 1,
and standard error is FD 2.

> . It demonstrates the shell syntax element required (&) in order to
>   avoid truncating the file, rather than shred overwriting it.

You are confused.  You're making assumptions about shell syntax that
are simply not true.

> > > >A FILE of ‘-’ denotes standard output.  The intended use of this is
> > > >to shred a removed temporary file.  For example:
> > > > 
> > > >   i=$(mktemp)
> > > >   exec 3<>"$i"
> > > >   rm -- "$i"
> > > >   echo "Hello, world" >&3
> > > >   shred - >&3
> > > >   exec 3>-

> Ironic that it truncates a file, and then immediately warns against
> truncating a file instead of shredding it.

No.  This is not what it does (if we fix the bug).

Let me write out the example again, but with the bug fixed, and then
explain what each line does, because apparently there is a LOT of
misunderstanding here.

  i=$(mktemp)
  exec 3<>"$i"
  rm -- "$i"
  echo "Hello, world" >&3
  shred - >&3
  exec 3>&-

The first line runs mktemp(1), which is a GNU coreutils program that
creates a temporary file, and then writes its name to standard output.
The shell syntax grabs that name and stores it in the variable "i".

So, after line 1, we have an (empty) temporary file, which was created
by a child process that has terminated.  We have its name in a variable.

Creation of temporary files works a little bit differently in shell
scripts than it does in regular programs.  In most other languages,
you would call a library function that creates the temporary file
(keeping it open), optionally unlinks it, and returns the open file
descriptor to you for use.  But you can't do that in a shell script
that needs an external program to do the file creation.  So we have this
slightly clumsy approach.

The second line opens this file for reading and writing, and ensures
that file descriptor 3 points to it.  It's important to understand that
while "exec 3>$i" would have truncated the file's contents, "exec 3<>$i"
does not.  Of course, there wasn't any content to truncate, since it was
created empty, but that's not the important part.  The important part
is that this FD is opened for read+write, allowing the temporary file
to be used for storage *and* retrieval.  We aren't doing any retrieval
in this example, but it could be done, with specialized tools.

The third line unlinks the file from the file system.  However, the shell
still has an open file descriptor which points to the file.  Therefore,
the file is still accessible through this FD.  Its inode is not recycled,
and any blocks containing file content are not marked for reuse.

This "unlink before using" technique is traditional on Unix systems.
It allows you to bypass setting up an exit handler to clean up the
temporary file.  Once the open file descriptor is closed, the file
system will mark the inode and any blocks as ready for reuse.  Even if
the script is killed by SIGKILL, that cleanup will still happen.
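
The unlink-before-use pattern can be seen without shred at all. On Linux, 
the /dev/fd interface even lets you read the already-deleted file back 
through the still-open descriptor (a sketch; Linux-specific because 
/dev/fd re-opens the underlying inode there):

```shell
tmp=$(mktemp)
exec 3<>"$tmp"             # open read+write on FD 3 without truncating
rm -- "$tmp"               # the name is gone, but FD 3 keeps the inode alive
echo "still reachable" >&3 # write through the open descriptor
cat /dev/fd/3              # re-open the deleted inode at offset 0: prints "still reachable"
exec 3>&-                  # close FD 3; now the blocks can be reclaimed
```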

The fourth line writes some content via the open file descriptor 3.  At
this point, our unlinked file now has data in it.  Presumably this data
is super private, and we don't want anyone to be able to recover it.
When the script exits, the open file descriptor will close, and the file
system will mark the file's blocks as reusable, but it won't *actually*
reuse them until something else comes along and claims them.  But that's
what shred is designed for.

The fifth line calls shred(1), instructing it to destroy the content
that's in the unlinked file.  Since the file is unlinked, it has no name,
and therefore shred can't be *given* a name.  However, we have a file
descriptor that points to it.  So, what we *can* do is point standard
output to the file (that's what >&3 does), and then tell shred to destroy
the file that's pointed to by stdout.

Shred will determine the size of the file, then write data to the file,
rewind, write data again, etc.  On a traditional hard drive, that will
overwrite the original private information.  On modern devices, it may not.
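
With the last line corrected to `exec 3>&-`, the whole info-page sequence 
runs cleanly end to end and leaves nothing behind. A quick self-check, 
run in a throwaway directory so the leftover test is unambiguous (the 
buggy original `exec 3>-` would create a file literally named '-'):

```shell
cd "$(mktemp -d)"          # scratch directory so leftovers are detectable
i=$(mktemp)
exec 3<>"$i"
rm -- "$i"
echo "Hello, world" >&3
shred - >&3                # shred the unlinked file through stdout/FD 3
exec 3>&-                  # corrected final line: close FD 3
[ -e ./- ] && echo "stray '-' file created" || echo "clean"   # prints "clean"
```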

Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread David Wright
On Tue 13 Feb 2024 at 07:15:48 (-0500), Greg Wooledge wrote:
> On Mon, Feb 12, 2024 at 11:01:47PM -0600, David Wright wrote:
> > … but not much. For me, "standard output" is /dev/fd/1, yet it seems
> > unlikely that anyone is going to use >&1 in the manner of the example.
> 
> Standard output means "whatever file descriptor 1 points to".  That
> could be a file, a pipe, a terminal (character device), etc.

Why pick on 1?

> > I might write something like: "The option ‘-’ shreds the file specified
> > by the redirection ‘>&N’", though there could be a better name for ‘>&N’.
> 
> You're assuming the program will be used from a shell.  This is *usually*
> going to be true, but nothing prevents you from writing a C program
> which closes stdout, opens a file, ensures that it's using FD 1,
> and then calls "shred -".  The documentation has to support this use
> case as well.

/As well/ — which is why I wrote N in place of 1. The original bug
report (which I hadn't seen until Thomas' post) says:
 "If you redirect output to a file it will work. Shredding a tty doesn't
  make much sense, after all."
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175#10

Now, you can't write "If you redirect output to a file it will work"
in a man page—it needs recasting into something more like what
I wrote above, which contains two key points:

. It points out that '-' is an option, not a filename or a stand-in
  for one, and it doesn't use the word standard, which is totally
  irrelevant in the circumstances.

. It demonstrates the shell syntax element required (&) in order to
  avoid truncating the file, rather than shred overwriting it.

I think that getting the "&" into the man page would be helpful
to anybody who doesn't look at the info page for the example.
It might have shortened the early part of this thread as well.

As for C programmers, neither FD number nor truncation is relevant.
Sure, you can pick 1. But you don't have to document that for shred.
And truncation is an accident that can occur because of shell's
redirect syntax: there's no equivalent in programs.

> > >A FILE of ‘-’ denotes standard output.  The intended use of this is
> > >to shred a removed temporary file.  For example:
> > > 
> > >   i=$(mktemp)
> > >   exec 3<>"$i"
> > >   rm -- "$i"
> > >   echo "Hello, world" >&3
> > >   shred - >&3
> > >   exec 3>-
> > 
> > I can see that the last line truncates the "anonymous" file,
> 
> No, that's not what it does at all.  In fact, that last line is
> written incorrectly.  It should say "exec 3>&-" and what that does
> is close file descriptor 3, which was previously opened on line 2.
> 
> What it actually does *as written* is create/truncate a file whose
> name is "-", close the previously opened FD 3, and make FD 3 point
> to the file named "-".
> 
> unicorn:~$ exec 3>-
> unicorn:~$ ls -ld -- -
> -rw-r--r-- 1 greg greg 0 Feb 13 07:12 -
> unicorn:~$ ls -l /dev/fd/3
> l-wx-- 1 greg greg 64 Feb 13 07:12 /dev/fd/3 -> /home/greg/-
> 
> This is an obvious bug in the info page.  I wonder how many years
> this has gone unnoticed.

Well spotted. That's what an experienced eye brings to a line like
that, whereas I assumed it meant something beyond my experience,
and searched for it.

Ironic that it truncates a file, and then immediately warns against
truncating a file instead of shredding it.

Cheers,
David.



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Thomas Schmitt
Hi,

"info shred" says:
> > >   i=$(mktemp)
> > >   exec 3<>"$i"
> > >   rm -- "$i"
> > >   echo "Hello, world" >&3
> > >   shred - >&3
> > >   exec 3>-

Greg Wooledge wrote:
> In fact, that last line is
> written incorrectly.  It should say "exec 3>&-" and what that does
> is close file descriptor 3, which was previously opened on line 2.
> [...]
> This is an obvious bug in the info page.  I wonder how many years
> this has gone unnoticed.

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175#36
of 22 Dec 2005 states:

  "I'll assume that this is now adequately explained in the info page
   (below).  If not then please reopen.  // Thomas Hood
   [...]
   exec 3>-"

The bug report is from 02 Aug 2002 and states that the info page contains
the short and broad promise which we can still see in "man shred".

So we can assume a bug age of 18 to 22 years.


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread tomas
On Tue, Feb 13, 2024 at 07:36:14AM -0500, Greg Wooledge wrote:
> On Tue, Feb 13, 2024 at 07:15:48AM -0500, Greg Wooledge wrote:
> > This is an obvious bug in the info page.  I wonder how many years
> > this has gone unnoticed.
> 
> I've filed Bug#1063837 for it.  <https://bugs.debian.org/1063837>

Well, thanks for doing TRT :-)

Cheers
-- 
t




Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Greg Wooledge
On Tue, Feb 13, 2024 at 07:15:48AM -0500, Greg Wooledge wrote:
> This is an obvious bug in the info page.  I wonder how many years
> this has gone unnoticed.

I've filed Bug#1063837 for it.  <https://bugs.debian.org/1063837>



Re: shred bug? [was: Unidentified subject!]

2024-02-13 Thread Greg Wooledge
On Mon, Feb 12, 2024 at 11:01:47PM -0600, David Wright wrote:
> … but not much. For me, "standard output" is /dev/fd/1, yet it seems
> unlikely that anyone is going to use >&1 in the manner of the example.

Standard output means "whatever file descriptor 1 points to".  That
could be a file, a pipe, a terminal (character device), etc.

> I might write something like: "The option ‘-’ shreds the file specified
> by the redirection ‘>&N’", though there could be a better name for ‘>&N’.

You're assuming the program will be used from a shell.  This is *usually*
going to be true, but nothing prevents you from writing a C program
which closes stdout, opens a file, ensures that it's using FD 1,
and then calls "shred -".  The documentation has to support this use
case as well.

> >A FILE of ‘-’ denotes standard output.  The intended use of this is
> >to shred a removed temporary file.  For example:
> > 
> >   i=$(mktemp)
> >   exec 3<>"$i"
> >   rm -- "$i"
> >   echo "Hello, world" >&3
> >   shred - >&3
> >   exec 3>-
> 
> I can see that the last line truncates the "anonymous" file,

No, that's not what it does at all.  In fact, that last line is
written incorrectly.  It should say "exec 3>&-" and what that does
is close file descriptor 3, which was previously opened on line 2.

What it actually does *as written* is create/truncate a file whose
name is "-", close the previously opened FD 3, and make FD 3 point
to the file named "-".

unicorn:~$ exec 3>-
unicorn:~$ ls -ld -- -
-rw-r--r-- 1 greg greg 0 Feb 13 07:12 -
unicorn:~$ ls -l /dev/fd/3
l-wx-- 1 greg greg 64 Feb 13 07:12 /dev/fd/3 -> /home/greg/-

This is an obvious bug in the info page.  I wonder how many years
this has gone unnoticed.



Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread tomas
On Mon, Feb 12, 2024 at 10:07:45PM +0100, Thomas Schmitt wrote:
> Hi,
> 
> > https://en.wikipedia.org/wiki/Everything_is_a_file
> > But, there is more than one kind of file.
> 
> "All files are equal.
>  But some files are more equal than others."
> 
> (George Orwell in his dystopic novel "Server Farm".)

:-D

Yesterday I was on the brink of quoting that. Great minds and
all of that...

The reality, though, is that if you don't design your file abstraction
with magical properties from the get-go, you will keep stumbling on
stuff you want to model that doesn't fit the real-life file design
you chose.

And yes, Plan 9, as someone else noted in this thread, takes
that to a bigger extreme: in their windowing system, windows
are files, too -- you can remove a window by deleting the
corresponding file.

Cheers
-- 
t




Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread David Wright
On Sun 11 Feb 2024 at 09:16:00 (-0600), David Wright wrote:
> On Sun 11 Feb 2024 at 09:54:24 (-0500), Greg Wooledge wrote:
> > On Sun, Feb 11, 2024 at 03:45:21PM +0100, to...@tuxteam.de wrote:
> > > Still there's the discrepancy between doc and behaviour.
> > 
> > There isn't.  The documentation says:
> > 
> > SYNOPSIS
> >shred [OPTION]... FILE...
> > 
> > DESCRIPTION
> >Overwrite  the specified FILE(s) repeatedly, in order to make it 
> > harder
> >for even very expensive hardware probing to recover the data.
> > 
> >If FILE is -, shred standard output.
> > 
> > In every sentence, the word FILE appears.  There's nothing in there
> > which says "you can operate on a non-file".
> > 
> > Once you grasp what the command is *intended* to do (rewind and overwrite
> > a file repeatedly), it makes absolutely perfect sense that it should
> > only operate on rewindable file system objects.
> > 
> > If you want it to write a stream of data instead of performing its normal
> > operation (rewinding and rewriting), that's a new feature.
> > 
> > If you'd prefer the documentation to say explicitly "only regular files
> > and block devices are allowed", that would be an upstream documentation
> > *clarification* request.
> 
> Perhaps info puts it better?

… but not much. For me, "standard output" is /dev/fd/1, yet it seems
unlikely that anyone is going to use >&1 in the manner of the example.

I might write something like: "The option ‘-’ shreds the file specified
by the redirection ‘>&N’", though there could be a better name for ‘>&N’.

>A FILE of ‘-’ denotes standard output.  The intended use of this is
>to shred a removed temporary file.  For example:
> 
>   i=$(mktemp)
>   exec 3<>"$i"
>   rm -- "$i"
>   echo "Hello, world" >&3
>   shred - >&3
>   exec 3>-

I can see that the last line truncates the "anonymous" file, but where
is that construction documented¹, and how would one parse the syntax
elements   FD  >  -   to make them mean truncate?

>However, the command ‘shred - >file’ does not shred the contents of
>FILE, since the shell truncates FILE before invoking ‘shred’.  Use
>the command ‘shred file’ or (if using a Bourne-compatible shell) the
>command ‘shred - 1<>file’ instead.

¹ the string ">-" doesn't appear in /usr/share/doc/bash/bashref.pdf,
  ver 5.1, for example.
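
For what it's worth, my reading is that bash would parse `3>-` as a
redirection that opens a regular file literally named `-` on fd 3; the
construction that actually closes a descriptor is `3>&-`. A small sketch
of the difference (my illustration, not from the info manual):

```shell
# Sketch (not from the info manual): '3>&-' closes fd 3, while '3>-'
# would open a regular file literally named '-' instead.
exec 3> /dev/null                      # open fd 3 for writing
echo "while open" >&3                  # works while fd 3 is open
exec 3>&-                              # close fd 3
if echo "after close" 2>/dev/null >&3; then
    fd3=open
else
    fd3=closed
fi
echo "fd 3 is $fd3"
```

With that reading, the example's last line would leave a stray file
named `-` in the current directory rather than closing the descriptor.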

Cheers,
David.



Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread Max Nikulin

On 12/02/2024 05:41, David Christensen wrote:


Apparently, shred(1) has both an info(1) page (?) and a man(1) page. The 
obvious solution is to write one document that is complete and correct, 
and use it everywhere -- e.g. DRY.


https://www.gnu.org/prep/standards/html_node/Man-Pages.html
6.9 Man Pages in "GNU Coding Standards"

In the GNU project, man pages are secondary. It is not necessary or
expected for every GNU program to have a man page, but some of them do.


A standalone man page is not the same as a section in a document 
describing the whole bundle.


Notice that man uses direct formatting, not logical markup. E.g. there 
is no dedicated structure for links to other man pages. They were not 
designed as hypertext documents, they are to be printed on paper. In 
this sense texinfo is a more advanced format.


P.S. I admit that in some cases "man bash" may be more convenient for 
searching than an info browser.




Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread Thomas Schmitt
Hi,

> https://en.wikipedia.org/wiki/Everything_is_a_file
> But, there is more than one kind of file.

"All files are equal.
 But some files are more equal than others."

(George Orwell in his dystopic novel "Server Farm".)


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread David Christensen

On 2/12/24 08:50, Curt wrote:

On 2024-02-11,   wrote:



On Sun, Feb 11, 2024 at 09:54:24AM -0500, Greg Wooledge wrote:

[...]


If FILE is -, shred standard output.
In every sentence, the word FILE appears.  There's nothing in there
which says "you can operate on a non-file".


Point taken, yes.


I thought everything was a file.



"Everything is a file" is a design feature of the Unix operating system:

https://en.wikipedia.org/wiki/Everything_is_a_file


But, there is more than one kind of file.


And, not every program supports every kind of file.


The manual page for find(1) provides a shopping list of file types it 
supports:


2024-02-12 12:32:13 dpchrist@laalaa ~
$ man find | egrep -A 20 '^   .type c'
   -type c
  File is of type c:

  b  block (buffered) special

  c  character (unbuffered) special

  d  directory

  p  named pipe (FIFO)

  f  regular file

  l  symbolic link; this is never true if  the
 -L option or the -follow option is in ef-
 fect, unless the symbolic link is broken.
 If  you want to search for symbolic links
 when -L is in effect, use -xtype.

  s  socket
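
Those type letters are easy to exercise against freshly created
objects; a portable-shell sketch (temporary directory, cleaned up
afterwards):

```shell
# Sketch: create a few file system objects and let find classify them
d=$(mktemp -d)
touch "$d/regular"                     # -type f
mkdir "$d/subdir"                      # -type d
mkfifo "$d/fifo"                       # -type p
ln -s regular "$d/symlink"             # -type l

regulars=$(find "$d" -mindepth 1 -type f)
fifos=$(find "$d" -type p)
links=$(find "$d" -type l)

echo "f: $regulars"
echo "p: $fifos"
echo "l: $links"
rm -rf -- "$d"
```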


As for shred(1), the argument FILE is conventionally a regular file.  We 
are discussing the special case described in the manual page:


   If FILE is -, shred standard output.


David



Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread Greg Wooledge
On Mon, Feb 12, 2024 at 04:50:50PM -, Curt wrote:
> On 2024-02-11,   wrote:
> >
> >
> > On Sun, Feb 11, 2024 at 09:54:24AM -0500, Greg Wooledge wrote:
> >
> > [...]
> >
> >>If FILE is -, shred standard output.
> >>
> >> In every sentence, the word FILE appears.  There's nothing in there
> >> which says "you can operate on a non-file".
> >
> > Point taken, yes.
> 
> I thought everything was a file.

An anonymous pipe(2) in memory between two processes isn't even close to
a file.  Also, you're confusing Linux and Plan 9.  Linux has a bunch of
things that aren't files, such as network interfaces.



Re: shred bug? [was: Unidentified subject!]

2024-02-12 Thread Curt
On 2024-02-11,   wrote:
>
>
> On Sun, Feb 11, 2024 at 09:54:24AM -0500, Greg Wooledge wrote:
>
> [...]
>
>>If FILE is -, shred standard output.
>>=20
>> In every sentence, the word FILE appears.  There's nothing in there
>> which says "you can operate on a non-file".
>
> Point taken, yes.

I thought everything was a file.



Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread David Christensen

On 2/11/24 06:54, Greg Wooledge wrote:

On Sun, Feb 11, 2024 at 03:45:21PM +0100, to...@tuxteam.de wrote:

On Sun, Feb 11, 2024 at 09:37:31AM -0500, Greg Wooledge wrote:

On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:


[...]


What Thomas was trying to do is to get a cheap, fast random number
generator. Shred seems to have such.


Well... I certainly wouldn't call it a bug.  Maybe a feature request.


Still there's the discrepancy between doc and behaviour.


There isn't.  The documentation says:

SYNOPSIS
shred [OPTION]... FILE...



I interpret the above line to be a prototype for invoking the shred(1) 
program:


* "shred" is the program name

* "[OPTION]..." is one or more option specifiers that may be omitted. 
Each should be described below.


* "FILE..." is one or more argument specifies that should be file system 
paths (strings).




DESCRIPTION
Overwrite  the specified FILE(s) repeatedly, in order to make it harder
for even very expensive hardware probing to recover the data.

If FILE is -, shred standard output.



I interpret the above line at face value -- if the caller provides a 
dash as the argument, shred(1) will operate on standard output.




In every sentence, the word FILE appears.  There's nothing in there
which says "you can operate on a non-file".



Dash is not a file, yet the above sentence says shred(1) can operate on it.



Once you grasp what the command is *intended* to do (rewind and overwrite
a file repeatedly), it makes absolutely perfect sense that it should
only operate on rewindable file system objects.



An expert may infer what you have stated, but I prefer manual pages that 
are explicit.



The GNU project must have found a need for the FILE='-' feature, 
otherwise it would not exist.  The manual page should describe that need 
(e.g. why) and how to properly use shred(1) to solve the need.




If you want it to write a stream of data instead of performing its normal
operation (rewinding and rewriting), that's a new feature.



Humans are (in)famous for doing unexpected things.



If you'd prefer the documentation to say explicitly "only regular files
and block devices are allowed", that would be an upstream documentation
*clarification* request.



Apparently, shred(1) has both an info(1) page (?) and a man(1) page. 
The obvious solution is to write one document that is complete and 
correct, and use it everywhere -- e.g. DRY.



David



Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread David Wright
On Sun 11 Feb 2024 at 09:54:24 (-0500), Greg Wooledge wrote:
> On Sun, Feb 11, 2024 at 03:45:21PM +0100, to...@tuxteam.de wrote:
> > On Sun, Feb 11, 2024 at 09:37:31AM -0500, Greg Wooledge wrote:
> > > On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:
> > 
> > [...]
> > 
> > > > What Thomas was trying to do is to get a cheap, fast random number
> > > > generator. Shred seems to have such.
> > > 
> > > Well... I certainly wouldn't call it a bug.  Maybe a feature request.
> > 
> > Still there's the discrepancy between doc and behaviour.
> 
> There isn't.  The documentation says:
> 
> SYNOPSIS
>shred [OPTION]... FILE...
> 
> DESCRIPTION
>Overwrite  the specified FILE(s) repeatedly, in order to make it harder
>for even very expensive hardware probing to recover the data.
> 
>If FILE is -, shred standard output.
> 
> In every sentence, the word FILE appears.  There's nothing in there
> which says "you can operate on a non-file".
> 
> Once you grasp what the command is *intended* to do (rewind and overwrite
> a file repeatedly), it makes absolutely perfect sense that it should
> only operate on rewindable file system objects.
> 
> If you want it to write a stream of data instead of performing its normal
> operation (rewinding and rewriting), that's a new feature.
> 
> If you'd prefer the documentation to say explicitly "only regular files
> and block devices are allowed", that would be an upstream documentation
> *clarification* request.

Perhaps info puts it better?

   A FILE of ‘-’ denotes standard output.  The intended use of this is
   to shred a removed temporary file.  For example:

  i=$(mktemp)
  exec 3<>"$i"
  rm -- "$i"
  echo "Hello, world" >&3
  shred - >&3
  exec 3>-

   However, the command ‘shred - >file’ does not shred the contents of
   FILE, since the shell truncates FILE before invoking ‘shred’.  Use
   the command ‘shred file’ or (if using a Bourne-compatible shell) the
   command ‘shred - 1<>file’ instead.
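
The difference the last paragraph describes can be checked directly; a
sketch assuming GNU coreutils shred (the "invalid file type" refusal on
pipes is the behaviour reported earlier in this thread):

```shell
# Sketch: shred '-' succeeds when fd 1 is a regular file opened
# read-write, but is refused when fd 1 is a pipe.
f=$(mktemp)
printf '0123456789abcdef' > "$f"
if shred -n 1 - 1<>"$f"; then regular=ok; else regular=fail; fi
piped_bytes=$(shred -n 1 -s 16 - 2>/dev/null | wc -c)
echo "regular=$regular piped_bytes=$piped_bytes"
rm -f -- "$f"
```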

Cheers,
David.


Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread Thomas Schmitt
Hi,

to...@tuxteam.de wrote:
> Still there's the discrepancy between doc and behaviour.

Depends on which documentation you look at. Obviously stemming from
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175#36
i read in
  https://www.gnu.org/software/coreutils/manual/html_node/shred-invocation.html

 "A file of ‘-’ denotes standard output. The intended use of this is to
  shred a removed temporary file. For example: [shell wizardry]"

It works as long as stdout is connected to a data file, or block device,
or directory, or symbolic link, or to a character device that is not a
terminal.
(Maybe it refuses later on some of these types, but not at the location
with the message "invalid file type". I wonder if i can connect stdout
to a symbolic link instead of its target.)

The bug would thus have to be filed against the man page
  https://sources.debian.org/src/coreutils/9.4-3/man/shred.1/
which says only

  "If FILE is \-, shred standard output."

The info empire of coreutils says what above web manual says.
  https://sources.debian.org/src/coreutils/9.4-3/doc/coreutils.texi/#L10705


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread tomas
On Sun, Feb 11, 2024 at 09:54:24AM -0500, Greg Wooledge wrote:

[...]

>If FILE is -, shred standard output.
> 
> In every sentence, the word FILE appears.  There's nothing in there
> which says "you can operate on a non-file".

Point taken, yes.

Cheers
-- 
t




Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread Greg Wooledge
On Sun, Feb 11, 2024 at 03:45:21PM +0100, to...@tuxteam.de wrote:
> On Sun, Feb 11, 2024 at 09:37:31AM -0500, Greg Wooledge wrote:
> > On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:
> 
> [...]
> 
> > > What Thomas was trying to do is to get a cheap, fast random number
> > > generator. Shred seems to have such.
> > 
> > Well... I certainly wouldn't call it a bug.  Maybe a feature request.
> 
> Still there's the discrepancy between doc and behaviour.

There isn't.  The documentation says:

SYNOPSIS
   shred [OPTION]... FILE...

DESCRIPTION
   Overwrite  the specified FILE(s) repeatedly, in order to make it harder
   for even very expensive hardware probing to recover the data.

   If FILE is -, shred standard output.

In every sentence, the word FILE appears.  There's nothing in there
which says "you can operate on a non-file".

Once you grasp what the command is *intended* to do (rewind and overwrite
a file repeatedly), it makes absolutely perfect sense that it should
only operate on rewindable file system objects.

If you want it to write a stream of data instead of performing its normal
operation (rewinding and rewriting), that's a new feature.

If you'd prefer the documentation to say explicitly "only regular files
and block devices are allowed", that would be an upstream documentation
*clarification* request.



Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread tomas
On Sun, Feb 11, 2024 at 09:37:31AM -0500, Greg Wooledge wrote:
> On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:

[...]

> > What Thomas was trying to do is to get a cheap, fast random number
> > generator. Shred seems to have such.
> 
> Well... I certainly wouldn't call it a bug.  Maybe a feature request.

Still there's the discrepancy between doc and behaviour.

Cheers
-- 
t




Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread Greg Wooledge
On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:
> On Sat, Feb 10, 2024 at 07:10:54PM -0500, Greg Wooledge wrote:
> > On Sat, Feb 10, 2024 at 04:05:21PM -0800, David Christensen wrote:
> > > 2024-02-10 16:03:50 dpchrist@laalaa ~
> > > $ shred -s 1K - | wc -c
> > > shred: -: invalid file type
> > > 0
> > > 
> > > 
> > > It looks like a shred(1) needs a bug report.
> > 
> > I'm confused what you expected this command to do.  You wanted to
> > "destroy" (by overwriting with random data) a pipe to wc?  What
> > would that even look like?
> 
> What Thomas was trying to do is to get a cheap, fast random number
> generator. Shred seems to have such.

Well... I certainly wouldn't call it a bug.  Maybe a feature request.



Re: shred bug?

2024-02-11 Thread Thomas Schmitt
Hi,

debian-u...@howorth.org.uk wrote:
> Maybe it is unstated but mandatory to use -n 1 as well?
> And optionally -s N?

Naw. It just doesn't want to work on pipes.

Initially i tried with these options:

  shred -n 1 -s 1K -v - | sha256sum

as preparation for a proposal to Gene Heskett, like:
  shred -n 1 -s 204768K -v - | tee /dev/sdm1 | sha256sum


> I expect reading the code would tell.

My code analysis is in
  
https://lists.debian.org/msgid-search/1162291656137153...@scdbackup.webframe.org

to...@tuxteam.de found bug 155175 from 2002, which explains why. See
  https://lists.debian.org/msgid-search/zcdxzya0ayrmp...@tuxteam.de


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-11 Thread debian-user
David Christensen  wrote:
> On 2/10/24 16:10, Greg Wooledge wrote:
> > On Sat, Feb 10, 2024 at 04:05:21PM -0800, David Christensen wrote:  
> >> 2024-02-10 16:03:50 dpchrist@laalaa ~
> >> $ shred -s 1K - | wc -c
> >> shred: -: invalid file type
> >> 0
> >>
> >>
> >> It looks like a shred(1) needs a bug report.  
> > 
> > I'm confused what you expected this command to do.  You wanted to
> > "destroy" (by overwriting with random data) a pipe to wc?  What
> > would that even look like?
> > 
> > The basic premise of shred is that it determines the size of the
> > file, then writes data over it, rewinds it, and repeats this a few
> > times. A pipe to a process has no size, and can't be rewound.
> > 
> > Declaring a pipe to be an "invalid file type" for shredding sounds
> > pretty reasonable to me.  
> 
> 
> The documentation is confusing:
> 
> On 2/10/24 16:05, David Christensen wrote:
>  > 2024-02-10 16:03:42 dpchrist@laalaa ~
>  > $ man shred | grep 'If FILE is -'
>  > If FILE is -, shred standard output.  

Maybe it is unstated but mandatory to use -n 1 as well?
And optionally -s N?
I expect reading the code would tell.

First time I've read the man page properly.
Interesting point about COW filesystems such as btrfs :)



Re: shred bug? [was: Unidentified subject!]

2024-02-10 Thread tomas
On Sat, Feb 10, 2024 at 07:10:54PM -0500, Greg Wooledge wrote:
> On Sat, Feb 10, 2024 at 04:05:21PM -0800, David Christensen wrote:
> > 2024-02-10 16:03:50 dpchrist@laalaa ~
> > $ shred -s 1K - | wc -c
> > shred: -: invalid file type
> > 0
> > 
> > 
> > It looks like a shred(1) needs a bug report.
> 
> I'm confused what you expected this command to do.  You wanted to
> "destroy" (by overwriting with random data) a pipe to wc?  What
> would that even look like?

What Thomas was trying to do is to get a cheap, fast random number
generator. Shred seems to have such.

> The basic premise of shred is that it determines the size of the file,
> then writes data over it, rewinds it, and repeats this a few times.
> A pipe to a process has no size, and can't be rewound.

That's right: stdout is a (potentially) infinite file, so only one
pass (-n 1, as Thomas put in the command line) really makes sense.
Unless you are into transfinite numbers, that is :-)

> Declaring a pipe to be an "invalid file type" for shredding sounds
> pretty reasonable to me.

This is one of those cases of the toolmaker knowing better than the
tool user. One of the things I like about UNIX is that this attitude isn't
that widespread (though it is slowly spreading, alas). I much prefer those
tool makers who say "surprise me".

Of course, helping people to not shoot themselves in the foot is
also a honourable endeavour. So "you are trying to shred a pipe.
Use option -f for that (see the manual page)" would be fully OK
in my book.

I don't like opinionated software. It's messy enough as it is
when people have opinions :-)

Cheers
-- 
t




Re: Debian weather bug

2024-02-10 Thread Greg Marks
> > I have a bug report but am not sure which package it should be filed
> > against.  The Weather Report application, version 1.24.1, is affected,
> > as is the weather reported by the Clock application, version 1.24.1, in
> > the MATE desktop environment.  Neither reports the correct weather, and
> > the log files contain the error "Failed to get METAR data."  I suspect
> > the issue is related to a URL change described in these two threads:
> > 
> >
> > https://www.reddit.com/r/flying/comments/179zm7o/for_those_of_you_who_made_metar_live/
> >https://www.reddit.com/r/flying/comments/179bmcj/metar_map_issues/
> > 
> > How should this be reported?
> > 
> 
> Update: the following are relevant lines from /var/log/syslog:
> 
>Oct 19 18:10:09 debian mateweather-app[3922]: Failed to get METAR data: 
> 404 Not Found.
>Oct 19 18:10:09 debian mateweather-app[3922]: Failed to get IWIN forecast 
> data: 404 Not Found
>Oct 19 18:10:09 debian mateweather-app[3922]: Source ID 50501 was not 
> found when attempting to remove it
>Oct 19 18:14:54 debian mateweather-app[3922]: Failed to get METAR data: 
> 404 Not Found.
>Oct 19 18:14:54 debian mateweather-app[3922]: Source ID 50562 was not 
> found when attempting to remove it
>Oct 19 18:23:55 debian clock-applet[3927]: Failed to get METAR data: 404 
> Not Found.
> 
> Note that two different applications are affected by this bug.

The latest upgrade of the packages libmateweather-common and
libmateweather1 (both to version 1.24.1-1+deb11u1) has fixed the
problem; the two weather applications are now functioning properly.

Best regards,
Greg Marks




Re: shred bug? [was: Unidentified subject!]

2024-02-10 Thread David Christensen

On 2/10/24 16:10, Greg Wooledge wrote:

On Sat, Feb 10, 2024 at 04:05:21PM -0800, David Christensen wrote:

2024-02-10 16:03:50 dpchrist@laalaa ~
$ shred -s 1K - | wc -c
shred: -: invalid file type
0


It looks like shred(1) needs a bug report.


I'm confused what you expected this command to do.  You wanted to
"destroy" (by overwriting with random data) a pipe to wc?  What
would that even look like?

The basic premise of shred is that it determines the size of the file,
then writes data over it, rewinds it, and repeats this a few times.
A pipe to a process has no size, and can't be rewound.

Declaring a pipe to be an "invalid file type" for shredding sounds
pretty reasonable to me.



The documentation is confusing:

On 2/10/24 16:05, David Christensen wrote:
> 2024-02-10 16:03:42 dpchrist@laalaa ~
> $ man shred | grep 'If FILE is -'
> If FILE is -, shred standard output.


David




Re: shred bug? [was: Unidentified subject!]

2024-02-10 Thread Greg Wooledge
On Sat, Feb 10, 2024 at 04:05:21PM -0800, David Christensen wrote:
> 2024-02-10 16:03:50 dpchrist@laalaa ~
> $ shred -s 1K - | wc -c
> shred: -: invalid file type
> 0
> 
> 
> It looks like a shred(1) needs a bug report.

I'm confused what you expected this command to do.  You wanted to
"destroy" (by overwriting with random data) a pipe to wc?  What
would that even look like?

The basic premise of shred is that it determines the size of the file,
then writes data over it, rewinds it, and repeats this a few times.
A pipe to a process has no size, and can't be rewound.

Declaring a pipe to be an "invalid file type" for shredding sounds
pretty reasonable to me.



Re: shred bug? [was: Unidentified subject!]

2024-02-10 Thread David Christensen

On 2/10/24 04:40, to...@tuxteam.de wrote:

On Sat, Feb 10, 2024 at 11:38:21AM +0100, Thomas Schmitt wrote:

[...]


But shred(1) on Debian 11 refuses on "-" contrary to its documentation:
   shred: -: invalid file type
A non-existing file path causes "No such file or directory".


Hmm. This looks like a genuine bug: the man page mentions it.

Also, /dev/stdout as target runs into the very same problem.

Cheers



Testing:

2024-02-10 16:01:54 dpchrist@laalaa ~
$ cat /etc/debian_version ; uname -a
11.8
Linux laalaa 5.10.0-27-amd64 #1 SMP Debian 5.10.205-2 (2023-12-31) 
x86_64 GNU/Linux


2024-02-10 16:02:34 dpchrist@laalaa ~
$ bash --version | head -n 1
GNU bash, version 5.1.4(1)-release (x86_64-pc-linux-gnu)

2024-02-10 16:02:48 dpchrist@laalaa ~
$ shred --version | head -n 1
shred (GNU coreutils) 8.32

2024-02-10 16:03:42 dpchrist@laalaa ~
$ man shred | grep 'If FILE is -'
   If FILE is -, shred standard output.

2024-02-10 16:03:50 dpchrist@laalaa ~
$ shred -s 1K - | wc -c
shred: -: invalid file type
0


It looks like shred(1) needs a bug report.


David



Re: shred bug?

2024-02-10 Thread tomas
On Sat, Feb 10, 2024 at 02:58:06PM +0100, Thomas Schmitt wrote:
> Hi,
> 
> to...@tuxteam.de wrote:
> > Ah, it seems to be this one, from 2002:
> >  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175
> 
> So it's not a bug but a feature. :(
> 
> I'm riddling over the code about the connection to an old graphics
> algorithm (Bresenham's Algorithm) and how shred produces a random pattern
> at all.

This [1] perhaps? It's not a good random generator (and not crypto,
by a long stretch) but it's pretty well equidistributed, ain't it?

Cheers

[1] https://en.wikipedia.org/wiki/Lehmer_random_number_generator
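
For reference, the Lehmer scheme itself fits in a couple of lines of
shell arithmetic; whether shred's internal generator actually matches
it is exactly the open question here:

```shell
# Sketch of a Park-Miller/Lehmer generator (multiplier 48271,
# modulus 2^31 - 1); illustrative only, not shred's actual code.
seed=12345
lehmer() { seed=$(( seed * 48271 % 2147483647 )); }
lehmer; r1=$seed
lehmer; r2=$seed
echo "$r1 $r2"
```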
-- 
t




Re: shred bug?

2024-02-10 Thread Thomas Schmitt
Hi,

to...@tuxteam.de wrote:
> Ah, it seems to be this one, from 2002:
>  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175

So it's not a bug but a feature. :(

I'm riddling over the code about the connection to an old graphics
algorithm (Bresenham's Algorithm) and how shred produces a random pattern
at all.


Have a nice day :)

Thomas



Re: shred bug?

2024-02-10 Thread Gremlin

On 2/10/24 08:32, Thomas Schmitt wrote:

Hi,

i wrote:

   shred: -: invalid file type


to...@tuxteam.de wrote:

Hmm. This looks like a genuine bug: the man page mentions it.


Even the help text in
   https://sources.debian.org/src/coreutils/9.4-3/src/shred.c/
says
   If FILE is -, shred standard output.

The name "-" is recognized in line 1257 and leads to a call of wipefd()
in line 958. The error messages there look different from above.
So i hop from line 973 to do_wipefd() and find the message in line 845.
fd is supposed to be 1 (= stdout). fstat(2) was successful but now this
condition snaps:

   if ((S_ISCHR (st.st_mode) && isatty (fd))
   || S_ISFIFO (st.st_mode)
   || S_ISSOCK (st.st_mode))

The problem seems to be in the S_ISFIFO part.
In a little test program fstat() reports about fd==1:
   st.st_mode= 010600 , S_ISFIFO(st.st_mode)= 1
(st_mode shown in octal as in man 2 fstat.)

It looks like the test expects a pipe(2) file descriptor to be
classified as S_ISCHR and !isatty().
Without redirection through a pipe, fd 1 has st.st_mode 20620, S_ISCHR,
and isatty()==1. The isatty() result is indeed a good reason, not to
flood stdout by a zillion random bytes.


Does anybody have an old GNU/Linux system where a file descriptor from
pipe(2) is classified as character device (S_IFCHR, S_ISCHR) ?



What about dash?





Re: shred bug?

2024-02-10 Thread Thomas Schmitt
Hi,

i wrote:
> >   shred: -: invalid file type

to...@tuxteam.de wrote:
> Hmm. This looks like a genuine bug: the man page mentions it.

Even the help text in
  https://sources.debian.org/src/coreutils/9.4-3/src/shred.c/
says
  If FILE is -, shred standard output.

The name "-" is recognized in line 1257 and leads to a call of wipefd()
in line 958. The error messages there look different from above.
So i hop from line 973 to do_wipefd() and find the message in line 845.
fd is supposed to be 1 (= stdout). fstat(2) was successful but now this
condition snaps:

  if ((S_ISCHR (st.st_mode) && isatty (fd))
  || S_ISFIFO (st.st_mode)
  || S_ISSOCK (st.st_mode))

The problem seems to be in the S_ISFIFO part.
In a little test program fstat() reports about fd==1:
  st.st_mode= 010600 , S_ISFIFO(st.st_mode)= 1
(st_mode shown in octal as in man 2 fstat.)

It looks like the test expects a pipe(2) file descriptor to be
classified as S_ISCHR and !isatty().
Without redirection through a pipe, fd 1 has st.st_mode 20620, S_ISCHR,
and isatty()==1. The isatty() result is indeed a good reason not to
flood stdout with a zillion random bytes.


Does anybody have an old GNU/Linux system where a file descriptor from
pipe(2) is classified as character device (S_IFCHR, S_ISCHR) ?

Does anybody have an idea why shred would want to exclude fifos ?
(What could shred do with a non-tty character device that cannot be done
 with a fifo ?)


Have a nice day :)

Thomas



Re: shred bug? [was: Unidentified subject!]

2024-02-10 Thread tomas
On Sat, Feb 10, 2024 at 01:40:35PM +0100, to...@tuxteam.de wrote:
> On Sat, Feb 10, 2024 at 11:38:21AM +0100, Thomas Schmitt wrote:
> 
> [...]
> 
> > But shred(1) on Debian 11 refuses on "-" contrary to its documentation:
> >   shred: -: invalid file type
> > A non-existing file path causes "No such file or directory".
> 
> Hmm. This looks like a genuine bug: the man page mentions it.
> 
> Also, /dev/stdout as target runs into the very same problem.

Ah, it seems to be this one, from 2002:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=155175

which is archived. The argument seems to be that shred on stdout
doesn't make any sense, because the shell would truncate the
file anyway when you did

  shred - > /this/file/to/be/shredded

... which, of course, undermines shred's purpose. It seems they
hadn't your sneaky use case in mind :-)

Cheers
-- 
t




shred bug? [was: Unidentified subject!]

2024-02-10 Thread tomas
On Sat, Feb 10, 2024 at 11:38:21AM +0100, Thomas Schmitt wrote:

[...]

> But shred(1) on Debian 11 refuses on "-" contrary to its documentation:
>   shred: -: invalid file type
> A non-existing file path causes "No such file or directory".

Hmm. This looks like a genuine bug: the man page mentions it.

Also, /dev/stdout as target runs into the very same problem.

Cheers
-- 
t




Re: NUC freezing due to kernel bug

2024-02-07 Thread Dan Ritter
Tim Janssen wrote: 
> I use debian server on my NUC to run a low powered home server. It freezes
> every 2-3 days what looks to be a kernel bug. From a lot of testing it only
> occurs when the ethernet cable is inserted and it seems it has to do
> something with low power mode (c-states). These issues have been reported
> ever since kernel 5.10. I wonder if the debian devs are aware of this issue
> and if a fix is undereway.


Things that you really ought to tell us:

- CPU?
- model or motherboard identifier?
- have you tried disabling low C states in BIOS?
- you say it's been reported across major kernel releases --
  what bug numbers?
- are there any log entries written before it freezes?
- can you avoid it by not unplugging/replugging the ethernet
  cable?
- what ethernet NIC is in use?
- have you used ethtool or another means to disable power saving on
  the NIC?

...and probably a dozen other things, but start there.

-dsr-



NUC freezing due to kernel bug

2024-02-07 Thread Tim Janssen
Dear Sir/Madam,

I use Debian server on my NUC to run a low-powered home server. It freezes
every 2-3 days in what looks to be a kernel bug. From a lot of testing, it only
occurs when the ethernet cable is inserted, and it seems to have something
to do with low-power mode (C-states). These issues have been reported
ever since kernel 5.10. I wonder if the Debian devs are aware of this issue
and if a fix is underway.

Best regards,

Tim Janssen


Re: Bug: Tab completion for pdf files with blanks in path

2024-02-04 Thread Max Nikulin

On 30/01/2024 12:50, David Wright wrote:

On 30/01/2024 02:51, David Wright wrote:

. Press HOME,
. Type any letter that makes a "wrong" command name (eg aokular),
. Press END,

[...]

However, using my "wrong" command method, Tab Tab lists are complete
all the way down the path. You can then correct the command in order
to prune the Tab Tab listing to include just the candidates
(and in preparation for actually executing the command, of course).


I used a trick with a non-existing command till I figured out that 
[Alt+/] may complete paths for real commands. Pressed twice, it gives a 
list of candidates, so I do not see any difference from Tab Tab. Perhaps 
I just use it rarely enough that moving the cursor feels less 
convenient. Two keys instead of a single Tab are not a problem; anyway, 
I use [Ctrl+/] (undo) frequently enough.


Concerning the bug, maybe upstream is aware of it
https://github.com/scop/bash-completion/issues



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread David Wright
On Tue 30 Jan 2024 at 10:34:21 (+0700), Max Nikulin wrote:
> On 30/01/2024 02:51, David Wright wrote:
> > . Press HOME,
> > . Type any letter that makes a "wrong" command name (eg aokular),
> > . Press END,
> 
> The escape "Esc /" workaround has been posted in this thread already.

Yes, I believe I posted it.

But if you have a long path with many ambiguous branches along the
way, using Esc / gets very tedious because you get no help with
choosing what character to write next. Tab Tab doesn't do anything
until you reach a directory with "candidates" in it (ie files with
appropriate extensions).

But even then, Tab Tab does the wrong thing. It only lists the
candidates, not any directories that can continue the path further.

However, using my "wrong" command method, Tab Tab lists are complete
all the way down the path. You can then correct the command in order
to prune the Tab Tab listing to include just the candidates
(and in preparation for actually executing the command, of course).

> It uses built-in readline path completion instead of BASH programmable
> completion. It may be available as [Alt+/] (in xterm it requires
> xterm*vt100.metaSendsEscape: true)
> 
> [Ctrl+A] and [Ctrl+E] are alternatives for [Home] and [End].
> 
> For details see the BASH manual
> 
> info '(bash) Commands For Completion'
> 
> "complete-filename" function and other sections related to readline
> and completion.
> https://www.gnu.org/software/bash/manual/html_node/Commands-For-Completion.html#index-complete_002dfilename-_0028M_002d_002f_0029

To Greg: Thanks for explaining Michael's true motives.

Cheers,
David.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Max Nikulin

On 30/01/2024 02:51, David Wright wrote:

. Press HOME,
. Type any letter that makes a "wrong" command name (eg aokular),
. Press END,


The escape "Esc /" workaround has been posted in this thread already. It 
uses built-in readline path completion instead of BASH programmable 
completion. It may be available as [Alt+/] (in xterm it requires 
xterm*vt100.metaSendsEscape: true)


[Ctrl+A] and [Ctrl+E] are alternatives for [Home] and [End].

For details see the BASH manual

info '(bash) Commands For Completion'

"complete-filename" function and other sections related to readline and 
completion.

https://www.gnu.org/software/bash/manual/html_node/Commands-For-Completion.html#index-complete_002dfilename-_0028M_002d_002f_0029



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Michael Kiermaier

On 1/29/24 20:59, Greg Wooledge wrote:

complete -r isn't intended as a workaround.  It's intended as a diagnostic
step.

Seeing the problem go away when completion goes away means that the
problem is *in* the completion.  Thus, he knows which package to file
a bug report against.


Yes, I understood that 'complete -r' is for diagnostics.

I've submitted this bug report now:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1061831

Thank you again for your help.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Greg Wooledge
On Mon, Jan 29, 2024 at 01:51:19PM -0600, David Wright wrote:
> On Mon 29 Jan 2024 at 19:31:50 (+0100), Michael Kiermaier wrote:
> > Thank you for your responses! After 'complete -r' the problem
> > disappears. I should add that I never touched the autocomplete settings.
> 
> No, but you lose your so-called component (2) filtering.
> 
> For me, a better workaround is, when the directory path gets "stuck":
> 
> . Press HOME,
> . Type any letter that makes a "wrong" command name (eg aokular),
> . Press END,
> . Press TAB and carry on using completion for directory/filenames,
> . Once you reach the right directory, and if you need filtering,
>   press HOME DELETE END and you've got filtering back again.
> . Obviously press HOME DELETE if you didn't do the previous step.
> 
> > I will submit a bug report for the package bash-completion.

complete -r isn't intended as a workaround.  It's intended as a diagnostic
step.

Seeing the problem go away when completion goes away means that the
problem is *in* the completion.  Thus, he knows which package to file
a bug report against.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread David Wright
On Mon 29 Jan 2024 at 19:31:50 (+0100), Michael Kiermaier wrote:
> On 1/29/24 18:59, Greg Wooledge wrote:
> > On Tue, Jan 30, 2024 at 12:05:24AM +0700, Max Nikulin wrote:
> > > On 29/01/2024 19:40, Greg Wooledge wrote:
> > > > Let me test that as well
> > > [...]
> > > > unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file
> > > 
> > > "okular" is important here. Only limited set of file name suffixes are
> > > allowed for some commands. You do not need to have okular installed,
> > > completion rules are part of bash-completion.
> > 
> > That's my point as well.  I'm trying to get the OP to determine whether
> > it's the programmable completion for "okular" in particular that's at
> > fault, or bash itself (hint: it's not).
> 
> Thank you for your responses! After 'complete -r' the problem
> disappears. I should add that I never touched the autocomplete settings.

No, but you lose your so-called component (2) filtering.

For me, a better workaround is, when the directory path gets "stuck":

. Press HOME,
. Type any letter that makes a "wrong" command name (eg aokular),
. Press END,
. Press TAB and carry on using completion for directory/filenames,
. Once you reach the right directory, and if you need filtering,
  press HOME DELETE END and you've got filtering back again.
. Obviously press HOME DELETE if you didn't do the previous step.

> I will submit a bug report for the package bash-completion.

Cheers,
David.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Michael Kiermaier

On 1/29/24 18:59, Greg Wooledge wrote:

On Tue, Jan 30, 2024 at 12:05:24AM +0700, Max Nikulin wrote:

On 29/01/2024 19:40, Greg Wooledge wrote:

Let me test that as well

[...]

unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file


"okular" is important here. Only limited set of file name suffixes are
allowed for some commands. You do not need to have okular installed,
completion rules are part of bash-completion.


That's my point as well.  I'm trying to get the OP to determine whether
it's the programmable completion for "okular" in particular that's at
fault, or bash itself (hint: it's not).


Thank you for your responses! After 'complete -r' the problem
disappears. I should add that I never touched the autocomplete settings.

I will submit a bug report for the package bash-completion.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread David Wright
On Mon 29 Jan 2024 at 12:59:39 (-0500), Greg Wooledge wrote:
> On Tue, Jan 30, 2024 at 12:05:24AM +0700, Max Nikulin wrote:
> > On 29/01/2024 19:40, Greg Wooledge wrote:
> > > Let me test that as well
> > [...]
> > > unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file
> > 
> > "okular" is important here. Only limited set of file name suffixes are
> > allowed for some commands. You do not need to have okular installed,
> > completion rules are part of bash-completion.
> 
> That's my point as well.  I'm trying to get the OP to determine whether
> it's the programmable completion for "okular" in particular that's at
> fault, or bash itself (hint: it's not).
> 
> In my demonstration, all programmable completions were disabled.  I
> never use them to begin with.  So, in my demonstration, the command
> name is completely irrelevant.

No, it's pretty much any command that wants to match particular
extensions, like xpdf, dvips, unzip, etc. Obviously bash-completion
should always attempt to match directories as it doesn't know what
they might contain.

Cheers,
David.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Greg Wooledge
On Tue, Jan 30, 2024 at 12:05:24AM +0700, Max Nikulin wrote:
> On 29/01/2024 19:40, Greg Wooledge wrote:
> > Let me test that as well
> [...]
> > unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file
> 
> "okular" is important here. Only limited set of file name suffixes are
> allowed for some commands. You do not need to have okular installed,
> completion rules are part of bash-completion.

That's my point as well.  I'm trying to get the OP to determine whether
it's the programmable completion for "okular" in particular that's at
fault, or bash itself (hint: it's not).

In my demonstration, all programmable completions were disabled.  I
never use them to begin with.  So, in my demonstration, the command
name is completely irrelevant.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Max Nikulin

On 29/01/2024 19:40, Greg Wooledge wrote:

Let me test that as well

[...]

unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file


"okular" is important here. Only limited set of file name suffixes are 
allowed for some commands. You do not need to have okular installed, 
completion rules are part of bash-completion.





Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread David Wright
On Mon 29 Jan 2024 at 07:40:13 (-0500), Greg Wooledge wrote:
> On Mon, Jan 29, 2024 at 09:32:18AM +0100, Michael Kiermaier wrote:
> > I would like to run okular opening the pdf file
> > ~/dir1\ with\ blanks/dir2/file.pdf
> > via command line. In konsole I type
> > okular ~/dir1\ with\ blanks/
> > and hit the tab key twice for autocomplete. But I won't get offered
> > dir2. After adding more letters like
> 
> My first question would be: does the problem still occur if you disable
> bash-completion?  Open a new instance of bash and run "complete -r" to
> remove all programmable completions.  See if the problem still occurs.
> Then close that instance of bash.
> 
> > okular ~/dir1\ with\ blanks/di
> > to make the completion to dir2 unique
> 
> Oh, there's more than one subdir?  Let me test that as well
> 
> Yeah, even with both dir1 and dir2 (each containing a file), I still get
> the expected behavior in bash without bash-completion in the picture.
> 
> unicorn:~$ cd /tmp
> unicorn:/tmp$ mkdir -p 'dir with blanks'/dir2
> unicorn:/tmp$ touch "$_"/file
> 
> (first experiments with tab completion, not shown)
> 
> unicorn:/tmp$ mkdir -p 'dir with blanks'/dir1
> unicorn:/tmp$ touch "$_"/otherfile
> unicorn:/tmp$ xyz dir\ with\ blanks/dir
> dir1/ dir2/ 
> unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file 
> 
> I'm assuming whatever issue you're seeing is the result of a
> bash-completion bug, not a bash bug.  If you can confirm that, then
> you'll know which package to file a bug against.

Unless I missed a bit in the OP, the bug is actually worse.
Type di and press TAB, and bash-completion gives you
dir\ with\ blanks/, which is fine. But now rub out the "nks/" at the
end and press TAB: it fails to complete even that directory name.

However, there's a workaround, which you really have to know
about if you're a bash-completion user, and that is:

  ESCAPE /

AFAICT you won't get the list of possibilities as you would normally,
but it should autocomplete as far as the string remains unique.

Cheers,
David.



Re: Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Greg Wooledge
On Mon, Jan 29, 2024 at 09:32:18AM +0100, Michael Kiermaier wrote:
> I would like to run okular opening the pdf file
>   ~/dir1\ with\ blanks/dir2/file.pdf
> via command line. In konsole I type
> okular ~/dir1\ with\ blanks/
> and hit the tab key twice for autocomplete. But I won't get offered
> dir2. After adding more letters like

My first question would be: does the problem still occur if you disable
bash-completion?  Open a new instance of bash and run "complete -r" to
remove all programmable completions.  See if the problem still occurs.
Then close that instance of bash.

>   okular ~/dir1\ with\ blanks/di
> to make the completion to dir2 unique

Oh, there's more than one subdir?  Let me test that as well

Yeah, even with both dir1 and dir2 (each containing a file), I still get
the expected behavior in bash without bash-completion in the picture.

unicorn:~$ cd /tmp
unicorn:/tmp$ mkdir -p 'dir with blanks'/dir2
unicorn:/tmp$ touch "$_"/file

(first experiments with tab completion, not shown)

unicorn:/tmp$ mkdir -p 'dir with blanks'/dir1
unicorn:/tmp$ touch "$_"/otherfile
unicorn:/tmp$ xyz dir\ with\ blanks/dir
dir1/ dir2/ 
unicorn:/tmp$ xyz dir\ with\ blanks/dir2/file 

I'm assuming whatever issue you're seeing is the result of a
bash-completion bug, not a bash bug.  If you can confirm that, then
you'll know which package to file a bug against.
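The same demonstration can be scripted: bash's completion engine is exposed non-interactively through the compgen builtin, so a sketch like the following (scratch paths are illustrative, and it assumes the shell running it is bash) shows plain filename completion coping with blanks on its own:

```shell
# Recreate the test tree in a scratch directory and ask bash's built-in
# filename completion (compgen -f) for candidates.  Blanks in the parent
# directory name are handled fine without bash-completion in the picture.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p 'dir with blanks/dir1' 'dir with blanks/dir2'
touch 'dir with blanks/dir2/file.pdf'
compgen -f -- 'dir with blanks/dir'
# lists both: dir with blanks/dir1 and dir with blanks/dir2
```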



Bug: Tab completion for pdf files with blanks in path

2024-01-29 Thread Michael Kiermaier

Dear Debian Team,

I think I found a bug, and I'm writing to this list as I don't know the
associated package (according to https://www.debian.org/Bugs/Reporting).
I'm experiencing this bug in konsole (KDE's terminal emulator), but the
same bug has been reported here
https://askubuntu.com/q/133
in xfce-terminal on Ubuntu 20.04, so I don't think it is a konsole or a
KDE bug. Maybe the affected package is bash-completion, but I don't
know for sure.


I would like to run okular opening the pdf file
~/dir1\ with\ blanks/dir2/file.pdf
via command line. In konsole I type
okular ~/dir1\ with\ blanks/
and hit the tab key twice for autocomplete. But I won't get offered
dir2. After adding more letters like
okular ~/dir1\ with\ blanks/di
to make the completion to dir2 unique, nothing happens at all after
hitting tab (twice). Only after spelling out the complete directory as
okular ~/dir1\ with\ blanks/dir2
and then hitting tab, autocomplete works again as expected.


My feeling is that there are two components which together trigger this bug.

(1) Blanks in the path.
The blanks in dir1\ with\ blanks, because renaming it to something
without a blank like dir1 makes the problem disappear. Also, note that
adding a blank to dir2 is not a problem.

(2) Automatic filtering of autocompletion candidates.
Starting the command with 'okular' and then hitting tab will only
complete to pdf files. When I do the same with 'ls' instead of 'okular',
no filtering takes place, and the above problem disappears (meaning that
typing
ls ~/dir1\ with\ blanks/
and hitting TAB twice will offer me dir2).


Relevant packages:

I'm on Debian 12 (stable) amd64.

konsole  4:22.12.3-1
bash  5.2.15-2+b2
bash-completion   1:2.11-6

$ uname -a
Linux goblin 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1
(2023-12-30) x86_64 GNU/Linux

Thank you.

Best,

~Michael



Re: mv bug - cannot move to subdirectory of itself.

2024-01-28 Thread David Christensen

On 1/28/24 03:44, Brett Sutton wrote:

So I'm not certain if I'm in the right spot but I had to start somewhere.

I have a docker container that was working but has suddenly stopped working.
I believe the possible cause was when I added a second drive to my zfs
rpool - the timing was a little too coincidental.



Please post:

# zpool status rpool



The docker command sequence I'm running is:

```
RUN wget
https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-1.3.2-linux-x86-64.tar.gz
-O /tmp/webp/webp.tar.gz
RUN tar -xvf /tmp/webp/webp.tar.gz --directory /tmp/webp/unzipped
RUN mv /tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp
/usr/bin/cwebp
```
which results in the error:

```
mv: cannot move '/tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp'
to a subdirectory of itself, '/usr/bin/cwebp'
The command '/bin/sh -c mv
/tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp /usr/bin/cwebp'
returned a non-zero code: 1
```



What happens if you run the mv(1) command by hand?

# mv /tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp /usr/bin/cwebp



The reason I'm here is because of this bug:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=923420



Have you implemented and run a test case to determine if your ZFS 
supports "renameat2 RENAME_NOREPLACE flag"?
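A minimal probe for that flag might look like the following sketch (not a definitive test: it calls the raw Linux syscall via ctypes and assumes x86-64, where the renameat2 syscall number is 316; for a meaningful answer, point the temp directory at the ZFS dataset in question):

```python
# Probe whether the filesystem under a scratch directory supports
# renameat2(..., RENAME_NOREPLACE), the operation mv(1) relies on
# when refusing to clobber an existing destination.
import ctypes
import os
import tempfile

SYS_renameat2 = 316        # x86-64 syscall number (assumption)
RENAME_NOREPLACE = 1
AT_FDCWD = -100

libc = ctypes.CDLL(None, use_errno=True)
d = tempfile.mkdtemp()
src = os.path.join(d, "a")
dst = os.path.join(d, "b")
open(src, "w").close()
ret = libc.syscall(SYS_renameat2, AT_FDCWD, src.encode(),
                   AT_FDCWD, dst.encode(), RENAME_NOREPLACE)
print("RENAME_NOREPLACE supported:", ret == 0)
```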



David



mv bug - cannot move to subdirectory of itself.

2024-01-28 Thread Brett Sutton
So I'm not certain if I'm in the right spot but I had to start somewhere.

I have a docker container that was working but has suddenly stopped working.
I believe the possible cause was when I added a second drive to my zfs
rpool - the timing was a little too coincidental.

The docker command sequence I'm running is:

```
RUN wget
https://storage.googleapis.com/downloads.webmproject.org/releases/webp/libwebp-1.3.2-linux-x86-64.tar.gz
-O /tmp/webp/webp.tar.gz
RUN tar -xvf /tmp/webp/webp.tar.gz --directory /tmp/webp/unzipped
RUN mv /tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp
/usr/bin/cwebp
```
which results in the error:

```
mv: cannot move '/tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp'
to a subdirectory of itself, '/usr/bin/cwebp'
The command '/bin/sh -c mv
/tmp/webp/unzipped/libwebp-1.3.2-linux-x86-64/bin/cwebp /usr/bin/cwebp'
returned a non-zero code: 1
```

So clearly /usr/bin isn't a subdirectory of /tmp/webp, so the error
message must be wrong.
There are no symlinks involved.

zfs list reports:
```
rpool                                              402G   493G   96K  /
rpool/ROOT                                         141G   493G   96K  none
rpool/ROOT/ubuntu_c520d1                           141G   493G 18.8G  /
rpool/ROOT/ubuntu_c520d1/srv                       208K   493G   96K  /srv
rpool/ROOT/ubuntu_c520d1/usr                       522M   493G   96K  /usr
rpool/ROOT/ubuntu_c520d1/usr/local                 522M   493G  515M  /usr/local
rpool/ROOT/ubuntu_c520d1/var                      36.2G   493G   96K  /var
rpool/ROOT/ubuntu_c520d1/var/games                 208K   493G   96K  /var/games
rpool/ROOT/ubuntu_c520d1/var/lib                  23.3G   493G 16.8G  /var/lib
rpool/ROOT/ubuntu_c520d1/var/lib/AccountsService   744K   493G  100K  /var/lib/AccountsService
rpool/ROOT/ubuntu_c520d1/var/lib/NetworkManager   2.64M   493G  236K  /var/lib/NetworkManager
rpool/ROOT/ubuntu_c520d1/var/lib/apt               232M   493G 98.8M  /var/lib/apt
rpool/ROOT/ubuntu_c520d1/var/lib/dpkg              327M   493G 74.1M  /var/lib/dpkg
rpool/ROOT/ubuntu_c520d1/var/log                  2.23G   493G 1002M  /var/log
rpool/ROOT/ubuntu_c520d1/var/mail                  208K   493G   96K  /var/mail
rpool/ROOT/ubuntu_c520d1/var/snap                 10.7G   493G 12.8M  /var/snap
rpool/ROOT/ubuntu_c520d1/var/spool                8.26M   493G 2.36M  /var/spool
rpool/ROOT/ubuntu_c520d1/var/www                   300K   493G  108K  /var/www
rpool/USERDATA                                     251G   493G   96K  /
rpool/USERDATA/bsutton_b4334o                      250G   493G 68.9G  /home/bsutton
rpool/USERDATA/root_b4334o                         854M   493G  845M  /root
rpool/var                                         9.39G   493G   96K  /var
rpool/var/lib                                     9.39G   493G   96K  /var/lib
rpool/var/lib/docker                              9.39G   493G 9.39G  /var/lib/docker
```

Of course these paths shouldn't be relevant as all of the paths in the
docker container should be inside a docker volume all mounted under
/var/lib/docker.

The reason I'm here is because of this bug:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=923420

When I run 'info coreutils' it reports:

```
This manual documents version 8.32 of the GNU core utilities, including
```
From my reading of the bug, 8.32 should have a fix for the mv bug.

Now this could well be a bug in docker, as it has a somewhat dubious
history of working with zfs, but the symptom I'm encountering seemed to
match the above bug, so here I am.

Any help or suggestions where to go would be appreciated.

Brett

```
Step 6/30 : RUN lsb_release -a
 ---> Running in c5120a6be61b
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```


Re: (SOLVED) disable trackpad when mouse is connected (GNOME bug?)

2024-01-25 Thread Keith Bainbridge

Good afternoon

Another option is to use a keyboard shortcut. My last laptop came with 
this set up using a Fn key combo (eg fn-f5)


So I'm using a key that was set to answer MSteams calls - what?

Check Keyboard > Shortcuts > Touchpad. Cinnamon gives options of
toggle / switch on / switch off. I guess GNOME will be similar, as it
was the basis of Cinnamon.


All the best

Keith Bainbridge

keith.bainbridge.3...@gmail.com
+61 (0)447 667 468

UTC + 10:00

On 26/1/24 13:53, Max Nikulin wrote:

On 25/01/2024 21:42, Max Nikulin wrote:

Try

 lsusb --verbose --tree


I have received a private reply. Please, send messages to the mailing 
list in such cases.


I intentionally combined -vt options and I find output more convenient 
than for just "lsusb -t". The "-t" option changes behavior of "-v". If 
you do not like how it is documented, please, discuss it with usbutils 
developers.






Re: (SOLVED) disable trackpad when mouse is connected (GNOME bug?)

2024-01-25 Thread Max Nikulin

On 25/01/2024 21:42, Max Nikulin wrote:

Try

     lsusb --verbose --tree


I have received a private reply. Please, send messages to the mailing 
list in such cases.


I intentionally combined -vt options and I find output more convenient 
than for just "lsusb -t". The "-t" option changes behavior of "-v". If 
you do not like how it is documented, please, discuss it with usbutils 
developers.




Re: (SOLVED) disable trackpad when mouse is connected (GNOME bug?)

2024-01-25 Thread Max Nikulin

On 25/01/2024 20:42, Henning Follmann wrote:

The issue is a usb hub. Somehow GNOME thinks this hub is a mouse.


Try

lsusb --verbose --tree

perhaps somebody plugged in a tiny receiver for a wireless mouse and 
forgot about it.




(SOLVED) disable trackpad when mouse is connected (GNOME bug?)

2024-01-25 Thread Henning Follmann
On Wed, Jan 24, 2024 at 03:30:23PM -0500, Henning Follmann wrote:
> Hello,
> for a while I am using
> 
> gsettings set org.gnome.desktop.peripherals.touchpad send-events 
> 'disabled-on-external-mouse'
> 
> which really worked fine.
> 
> But since last week this does not work any more: the trackpad is now
> always disabled, even when the mouse is not connected.
> 
> I have to issue a 
> gsettings set org.gnome.desktop.peripherals.touchpad send-events 'enabled'
> to get it back. That kind of defeats its purpose though.
> 
> Any hints what could be causing this?
> 
> 

Hello,
replying to my own post.
The issue is a usb hub. Somehow GNOME thinks this hub is a mouse.

-H



-- 
Henning Follmann   | hfollm...@itcfollmann.com



disable trackpad when mouse is connected (GNOME bug?)

2024-01-24 Thread Henning Follmann
Hello,
for a while I am using

gsettings set org.gnome.desktop.peripherals.touchpad send-events 
'disabled-on-external-mouse'

which really worked fine.

But since last week this does not work any more: the trackpad is now
always disabled, even when the mouse is not connected.

I have to issue a 
gsettings set org.gnome.desktop.peripherals.touchpad send-events 'enabled'
to get it back. That kind of defeats its purpose though.

Any hints what could be causing this?


-H


-- 
Henning Follmann   | hfollm...@itcfollmann.com



Re: Probable bug in mc shell link when reading non-ASCII file names

2024-01-19 Thread Sven Joachim
On 2024-01-20 08:44 +0100, Sven Joachim wrote:

> On 2024-01-20 00:20 +0100, ju...@op.pl wrote:
>
>> I'm not sure if this is actually a bug in the mc package or maybe
>> somewhere in sshd or in some library that uses ssh. That's why I
>> didn't report it via reportbug. Anyway, I noticed the effects only in
>> Shell Link in mc. SSH in the terminal works fine. FTP Link in mc also
>> works properly.
>> The bug appeared today and is visible on all computers connecting to
>> remote Debian Testing systems regardless of MC version (I tested it
>> with mc from Ubuntu 18 and from the current Mint). File and directory
>> names on the remote Debian Testing computer containing UTF-8 Non-ASCII
>> characters are displayed incorrectly and the files and directories
>> cannot be read.
>> For example, instead of a file with a name containing the Polish
>> letters "AąCćEę", Shell Link mc sees a file named
>> "A304205C304207E304231".
>
> Interesting.  I can reproduce that, it has apparently been triggered by
> the Perl upgrade from 5.36 to 5.38.
>
>> Can anyone advise me which package this error should be reported for?
>
> The mc package.  You can tag the bug as forwarded to
> https://midnight-commander.org/ticket/4507, which has already been
> closed.  The fix will be part of mc 4.8.31, if you are lucky the Debian
> maintainers cherry-pick it earlier.

I have attached the patch which fixes the bug.  You can apply it
directly to /usr/lib/mc/fish/ls, if you do not mind that tools like
debsums and "dpkg --verify" might complain about the changed file.

Cheers,
   Sven

diff --git a/src/vfs/shell/helpers/ls b/src/vfs/shell/helpers/ls
index 4c8ca21137..c7701d644f 100644
--- a/src/vfs/shell/helpers/ls
+++ b/src/vfs/shell/helpers/ls
@@ -122,9 +122,8 @@ SHELL_DIR=$1
 perl -e '
 use strict;
 use POSIX;
-use Fcntl;
-use POSIX ":fcntl_h"; #S_ISLNK was here until 5.6
-import Fcntl ":mode" unless defined &S_ISLNK; #and is now here
+use Fcntl ":mode";  # S_ISLNK, S_IFMT, S_IMODE are here
+use POSIX ":fcntl_h";   # S_ISLNK might be here as well
 my $dirname = $ARGV[0];
 if (opendir (DIR, $dirname)) {
 while((my $filename = readdir (DIR))){


Re: Probable bug in mc shell link when reading non-ASCII file names

2024-01-19 Thread Sven Joachim
On 2024-01-20 00:20 +0100, ju...@op.pl wrote:

> I'm not sure if this is actually a bug in the mc package or maybe
> somewhere in sshd or in some library that uses ssh. That's why I
> didn't report it via reportbug. Anyway, I noticed the effects only in
> Shell Link in mc. SSH in the terminal works fine. FTP Link in mc also
> works properly.
> The bug appeared today and is visible on all computers connecting to
> remote Debian Testing systems regardless of MC version (I tested it
> with mc from Ubuntu 18 and from the current Mint). File and directory
> names on the remote Debian Testing computer containing UTF-8 Non-ASCII
> characters are displayed incorrectly and the files and directories
> cannot be read.
> For example, instead of a file with a name containing the Polish
> letters "AąCćEę", Shell Link mc sees a file named
> "A304205C304207E304231".

Interesting.  I can reproduce that, it has apparently been triggered by
the Perl upgrade from 5.36 to 5.38.

> Can anyone advise me which package this error should be reported for?

The mc package.  You can tag the bug as forwarded to
https://midnight-commander.org/ticket/4507, which has already been
closed.  The fix will be part of mc 4.8.31, if you are lucky the Debian
maintainers cherry-pick it earlier.

Cheers,
   Sven



Re: Probable bug in mc shell link when reading non-ASCII file names

2024-01-19 Thread tomas
On Sat, Jan 20, 2024 at 12:20:55AM +0100, ju...@op.pl wrote:
> I'm not sure if this is actually a bug in the mc package or maybe somewhere 
> in sshd or in some library that uses ssh. That's why I didn't report it via 
> reportbug. Anyway, I noticed the effects only in Shell Link in mc. SSH in the 
> terminal works fine. FTP Link in mc also works properly.
> The bug appeared today and is visible on all computers connecting to remote 
> Debian Testing systems regardless of MC version (I tested it with mc from 
> Ubuntu 18 and from the current Mint). File and directory names on the remote 
> Debian Testing computer containing UTF-8 Non-ASCII characters are displayed 
> incorrectly and the files and directories cannot be read.
> For example, instead of a file with a name containing the Polish letters 
> "AąCćEę", Shell Link mc sees a file named "A304205C304207E304231".
> Can anyone advise me which package this error should be reported for?

Hm. It seems that mc gets confused with the way the "other side" tells
it what encoding it uses.

I'd go for mc, because they know best how it interprets the remote
encoding.

Cheers
-- 
t


signature.asc
Description: PGP signature


Probable bug in mc shell link when reading non-ASCII file names

2024-01-19 Thread jureq
I'm not sure if this is actually a bug in the mc package or maybe somewhere in 
sshd or in some library that uses ssh. That's why I didn't report it via 
reportbug. Anyway, I noticed the effects only in Shell Link in mc. SSH in the 
terminal works fine. FTP Link in mc also works properly.
The bug appeared today and is visible on all computers connecting to remote 
Debian Testing systems regardless of MC version (I tested it with mc from 
Ubuntu 18 and from the current Mint). File and directory names on the remote 
Debian Testing computer containing UTF-8 Non-ASCII characters are displayed 
incorrectly and the files and directories cannot be read.
For example, instead of a file with a name containing the Polish letters 
"AąCćEę", Shell Link mc sees a file named "A304205C304207E304231".
Can anyone advise me which package this error should be reported for?
 
Jureq
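
For what it's worth, the mangled name is consistent with the octal
escapes of the UTF-8 bytes minus their backslashes; a quick check (a
sketch of the arithmetic, not the mc code):

```python
# "ą" is UTF-8 0xC4 0x85 -> octal 304 205; dropping the backslashes
# from "\304\205" etc. yields exactly the garbled name seen in
# Shell Link instead of "AąCćEę".
name = "AąCćEę"
garbled = "".join(
    ch if ch.isascii() else "".join(f"{byte:03o}" for byte in ch.encode("utf-8"))
    for ch in name
)
print(garbled)  # → A304205C304207E304231
```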

Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-13 Thread Jeffrey Walton
On Fri, Jan 12, 2024 at 8:18 AM Jan Ingvoldstad  wrote:
>
> On Wed, Jan 10, 2024 at 10:48 PM Xiyue Deng  wrote:
>>
>> You can check the developer page of zfs-linux[1] on which the "action
>> needed" section has information about security issues (along with
>> version info as Gareth posted).  The one you mentioned was being tracked
>> in [2] and the corresponding Debian bug is [3].  My guess is that as
>> zfs-linux is not in "main" but "contrib", and the issue is marked
>> "no-dsa" (see [4]), there may be no urgency to provide a stable update.
>> But you may send a follow up in the tracking bug and ask for
>> clarification from the maintainers on whether an (old)stable-update is
>> desired.
>
> Thanks, so it *was* my searching skills that failed me:
>
> "The fix will land in bookworm-backports and bullseye-backports-sloppy
> shortly after 2.1.14-1 migrates to testing, which will take about 2
> days hopefully. Fixes to 2.0.3-9+deb11u1 (bullseye) and 2.1.11-1
> (bookworm) are planned but will likely take more time."
>
> I think the bug is mislabeled as "security" and "important", as this is 
> primarily a severe data corruption bug, but with *possible* security 
> implications.
>
> It is far more concerning that one cannot trust that cp actually copies a 
> file, and this is a blocker for installing the ZFS packages in Debian.

Using cp with sparse files has a long history of problems. Recently
this showed up: bug#61386: [PATCH] cp,mv,install: Disable sparse copy
on macOS, 
<https://lists.nongnu.org/archive/html/bug-coreutils/2023-02/msg00010.html>.
I seem to recall the problem was a little bigger than just macOS. It
affected other OSes, too. While it was propagated through coreutils, I
believe the underlying problem was Gnulib.

The ZFS issue looks to be similar, if I am parsing things correctly:
GH #11900: SEEK_DATA fails randomly,
<https://github.com/openzfs/zfs/issues/11900>.
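The mechanism at issue can be sketched with lseek's SEEK_DATA (how a
sparse-aware copy decides what to read; the file name and sizes below are
illustrative, and a filesystem without hole support simply reports data
at offset 0):

```python
# Create a sparse file (1 MiB hole followed by 4 bytes of data) and ask
# the kernel where the first data byte is.  A copy tool trusting a buggy
# SEEK_DATA answer is what made cp skip real data in the ZFS bug above.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sparse")
with open(path, "wb") as f:
    f.truncate(1 << 20)      # leading 1 MiB hole
    f.seek(1 << 20)
    f.write(b"tail")

fd = os.open(path, os.O_RDONLY)
try:
    data_off = os.lseek(fd, 0, os.SEEK_DATA)
finally:
    os.close(fd)
print("first data byte at offset:", data_off)
```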

Jeff



Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-12 Thread Gareth Evans
On Sat 13/01/2024 at 02:32, Gareth Evans  wrote:
> use of the actual "stable-backports" repo is not 
> recommended or implied.

"implied" might be debatable given that was indeed my first thought, but not 
intended to be implied, it seems.  Certainly not necessary.



Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-12 Thread Gareth Evans
On Fri 12/01/2024 at 06:49, Jan Ingvoldstad  wrote:
> ...
> It is far more concerning that one cannot trust that cp actually copies a 
> file, and this is a blocker for installing the ZFS packages in Debian.

The update in bookworm-backports to 2.2.2-3 allegedly fixes this issue.

I have installed it and at least rebooted :)

I have had ZFS on root since Buster, and have upgraded to each new stable 
release since then.

I had wondered recently if the Debian wiki's recommendation [1] of installing 
from "stable-backports", and the statement that 

"Upstream stable patches will be tracked and compatibility is always maintained"

was to suggest use of the actual "stable-backports" repo, and that the 
compatibility guarantee meant that they cater for any potential ZFS issues 
arising for non-upgraded systems after new release time.  On closer inspection, 
the line the wiki provides to add this to sources.list actually adds the 
codename-backports repo (currently "bookworm-backports").  "stable-backports" 
also appears to be an alias that apt understands as referring to 
{codename}-backports for whatever current stable is, and use of the actual 
"stable-backports" repo is not recommended or implied.

Is the Debian wiki advice re installing ZFS from backports applicable in all 
circumstances?  What would be the suggested approach for installing with ZFS on 
root or re-pointing apt at backports for ZFS immediately after [upgrading to] a 
new stable release?  Do backports even exist at that point?  If so, after a 
release upgrade, can you install the same version of eg. zfs-dkms from 
backports as a sort of special case for the purposes of changing the repo?  Or 
do you just have to watch and wait for new ZFS backports?

OpenZFS instructions [2] for root on ZFS suggest using bookworm not 
bookworm-backports, so I wondered if an initial lack of backports might be the 
reason.

Thanks,
Gareth

[1] https://wiki.debian.org/ZFS#Status
[2] 
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html#step-1-prepare-the-install-environment




Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-12 Thread Махно
>I have not seen this recommendation, do you have a link?

It is from Debian wiki

https://wiki.debian.org/ZFS

2024-01-12, pn, 14:08 Jan Ingvoldstad  rašė:
>
>
> On Wed, Jan 10, 2024 at 10:48 PM Xiyue Deng  wrote:
>>
>>
>> You can check the developer page of zfs-linux[1] on which the "action
>> needed" section has information about security issues (along with
>> version info as Gareth posted).  The one you mentioned was being tracked
>> in [2] and the corresponding Debian bug is [3].  My guess is that as
>> zfs-linux is not in "main" but "contrib", and the issue is marked
>> "no-dsa" (see [4]), there may be no urgency to provide a stable update.
>> But you may send a follow up in the tracking bug and ask for
>> clarification from the maintainers on whether an (old)stable-update is
>> desired.
>
>
> Thanks, so it *was* my searching skills that failed me:
>
> "The fix will land in bookworm-backports and bullseye-backports-sloppy
> shortly after 2.1.14-1 migrates to testing, which will take about 2
> days hopefully. Fixes to 2.0.3-9+deb11u1 (bullseye) and 2.1.11-1
> (bookworm) are planned but will likely take more time."
>
> I think the bug is mislabeled as "security" and "important", as this is 
> primarily a severe data corruption bug, but with *possible* security 
> implications.
>
> It is far more concerning that one cannot trust that cp actually copies a 
> file, and this is a blocker for installing the ZFS packages in Debian.
>
> --
> Jan



Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-11 Thread Jan Ingvoldstad
On Wed, Jan 10, 2024 at 10:48 PM Xiyue Deng  wrote:

>
> You can check the developer page of zfs-linux[1] on which the "action
> needed" section has information about security issues (along with
> version info as Gareth posted).  The one you mentioned was being tracked
> in [2] and the corresponding Debian bug is [3].  My guess is that as
> zfs-linux is not in "main" but "contrib", and the issue is marked
> "no-dsa" (see [4]), there may be no urgency to provide a stable update.
> But you may send a follow up in the tracking bug and ask for
> clarification from the maintainers on whether an (old)stable-update is
> desired.
>

Thanks, so it *was* my searching skills that failed me:

"The fix will land in bookworm-backports and bullseye-backports-sloppy
shortly after 2.1.14-1 migrates to testing, which will take about 2
days hopefully. Fixes to 2.0.3-9+deb11u1 (bullseye) and 2.1.11-1
(bookworm) are planned but will likely take more time."

I think the bug is mislabeled as "security" and "important", as this is
primarily a severe data corruption bug, but with *possible* security
implications.

It is far more concerning that one cannot trust that cp actually copies a
file, and this is a blocker for installing the ZFS packages in Debian.

-- 
Jan


Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-11 Thread Махно
It is recommended by Debian ZFS on Linux Team to install ZFS related
packages from Backports archive. Upstream stable patches will be
tracked and compatibility is always maintained.
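
For reference, a minimal sketch of what enabling backports looks like on
bookworm (the file name below is arbitrary, and "contrib" is needed because
the ZFS packages live there):

```
# /etc/apt/sources.list.d/bookworm-backports.list (example path)
deb http://deb.debian.org/debian bookworm-backports main contrib
```

After an `apt update`, the backports version must then be selected
explicitly, e.g. `apt install -t bookworm-backports zfs-dkms zfsutils-linux`,
since backports are pinned lower than stable by default.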

2024-01-11, Thu 02:08, Xiyue Deng  wrote:
>
> Jan Ingvoldstad  writes:
>
> > Hi,
> >
> > It seems that Bookworm's zfs-dkms package (from contrib) has the data
> > corruption bug that was fixed with OpenZFS 2.1.14 (and 2.2.2) on 2023-11-30.
> >
> > https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14
> >
> > However, I see no relevant bug report in the bug tracker - have my
> > searching skills failed?
>
> You can check the developer page of zfs-linux[1] on which the "action
> needed" section has information about security issues (along with
> version info as Gareth posted).  The one you mentioned was being tracked
> in [2] and the corresponding Debian bug is [3].  My guess is that as
> zfs-linux is not in "main" but "contrib", and the issue is marked
> "no-dsa" (see [4]), there may be no urgency to provide a stable update.
> But you may send a follow up in the tracking bug and ask for
> clarification from the maintainers on whether an (old)stable-update is
> desired.
>
> [1] https://tracker.debian.org/pkg/zfs-linux
> [2] https://security-tracker.debian.org/tracker/CVE-2023-49298
> [3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1056752
> [4] 
> https://security-team.debian.org/security_tracker.html#issues-not-warranting-a-security-advisory
>
> --
> Xiyue Deng
>



Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-10 Thread Xiyue Deng
Jan Ingvoldstad  writes:

> Hi,
>
> It seems that Bookworm's zfs-dkms package (from contrib) has the data
> corruption bug that was fixed with OpenZFS 2.1.14 (and 2.2.2) on 2023-11-30.
>
> https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14
>
> However, I see no relevant bug report in the bug tracker - have my
> searching skills failed?

You can check the developer page of zfs-linux[1] on which the "action
needed" section has information about security issues (along with
version info as Gareth posted).  The one you mentioned was being tracked
in [2] and the corresponding Debian bug is [3].  My guess is that as
zfs-linux is not in "main" but "contrib", and the issue is marked
"no-dsa" (see [4]), there may be no urgency to provide a stable update.
But you may send a follow up in the tracking bug and ask for
clarification from the maintainers on whether an (old)stable-update is
desired.

[1] https://tracker.debian.org/pkg/zfs-linux
[2] https://security-tracker.debian.org/tracker/CVE-2023-49298
[3] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1056752
[4] 
https://security-team.debian.org/security_tracker.html#issues-not-warranting-a-security-advisory

--
Xiyue Deng



Re: Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-10 Thread Gareth Evans
> On 9 Jan 2024, at 06:41, Jan Ingvoldstad  wrote:
> 
> Hi,
> 
> It seems that Bookworm's zfs-dkms package (from contrib) has the data 
> corruption bug that was fixed with OpenZFS 2.1.14 (and 2.2.2) on 2023-11-30.
> 
> https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14
> 
> However, I see no relevant bug report in the bug tracker - have my searching 
> skills failed?
> 
> --
> Jan

This prompted me to look for updates.  

2.2.2-3 is available in bookworm-backports.  

Is this, or a later version, likely to be made available in bookworm-updates?

Does anyone have experience with the backports version?   

Thanks
Gareth



Bookworm and ZFS (zfs-dkms 2.1.11) data corruption bug

2024-01-08 Thread Jan Ingvoldstad
Hi,

It seems that Bookworm's zfs-dkms package (from contrib) has the data
corruption bug that was fixed with OpenZFS 2.1.14 (and 2.2.2) on 2023-11-30.

https://github.com/openzfs/zfs/releases/tag/zfs-2.1.14

However, I see no relevant bug report in the bug tracker - have my
searching skills failed?

-- 
Jan


Re: Bug on upgrade to bookworm with Apache/PHP?

2023-12-30 Thread Charles Curley
On Sat, 30 Dec 2023 17:50:03 +
Andrew Wood  wrote:

> Found the following issue when running an upgrade.
> 
> Apache refuses to restart with error:
> 
> apache2_reload: Your configuration is broken. Not restarting Apache 2
> apache2_reload: apache2: Syntax error on line 146 of 
> /etc/apache2/apache2.conf: Syntax error on line 3 of 
> /etc/apache2/mods-enabled/php7.4.load: Cannot load 
> /usr/lib/apache2/modules/libphp7.4.so into server: 
> /usr/lib/apache2/modules/libphp7.4.so: cannot open shared object
> file: No such file or directory
> 
> 
> This is because the php7.4 files have now been replaced with php8.2
> 
> Specifically symlinks in /etc/apache2/mods-enabled/ which link to 
> /etc/apache2/mods-available/
> php7.4.conf -> ../mods-available/php7.4.conf
> php7.4.load -> ../mods-available/php7.4.load
> 
> Should be removed and replaced with a link to
> 
> php8.2.conf -> ../mods-available/php8.2.conf
> php8.2.load -> ../mods-available/php8.2.load
> 
> 
> Is this known about?
> 
> Andrew
> 

You might want to disable any php 7.4 modules and enable php8.2.conf
and php8.2.load.

root@hawk:/etc/apache2# ls mods-enabled/
access_compat.load  autoindex.load  mpm_prefork.conf  setenvif.load
alias.conf  deflate.confmpm_prefork.load  socache_shmcb.load
alias.load  deflate.loadnegotiation.conf  ssl.conf
auth_basic.load dir.confnegotiation.load  ssl.load
authn_core.load dir.loadphp8.2.conf   status.conf
authn_file.load env.loadphp8.2.load   status.load
authz_core.load filter.load reqtimeout.conf   userdir.conf
authz_host.load headers.loadreqtimeout.load   userdir.load
authz_user.load mime.conf   rewrite.load
autoindex.conf  mime.load   setenvif.conf
root@hawk:/etc/apache2# 
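
The swap itself is mechanical; here is an illustrative sketch that replays it
in a scratch directory (the /tmp paths are made up for the demo — on a real
system the equivalent is `a2dismod php7.4 && a2enmod php8.2`, followed by
`apachectl configtest` and a restart):

```shell
set -e
d=/tmp/apache-demo
rm -rf "$d"; mkdir -p "$d/mods-available" "$d/mods-enabled"
touch "$d/mods-available/php8.2.conf" "$d/mods-available/php8.2.load"
# the stale links the upgrade leaves behind (their 7.4 targets are gone):
ln -s ../mods-available/php7.4.conf "$d/mods-enabled/php7.4.conf"
ln -s ../mods-available/php7.4.load "$d/mods-enabled/php7.4.load"
# drop them and link in the 8.2 module, as a2dismod/a2enmod would:
rm -f "$d"/mods-enabled/php7.4.*
ln -s ../mods-available/php8.2.conf "$d/mods-enabled/php8.2.conf"
ln -s ../mods-available/php8.2.load "$d/mods-enabled/php8.2.load"
ls "$d/mods-enabled"
```

Installing libapache2-mod-php8.2 normally runs a2enmod for you; the dangling
7.4 links only linger because the 7.4 package's removal doesn't clean them up.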


-- 
Does anybody read signatures any more?

https://charlescurley.com
https://charlescurley.com/blog/



Re: Bug on upgrade to bookworm with Apache/PHP?

2023-12-30 Thread Dan Ritter
Andrew Wood wrote: 
> This is because the php7.4 files have now been replaced with php8.2
> 
> Specifically symlinks in /etc/apache2/mods-enabled/ which link to 
> /etc/apache2/mods-available/
> php7.4.conf -> ../mods-available/php7.4.conf
> php7.4.load -> ../mods-available/php7.4.load
> 
> Should be removed and replaced with a link to
> 
> php8.2.conf -> ../mods-available/php8.2.conf
> php8.2.load -> ../mods-available/php8.2.load
> 
> 
> Is this known about?


Yes. It is not an error, per se, because it is possible that a
person would want to keep the php7 system around a bit longer,
and not yet install the php8 system.

It is just part of the decisions that a sysadmin has to make for
their systems.

-dsr-



Bug on upgrade to bookworm with Apache/PHP?

2023-12-30 Thread Andrew Wood

Found the following issue when running an upgrade.

Apache refuses to restart with error:

apache2_reload: Your configuration is broken. Not restarting Apache 2
apache2_reload: apache2: Syntax error on line 146 of 
/etc/apache2/apache2.conf: Syntax error on line 3 of 
/etc/apache2/mods-enabled/php7.4.load: Cannot load 
/usr/lib/apache2/modules/libphp7.4.so into server: 
/usr/lib/apache2/modules/libphp7.4.so: cannot open shared object file: 
No such file or directory



This is because the php7.4 files have now been replaced with php8.2

Specifically symlinks in /etc/apache2/mods-enabled/ which link to 
/etc/apache2/mods-available/

php7.4.conf -> ../mods-available/php7.4.conf
php7.4.load -> ../mods-available/php7.4.load

Should be removed and replaced with a link to

php8.2.conf -> ../mods-available/php8.2.conf
php8.2.load -> ../mods-available/php8.2.load


Is this known about?

Andrew



Re: The bug

2023-12-15 Thread Kevin Price
Am 15.12.23 um 15:47 schrieb Stefan Monnier:
> But that's always true: the GNU/Linux system, like all sufficiently
> complex software systems, is chock-full of bugs, many of which can
> indeed have disastrous effects if they manifest under the
> "right" circumstances.

Here are some foreseeable and preventable ones.

> AFAICT the only thing really different about "the bug"
> (#1057967/#1057969) is that it comes right after a bug that made a lot
> of noise (bug#1057843), so people have temporarily lost faith.

No faith lost on my part. And bugs are not evaluated in the amount of
noise the preceding one made.
-- 
Kevin Price



Re: The bug

2023-12-15 Thread Stefan Monnier
>>>>> If so, then IIUC the answer is a resounding "YES, it is safe!".
> Safe not to fry your ext4 by Bug#1057843, yes, Stefan.
> Safe in general, as originally asked by Rick?
> He might be lucky, or maybe less so.

But that's always true: the GNU/Linux system, like all sufficiently
complex software systems, is chock-full of bugs, many of which can
indeed have disastrous effects if they manifest under the
"right" circumstances.

AFAICT the only thing really different about "the bug"
(#1057967/#1057969) is that it comes right after a bug that made a lot
of noise (bug#1057843), so people have temporarily lost faith.



Stefan



Re: The bug

2023-12-14 Thread Kevin Price
I largely agree with Greg.

Am 13.12.23 um 16:33 schrieb Greg Wooledge:
> On Wed, Dec 13, 2023 at 04:13:44PM +0100, to...@tuxteam.de wrote:
>> On Wed, Dec 13, 2023 at 10:10:37AM -0500, Greg Wooledge wrote:
>>> On Wed, Dec 13, 2023 at 09:56:46AM -0500, Stefan Monnier wrote:
>>>> If so, then IIUC the answer is a resounding "YES, it is safe!".

Safe not to fry your ext4 by Bug#1057843, yes, Stefan.
Safe in general, as originally asked by Rick? He might be lucky, or
maybe less so.

>>> Safety is subjective.  A great deal will depend on what kind of system
>>> is being upgraded.  If it's a remote server to which you have limited
>>> or no physical access, booting a kernel that may "just be unusable"
>>> (enough to prevent editing GRUB menus and rebooting) could be a disaster.

Absolutely, Greg.

>> ...but that one most probably won't be attached via a Broadcom to the 'net.

Tomas: Servers are usually not connected through wifi alone. But "the
bug" (#1057967/#1057969) doesn't just disable the wifi adapter; it would
probably make the running computer largely unusable, even unable to shut
down. That's confirmed. In that case you might still have access through
the LAN in order to fix GRUB's configuration, if you find a way to do
that without sudo, working around whatever problems you'll encounter
attempting that. But even then, that new GRUB configuration will never
come into effect until you forcibly reboot/power-cycle the computer
(which has always been a bad thing to do in the first place).

Under these circumstances, "the bug" can become a huge problem. Maybe
unlikely for many use cases, but then huge.

> My superficial understanding, after skimming through the bug report,
> is that problems could be triggered just by *loading* one of the
> affected wifi driver modules.

With less superficial understanding, I fully agree with Greg.

> This would happen for any machine that
> has one of the "right" kinds of wifi hardware, even if that hardware
> isn't actively being used.

Exactly. The mere presence of a wifi adapter will cause Debian to load
its respective wifi driver module, which in turn invokes cfg80211 and
may or may not trigger cfg80211's bug.
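
For a machine that cannot tolerate that risk, one way to keep an unused
adapter's driver from loading at all is a modprobe blacklist entry. A sketch
only — the module name brcmfmac below is just an example; check lsmod to see
which driver your hardware actually uses:

```
# /etc/modprobe.d/blacklist-wifi.conf (example path)
blacklist brcmfmac
```

If the module is pulled in from the initramfs, you may also need to run
`update-initramfs -u` for the blacklist to take effect at boot.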

> (Not just Broadcom either; at least one
> person reported an issue with Realtek.)

IIUC, that was Olivier's non-free rtl88x2bu wifi driver also triggering
the bug, but not Alberto's r8169, which is a wired-LAN driver.

> Perhaps I'm reading it incorrectly, but I still feel it's wise to wait
> a little while and see if any more problems pop up, if stability is
> important to you.

Yes. If you're up to gambling, throw in all the computers you're willing
to spare. In case of solely remote controlled servers: I wouldn't.
Risking to lose control of servers is too much of a bet for too little
of a win, IMHO.

> I also salute the courage of those who've tested
> these recent changes.  Thank you all.

Appreciation for my small part (in pointing the problem out in the first
place) accepted, but please send your muchos kudos to Salvatore
Bonaccorso , who deserves credits for solving it.
-- 
Kevin Price



Re: The bug

2023-12-13 Thread tomas
On Wed, Dec 13, 2023 at 10:33:15AM -0500, Greg Wooledge wrote:

[...]

> Perhaps I'm reading it incorrectly, but I still feel it's wise to wait
> a little while and see if any more problems pop up, if stability is
> important to you.  I also salute the courage of those who've tested
> these recent changes.  Thank you all.

Absolutely, of course. And your reading may be spot-on.

Cheers
-- 
t




Re: The bug

2023-12-13 Thread Tixy
On Wed, 2023-12-13 at 10:10 -0500, Greg Wooledge wrote:
> If it's a remote server to which you have limited
> or no physical access, booting a kernel that may "just be unusable"
> (enough to prevent editing GRUB menus and rebooting) could be a disaster.

Which is what happened a few years ago to me when an update broke
kernels running under the Xen hypervisor, which the VPS running my
email was.

-- 
Tixy



Re: The bug

2023-12-13 Thread Pocket



On 12/13/23 10:33, Greg Wooledge wrote:
> On Wed, Dec 13, 2023 at 04:13:44PM +0100, to...@tuxteam.de wrote:
> > On Wed, Dec 13, 2023 at 10:10:37AM -0500, Greg Wooledge wrote:
> > > On Wed, Dec 13, 2023 at 09:56:46AM -0500, Stefan Monnier wrote:
> > > > If so, then IIUC the answer is a resounding "YES, it is safe!".
> > > > It just may be unusable, so you may have to downgrade to 6.1.0-13 until
> > > > the problem is fixed.
> > > > 
> > > > That's a very different issue from the ext4 corruption problem in
> > > > 6.1.0-14 which can eat your data.
> > > 
> > > Safety is subjective.  A great deal will depend on what kind of system
> > > is being upgraded.  If it's a remote server to which you have limited
> > > or no physical access, booting a kernel that may "just be unusable"
> > > (enough to prevent editing GRUB menus and rebooting) could be a disaster.
> > 
> > ...but that one most probably won't be attached via a Broadcom to the 'net.
> > 
> > Who knows, though :)
> 
> My superficial understanding, after skimming through the bug report,
> is that problems could be triggered just by *loading* one of the
> affected wifi driver modules.  This would happen for any machine that
> has one of the "right" kinds of wifi hardware, even if that hardware
> isn't actively being used.  (Not just Broadcom either; at least one
> person reported an issue with Realtek.)
> 
> Perhaps I'm reading it incorrectly, but I still feel it's wise to wait
> a little while and see if any more problems pop up, if stability is
> important to you.  I also salute the courage of those who've tested
> these recent changes.  Thank you all.


BAH Humbug

I updated/upgraded my amd64 on bookworms and it has not had any issues.

Chicken little syndrome?

--

It's not easy to be me



Re: The bug

2023-12-13 Thread Greg Wooledge
On Wed, Dec 13, 2023 at 04:13:44PM +0100, to...@tuxteam.de wrote:
> On Wed, Dec 13, 2023 at 10:10:37AM -0500, Greg Wooledge wrote:
> > On Wed, Dec 13, 2023 at 09:56:46AM -0500, Stefan Monnier wrote:
> > > If so, then IIUC the answer is a resounding "YES, it is safe!".
> > > It just may be unusable, so you may have to downgrade to 6.1.0-13 until
> > > the problem is fixed.
> > > 
> > > That's a very different issue from the ext4 corruption problem in
> > > 6.1.0-14 which can eat your data.
> > 
> > Safety is subjective.  A great deal will depend on what kind of system
> > is being upgraded.  If it's a remote server to which you have limited
> > or no physical access, booting a kernel that may "just be unusable"
> > (enough to prevent editing GRUB menus and rebooting) could be a disaster.
> 
> ...but that one most probably won't be attached via a Broadcom to the 'net.
> 
> Who knows, though :)

My superficial understanding, after skimming through the bug report,
is that problems could be triggered just by *loading* one of the
affected wifi driver modules.  This would happen for any machine that
has one of the "right" kinds of wifi hardware, even if that hardware
isn't actively being used.  (Not just Broadcom either; at least one
person reported an issue with Realtek.)

Perhaps I'm reading it incorrectly, but I still feel it's wise to wait
a little while and see if any more problems pop up, if stability is
important to you.  I also salute the courage of those who've tested
these recent changes.  Thank you all.



Re: The bug

2023-12-13 Thread tomas
On Wed, Dec 13, 2023 at 10:10:37AM -0500, Greg Wooledge wrote:
> On Wed, Dec 13, 2023 at 09:56:46AM -0500, Stefan Monnier wrote:
> > If so, then IIUC the answer is a resounding "YES, it is safe!".
> > It just may be unusable, so you may have to downgrade to 6.1.0-13 until
> > the problem is fixed.
> > 
> > That's a very different issue from the ext4 corruption problem in
> > 6.1.0-14 which can eat your data.
> 
> Safety is subjective.  A great deal will depend on what kind of system
> is being upgraded.  If it's a remote server to which you have limited
> or no physical access, booting a kernel that may "just be unusable"
> (enough to prevent editing GRUB menus and rebooting) could be a disaster.

...but that one most probably won't be attached via a Broadcom to the 'net.

Who knows, though :)

> I speak for no one but myself, but I'm gonna wait at least a week before
> upgrading past 6.1.0-13-amd64 on any of my systems.

Was a crazy week, wasn't it?

Cheers
-- 
t




Re: The bug

2023-12-13 Thread Greg Wooledge
On Wed, Dec 13, 2023 at 09:56:46AM -0500, Stefan Monnier wrote:
> If so, then IIUC the answer is a resounding "YES, it is safe!".
> It just may be unusable, so you may have to downgrade to 6.1.0-13 until
> the problem is fixed.
> 
> That's a very different issue from the ext4 corruption problem in
> 6.1.0-14 which can eat your data.

Safety is subjective.  A great deal will depend on what kind of system
is being upgraded.  If it's a remote server to which you have limited
or no physical access, booting a kernel that may "just be unusable"
(enough to prevent editing GRUB menus and rebooting) could be a disaster.

I speak for no one but myself, but I'm gonna wait at least a week before
upgrading past 6.1.0-13-amd64 on any of my systems.
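
One hedged sketch of how to wait it out without skipping other updates is an
apt pin that blocks just the affected kernel build (the file name and the
6.1.0-14 package name below are assumptions — adjust to what apt actually
offers on your system):

```
# /etc/apt/preferences.d/hold-kernel (example path)
# Temporarily refuse the 6.1.0-14 kernel; remove this file once fixed
# packages land.
Package: linux-image-6.1.0-14-amd64
Pin: release *
Pin-Priority: -1
```

A simpler alternative is `apt-mark hold linux-image-amd64`, which freezes the
kernel metapackage entirely until you `apt-mark unhold` it.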



Re: The bug

2023-12-13 Thread Stefan Monnier
to...@tuxteam.de [2023-12-13 06:35:08] wrote:
> On Tue, Dec 12, 2023 at 10:39:55PM -0600, David Wright wrote:
>> On Tue 12 Dec 2023 at 23:05:49 (-0500), Stefan Monnier wrote:
>> > > Well, the machine in question has a wi-fi but I don't plan on using it.
>> > > Though unless I'm misunderstanding, just having a wi-fi (used or not) is
>> > > enough to trigger the bug.  Please correct me if I'm wrong.
>> > 
>> > "the bug"?
>> > 
>> > What's this bug you're referring to?
>> 
>> Perhaps:
>> 
>>   https://lists.debian.org/debian-user/2023/12/msg00680.html
>> 
>>   https://lists.debian.org/debian-user/2023/12/msg00682.html
>
> Might be this:
>
>   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1057967

If so, then IIUC the answer is a resounding "YES, it is safe!".
It just may be unusable, so you may have to downgrade to 6.1.0-13 until
the problem is fixed.

That's a very different issue from the ext4 corruption problem in
6.1.0-14 which can eat your data.


Stefan



Re: The bug

2023-12-13 Thread Michael Kjörling
On 13 Dec 2023 16:23 +0900, from j...@bunsenlabs.org (John Crawley):
>> |Debian Bug report logs - #1057967
>> |Found in version linux/6.1.66-1
>> |Fixed in version linux/6.1.67-1
> 
> Good to know, but as of now (Wed 13 Dec 07:20:59 UTC 2023):
> [...]
> Have to wait a few more hours I suppose.

It was accepted into bookworm-proposed-updates on Dec 13 13:40 UTC, so
really _just now_.

https://tracker.debian.org/pkg/linux-signed-amd64

https://tracker.debian.org/news/1485406/accepted-linux-signed-amd64-61671-source-into-proposed-updates/

Let's hope this is the last of it!

-- 
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”



<    1   2   3   4   5   6   7   8   9   10   >