Re: Debian openssh option review: considering splitting out GSS-API key exchange

2024-04-03 Thread Michael Stone

On Tue, Apr 02, 2024 at 01:30:10AM +0100, Colin Watson wrote:

  * add dependency-only packages called something like
openssh-client-gsskex and openssh-server-gsskex, depending on their
non-gsskex alternatives
  * add NEWS.Debian entry saying that people need to install these
packages if they want to retain GSS-API key exchange support
  * add release note saying the same

* for Debian trixie+1 (or maybe after the next Ubuntu LTS, depending on
  exact timings):

  * add separate openssh-gsskex source package, carrying gssapi.patch
in addition to whatever's in openssh, and whose binary packages
Conflicts/Replaces/Provides the corresponding ones from openssh
  * add some kind of regular CI to warn about openssh-gsskex being out
of date relative to openssh
  * drop gssapi.patch from openssh, except for small patches to
configuration file handling to accept the relevant options with
some kind of informative warning (compare
https://bugs.debian.org/152657)


To speed things up for those who really want it, perhaps make 
openssh-client/server dependency-only packages depending on 
openssh-client/server-nogss? People can choose the less-compatible 
version for this release if they want to, and the default can change 
next release. Pushing back the ability to install the unpatched version 
for a few more years seems suboptimal.
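A minimal sketch of what the suggested inversion might look like in debian/control (package names and version constraint are assumptions, following the usual transitional-package pattern):

```
Package: openssh-client
Architecture: all
Depends: openssh-client-nogss (>= ${source:Version})
Description: secure shell (SSH) client (dependency package)
 Dependency-only package; the GSS-API-free implementation lives in
 openssh-client-nogss, which users can install directly.
```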




Bug#628815: coreutils: pinky makes crazy DNS queries

2024-03-19 Thread Michael Stone

On Tue, Mar 19, 2024 at 03:27:52PM +0100, you wrote:

/etc/acpi/lid.sh calls getXuser, that's defined in
/usr/share/acpi-support/power-funcs
which has on line 36
   plist=$(pinky -fw) || pwf_error "pinky lost"


I'd suggest a wishlist bug on acpi-support-base to use "who -us" in 
place of "pinky -fw". who is a posix standard command, pinky is an 
oddball that was hacked up from who years ago because someone liked the 
finger command output and wanted something that would add the full name, 
.plan, .project, etc., to the regular who output (none of which is used 
by acpi). Basically, pinky is simply not the right tool for the task at 
hand and it makes more sense IMO to use the right tool than to try to 
add functionality to a 30 year old special-purpose tool intended to 
replicate the functionality of an information program from another era.
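The suggested substitution in power-funcs would be a one-line change; a sketch (pwf_error is the helper the quoted script already defines, stubbed here so the fragment stands alone):

```shell
# Stub for the error helper defined elsewhere in power-funcs:
pwf_error() { echo "$*" >&2; exit 1; }

# Replacement for: plist=$(pinky -fw) || pwf_error "pinky lost"
# "who -us" is POSIX and reports user, line, login time, idle time and
# PID -- everything the acpi scripts actually consume -- with no DNS
# lookups involved.
plist=$(who -us) || pwf_error "who lost"
```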




Bug#628815: coreutils: pinky makes crazy DNS queries

2024-03-19 Thread Michael Stone

On Tue, Mar 19, 2024 at 11:54:30AM +0100, you wrote:

For example on a debian system with acpi-support, /etc/acpi/lid.sh will
make many requests to find the host $WAYLAND_DISPLAY every time the lid
is opened.


I don't see anything in lid.sh that calls pinky. 



Re: shred bug? [was: Unidentified subject!]

2024-02-16 Thread Michael Stone

On Sun, Feb 11, 2024 at 08:02:12AM +0100, to...@tuxteam.de wrote:

What Thomas was trying to do is to get a cheap, fast random number
generator. Shred seems to have such.


You're better off with /dev/urandom, it's much easier to understand what 
it's trying to do, vs the rather baroque logic in shred. In fact, 
there's nothing in shred's documentation AFAICT that suggests it should 
be used as a random number generator. For pure speed, playing games with 
openssl enc and /dev/zero will generally win. If speed doesn't matter, 
we're back to /dev/urandom as the simplest and most direct solution.
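For illustration, a rough sketch of both approaches (the openssl invocation is an assumption: any CTR-mode cipher keyed with throwaway bytes works, and -pbkdf2 needs OpenSSL 1.1.1 or later):

```shell
# Fast: AES-CTR keystream over /dev/zero, keyed with throwaway bytes
# from /dev/urandom. The pipe to head truncates the stream.
openssl enc -aes-256-ctr -nosalt -pbkdf2 \
    -pass pass:"$(head -c 16 /dev/urandom | base64)" \
    -in /dev/zero 2>/dev/null | head -c 100M > fast.bin

# Simple and direct: read the kernel CSPRNG (slower, but obvious).
head -c 100M /dev/urandom > simple.bin
```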


FWIW, the main use for shred in 2024 is: to be there so someone's old 
script doesn't break. There's basically no good use case for it, and it 
probably shouldn't have gotten into coreutils in the first place. The 
multipass pattern stuff is cargo-cult voodoo--a single overwrite with 
zeros will be as effective as anything else--and on modern 
storage/filesystems there's a good chance your overwrite won't overwrite 
anything anyway. Probably the right answer is a kernel facility 
(userspace can't guarantee anything). If you're really sure that 
overwrites work on your system, `shred -n0 -z` will be the fastest way 
to do that. The docs say don't do that because SSDs might optimize that 
away, but SSDs probably aren't overwriting anything anyway (also 
mentioned in the docs). ¯\_(ツ)_/¯
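Concretely (the file name is hypothetical, and blkdiscard is the util-linux tool -- destructive to the whole device, so it stays commented out):

```shell
printf 'example data' > secrets.dat   # hypothetical file to wipe

# One pass of zeros, no random passes -- a single overwrite is as
# effective as any multipass pattern, *if* it reaches the medium:
shred -n 0 -z secrets.dat

# On SSDs, discarding the blocks is closer to what the hardware does:
# blkdiscard /dev/sdX    # whole device; do not run casually
```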




Re: cli64 CPU segfaults

2024-01-29 Thread Michael Stone

On Mon, Jan 29, 2024 at 07:20:14PM +, Adam Weremczuk wrote:

I have 2 bare metal Debian 12.4 servers with fairly new Intel CPUs and plenty
of memory.

On both, dmesg continuously reports:

(...)
[Mon Jan 29 12:13:00 2024] cli64[1666090]: segfault at 0 ip 0040dd3b sp
7ffc2bfba630 error 4 in cli64[40+18a000] likely on CPU 41 (core 17,
socket 0)


Well, what is cli64? I don't think it came with debian, so you'd have to 
start by looking at what that program is doing.


(If you're not sure, I'm going to guess it's the areca raid management 
software, but it's not a super distinct filename.)




bug#62572: Bug#1058752: bug#62572: cp --no-clobber behavior has changed

2024-01-29 Thread Michael Stone

On Mon, Jan 29, 2024 at 04:11:05PM +, Pádraig Brady wrote:

You've introduced a silent incompatibility and I'm trying to find some
way to make that clear. If upstream would provide a better solution I
would certainly use it. I have despaired of there being such since your
attitude thus far seems to be entirely dismissive of compatibility
concerns.


That's a bit unfair.  The current upstream -n behavior is with a view
to being _more_ compat across all systems.
Now I agree this may not be worth it in this case,
but it is a laudable goal.


You are saying that again without explicitly acknowledging that "being 
_more_ compat" in this case means "becoming _incompat_ with the vast 
majority of installed systems". IMO it could be reasonably phrased as 
"being more compatible across all systems in the long term when all 
existing legacy systems are gone", but the key here is that I read 
"_more_ compat across all systems" as dismissing the coreutils installed 
base as part of "all systems". I understand that may not be/have been 
the intent, but I also can't help feeling the way that I do when the 
benefits of compatibility with freebsd are repeatedly emphasized while 
the costs of incompatibility with the coreutils installed base are 
dismissed with something along the lines of "we'll see what breaks". (If 
the costs of incompatibility are really that low in this case, why would 
compatibility be a worthwhile goal in this case?)


I do wish that more users had noticed the change earlier, as we're now 
fairly deep into a mess, but it's not always easy to see the impact of 
what seems like a relatively minor patch. I do appreciate that the new 
version printed some diagnostics when the change was triggered, as that 
certainly helped call attention to scripts which were impacted.



With the above in place for the next coreutils release,
then debian could remove its noisy patch.


I would certainly align with that, and the sooner the better to decrease 
the chances that different distributions handle this in different ways 
or we get to the point of having to release in an interim state. If you 
commit a final version I'll apply that patch if the next release isn't 
imminent.






Bug#1058752: bug#62572: Bug#1058752: bug#62572: cp --no-clobber behavior has changed

2024-01-29 Thread Michael Stone

On Sun, Jan 28, 2024 at 11:14:14PM -0800, Paul Eggert wrote:
I'm not sure reverting would be best. It would introduce more 
confusion, and would make coreutils incompatible with FreeBSD again.


Reverting makes more sense than the current situation. I do not 
understand why you seem to value FreeBSD compatibility more than 
compatibility with the vast majority of installed coreutils/linux 
systems.

Yes, it's not a good place to be. Surely current coreutils is better 
than what Debian is doing.


You've introduced a silent incompatibility and I'm trying to find some 
way to make that clear. If upstream would provide a better solution I 
would certainly use it. I have despaired of there being such since your 
attitude thus far seems to be entirely dismissive of compatibility 
concerns.


Another possibility is to add a warning that is emitted only at the 
end of 'cp'. The warning would occur only if the exit code differs 
because of this cp -n business.


You'd only emit a notification of a change in behavior if some 
(potentially uncommon/rarely encountered) situation arises which would 
actually trigger breakage? So people can't prepare ahead of time and 
change their script to handle the necessary change in logic, they can 
only maybe figure out why something broke at 2am when the uncommon event 
occurred?


At the end of the day, -n is basically a useless option with unknowable 
semantics which should be avoided by everyone. In the past it was an 
option which wasn't portable between coreutils/linux and freebsd systems, 
and I guess you've "fixed" that (by making it an option everyone should 
avoid entirely), but let's be honest about how common that concern was.




Bug#1061612: coreutils: cp -n deprecation warning gives questionable advice

2024-01-28 Thread Michael Stone

On Sun, Jan 28, 2024 at 12:26:13PM +, Pádraig Brady wrote:

That is a very aggressive deprecation.
IMHO it would have been better for debian to have -n behave
like it did previously and (silently) skip files and not set an error exit 
status.
If it was a mess, this is a mess squared.
I guess this forces our hand a bit.
I'll address upstream...


So we should silently have debian behave differently from every other 
linux distribution moving forward? How on earth does that serve anyone's 
interest?




Re: Monospace fonts, Re: Changing The PSI Definition

2024-01-27 Thread Michael Stone

On Fri, Jan 26, 2024 at 01:50:38PM -0600, David Wright wrote:

On Fri 26 Jan 2024 at 07:25:13 (-0500), Dan Ritter wrote:

Greg Wooledge wrote:
> On Thu, Jan 25, 2024 at 07:32:38PM -0500, Thomas George wrote:
> > The current PSI works perfectly but I don't like the pale green prompt.
> >
> > Tried editing .bashrd , /ext/fprofile and /ext/bash.bashrc but no changes to
> > the PSI definition had any effect
>
> You appear to be asking about the shell prompt.
>
> In bash, the shell prompt is defined in the PS1 variable, which stands
> for "Prompt String One (1)".  The last character is the numeral 1, not
> the capital letter I.

Might be time for a new font. I like Inconsolata, but l1I!
should never look similar, nor O0@ or S$.


I'll give a shout-out for Hack,¹ which I can't fault for use in
xterms. Comparing   xterm -geometry 80x25+0+0 -fa hack -fs 16
with                xterm -geometry 80x25+0+0 -fa inconsolata -fs 18
(to make the sizes roughly the same), I find the inconsolata
stroke width on the basic Roman alphabet is a little spindly.


I've been pretty happy with the Intel One Mono font lately, it seems to 
incorporate the lessons learned from previous attempts at a highly 
readable mono font and I find it extremely legible. There are complaints 
about certain features being "ugly", but I'm well into a stage of life 
where I care more about being able to easily read the text without 
eyestrain than what it looks like on a sample sheet.




Bug#1061612: coreutils: cp -n deprecation warning gives questionable advice

2024-01-27 Thread Michael Stone

On Sat, Jan 27, 2024 at 02:00:14PM +0100, Sven Joachim wrote:

Package: coreutils
Version: 9.4-3+b1

,
| $ cp -n /bin/true tmp
| cp: warning: behavior of -n is non-portable and may change in future; use --update=none instead
`

The advice to use the --update=none option is highly questionable,
because this option is even less portable than -n.  It is not available
in coreutils older than 9.3 or in other cp implementations.


There is no alternative that I can see. I didn't create this situation, 
it was created upstream. You can continue to use -n and ignore the 
warning, but in future if debian stops patching -n to behave the way it 
always has in order to match upstream, stuff will break. If debian keeps 
patching -n, then anything you write in debian will be depending on 
behavior that differs in other distributions and will break everywhere 
else (except older versions of those distributions). It's a mess.
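The split can be seen directly; a sketch, assuming coreutils >= 9.3 for --update=none (the exit statuses are the point of divergence):

```shell
cd "$(mktemp -d)"
echo old > dst; echo new > src

# Long option, 9.3+: skips the existing file and exits 0 everywhere.
cp --update=none src dst
cat dst                  # still "old"; dst was not overwritten

# Short option: Debian's patched cp exits 0 here, while upstream
# 9.2-9.4 exits nonzero when it skips -- so a script using -n sees
# different results depending on which system it runs on.
cp -n src dst || echo "skipped, nonzero exit (upstream semantics)"
```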


This warning isn't for debian developers of existing packages, because 
debian is maintaining compatibility (at least for now); you'll see a 
warning message but the actual behavior hasn't changed and won't change 
in debian without some coordination with affected packages. But for 
developers with *new* upstream code that uses -n, which behavior does 
the code expect? There are now two answers and *the only solution is to 
not use -n*; it's not possible to simply file bugs with packages and fix 
it once, because this is an ongoing incompatibility. I understand that 
the messages are somewhat obnoxious, but my attempt to address the 
situation upstream instead failed.




Re: standardize uid:gid?

2024-01-26 Thread Michael Stone

On Thu, Jan 18, 2024 at 07:31:05AM -0500, Greg Wooledge wrote:

This is one of those "the boat has already left the dock" situations.
If this were going to happen, it would have to have happened in the
early 1990s.  There is no feasible way to make it happen now.


It's also a pointless endeavor, which is why it isn't a priority. In 
fact the trend is more toward ephemeral runtime allocation rather than 
hardcoding persistent IDs as more services/subsystems are designed to 
run in isolation. The motivation in this thread seems to simply be "I 
want to copy /etc/passwd around". Why? What is the actual goal, and what 
better (existing) mechanisms could achieve that goal?
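As one example of the ephemeral-allocation trend, systemd units can ask for a throwaway UID/GID at service start (unit contents and binary name here are hypothetical):

```ini
[Service]
# UID/GID allocated when the service starts, released when it stops;
# nothing to synchronize between hosts, nothing to copy around.
DynamicUser=yes
ExecStart=/usr/bin/example-daemon
```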




Re: Seeking a Terminal Emulator on Debian for "Passthrough" Printing

2024-01-25 Thread Michael Stone

On Sun, Jan 21, 2024 at 10:44:35PM +, phoebus phoebus wrote:

  A filter in between that in response to escape-code-1 starts sending data to 
the serial port instead of the terminal application and switches back to the 
terminal application on receiving of escape-code-2.
  Development of a transparent and responsive intermediate filter to ensure a 
smooth user experience. This filter must handle incoming and outgoing commands 
without disruption.


This is old, old, old tech. The suggestion early on to use screen was 
one reasonable answer, xterm is another. Solutions exist for this, the 
main problem is dusting off documentation old enough to be aware this 
functionality exists. (As seen by so many of the responses here, which 
are apparently unaware that at one time an actual terminal--the hardware 
device with a screen and a keyboard that a "terminal emulator" 
emulates--had a serial connection to a server and could also have a 
second serial connection to a printer, and escape sequences were used to 
switch the server output from screen to printer and back.) This isn't 
functionality that has to be developed, it was written long ago and 
simply isn't used much anymore. You'll have to find old code, because 
newer terminal emulators don't bother implementing this since not many 
people are asking for it (and those that do, can simply use the old code 
and probably aren't as interested in compositor transparency effects).
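The mechanism in question is the DEC "media copy" escape pair, which xterm still implements (where the data goes is controlled by its printerCommand resource; the file name below is hypothetical):

```shell
printf 'invoice text\n' > report.txt   # hypothetical document to print

printf '\033[5i'   # CSI 5 i: start routing output to the printer
cat report.txt     # in a supporting terminal, this goes to the printer
printf '\033[4i'   # CSI 4 i: stop; output returns to the screen
```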



    Waiting for Returns: The filter remains attentive to returns of information 
coming from the serial printer. These returns may include information about the 
printing status, errors, or other relevant data.


This is where things go off the rails--"other...data". Most of the unix 
terminal emulators use the ANSI escape sequences which, as far as I 
know, had unidirectional printing. The datastream coming from the server 
had escape sequences to change output from screen to printer, but the 
printer had no way to interrupt that and talk back to the server. The 
documentation about "passthrough printing" you've referenced several 
times *does not* describe any capability for doing this. (In fact, the 
capability described involves a pipe and can't possibly be 
bidirectional.) There were escape sequences the server could use to 
query specific things about the printer (like "printer ready"; and, as 
far as I know, that was it). I have no idea whether any of the terminal 
emulators do much or anything with the status sequence, as they mostly 
expect to pipe output and aren't actually written to directly connect to 
a serial printer and check its status lines. (In an actual 
terminal/printer situation the query would report the status of DTR or 
CTS or whatever the specific hardware was using to communicate printer 
status.) In theory it would be a relatively trivial addition to tie the 
"printer status" escape code to something that queries printer status, 
if it's possible to do so and it's not already implemented in a terminal 
emulator that does support printing. But that's still not communication 
of arbitrary data.


Now VT100 wasn't the only terminal out there. For example, Wyse was a 
big name in terminals, and they used a completely different set of 
escape codes. One of theirs enabled a bidirectional mode, used for 
things like connecting an optical barcode scanner at a library checkout 
desk to the minicomputer in the back. I don't know of many open source 
wyse emulators, and none that implement this. IBM had their own 
proprietary terminals and control mechanisms, like the 3270 or 5250, 
with another set of capabilities. Again, I don't know of many open 
source emulators, and those that I am aware of had limited 
functionality. If this is the sort of thing you're talking about, you'd 
get much further searching for information about wyse terminal emulators 
(or whatever terminal language your software uses--there were far more 
than DEC VT or Wyse or IBM) rather than an open ended question about 
printing. (In reality, the bidirectional peripheral control might have 
been lumped into the printer escape sequences in terminal manuals and 
might have connected via the port labeled "printer", but wasn't ever 
really about printing because printing is unidirectional. This is 
obviously confusing to people not aware of the jargon.) I would not 
expect to find much open source software in this space because it's very 
niche and basically requires expensive proprietary software to test 
against if the goal is to run expensive proprietary software correctly. 

This is literally tech from the 1970s, so without the right keywords 
you're going to mostly find unrelated but newer and higher-ranked stuff 
that's not what you're looking for.




Re: SMART Uncorrectable_Error_Cnt rising - should I be worried?

2024-01-11 Thread Michael Stone

On Thu, Jan 11, 2024 at 03:25:51PM -0500, Stefan Monnier wrote:

manufacturers in different memory banks, but since it's always
possible to power down, replace or just remove memory, and power
up again,


Hmm... "always"?  What about long running computations like that
simulation (or LLM training) launched a month ago and that's expected to
finish in another month or so?


I'd expect something like that to have a checkpoint/restart capability 
if not starting over actually matters.



Some mainframes have supported hot (un)plugging RAM modules as well


Yes, mainframes have been engineered that way for a long time. It makes 
them very expensive, and their market share has been declining for 
decades because most problems can be solved more cheaply in software 
(even while maintaining high availability). Hot *spare* memory is 
relatively common, as it solves most problems without the complexity of 
hot *swapping*, at the (generally low) cost of having to schedule 
downtime at some point in the future to actually replace the failed 
module.




Re: SOLVED FOR GENE

2024-01-11 Thread Michael Stone

On Sun, Jan 07, 2024 at 06:37:08AM -0500, Felix Miata wrote:

Doing so is called a defensive response, something to be expected in response to
(needless) offensive behavior. Browsers have default fonts selectable by 
users for good reason. Websites shouldn't be assuming user settings are wrong.


That's a fight that was lost a long time ago.



Re: disable auto-linking of /bin -> /usr/bin/

2024-01-11 Thread Michael Stone

On Wed, Jan 10, 2024 at 11:49:02AM -0800, Mike Castle wrote:

To some extent, it will make it easier for packaging.


No, not at all--new packages have not had to worry about putting things 
anywhere but /usr for a long time. Only old packages (for which the work 
you described had been done years, if not decades, ago) needed logic to 
split things up, and that required essentially zero ongoing effort.


The benefits you describe came from eliminating the requirement that a 
system boot from / without /usr being mounted, and don't require moving 
anything.




Bug#1055694: initramfs-tools: After updating coreutils cp: not replacing in console when running update-initramfs

2024-01-03 Thread Michael Stone

On Sat, Nov 11, 2023 at 01:32:59AM +0100, Thorsten Glaser wrote:

On Fri, 10 Nov 2023, Sven Joachim wrote:


|   'cp -n' and 'mv -n' now exit with nonzero status if they skip their
|   action because the destination exists, and likewise for 'cp -i',


Ouch! Nonzero? That’s harsh, and bad as it’s impossible to distinguish
between error and declining to copy/move.

There is a good example in diff(1) for how to handle this better:
use distinct errorlevels for each case.

Michael, could you perhaps throw that upstream?

bye,
//mirabilos
--
15:41⎜ Somebody write a testsuite for helloworld :-)


Where do we stand on this after coreutils 9.4-3? The autopkgtest is 
failing, but I think at this point that's bogus (because of the new 
deprecation warning), and the functionality is actually ok?
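The diff(1) convention mirabilos points to uses distinct exit statuses, so a caller can tell "declined" apart from "failed":

```shell
cd "$(mktemp -d)"
echo a > x; echo a > y; echo b > z

diff -q x y && echo "identical (exit 0)"
diff -q x z >/dev/null || echo "differ (exit $?)"          # exit 1
diff -q x missing 2>/dev/null || echo "trouble (exit $?)"  # exit 2
```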




Bug#1058752: bug#62572: cp --no-clobber behavior has changed

2023-12-17 Thread Michael Stone

On Sun, Dec 17, 2023 at 12:34:11AM -0800, Paul Eggert wrote:

On 2023-12-16 13:46, Bernhard Voelker wrote:

Whether the implementation is race-prone or not is an internal thing.


I wasn't referring to the internal implementation. I was referring to 
cp users. With the newer Coreutils (FreeBSD) behavior, you can 
reliably write a script to do something if cp -n didn't copy the file 
because the destination already existed. With the older Coreutils 
behavior you cannot do that reliably; there will always be a race 
condition.


You can now reliably write a script using the new long option. Changing 
the behavior of the short option helped nobody.
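As a sketch, the two scripting patterns at issue look like this (which branch the `cp -n` call takes depends on the coreutils version in use, so the example doesn't assume one):

```shell
src=/tmp/cpdemo-src; dst=/tmp/cpdemo-dst
printf 'new\n' > "$src"
printf 'old\n' > "$dst"

# Test-then-copy, the pattern the old always-zero `cp -n` forced on
# scripts: racy, since another process can create $dst between the
# test and the copy.
[ -e "$dst" ] || cp "$src" "$dst"

# Under the 9.2-9.4 semantics the check and the copy are one operation
# and the exit status reports whether the copy happened:
if cp -n "$src" "$dst" 2>/dev/null; then
  echo "copied"
else
  echo "skipped (or failed)"
fi

# Every version agrees on one thing: an existing destination is untouched.
cat "$dst"    # -> old
```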






Bug#1058752: bug#62572: cp --no-clobber behavior has changed

2023-12-15 Thread Michael Stone

On Fri, Dec 15, 2023 at 11:21:06AM -0800, Paul Eggert wrote:
Still, Pádraig gave a reasonable summary of why the change was made, 
despite its incompatibility with previous behavior. (One thing I'd add 
is that the FreeBSD behavior is inherently less race-prone.) It seemed 
like a good idea at the time all things considered, and to my mind 
still does.


I think you underestimate the value of maintaining compatibility with 
deployed versions. In the abstract it may have been a nice cleanup, but 
there are a lot of dumb things in the posix utilities that have been 
dumb for so long it's not worth the pain of changing them. Since this 
change hasn't yet hit mainstream debian, ubuntu, rhel, or suse users, I 
strongly suspect that this is a case where the absence of complaints is 
simply a sign that most of the people who'd be impacted haven't 
experienced the change yet.


Even if we tell people not to use -n at all, that doesn't mean we 
should revert to the coreutils 9.1 behavior.


It does, IMO, as it would be less likely to break scripts written by 
existing coreutils users.


The cat is to some extent out of the bag. Unless one insists on 
(FreeBSD | coreutils 9.2-9.4), or insists on coreutils 7.1-9.1, one 
should not rely on cp -n failing or silently succeeding when the 
destination already exists. This will remain true regardless of 
whether coreutils reverts to its 7.1-9.1 behavior.


Or you use a distribution that has to patch to maintain compatibility 
between its own releases. Ideally upstream would revert the behavior for 
now, deprecate -n as the long-term fix, and all distributions would work 
the same. The other option is that each distribution decides whether to 
be compatible with upstream coreutils or with its own previous release.






Re: differences among amd64 and i386

2023-12-15 Thread Michael Stone

On Fri, Dec 15, 2023 at 09:36:19AM -0500, Jeffrey Walton wrote:

Also see x32. It takes advantage of amd64 benefits, and tries to reduce 
the memory pressures.


x32 hasn't really gone anywhere and is unlikely to at this point; amd64 
is the only reasonable choice today for a normal user who just wants to 
use the computer.




Bug#1058752: bug#62572: cp --no-clobber behavior has changed

2023-12-15 Thread Michael Stone

On Fri, Dec 15, 2023 at 06:33:00PM +, Pádraig Brady wrote:

Advantages of leaving as is:
We get consistency of "noclobber" behavior across systems / shells.


You don't, unless you ignore the coreutils/linux installed base 
entirely. Essentially the current situation is that -n shouldn't be used 
if you expect a certain behavior for this case and you are writing a 
script for linux systems. Maybe in 10 years you'll be able to assume 
the new behavior. Better to just tell people to not use it at all, and 
leave the historic behavior alone until everyone has stopped using -n 
entirely.
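For comparison, the shell-level "noclobber" behavior referred to above is POSIX `set -C`, under which a `>` redirection to an existing file fails with nonzero status instead of overwriting (file name illustrative):

```shell
rm -f /tmp/noclobber-demo
set -C                            # enable POSIX noclobber
echo first > /tmp/noclobber-demo
# Attempt the overwrite in a subshell so the failed redirection is contained:
( echo second > /tmp/noclobber-demo ) 2>/dev/null || echo "refused"
cat /tmp/noclobber-demo           # -> first
set +C
```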



There is no potential for data loss etc.


There may not be, strictly speaking, if you look only at cp without 
context, but we have absolutely no idea what the impact is on the 
unknown number of existing scripts that depend on the historic behavior. 
This is causing breakages in practice.



so it just comes
down to how disruptive it is, or how often -n was used
with the "skip behavior" assumption.


IMO, it should come down to trying to avoid breaking changes in core 
system utilities. There's no compelling reason to force this change, so 
why break anything that depended on the historic behavior--especially 
without any notice or transition period--regardless of arguments over 
whether the historic behavior was right?



We've not had much push back as of yet,
and my current thinking is it's not that disruptive a change.


I suspect that's because it has not yet been widely deployed, which 
makes now the time to fix it.


Michael Stone





Bug#1058752: cp --no-clobber behavior has changed

2023-12-15 Thread Michael Stone
I tend to think this was a serious mistake: it breaks the behavior of 
existing scripts with no deprecation period. A stated advantage is 
better compatibility with freebsd, but I don't understand why that is 
more desirable than compatibility with all deployed gnu/linux systems? I 
also don't think it's sufficient to try to lawyer out by saying that the 
current behavior was undocumented: the previous documentation said that 
-n would "silently do nothing" and that the return code would be zero on 
success. Logically, unless cp fails to "do nothing", it should exit with 
a zero code.


Such a drastic change in behavior demands a new flag, not a radical 
repurposing of a widely used existing flag.


I was hoping to see more action on this bug, but that hasn't happened. 
I'm not sure I see a way forward for debian other than reverting to the 
old behavior. I am reluctant to do so as that will likely lead to 
divergent behavior between distributions, but breaking scripts without a 
compelling reason is also not good. I would encourage coreutils to 
reconsider the change and find a non-breaking way forward.


Michael Stone





Re: Linking coreutils against OpenSSL

2023-11-11 Thread Michael Stone

On Sat, Nov 11, 2023 at 11:50:31AM +0100, Andreas Metzler wrote:

you seem to have missed/deleted the paragraph where Ansgar suggested how
to do this *without* tradeoff. ("explicitly disable/enable build options
per arch")


No, I didn't. That was in my original email and is one of the 
possibilities for future versions depending on the feedback from people 
testing to guide whether it makes sense to make this per-arch rather 
than global.




Re: Linking coreutils against OpenSSL

2023-11-10 Thread Michael Stone

On Fri, Nov 10, 2023 at 03:10:42PM +0100, Ansgar wrote:

Please avoid producing different results depending on the build
environment. That just results in non-reproducible issues in unclean
environments (suddenly different dependencies, different features,
...).


I think that is an acceptable tradeoff at this time; the only difference 
will be the dependencies, but that is the intent. Automated buildd 
packages should be stable. Based on further experience and feedback, one 
of the other options could be chosen instead. (I'm particularly 
interested in hearing from people who compare the different builds on 
arm, as that is where there's been an assertion of a performance 
regression.)




Re: Linking coreutils against OpenSSL

2023-11-10 Thread Michael Stone

On Fri, Nov 10, 2023 at 10:38:13AM +, Luca Boccassi wrote:

Per-architecture dependencies are possible though, so maybe starting
to add the libssl dependency only on amd64 is a good starting point,
and then users of other architectures can request to be added too if
it is beneficial for them.


I haven't seen any objections to the basic idea, so I'm starting here: 
coreutils 9.4-2 will link to libcrypto if there's a gpl-compatible 
version available at build time, but I've added the build-dependency as 
linux-amd64 only for now. That should make it fairly straightforward for 
people to control the linking on other architectures by controlling 
their build environment. Going forward, depending on feedback, I can 
roll this back, expand the build-dep, and/or make the configure option 
also depend on the arch.
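For illustration, an architecture-qualified build-dependency in debian/control looks roughly like this (the package list here is abbreviated and hypothetical, not the actual coreutils control file):

```
Source: coreutils
Build-Depends: debhelper-compat (= 13),
               libssl-dev [linux-amd64]
```

On any architecture not matching the qualifier, the dependency is simply ignored at build time, so the configure machinery falls back to whatever it finds in the build environment.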




Re: du enhancement

2023-11-09 Thread Michael Stone

On Fri, Oct 20, 2023 at 04:47:59PM +0100, Pádraig Brady wrote:

Yes coloring is a plausible addition to du,
and would make sense to follow the same options, env vars as ls.


I'd argue that the use of colors in coreutils shouldn't really expand 
unless/until there's a more robust & generic interface than embedding 
escape codes in the utility sources and relying on stuff like terminal 
names in dircolors. Colorizing more outputs would certainly make it 
easier to parse some of the walls of text, but it seems that going down 
that road is an excellent time to refactor the old ls color hack rather 
than trying to extend it.




Re: Looking for a good "default" font (small 'L' vs. capital 'i' problem)

2023-08-22 Thread Michael Stone

On Sat, Aug 19, 2023 at 09:19:48PM +0200, Christoph K. wrote:

Could you please recommend a "suitable" sans-serif font that


A lot of your criteria are rather subjective. For packaged fonts you 
might look at "hack" 
(https://source-foundry.github.io/Hack/font-specimen.html)

or "go"
(https://go.dev/blog/go-fonts)

There's also the not-packaged https://github.com/intel/intel-one-mono 


But you'd have to be the judge of what you like the look of.



Re: Potential MBF: packages failing to build twice in a row

2023-08-14 Thread Michael Stone

On Mon, Aug 14, 2023 at 09:40:52PM +0100, Wookey wrote:

On 2023-08-14 10:19 -0400, Michael Stone wrote:

On Thu, Aug 10, 2023 at 02:38:17PM +0200, Lucas Nussbaum wrote:
> On 08/08/23 at 10:26 +0200, Helmut Grohne wrote:
> > Are we ready to call for consensus on dropping the requirement that
> > `debian/rules clean; dpkg-source -b` shall work or is anyone interested
> > in sending lots of patches for this?
>
> My reading of the discussion is that there's sufficient interest for
> ensuring that building-source-after-successful-binary-build works.

my reading said that there was interest in making sure that binary builds
work repeatedly, and almost no interest in making sure that building source
from a rules/clean works. certainly not thousands of packages worth of busy
work level of interest.


Yes. You are right. I (and most of the others who expressed an
interest in having this working) mostly care about doing a binary
build repeatedly. But doesn't this amount to much the same thing?


no, not really. a lot of benign changes (like copying in new autoconf 
stuff) can happily be made multiple times, which doesn't affect building 
at all but causes busy work to undo.



dpkg-source will moan if the source has changed and tell you about the
nice patch it has made. OK, it will let some things slide as just
warnings, so 'builds binary twice' is a somewhat less stringent target
than 'leaves exactly the original pristine source'. I would have to check
the details, but I'm not sure how much difference this makes in
practice?


we don't know, since the test was "regenerate source"--a thing very few 
people care about--rather than "build twice" which is the thing people 
do seem to care about. It seems likely that the difference is thousands 
of packages.


I'm somewhat concerned that we went straight from "should we do an MBF" 
to "I just did an MBF" without any real consensus in between. This is so 
painfully obvious that the MBF itself basically says there's no 
consensus.



Re: Potential MBF: packages failing to build twice in a row

2023-08-14 Thread Michael Stone

On Thu, Aug 10, 2023 at 02:38:17PM +0200, Lucas Nussbaum wrote:

On 08/08/23 at 10:26 +0200, Helmut Grohne wrote:

Are we ready to call for consensus on dropping the requirement that
`debian/rules clean; dpkg-source -b` shall work or is anyone interested
in sending lots of patches for this?


My reading of the discussion is that there's sufficient interest for
ensuring that building-source-after-successful-binary-build works.


my reading said that there was interest in making sure that binary 
builds work repeatedly, and almost no interest in making sure that 
building source from a rules/clean works. certainly not thousands of 
packages worth of busy work level of interest.




Re: Proposed MBF - removal of pcre3 by Bookworm

2023-07-01 Thread Michael Stone

On Sat, Jul 01, 2023 at 09:44:27AM -0400, Michael Stone wrote:

On Thu, Jun 29, 2023 at 08:55:11PM +0100, Matthew Vernon wrote:
Bookworm is now out; I will shortly be increasing the severity of 
the outstanding bugs to RC, with the intention being to remove 
src:pcre3 from Debian before the trixie release.


You don't think that marking packages for removal two weeks after the 
bug is filed is a little much?


Apologies, the original bug report apparently slipped under the radar.



Re: Proposed MBF - removal of pcre3 by Bookworm

2023-07-01 Thread Michael Stone

On Thu, Jun 29, 2023 at 08:55:11PM +0100, Matthew Vernon wrote:
Bookworm is now out; I will shortly be increasing the severity of the 
outstanding bugs to RC, with the intention being to remove src:pcre3 
from Debian before the trixie release.


You don't think that marking packages for removal two weeks after the 
bug is filed is a little much?




Re: A hypervisor for a headless server?

2023-06-04 Thread Michael Stone

On Fri, Jun 02, 2023 at 05:18:38PM +0200, zithro wrote:

On 02 Jun 2023 14:31, Michael Stone wrote:
I don't recommend xen for new projects. It has more pieces and tends 
to be more fragile than qemu+kvm, for no real benefits these days. 
(IMO)


Define "more pieces" and "more fragile" ?


You need to juggle kernel version, qemu version, and xen version. You 
need a bootable dom0 *as well as* a bootable xen hypervisor. If any of 
these things mismatch or stop working, things break. The xen-specific 
pieces are generally less well known and less operationally tested 
because there are fewer users. The xen developers have gone through 
several vm models and various deprecations in the past few years, and 
there have been actual breakages for users of the debian packages due to 
the many combinations of features which can break in the presence of 
changes (such as changes needed for security issues) and the difficulty 
(infeasibility?) of testing all the possible combinations. That would be 
less of an issue if rolling your own and tracking xen upstream directly, 
but this is a debian list, and the debian packages face a different set 
of constraints.



It has a really low TCB and is still used by Amazon for their cloud.


As a legacy service. New VMs are deployed using different technologies. 
They were the only major cloud service to go with xen, and their 
continued use seems more a matter of leaving it running for legacy 
instances being less work than migrating everything. (Which is basically 
where I still have deployed.) Amazon is also not using a xen package 
from a general purpose OS, and has quite a large team devoted to the 
care and feeding of that infrastructure. It's basically an apples to 
boxcars comparison unless the person trying to decide which hypervisor 
to go with happens to be running one of the largest clouds in the world. 
(Which begs the question of why on earth they'd be looking for answers 
on debian-user.)



You don't even need qemu if running fully virtualized guests (PV/PVH).


xen's continuing search for the next great thing 
(pv/hvm/pvhvm/pvh/pvhv2) has itself been a source of operational pain. 
From the perspective of taking the best advantage of the technology 
available at the time it's great, but from the perspective of wanting to 
set something up and just have it keep running, it's a pain. (And, to 
the point, kvm has been less of a pain because for better or worse its 
model has remained more stable.)


None of this is to say that xen is a bad project or that some people may 
find it the best option, but I'll continue to not recommended it as a 
general solution for people looking to deploy a new vm environment. It's 
just easier to go with kvm.




Re: A hypervisor for a headless server?

2023-06-04 Thread Michael Stone

On Fri, Jun 02, 2023 at 05:34:58PM +0200, Mario Marietto wrote:

Excuse me,but there is something within your argumentation that I don't like
and I want to express what it is. Let's take Linux as an example of what I want
to say. Linux is well known to be an OS that can be installed on the old
machines,helping the people that can't buy a new computer to surf the net and
to do the basic things that they couldn't do using a more complete and modern
PC built with new hardware components. And this is a linux quality that
everyone loves and one of the reasons why Linux is growing faster on the
market.


This is a misunderstanding of a general purpose OS. netbsd is the 
project that supports old hardware for the sake of old hardware. Linux 
has never been that, and when the choice comes to supporting something 
old or supporting something new, the decision is generally to abandon 
the old. There's no other way to make progress. If both can coexist, 
fine--but if you want something that specifically caters to 
(functionally obsolete) hardware you need a project with that as a goal 
rather than a project that has general utility as a goal.



But,first of all,I think that there are a LOT of old PCs in the world,since
poor people aren't only a niche.


I think you underestimate the scale of the e-waste problem. Simply 
giving people better, less obsolete, hardware is (IMO) a much better use 
of resources than trying to continue to use older hardware for no real 
reason other than a desire to use really old hardware. Even just 
considering environmental consciousness, saving a 5 year old PC from the 
landfill and throwing out a 10 year old PC is a net positive in terms of 
energy efficiency if nothing else.




Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Fri, Jun 02, 2023 at 03:24:13PM +0200, Mario Marietto wrote:

I mean. I cant use qemu on that I5 cpu because is slow without kvm. Kvm does
not work on that cpu because it is needs some extensions from the cpu that
there arent. Bhyve is the only alternative because it is a mix between qemu and
kvm in terms of speed. So. My question is : how much old cpu there are that
cant run kvm ? I dont think mine is the only one. May be a good idea is to port
bhyve on linux to cover the little needs of the users who wants a fast hyp on
the old cpus. and not,qemu in these cpus is very slow. is not the solution. I
really think there isnt any better alternative than qemu in these situations.
The only one is bhyve
 if someone wants to try the scenarios that im talking about,they will
understand for sure. and maybe they want to start the porting of bhyve on
linux.


Realistic answer: if something can't be supported by kvm it's probably 
old enough that upgrading it makes more sense than investing developer 
resources on that niche case. 10 year old machines that do support kvm 
are basically free these days.


Other than that, I'm not going to argue the basically hypothetical case 
of "machines that can use bhyve hardware virtualization but not kvm 
hardware virtualization" because 1) there probably aren't many and 2) 
without details of what exactly is the problem with your particular 
machine I can't provide a sensible response. (There's not really much 
difference between kvm and bhyve requirements, so I'd guess something 
like a bios bug is causing issues.) Again, this just isn't a general 
problem and certainly not something worth a lot of resource expenditure.




Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Fri, Jun 02, 2023 at 03:01:04PM +0200, Mario Marietto wrote:

Using qemu is out of discussion,because it is very slow. But as I said,bhyve
works better than qemu alone.


kvm literally uses qemu as its user space, so it's very much not out of 
the discussion. If you can't use the kvm kernel extensions for 
virtualization, then running under qemu gets you the same userspace 
experience with a performance penalty. bhyve has its own requirements 
for what cpu extensions must be present, and AFAIK can't fall back to a 
non-accelerated mode if they are not, regardless of performance. 
Sometimes you might want VMs to run, even if slowly, rather than not 
run at all.


I get that there's some machine that you have that can't run kvm. I have 
no idea why without access to the machine. But that's not a general 
problem, that's a problem with some specific piece of hardware. It's not 
like kvm has exotic requirements--I'm running it on some hardware 
that's more than a decade old.



Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Fri, Jun 02, 2023 at 08:41:44AM +, Victor Sudakov wrote:

Interestingly, libvirt claims to support bhyve, I just never felt a
need for such sophisticated tools to run just several VMs.


Yes, it sounds like you should ignore libvirt entirely and just 
install qemu-system-x86 (and not qemu-system-gui). That's a minimal 
system with no gui, you just run qemu from the command line to start 
VMs. If you run with --enable-kvm or --machine accel=kvm, then you're 
using kvm (assuming the kernel module is loaded).


That said, there's a huge convenience factor for libvirt. You may end up 
with libraries you'll never use on the server, but so what? You can 
install virt-manager on a client system and manage with a gui that uses 
ssh in the background, or use virsh on the server. If you find yourself 
needing to do something infrequently, it's much easier to discover it in 
the virt-manager gui than it is to dig through docs on how to do it from 
the qemu command line. (This is, of course, the usual tradeoff between 
text and graphical interfaces.) It's also easier to use 
standard/documented solutions for startup, config, storage, etc, than it 
is to remember what bespoke solution you came up with several years ago 
when something breaks, even if all the abstraction layers of libvirt are 
"less efficient".




Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Fri, Jun 02, 2023 at 11:21:45AM +0200, Mario Marietto wrote:

wait wait. for sure the option should be enabled on the bios,but bhyve works in
a different way than kvm,so it works even if my cpu does not have all the virt.
parameters respected. Infact kvm does not work on that cpu. But how many cpus
there are like mine ? Does Linux feel to cover the gap of an alternative to
qemu and kvm ? not sure about xen as an alternative. 


kvm is literally the hardware acceleration piece. If your CPU doesn't 
support that, use qemu without kvm and you get exactly the same 
experience (just a bit slower). So "linux" "feels" no need to cover the 
gap, because there isn't one.




Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Fri, Jun 02, 2023 at 11:09:36AM +0200, Paul Leiber wrote:
+1 for Xen, AFAIK the standard apt installation doesn't include any 
management GUI.


This is the howto which helped me getting started:

https://wiki.xenproject.org/wiki/Xen_Project_Beginners_Guide


I don't recommend xen for new projects. It has more pieces and tends to 
be more fragile than qemu+kvm, for no real benefits these days. (IMO)




Re: A hypervisor for a headless server?

2023-06-02 Thread Michael Stone

On Thu, Jun 01, 2023 at 11:53:26PM -0400, Brian Sammon wrote:

"virt-manager", on the other hand, appears to be fundamentally a GUI tool.


But virsh from libvirt-clients isn't.



Re: update-initramfs

2023-04-13 Thread Michael Stone

On Thu, Apr 13, 2023 at 01:57:04PM -0500, David Wright wrote:

os-prober no longer scours all the other
partitions for OSes any more.¹ 


Which is wonderful--that was one of the most annoying misfeatures to 
have ever been enabled.




Re: Playing Card Symbols

2023-03-27 Thread Michael Stone

On Mon, Mar 27, 2023 at 02:13:35PM -0400, Jude DaShiell wrote:

You know, if all of those symbols were in some font set and had text
labels attached to them that could speak when a screen reader was used a
whole bunch of playing card applications would suddenly become accessible
for screen reader users.


Some (many?) text-to-speech engines already translate Unicode emoji 
into descriptions.




Re: Which takes priority, ipv4, or ipv6?

2023-03-27 Thread Michael Stone

On Mon, Mar 27, 2023 at 12:48:13PM +0100, Richmond wrote:

I have configured an ipv6 tunnel. If I visit this site:

http://ip6.me/

The "normal" test shows my ipv4 address, and the:

http://ip6only.me/

shows the ipv6 address.

However if I switch my DNS from opendns to the one provided by my ISP
and then run the "normal" test it shows the ipv6.

The note says:

(preference depends on your OS/client)

So how is the preference determined? It seems to be determined by the
DNS, but why or how do I tell for example with host -v?


Modern browsers generally follow https://www.rfc-editor.org/rfc/rfc6555 
and connect to both, using whichever connection completes first.


You may see different results on repeated connection attempts (but not 
necessarily for immediate retries due to connection caching).
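The staggered-connection race that RFC 6555 describes can be sketched in a few lines. This is illustrative only, not any browser's actual implementation; the stagger value and the candidate list are made-up parameters, and a real client would keep the winning socket open instead of closing it.

```python
import concurrent.futures
import socket
import time

def happy_eyeballs(candidates, port, stagger=0.25, timeout=2.0):
    """candidates: list of (family, address) tuples, most preferred first.
    Starts each connection attempt a short stagger after the previous one
    and returns the (family, address) of whichever connects first."""
    def attempt(family, addr, delay):
        time.sleep(delay)
        with socket.socket(family, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect((addr, port))   # raises OSError on failure
            return (family, addr)

    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(attempt, fam, addr, i * stagger)
                   for i, (fam, addr) in enumerate(candidates)]
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()   # first successful attempt wins
            except OSError:
                continue              # that family failed; keep waiting
    raise OSError("all connection attempts failed")
```

If the preferred (usually IPv6) attempt stalls or fails, the IPv4 attempt started a fraction of a second later wins the race, which is why repeated tests can show different address families.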




Re: No /

2023-03-15 Thread Michael Stone

On Wed, Mar 15, 2023 at 05:05:49PM +0100, Michael Lee wrote:

Is there a way to fix this, or is a re-installation the only remedy?


For all the things people like about btrfs, IME it's not as good at 
recovery from adverse events as are ext4 or xfs. In your circumstance 
your best bet is probably a reinstall. (Hopefully from backups to make 
this easier, as was suggested several days ago but shouted down for no 
apparent reason.)


If you haven't already, try running "btrfs check" on the filesystem 
from the initramfs prompt. There's a --repair option which you may be 
able to add to correct the problem if the check can identify what's 
wrong.


Your last-resort option would be to boot from an install cd in recovery 
mode or a live cd and see if that version of "btrfs check" can fix 
it. The newer the boot media, the later the version of the btrfs-progs 
package and (probably) the better it will do at recovery. bullseye is 
version 5.x; bookworm and bullseye-backports have version 6.x.




Re: Partitioning an SSD?

2023-02-16 Thread Michael Stone

On Thu, Feb 16, 2023 at 02:22:56AM -0500, Felix Miata wrote:

What physical boundaries do SSDs have to report? All I know about that are 
exposed
are sector size and sector count. I have yet to find one where logical/physical
were not 512B/512B.


Don't worry about it; modern partition tools align on 1MB, which works 
for basically anything.




Re: Partitioning an SSD?

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 11:23:52PM -0500, pa...@quillandmouse.com wrote:

Here's why you would partition a drive. Reinstalling (which I end up
having to do every time Debian comes out with a new version) means
overwriting the storage.


I already acknowledged that people can do what they want based on 
personal preference. I don't think I've ever personally reinstalled 
because of an upgrade; the machine I'm writing this on was first 
installed more than a decade ago. FWIW, I've also got stuff in many 
directories other than /home that I care about, so backups are more 
important to me than only having /home.




Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 11:49:51AM +0100, to...@tuxteam.de wrote:

On Tue, Feb 14, 2023 at 03:07:08PM -0500, Michael Stone wrote:

On Fri, Feb 10, 2023 at 02:33:12PM +, Tim Woodall wrote:
> On Fri, 10 Feb 2023, jeremy ardley wrote:
> > you can ping them as in
> >
> > ping fe80::87d:c6ff:fea4:a6fc
> >
>
> ooh, I didn't know that worked.
>
> Same as
> ping fe80::87d:c6ff:fea4:a6fc%eth0
>
> on my machines at least. No idea how it picks the interface when there's
> more than one.
>
> The interface seems mandatory for ssh for me:
>
> tim@einstein(4):~ (none)$ ssh fe80::1
> ssh: connect to host fe80::1 port 22: Invalid argument
> tim@einstein(4):~ (none)$

You actually have an fe80::1 IP address on your system? That would be highly
unusual. If you don't, why would you expect it to respond?


Whether it responds or not is, I think, irrelevant here. The thing
gets cut short by the -EINVAL, which stems from the missing interface
specification (well, "zone index" in IPv6 jargon). Without zone index,
an IPv6LL is (may be?) underspecified. So it would be fe80::1%eth0
or something similar.


Ok, I didn't get what you were asking. Yes, a link local address must 
have a scope (interface) associated with it, by policy. You don't need 
it with ping because it's using a lower level raw socket (but, if there 
are multiple interfaces and you didn't specify one, the packets are 
likely going out the wrong one). The reason for this is that by 
definition the addresses are specific to a link, and there's no 
mechanism (e.g., route table with a default) for the kernel to determine 
which link to use. It's possible for the same link local address to 
be present on multiple links (the addresses are only required to be 
unique per link) so it's not a generally solvable problem, and given the 
purpose of link local addresses it doesn't really need to be.
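The zone-index point can be illustrated with Python's stdlib `ipaddress` module (illustrative only; "eth0" is a placeholder interface name, and parsing a scoped address like this needs Python 3.9+):

```python
import ipaddress

# fe80::/10 addresses are only meaningful per link...
lla = ipaddress.IPv6Address("fe80::87d:c6ff:fea4:a6fc")
print(lla.is_link_local)      # True

# ...so with more than one interface up, the bare address is ambiguous
# and a zone index ("%eth0") is needed to pin it to a link.
scoped = ipaddress.IPv6Address("fe80::87d:c6ff:fea4:a6fc%eth0")
print(scoped.scope_id)        # 'eth0'

# A global address needs no zone index; the route table disambiguates.
print(ipaddress.IPv6Address("2001:db8::1").is_link_local)  # False
```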


I'd missed that the OP that started this suggested otherwise. There were 
no command outputs so I don't know if it actually works on that system 
or it was simply a failure to remember to add the scope id in the 
mail. If it does work I have no idea how, but there'd have to be 
something in the stack adding a scope. When link local addresses are 
returned by nss-mymachines or nss-mynetworks the scope is included so it 
will "just work".




Re: Partitioning an SSD?

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 05:58:47PM -0500, PMA wrote:

I'm preparing to install Debian 11.5.0 on a new computer.
Its drives are SSDs, not the HDDs I've been accustomed
to and have always fastidiously *partitioned*.

With my file groupings already well differentiated c/o
directory-tree layout, is there any further advantage
to be had in partitioning *these* drives?

(I do understand somewhat the difference between the
drive types -- e.g., that SSDs don't assign functional
space.  I'm just not sure what other issue may apply.)


I don't personally think there's a point in partitioning any storage 
device on a user system these days beyond what's required to boot. If 
you want to do more, that's a personal preference. Being an SSD doesn't 
really change things.




Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 04:24:36PM -0500, gene heskett wrote:

you basically just made this up

No Michael, just recalling our interaction history, the general tone 
being to give me hell for using hosts files instead of running a dns.


I have not told you that you need to use bind instead of hosts, 
/etc/hosts is a supported mechanism that works fine, please stop 
asserting that I've said otherwise. We've already been asked to stop 
once so I will not reply further.




Re: OpenMPI 5.0 to be 32-bit only ?

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 11:29:25AM +, Alastair McKinstry wrote:
The counterpoint is if someone does a high-core-count 32-bit arch for 
HPC; x32 could (have been) such an architecture, but its development 
looks stalled.


That may have been a possibility 15-20 years ago, but today anything 
calling itself "HPC" is working with datasets large enough that trying 
to use 32 bit pointers would be far more trouble than it's worth; x32 is 
of interest to container services far more than HPC, IMO. (Even the GPUs 
have working sets above 4GB currently--the days of a viable 32 bit 
architecture in this space are entirely in the rear view mirror.)


I personally think it makes far more sense to excise MPI from 
architectures where it will never realistically be used than it does to 
try to keep it going just to have it.




Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 10:12:32AM -0500, Greg Wooledge wrote:

Sorry, Gene's line was actually "search hosts, nameserver".

So, "ping coyote" should have triggered name resolution for "coyote.hosts"
and/or "coyote.nameserver".

It's just barely conceivable that *something* might have created a
resolvable entry for "coyote.hosts", and that some libnss-* module might
have returned a working IP address using it.

I just don't know what that *something* could be.


Why worry about it? We'll never get the actual state of the machines 
involved and thus will never be able to do more than speculate, so why 
waste more bytes going back and forth about it, sending messages to 
thousands of people who also can't get the information necessary to do 
more than guess?




Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 09:30:57AM -0500, gene heskett wrote:
True. But I'd also suggest that if you do not want to support 
/etc/hosts files name resolution methods


/etc/hosts works and has worked fine on debian for decades

to. Your attitude that everybody with a two machine home network 
should run a bind instance just to lookup the other machine 


you basically just made this up



Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 03:46:21PM +0300, Reco wrote:

libnss-myhostname does that.
Why it chooses ipv6 link-local over ipv4 static IP is another question.


perhaps because ipv6 is preferred and there is no public ip6. it doesn't 
really matter because normal users won't notice or care whether it's 
ipv6 or ipv4.




Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 07:57:09AM -0500, gene heskett wrote:
And this disclosed that I had not properly added coyote.coyote.den to 
the /etc/hosts file on that machine. That mistake, fixed, now makes 
the local net pingable. The rest of it, whats powered up, was/is all 
pingable. It just wasn't tried. This machine is generally the master, 
and when I couldn't ping it, I assumed none of the local stuff worked.


Checking that on some of my other machines disclosed that something 
besides me is mucking with the /etc/hosts on some, but not all, of the 
other machines. And its something I'll have to fix from the machines 
own keyboard because my /sshnet/localname network doesn't allow root 
logins. The /etc/hosts file on go704 has been stripped to just itself! 
Fixed and chattr +i added to it now.


So you've finally figured out that what you told us is wrong, but you're 
doubling down on baroque workarounds that will make it even harder 
to figure out what's actually happening. And presumably, you'll be 
loudly announcing to other users that the only solution is to do 
ridiculous things like chattr +i random files, confusing future users 
who find this stuff via google and don't realize that you only got 
halfway to figuring out what was going on before announcing what the 
"proper" fix was. That's the frustrating part: you can do whatever you 
want with your own system and it doesn't really matter to anyone else, 
but we're going to keep reading wrong information proclaimed loudly and 
often--which does potentially affect others.


And I'm pretty sure I put it in there correctly when I installed 
armbian on it.


Occam suggests otherwise.



Re: ipv6 maybe has arrived.

2023-02-15 Thread Michael Stone

On Wed, Feb 15, 2023 at 07:30:44AM -0500, Greg Wooledge wrote:

That said, I'm curious about this part oF Gene's result:


> gene@bpi54:~$ grep -i bpi54 /etc/hosts
> 192.168.71.12   bpi54.coyote.denbpi54
> gene@bpi54:~$ getent hosts bpi54
> fe80::4765:bca4:565d:3c6 bpi54


Where does getent pull that IPv6 address from?  That's not what I get
when I look myself up:


probably mymachines, which is yet another new twist to add to this 
already ridiculous saga




Re: ipv6 maybe has arrived.

2023-02-14 Thread Michael Stone

On Fri, Feb 10, 2023 at 02:33:12PM +, Tim Woodall wrote:

On Fri, 10 Feb 2023, jeremy ardley wrote:

you can ping them as in

ping fe80::87d:c6ff:fea4:a6fc



ooh, I didn't know that worked.

Same as
ping fe80::87d:c6ff:fea4:a6fc%eth0

on my machines at least. No idea how it picks the interface when there's
more than one.

The interface seems mandatory for ssh for me:

tim@einstein(4):~ (none)$ ssh fe80::1
ssh: connect to host fe80::1 port 22: Invalid argument
tim@einstein(4):~ (none)$


You actually have an fe80::1 IP address on your system? That would be 
highly unusual. If you don't, why would you expect it to respond?




Re: ipv6 maybe has arrived.

2023-02-14 Thread Michael Stone

On Thu, Feb 09, 2023 at 03:02:22PM -0500, gene heskett wrote:
Yes Greg, you keep telling me that. But I'm in the process of bringing 
up a 3dprinter farm, each printer with a bpi5 to manage octoprint. 
Joing the other 4 on this net running buster and linuxcnc.


Just last week I added another bpi5, copied the /etc/hosts file and 
restarted networking. It could NOT find the other machines on my net 
UNTIL I added that search directive to resolv.conf.  This net is about 
50/50 buster and bullseye.


If what you say is true, that should not have been the fix, so explain 
again why its not working, cuz it is.


Because you change a bunch of things all at the same time in a 
non-repeatable fashion, lose track of what you've done, and decide that 
the nonsense line did something.




Re: ipv6 maybe has arrived.

2023-02-14 Thread Michael Stone

On Tue, Feb 14, 2023 at 07:42:59PM +, Brian wrote:


I was attracted by this idea and it gave me pause for
thought. Leaving aside printers that include a network
interface, the IPP-over-USB standard applies to a
non-network-capable  printer.

The specs require IPP (put in firmware, I suppose) and
DNS-SD discovery (again in firmware). Little extra cost?
AFAICT, a networking stack is not specified. Why should
it be for an isolated machine?


Because that's how IPP works: IPP-over-USB essentially creates a 
point-to-point network to the printer, which is then addressed via TCP 
just like any other networked printer. Historically there was a 
significant cost associated with adding a networking stack (more 
processors, memory, etc), but as everything gets more integrated it's 
essentially a zero-cost addition. It's not going to be added to old 
hardware, but not many new designs are likely to be introduced without 
it.




Re: How can I check (and run) if an *.exe is a DOS or a Windows program?

2023-01-09 Thread Michael Stone

On Sat, Jan 07, 2023 at 11:33:44AM +, Ottavio Caruso wrote:

$ file test2/sm/SM.EXE
test2/sm/SM.EXE: MS-DOS executable, MZ for MS-DOS

Which makes me think it's DOS but it could be a false positive.


Nope, that's it. If it was windows it would say something like 
"PE32+ executable (GUI) x86-64 (stripped to external PDB), for MS Windows"
the keywords being "PE" and "for MS Windows". 32 bit programs would be 
"PE32" rather than "PE32+".


If you're running an ancient executable for an ancient (16bit) version 
of windows it would show up as something like 
"MS-DOS executable, NE for MS Windows (3.0)"
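A stripped-down sketch of the header check `file` is doing here (the real magic database tests far more fields): every MZ executable starts with b"MZ", and the 32-bit offset stored at 0x3C points at a b"PE\0\0" signature for Windows PE images or a b"NE" signature for 16-bit Windows executables.

```python
import struct

def exe_kind(data: bytes) -> str:
    if data[:2] != b"MZ":
        return "not an MZ executable"
    if len(data) >= 0x40:
        # e_lfanew: little-endian offset of the new-style header
        (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
        if data[e_lfanew:e_lfanew + 4] == b"PE\0\0":
            return "Windows PE"
        if data[e_lfanew:e_lfanew + 2] == b"NE":
            return "16-bit Windows NE"
    return "MS-DOS"

# Synthetic 68-byte header with e_lfanew pointing at a PE signature:
synthetic = b"MZ" + b"\x00" * 0x3A + struct.pack("<I", 0x40) + b"PE\0\0"
print(exe_kind(synthetic))   # Windows PE
```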




Re: tbird broken

2022-11-18 Thread Michael Stone

On Fri, Nov 18, 2022 at 06:14:23PM -0500, gene heskett wrote:
Since when does a total lack of html content, disable MIME handling?? 
That seems like a whopper of a bug to me since MIME was around and 
working quite well in the later '80's.


MIME was standardized in 1992 and wasn't particularly well supported 
until the late 90s.




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-14 Thread Michael Stone

On Mon, Nov 14, 2022 at 08:40:47PM +0100, hw wrote:

Not really, it was just an SSD.  Two of them were used as cache and they failed
was not surprising.  It's really unfortunate that SSDs fail particulary fast
when used for purposes they can be particularly useful for.


If you buy hard drives and use them in the wrong application, they also 
fail quickly. And, again, you weren't using the right SSD so it *wasn't*
particularly useful. But at this point you seem to just want to argue 
in circles for no reason, so like others I'm done with this waste of 
time.




Re: definiing deduplication

2022-11-13 Thread Michael Stone

On Sat, Nov 12, 2022 at 01:39:56PM -0500, Stefan Monnier wrote:

But as I mentioned, higher-layers (the filesystem layer, and the
applications running on top of that) *should* try and make sure that
a hard failure (kernel crash, power failure, ... these and up taking
a snapshot of your block device) can never result in an
inconsistent state.

That's the core of the ext3 improvement over ext2, for example.


Actually, it isn't--the core of the ext3 improvement over ext2 is faster 
startup time after an unplanned shutdown, by avoiding an fsck; it does 
not offer stronger consistency guarantees than ext2 if the application 
is being careful in how it writes data. It's possible to run ext3 in 
full data journalling mode, which does change things, but that isn't 
normally done because the performance impact is significant. (And 
because in most cases it doesn't help much in practice--applications 
that are careful about how they write data already cope with 
non-data-journaling filesystems because that's the normal case, and 
applications which aren't careful about how they write data can still 
end up in a situation where data is consistent from the filesystem pov 
but partially written/corrupt from the application pov.)
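"Careful about how they write data" means, roughly, the classic write-to-temp, fsync, rename dance (a sketch with error handling trimmed; it gives the same crash guarantee on ext2 as on ext3):

```python
import os
import tempfile

def careful_write(path: str, data: bytes) -> None:
    """Replace `path` so that a crash leaves either the old or the new
    contents visible -- never a partially written file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # push the data to stable storage first
    os.rename(tmp, path)       # then atomically swap it into place
```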




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-11 Thread Michael Stone

On Fri, Nov 11, 2022 at 02:05:33PM -0500, Dan Ritter wrote:

300TB/year. That's a little bizarre: it's 9.51 MB/s. Modern
high end spinners also claim 200MB/s or more when feeding them
continuous writes. Apparently WD thinks that can't be sustained
more than 5% of the time.


Which makes sense for most workloads. Very rarely do people write 
continuously to disks *and never keep the data there to read it later*. 
There are exceptions (mostly of the transaction log type for otherwise 
memory-resident data), and you can get write optimized storage, but 
you'll pay more. For most people that's a bad deal, because it would 
mean paying for a level of write endurance that they'll never use.
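The arithmetic behind the figures quoted above, as a back-of-the-envelope check (decimal units assumed):

```python
# 300 TB/year of rated write endurance, averaged over a year:
seconds_per_year = 365.25 * 24 * 3600
mb_per_second = 300e12 / 1e6 / seconds_per_year
print(f"{mb_per_second:.2f} MB/s")    # 9.51 MB/s

# As a duty cycle against a ~200 MB/s sustained-write drive:
print(f"{mb_per_second / 200:.1%}")   # 4.8% -- the ~5% figure above
```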




Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-11 Thread Michael Stone

On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:

On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:

The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that and even
though storage has gotten cheaper, it hasn't gotten that cheap. (e.g.,
an 8 disk RAID 6 has the same fault tolerance as an 18 disk RAID 1 of
equivalent capacity, ignoring pointless quibbling over probabilities.)


so with RAID6, 3x8 is 18 instead of 24


you have 6 disks of usable capacity with the 8 disk raid 6, two disks' 
worth of parity. 6 disks of usable capacity on a triple redundant 
mirror is 6*3 = 18.
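Spelling the disk-count comparison out (assuming RAID 6 gives up two disks to parity, and an N-way mirror stores N full copies):

```python
def raid6_usable(disks: int) -> int:
    # RAID 6: two disks' worth of parity, the rest is usable capacity
    return disks - 2

def nway_mirror_disks(usable: int, copies: int) -> int:
    # an N-way mirror needs N disks per disk of usable capacity
    return usable * copies

usable = raid6_usable(8)
print(usable)                        # 6 usable disks in an 8-disk RAID 6
print(nway_mirror_disks(usable, 3))  # 18 disks for the same capacity
                                     # with double-failure tolerance
```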




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-11 Thread Michael Stone

On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:

There was no misdiagnosis.  Have you ever had a failed SSD?  They usually just
disappear.


Actually, they don't; that's a somewhat unusual failure mode. I have had 
a couple of ssd failures, out of hundreds. (And I think mostly from a 
specific known-bad SSD design; I haven't had any at all in the past few 
years.) I've had way more dead hard drives, which is typical.



There was no "not normal" territory, either, unless maybe you consider ZFS cache
as "not normal".  In that case, I would argue that SSDs are well suited for such
applications because they allow for lots of IOOPs and high data transfer rates,
and a hard disk probably wouldn't have failed in place of the SSD because they
don't wear out so quickly.  Since SSDs are so well suited for such purposes,
that can't be "not normal" territory for them.  Perhaps they just need to be
more resilient than they are.


You probably bought the wrong SSD. SSDs write in erase-block units, 
which are on the order of 1-4MB. If you're writing many many small 
blocks (as you would with a ZFS ZIL cache) there's significant write 
amplification. For that application you really need a fairly expensive 
write-optimized SSD, not a commodity (read-optimized) SSD. (And in fact, 
SSD is *not* ideal for this because the data is written sequentially and 
basically never read so low seek times aren't much benefit; NVRAM is 
better suited.) If you were using it for L2ARC cache then mostly that 
makes no sense for a backup server. Without more details it's really 
hard to say any more. Honestly, even with the known issues of using 
commodity SSD for SLOG I find it really hard to believe that your 
backups were doing enough async transactions for that to matter--far 
more likely is still that you simply got a bad copy, just like you can 
get a bad hd. Sometimes you get a bad part, that's life. Certainly not 
something to base a religion on.
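The erase-block argument in rough numbers (worst case, ignoring the controller's buffering and garbage collection; the sizes are purely illustrative):

```python
def worst_case_write_amplification(erase_block_bytes: int,
                                   write_bytes: int) -> float:
    # if a small write forces a read-modify-write of a whole erase
    # block, the flash absorbs erase_block/write bytes per byte written
    return erase_block_bytes / write_bytes

# 4 KiB log-style writes against a 1 MiB erase block:
print(worst_case_write_amplification(1 << 20, 4096))   # 256.0
```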



Considering that, SSDs generally must be of really bad quality for that to
happen, don't you think?


No, I think you're making unsubstantiated statements, and I'm mostly 
trying to get better information on the record for others who might be 
reading.




Re: else or Debian (Re: ZFS performance (was: Re: deduplicating file systems: VDO with Debian?))

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:

* RAID 5 and 6 restoration incurs additional stress on the other
 disks in the RAID which makes it more likely that one of them
 will fail.


I believe that's mostly apocryphal; I haven't seen science backing that 
up, and it hasn't been my experience either.



 The advantage of RAID 6 is that it can then recover
 from that...


The advantage to RAID 6 is that it can tolerate a double disk failure. 
With RAID 1 you need 3x your effective capacity to achieve that and even 
though storage has gotten cheaper, it hasn't gotten that cheap. (e.g., 
an 8 disk RAID 6 has the same fault tolerance as an 18 disk RAID 1 of 
equivalent capacity, ignoring pointless quibbling over probabilities.)




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:

On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:

On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> And mind you, SSDs are *designed to fail* the sooner the more data you write
> to
> them.  They have their uses, maybe even for storage if you're so desperate,
> but
> not for backup storage.

It's unlikely you'll "wear out" your SSDs faster than you wear out your
HDs.



I have already done that.


Then you're either well into "not normal" territory and need to buy an 
SSD with better write longevity (which I seriously doubt for a backup 
drive) or you just got unlucky and got a bad copy (happens with 
anything) or you've misdiagnosed some other issue.




Re: ZFS performance (was: Re: deduplicating file systems: VDO withDebian?)

2022-11-10 Thread Michael Stone

On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:

And mind you, SSDs are *designed to fail* the sooner the more data you write to
them.  They have their uses, maybe even for storage if you're so desperate, but
not for backup storage.


It's unlikely you'll "wear out" your SSDs faster than you wear out your 
HDs.




Bug#1023725: rasdaemon: kernel null pointer dereference oops with rasdaemon

2022-11-08 Thread Michael Stone
Package: rasdaemon
Version: 0.6.7-1+b1
Severity: important
Tags: upstream

With linux-image-6.0.0-2-amd64 rasdaemon causes a kernel oops with a signature 
similar to this:
 BUG: kernel NULL pointer dereference, address: 01c8
 #PF: supervisor write access in kernel mode
 #PF: error_code(0x0002) - not-present page
 PGD 0 P4D 0 
 Oops: 0002 [#1] PREEMPT SMP NOPTI
 CPU: 11 PID: 1227 Comm: rasdaemon Tainted: P   OE  6.0.0-2-amd64 
#1  Debian 6.0.6-2
 RIP: 0010:ring_buffer_wake_waiters+0x1c/0xa0

See
https://lore.kernel.org/lkml/CAM6Wdxft33zLeeXHhmNX5jyJtfGTLiwkQSApc=10fqf+rqh...@mail.gmail.com/T/
for a discussion of the bug (easiest to start from the bottom). It seems that
on systems which allow more cpus than are initialized[1], rasdaemon will attempt
to poll non-existent cpus which causes a kernel oops. The fix for this
reportedly causes rasdaemon to segfault which will likely require a fix there
as well.

A workaround for systems experiencing the oops with linux-image-6.0.0-2 is to
disable rasdaemon.

[1] On my system, dmesg reports
smpboot: Allowing 32 CPUs, 16 hotplug CPUs
for a system with 8 cores/16 threads



Re: support for ancient peripherals

2022-11-08 Thread Michael Stone

On Sun, Nov 06, 2022 at 06:31:04AM -0500, Dan Ritter wrote:

3. An HP LaserJet 5MP printer from 1995 with a parallel-port connector.


StarTech sells a $42 PCIe card with a parallel port and two
serial ports. If you're getting a desktop, this might be your
preferred path.

Two other options:

The 5MP has an expansion slot where you can put a network
interface. HP J2552-60001 is the part number. Refurbs of these
sell for about $75, which is about half the cost of a new laser
printer which will have a network port, duplex, and be about 8x
faster.

StarTech also makes a $75 parallel printer network server, which
is probably more available than the internal card, and can work
on other parallel interface printers.


IME a usb->parallel adapter will work fine and can easily be had for <$10



Bug#1013259: samba-libs: Possible policy violation (now with libndr.so.2 => libndr.so.3)

2022-11-01 Thread Michael Stone

On Tue, Nov 01, 2022 at 10:59:11AM +0300, Michael Tokarev wrote:

And this revealed one more issue here, now with samba 4.17.  Where, the
same libndr.so again, has changed soname from libndr.so.2 to libndr.so.3!

And it looks like *this* is what you're talking about now, once 4.17 with
this new libndr.so.3 hits unstable.


Yes, sorry I wasn't as clear as I thought. :)





Bug#1013259: samba-libs: Possible policy violation

2022-10-31 Thread Michael Stone
If you can come up with a better solution than a strict dependency, great. But 
the current situation, in which samba upgrades randomly render systems unusable 
is, in my opinion, much much much worse than an overly strict dependency. 
Fundamentally the problem is that you're promising future compatibility but not 
providing that. One way or another samba-libs either needs to not suggest that 
linked binaries will work with future versions, or make sure that they do.
-- 
Michael Stone
(From phone, please excuse typos)



Bug#1013259: samba-libs: Possible policy violation

2022-10-31 Thread Michael Stone
The issue here is that packages built against samba-libs get a 
dependency on samba-libs >= version, and they really either need a 
dependency on samba-libs == version or the samba-libs package needs to 
be versioned (e.g., samba-libs2, samba-libs3, etc.) and conflict with 
other versions, or samba-libs needs to Breaks: every dependent package 
on every update, or somesuch. The current state of affairs is that 
every change to the samba-libs libraries breaks other packages compiled 
against them (namely sssd) until those packages are recompiled, but 
there is nothing in the dependencies to enforce that.




Bug#1023204: sssd-ipa: sssd fails to start due to broken dependency

2022-10-31 Thread Michael Stone
Package: sssd-ipa
Version: 2.7.4-1+b1
Severity: critical
Justification: breaks the whole system

After upgrade of samba-libs syslog has messages like

... sssd[448823]: /usr/libexec/sssd/sssd_pac: error while loading shared 
libraries: libndr.so.2: cannot open shared object file: No such file or 
directory

/usr/lib/x86_64-linux-gnu/sssd/libsss_ipa.so seems to have a dependency on
libndr.so.2 while current samba-libs only provides libndr.so.3.

As a result, there is no uid/gid resolution, etc, for all centrally managed
users, making the system unusable for non-local accounts.

This may be an issue with the way samba is generating library dependencies,
deserving of a separate bug, but for now sssd is broken until recompiled
against the latest samba library. The dependency on samba-libs should
probably be a strict version rather than a >= version.
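As an aside, this kind of soname mismatch is easy to spot by listing the DT_NEEDED entries of the affected object. A sketch (the real target would be the libsss_ipa.so path above; /bin/sh is used here only as a stand-in that exists everywhere):

```shell
# Print the DT_NEEDED sonames recorded in an ELF object.
# Running this against libsss_ipa.so would show the stale libndr.so.2 entry.
needed_sonames() {
    readelf -d "$1" | awk '/\(NEEDED\)/ { gsub(/[][]/, "", $NF); print $NF }'
}

needed_sonames /bin/sh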

-- System Information:
Debian Release: bookworm/sid
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 6.0.0-1-amd64 (SMP w/16 CPU threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE, 
TAINT_UNSIGNED_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages sssd-ipa depends on:
ii  libc6 2.35-4
ii  libdhash1 0.6.2-1
ii  libipa-hbac0  2.7.4-1+b1
ii  libldap-2.5-0 2.5.13+dfsg-2+b1
ii  libldb2   2:2.6.1+samba4.17.2-3
ii  libpopt0  1.19+dfsg-1
ii  libselinux1   3.4-1+b2
ii  libsemanage2  3.4-1+b2
ii  libsss-idmap0 2.7.4-1+b1
ii  libtalloc22.3.4-2
ii  libtevent00.13.0-2
ii  samba-libs2:4.17.2+dfsg-3
ii  sssd-ad-common2.7.4-1+b1
ii  sssd-common   2.7.4-1+b1
ii  sssd-krb5-common  2.7.4-1+b1

sssd-ipa recommends no packages.

sssd-ipa suggests no packages.

-- no debconf information



Re: Firmware GR result - what happens next?

2022-10-19 Thread Michael Stone

On Fri, Oct 14, 2022 at 10:52:01AM +0200, Santiago Ruano Rincón wrote:

5. transitional packages along with a helper package (that fails or
succeeds during install) to prompt the user so they add the
non-free-firmware section when needed.

Is there any reason why you are not considering 5.?


The danger we're trying to avoid is that a system with a working 
"something" (say, networking) gets upgraded, user reboots (or machine 
crashes, or there's a power failure, etc, etc.), the working "something" 
is now a not-working "something", and fixing it is really hard for the 
user who has no idea what happened and maybe doesn't have a network or a 
console or whatever any more. A package that fails during install will 
prevent the upgrade from completing, but will leave things in an 
in-between state until some action is taken, the upgrade restarted, and 
the upgrade manages to finish successfully. What happens if the 
reboot/crash/powercycle/etc happens during that in-between state? How do 
you make a firmware helper package that reliably prevents a kernel 
installation when the kernel doesn't have any dependencies on the 
firmware package, and also doesn't yank out the old working firmware, 
etc. I'm sure you can make the install explode, but making it reliably 
explode at just the right time seems harder. I guess this could all 
work, but I'm seeing a lot of potential for partial installs/failures 
with this approach and I suspect this would require transition code in a 
number of packages' preinsts, not a discrete "helper package".




Re: Unification of discussion-forum types (was Re: signing up to fourms)

2022-10-19 Thread Michael Stone

On Wed, Oct 19, 2022 at 08:57:54AM -0400, The Wanderer wrote:

If that doesn't happen in practice, I'd be all for the idea, but as far
as I can recall I have yet to encounter a case where it doesn't. We
already get too many examples of people failing to quote correctly even
here on this mailing list, without having actively brought Web-based
interfaces into it.


And, sadly, some people just can't let it go and insist on long threads 
berating people for incorrect quoting, instead of just either ignoring 
the message or focusing on the substance.




Re: How to configure (aka deal with) /tmp in the best way?

2022-10-05 Thread Michael Stone

On Wed, Oct 05, 2022 at 02:07:17PM +0100, Tixy wrote:

I seem to remember many releases ago playing with this, and there was a
config file to set /tmp to tmpfs. A quick google leads me to look at
'man tmpfs' which says:

/tmp  Previously configured using RAMTMP in /etc/default/rcS. Note that the
 setting in /etc/default/rcS, if present, will still be used, but the
 setting in /etc/default/tmpfs will take precedence if enabled.
 Configured using RAMTMP, TMPFS_SIZE and TMP_SIZE. If desired, the
 defaults may also be overridden with an entry in /etc/fstab, for
 example:

I also found a 10 year old debian-devel post [1] where it looks like
Debian were thinking of making /tmp be in RAM by default. (I thought it
was at some point, but could well be mistaken).


I don't think it was ever the default, but it was certainly more 
popular. There were performance advantages with that config (which have 
mostly gone away for various reasons) but the downsides tended to have 
more practical impact (e.g., large files dumped in /tmp fill it up).




Re: Firmware GR result - what happens next?

2022-10-03 Thread Michael Stone

On Sun, Oct 02, 2022 at 08:21:31PM +0100, Steve McIntyre wrote:

Plus, as Shengjing Zhu points out: we already expect people to manage
the sources.list anyway on upgrades.


We also try to avoid silent install problems that might or might not 
result in a system that doesn't boot properly.




Re: usrmerge

2022-10-02 Thread Michael Stone

On Sun, Oct 02, 2022 at 06:12:45PM +0100, billium wrote:
Maybe I am an idiot for keeping it so long since a re-install. I think 
stretch was the original install, and it is now on bookworm.


FYI, skipping releases is not supported; in future, go through each 
release in order when upgrading. Whether that was the cause of the 
problem, it's impossible to say based on the information available.




Re: Firmware GR result - what happens next?

2022-10-02 Thread Michael Stone

On Sun, Oct 02, 2022 at 03:53:00PM +0100, Steve McIntyre wrote:

On Sun, Oct 02, 2022 at 04:43:47PM +0200, Michael Biebl wrote:

What's the plan for upgraded systems with an existing /etc/apt/sources.list?
Will the new n-f-f section be added on upgrades automatically (if non-free
was enabled before)?


So this is the one bit that I don't think we currently have a good
answer for. We've never had a specific script to run on upgrades (like
Ubuntu do), so this kind of potentially breaking change doesn't really
have an obvious place to be fixed.


Is there a reason to not continue to make the packages available in 
non-free? I don't see a reason to force any change on existing systems.
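For reference, the only change an upgraded system would otherwise need is the sources.list entry gaining the new component (illustrative line for a bookworm system):

```
deb http://deb.debian.org/debian bookworm main non-free non-free-firmware
```

Keeping the packages in non-free as well would make even that edit unnecessary for existing installs.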




Re: Switch default from PulseAudio to PipeWire (and WirePlumber) for audio

2022-09-30 Thread Michael Stone

On Thu, Sep 29, 2022 at 04:26:45PM +0200, Vincent Bernat wrote:

On 2022-09-29 15:01, Michael Stone wrote:

On Wed, Sep 28, 2022 at 09:02:15PM -0600, Sam Hartman wrote:

* Finally, I can use bluetooth on linux with reasonably good audio
 quality!


Aren't they both using the same backend? ldac/aptx weren't in 
pulseaudio for a long time, but they are now. Or is there something 
else?


Pipewire has AAC, but not in Debian because libfdk-aac is still 
considered non-free by us while everyone else, including the FSF, 
consider it free.


but that wouldn't be a distinction between pulseaudio and pipewire in 
debian, right?




Re: Switch default from PulseAudio to PipeWire (and WirePlumber) for audio

2022-09-29 Thread Michael Stone

On Wed, Sep 28, 2022 at 09:02:15PM -0600, Sam Hartman wrote:

* Finally, I can use bluetooth on linux with reasonably good audio
 quality!


Aren't they both using the same backend? ldac/aptx weren't in pulseaudio 
for a long time, but they are now. Or is there something else?



